
The second that green “before/after” bar slid across Capcom’s latest Resident Evil footage in Nvidia’s DLSS 5 trailer, I felt my stomach drop. Not in the good, horror-game way either. I’ve been playing Resident Evil since the PS1 days, when pre-rendered backgrounds and tank controls were part of the charm. I’ve watched this series evolve from muddy polygons to some of the best facial animation and lighting in the entire industry. So when I saw this new AI tech roll over a carefully crafted character and spit back a waxy, yassified impostor, my first thought was brutally simple:
“This looks like shit, and worse, it disrespects the people who actually made the game.”
Capcom already knows what the hell they’re doing. The Resident Evil engine has been flexing on half the industry for years. Faces that look human but still a bit uncanny in that perfect horror way, grime that feels sticky, lighting that makes every hallway look like a crime scene photograph. I never once played a modern Resident Evil and thought, “Yeah, but what if her face looked more like an AI TikTok filter?”
Yet that’s exactly what DLSS 5’s demo did. The “after” shot of the protagonist didn’t just tweak sharpness or clean up aliasing. It changed her. Fuller lips. Sharper, more symmetrical nose. Smoother skin lit with this bizarre Instagram-spotlight glow that didn’t match the mood of the scene at all. She went from looking like a person surviving hell to someone about to shill skin cream in a YouTube pre-roll.
That’s not “graphics enhancement.” That’s overwriting the damn character.
I need to make something crystal clear: I’m not some crank who wants us to go back to CRTs and fixed cameras. I love tech when it serves the work. DLSS 2 was a game-changer for me on PC – getting more frames in big open worlds without turning everything into a blurry mess felt like cheating in the best way. HDR done right? Chef’s kiss. Physically based rendering? Fantastic when artists use it to push mood and realism.
I’ve been through this evolution personally. I grew up on Dreamcast and Shenmue, where a static skybox and a few carefully placed lights could tell you everything about how a character felt walking home in the rain. Later, I dumped hundreds of hours into games like Uncharted 4, staring at tiny changes in Nathan Drake’s face during quiet scenes and thinking, “Someone slaved over that wrinkle. That freshly healed cut. That bruise fading over three chapters.”
That’s art direction. That’s authorship. Someone decided, on purpose, that this person should look worn down, imperfect, specific.
DLSS 5 isn’t just touching up pixels. Nvidia literally describes it as a “3D guided neural rendering model” that understands “characters, hair, fabric, translucent skin” and relights scenes based on what it thinks they should look like. Not what the game shipped with. Not what the art director signed off on. What the model, trained on god-knows-what, has decided looks “better.”
That’s the line for me. The moment a tool stops being a way to present the art and starts quietly rewriting the art itself, we’ve left “graphics tech” and walked into “AI filter” territory. And filters come with baggage: beauty standards, homogenization, and an endless urge to smooth, polish, and erase the very flaws that make characters feel human.
Since the backlash hit, Nvidia has scrambled to reassure everyone. CEO Jensen Huang and company keep repeating the same line: developers “have artistic control over DLSS 5’s effects to ensure they maintain their game’s aesthetic.” Bethesda echoed that about Starfield, saying their DLSS 5 lighting pass will be under artist control and optional for players.
On paper, fine. If DLSS 5 is just another slider in the dev toolbox, and they can tune it so it doesn’t bulldoze their vision, maybe there’s a world where this doesn’t suck.
But if that’s the case, why the hell does Nvidia’s own showcase reel look like an AI beauty pageant?

Look at the Resident Evil clip. Look at Hogwarts Legacy in that same trailer – every student suddenly looks like they’ve been hit with a studio key light from dead center, no matter what the actual environment is doing. Look at Starfield, where Bethesda had to come out and say they’d adjust the implementation after people noticed characters suddenly morphing into plasticky mannequins.
If this is Nvidia’s carefully curated “best foot forward,” showing off hand-picked scenes with partner studios, and the result is still faces that look like they’ve been cloned from the same AI beauty model… I don’t buy the “don’t worry, devs will tame it” line. The tech is revealing its biases in its first five minutes on stage.
And here’s the part that really pisses me off: everyone keeps talking as if “better” visuals are some objective thing. As if a smoother, more symmetrical face with more subsurface scattering is inherently an upgrade. It’s not. Sometimes “better” is rougher. Sometimes “better” is sickly, asymmetrical, scarred, underlit, or just plain weird.
Resident Evil in particular lives in that space. These games are full of people who look exhausted, haunted, or outright grotesque. When your AI model quietly nudges them toward looking like glossy Netflix stars, that isn’t a neutral “enhancement.” That’s a value judgment baked into the tech.
Let’s pretend for a second that the faces didn’t look so damn uncanny. That DLSS 5’s character changes were more subtle, the lighting less obnoxiously “photography 101.” Would I still be this annoyed?
Yeah, I would. Because the core problem isn’t just the look. It’s who gets to decide what that look is.
When Capcom’s artists sculpt a protagonist for Resident Evil, or Avalanche’s team decides how a Hogwarts student should appear under candlelight in the Great Hall, those choices are tied to story and tone. This person is a survivor, so their eyes are tired. This corridor feels unsafe, so the lighting is harsh and angular. This NPC has a crooked nose or acne because the writers wanted them to feel grounded, not idealized.

Now add an AI layer trained on a mountain of images scraped from the internet: fashion shoots, cinema close-ups, influencer selfies, photo mode posts, whatever. Its job is to “improve” the frame. What does “improve” mean in that context? You already know the answer. Smooth the skin. Balance the features. Add flattering light. Nudge everyone a little closer to the same magic average face.
We’ve seen this movie before with smartphone cameras. Remember when every manufacturer quietly started adding “beauty” filters that slimmed faces and brightened eyes by default? We’re about to relive that in games, but with far more sophisticated tools, applied in real time across entire worlds.
The horror isn’t just that characters look weird. It’s that, if DLSS 5 becomes standard, we risk quietly erasing all the tiny, specific decisions that make a game’s visual identity its own – especially in cross-platform titles that have to run on PC, PlayStation, and Xbox. On PC with an RTX card, you don’t just get higher frames anymore; you get a slightly different game. A different face. A different mood. A different author looking over the original artist’s shoulder, painting on top of their work in real time.
And no, I don’t care that this “author” is a bundle of weights and matrices instead of a person. That almost makes it worse. At least if some new art director came in and “remastered” all the faces, I’d know the decisions came from a human brain I could yell at.
Right now, it’s easy for everyone to say, “Relax, it’s optional. You can turn DLSS 5 off. Devs don’t have to use it.” Nvidia says it. Bethesda says their Starfield implementation will be tweakable and under artist control. Publishers nod along in interviews.
I’ve been watching this industry long enough to know how that story usually ends.
First, it’s a fancy toggle for enthusiasts with Nvidia GPUs on PC. Then marketing teams realize they can throw “Now With DLSS 5” on the box and make it sound like a huge leap, even if it’s just a filter. Platform partners start “encouraging” studios to support the tech. The more games use it, the more pressure there is on everyone else to follow. A few years down the line, the question isn’t “Should we use DLSS 5?” It’s “Why didn’t you use DLSS 5?”
That’s when the real danger kicks in. Because once that expectation sets in, I can absolutely see studios planning art direction under the assumption that AI will do the clean-up. Maybe faces are modeled a little more generically because the neural renderer will “add detail.” Maybe lighting is blocked in more roughly because “the AI pass will make it pop.” Why spend extra time nailing bespoke mood in every shot when an algorithm will slap on whatever it thinks looks vaguely cinematic?
Meanwhile, the training data keeps pushing everything toward the same visual center of gravity. If you think big-budget games already look too similar, imagine what happens when the last step in the pipeline for every “cinematic” scene is the same Nvidia AI dressed up in a different UI.
I don’t play Resident Evil for “generic horror protagonist #54 rendered to resemble a promo still.” I play it because Capcom weirds things up. Because faces in that series have always had this slightly off-putting, hyperreal texture to them that screams “Resident Evil” the moment you see a screenshot. I want more of that energy in games, not less.

The worst-case scenario here isn’t just ugly screenshots. It’s creative complacency, encouraged by a tech stack that says, “Don’t worry about the last 10% of the image – the AI will handle it.” That’s exactly where the magic lives. That last 10% is the difference between a screenshot that could be any AAA game and something you can identify at a glance as this game and nothing else.
So what does all this ranting actually mean for how I play games? I’m not just shouting into the void here; I’m changing how I approach anything shipping with DLSS 5 slapped onto it.
First: if a game gives me the option, DLSS 5’s “neural rendering” or whatever they want to call the face-and-lighting pass is getting turned off immediately. I’ll still happily use old-school DLSS for pure upscaling and frame generation if it doesn’t mess with art direction, but the second it starts rewriting characters, I’m out.
Second: I’m paying way more attention to how studios talk about this stuff. If a developer comes out and says, “We used DLSS 5 but only for environmental materials, and our artists locked down character faces and key lighting,” cool, I’ll give that a fair shot. If instead the pitch is basically, “Nvidia’s tech made our game more realistic!” with no acknowledgment of art direction, that’s a red flag the size of an EA Sports FC billboard.
And yeah, I’m going to judge platform partners too. If Nvidia, or any GPU vendor, keeps pushing AI systems that bulldoze over handcrafted visuals, I’m going to think twice when I upgrade my hardware. I’m not naive; performance still matters, and so does ecosystem support. But I’m tired of watching companies sell us “innovation” that quietly makes games look more samey and soulless.
I’ve spent my whole life falling in love with the weird, specific ways different studios see the world – from the surreal dream-logic of old Resident Evil pre-rendered backgrounds to the lived-in streets of Shenmue to the bruised knuckles and cracked lips on my favorite fighting game characters. I don’t need an AI model stepping in at the last second to tell them, “Actually, here’s what your game should look like.”
DLSS 5 might impress some people with its “incredibly lifelike” marketing buzzwords. It might sell a lot of GPUs. It will definitely get patched, tweaked, and dressed up in nicer menus. But unless it fundamentally changes what it’s doing – unless it becomes a tool that respects art direction instead of quietly rewriting it – I’m treating it for what it looks like right now:
An AI filter that turns carefully crafted horror into AI-slop glamour shots. And no matter how good the tech underlying it is, that’s not the future of game graphics I want any part of.