
Every time I’ve seen DLSS 5 kick in over the last week, something in my brain has twitched. Not because the frame rate jumps – that part is great – but because I keep recognising things I’ve never actually seen before.
A character’s face in a Cyberpunk 2077 update looks like someone I knew at university… but not quite. A hallway in a DLSS 5 tech demo feels like an airport I trudged through in 2004… except those lights, that carpet pattern, those reflections are too clean, too deliberate. It’s like my GPU is rummaging in my head, stitching together a “best of” compilation of my memories and then quietly overwriting the original footage.
This has been colliding, hard, with the time I’ve been spending in the Fatal Frame II: Crimson Butterfly remake and reading through Rock Paper Shotgun’s latest Sunday Papers column on AI aesthetics, DLSS 5, Backrooms liminality, localisation history, and Pathologic 3. It all circles one uncomfortable question for me:
When games chase AI-driven photorealism, whose memories are we actually looking at – ours, or the machine’s approximation of what our memories “should” look like?
Nvidia launched DLSS 5 in March 2026 boasting “Neural Rendering” – AI upscaling and frame generation that don’t just sharpen edges, but supposedly generate richer lighting, materials, even “neural atmospheres.” The marketing pitch leans hard on emotion: photoreal rain on glass, reflections in puddles, the kind of buzzwords that promise not just better pixels, but deeper feelings.
Digital Foundry immediately poked holes in that narrative, calling some of it “soulless upscaling” – more like a technical trick than an artistic leap. And honestly, I’m with them. I’ve seen those viral comparison clips where faces get subtly “fixed” – pores more detailed, lighting corrected, eyes just a bit brighter. Technically impressive, sure. Emotionally? It’s like someone autoclaved the soul out of the scene.
This lines up uncomfortably well with the whole Backrooms aesthetic that’s been everywhere the last few years: endless yellow office corridors, buzzing strip lights, damp carpet, the sense that you’re trapped inside an abandoned insurance company in 1997.
The origin story there is simple: access to 3D tools got cheaper, and indie artists discovered you could get incredibly evocative results just by rendering eerily empty interiors. No characters, no elaborate props – just brutally lit spaces that feel like they’ve slipped loose from time. AI image generators like Midjourney v7 then grabbed that vibe and turbocharged it, vomiting out infinite not-quite-real office parks and hotel basements that somehow feel more like our late-night childhood memories than our actual memories do.
Media theorists have already started slapping words on this. Lev Manovich’s 2025 book on AI aesthetics talks about “affective realism” – images that feel emotionally real because of their hyper-detailed imperfections, even when they’re obviously synthetic. Other writers have folded in ideas like “prosthetic memory,” where media grafts simulated memories onto us.
You can see the same logic in Google Pixel 9’s ads for Magic Editor: remove that annoying stranger from the background, swap in a better smiling face from another shot, “create that moment the way you remember it.” Except, no – it’s the way you want to remember it. These are wish-photos, not memories.
DLSS 5 is doing something eerily similar for games. If the card thinks your corridor should look more like a nostalgia-optimized Backrooms shot, it’ll quietly shove it in that direction: cleaner reflections, more legible lighting, coherent detail everywhere. It’s trying to make every scene align with how AAA photorealism “ought” to look in 2026.
The problem is, my favourite game memories don’t look like that at all.
I came to the original Fatal Frame II: Crimson Butterfly on PS2 at exactly the wrong age: old enough to understand that trauma and loss were the real monsters, young enough that the blur of that muddy image could plausibly hide anything in the dark.
The village was a smudge of fog and low-res textures. The Camera Obscura – that cursed analog camera used to exorcise ghosts – turned every encounter into a nerve-shredding timing puzzle. You wait until the threat is close enough, its face dissolving in the grain, then click the shutter and pray you got there first.

The remake on PS5 and PC brings that world into modern fidelity: higher resolutions, restored fog, improved lighting, more detailed models. Critics have rightly praised how much atmosphere survived the jump. The village still feels rotten. The dolls are still wrong. The soundscape still crawls under your skin.
But here’s the key difference that keeps nagging at me: the game’s core mechanic is photography. It’s about capturing ghosts on film. It’s about framing, timing, and the terrifying ambiguity of what the camera reveals. When Fatal Frame gets clearer, that clarity is always in service of the horror.
When Nvidia’s DLSS 5 gets clearer, that clarity is in service of a benchmark chart.
Playing the remake, I can feel the developers struggling with exactly the tension I wish GPU vendors cared about: how do you modernise visuals without erasing the specific texture of fear that came from PS2 limitations? The fog isn’t just a performance hack, it’s a storytelling tool. The slight lag on your shutter press isn’t just input delay, it’s dread.
Now imagine slapping aggressive DLSS 5 neural rendering over that. Extra “detail” in the grain, extra “clarity” in the shadows, temporal tricks that reproject what the AI thinks you should have seen there. Suddenly the ghosts stop flickering and start looking like they’ve walked out of a prestige Netflix horror show. Technically impressive. Emotionally wrong.
The debate around the remake’s 4K upscales has already kicked off in horror circles. Some purists argue the new fidelity scrubs away the original’s “grainy authenticity.” I think that’s a bit romantic – the remake does a solid job preserving the mood – but I get where they’re coming from. When your horror is about half-seen shapes and emotional fragility, every extra pixel is a potential liability.
That’s the crux of my beef with AI-powered photorealism in horror: it’s relentlessly biased toward legibility, when fear is usually born from the opposite. DLSS 5 wants every texture to be coherent. Fatal Frame wants every hallway to feel like a barely-remembered nightmare, where you’re never sure what your eyes are doing to you.
This whole conversation about AI changing how games look keeps bumping me back to another, older layer of mediation we don’t talk about enough: localisation.
There’s a brilliant anthology called Translated Realms that digs into how games like Final Fantasy IV and Earthbound were transformed for Western audiences. Not just translated, but rewritten, recontextualised, bent into shapes that fit the expectations of kids who’d never seen the original scripts.
Take the legendary “You spoony bard!” line from Final Fantasy IV. The localiser has talked about it: she didn’t want the character using crude curse words, so she invented this bizarre, almost nonsensical insult instead. It’s wrong, in a literal sense. But it’s also perfect. It became a meme, a shared point of nostalgia, a glitch in the matrix of RPG language that players cling to decades later.

That’s affective realism before anyone coined the phrase. It doesn’t accurately represent the Japanese script, but it nails a feeling – of exasperated, theatrical scolding – in a way nobody would have arrived at by playing it safe.
Fast forward to 2026 and we’re watching studios flirt with AI-assisted localisation for games like Pathologic 3. On paper, it’s a dream: cheaper, faster, broader reach. In practice, it risks sanding off exactly the kind of beautifully odd moments that give games their texture across cultures.
Machine translation optimises for plausibility and fluency. It wants every sentence to read like something a native speaker might plausibly say, never the kind of line no native speaker would utter yet players remember forever. It’s DLSS for language: taking messy, weird source data and coercing it into the smoothest possible version of itself.
The reason this bothers me is the same reason DLSS 5’s “neural atmospheres” give me the creeps. Both are built on a quiet assumption: that the ideal version of a game is the most legible, photoreal, globally readable version. The sharpest textures, the clearest language, the highest frame rate.
But a massive chunk of what I love about games – especially horror, especially weird indies – lives in the opposite direction: grain, fog, mistranslation, gaps in understanding that your brain rushes to fill.
If there’s a series that understands the terror of ambiguity, it’s Pathologic. The first time I wandered through its plague-ridden town, I felt like I’d fallen into a fever dream that didn’t particularly care whether I followed along. NPCs mumbled cryptic philosophy, time slipped away, systems refused to explain themselves.
Pathologic 3 in early access has started leaning into AI-enhanced procedural events – touted by the devs as a kind of “living memory” for the town. The idea is seductive: the city responds dynamically, remembers what you’ve done, recombines events in surprising ways. A plague that never quite plays the same way twice.
But reading coverage of it – and dipping into it myself – I keep feeling the same unease as with DLSS 5. The more the town behaves like a clever system, the less it feels like a cruel, uncaring place. You can see the strings. You start spotting the logic. The unknowable becomes… a bit knowable.
Old-school Pathologic was hand-authored bleakness. Every unfair encounter was placed there by a human being who wanted you to suffer in a very particular way. That intention radiates off the screen. It’s not just a list of events; it’s a mood, a thesis about the futility of trying to “solve” a plague.
When AI starts remixing those events on the fly, I worry we end up with something closer to Backrooms horror: spooky, yes, but generic, placeless, optimised for endless scrolling rather than a specific, authored dread. The town becomes a content feed with a plague skin, instead of a singular, horrible memory you can’t quite shake.
Again, it comes back to who’s in charge of the vibe. If the procedural layer is tightly constrained by human authorship, nudging things in interesting ways, great. If it’s there because someone in a meeting said “we need more replayability and systemic depth,” then it’s just AI doing the same thing it does to screenshots: pushing everything toward a median of acceptable, marketable tension.

I’m not anti-tech. I love higher frame rates. I love being able to run something like Cyberpunk 2077 at 4K without my PC melting. DLSS 2 and 3 were genuinely transformative for performance. I’m not sitting here longing for the return of jaggies and 480p.
But I am absolutely done pretending that AI-driven photorealism, by itself, is some kind of emotional upgrade. It isn’t. It’s a style filter with delusions of grandeur.
When Nvidia starts throwing around phrases like “neural atmospheres,” I hear the same bullshit that ad agencies use for phone cameras. The tech is real; the feelings are smuggled in later by clever copywriting. Atmosphere is not a side effect of more accurate reflections. Nostalgia is not something you get for free by approximating the lighting model of a PS2-era fog bank.
Games like the Fatal Frame II remake prove that you can use modern fidelity to strengthen an atmosphere if you respect what made it work in the first place. Keep the fog. Keep the awkward pauses. Keep the sense that the image might break at the edges.
Older localisation work – “You spoony bard!” and all its gloriously weird cousins – proves that imperfect, human mediation can accidentally create moments that define entire generations of players. AI translation might be “better” in a dictionary sense, but it has no idea how to trip over its own feet in a way that makes kids in 1992 giggle and remember a line for thirty years.
Pathologic 3 shows how dangerously attractive “living systems” can be when you’re dealing with horror and memory. A town that moves on its own sounds alive, but if it moves in ways that reflect machine logic instead of human cruelty, something vital gets lost.
Backrooms imagery, meanwhile, is the warning label nobody in marketing wants to read. We already live in a world where AI can spin up infinite uncanny nostalgia on demand – places that look more like forgotten childhood afternoons than any actual building you ever walked through. It feels profound for about five minutes. After that, it’s just another feed of content you scroll past, chasing the next little hit of fake déjà vu.
I don’t want my games to become that. I want them to remember that horror comes from not knowing what you’re looking at, from words landing just slightly wrong, from towns that refuse to behave like simulations. I want remakes that respect their own grain. I want localisation that occasionally misfires in ways only a human could. I want my PC to render the game I’m playing, not the one some neural network thinks I’ll be more impressed by in a side-by-side comparison video.
So yeah, I’ll use DLSS 5 when I’m grinding through some open-world blockbuster that’s already aiming for that glossy, showroom-floor look. Fine. Whatever. But when I boot up something like the Fatal Frame II remake, or creep back into Pathologic 3, or revisit an old PS2 horror classic on PC, those AI toggles are staying off.
Those games don’t need neural atmospheres. They already have ghosts. And I’d rather see those ghosts through a slightly broken lens than let a machine decide how my memories are supposed to look.