This caught my eye because AMD is basically promising DLSS-style magic without the green tax. The next FidelityFX Super Resolution, codename “Redstone” (likely FSR 5), will reportedly convert its machine learning models into optimized compute shaders using a system AMD calls ML2CODE. Translation: no Tensor cores and no vendor lock-in, just shader cores doing neural work on virtually any GPU, including older GeForce cards and current consoles. If the image quality really matches what AMD has shown with FSR 4 and it scales across hardware, that’s a genuine industry shake-up.
Chris Hall, AMD’s senior director of software development, told Japan’s 4Gamer that Redstone remains machine-learning based (like FSR 4), but the “core part of the neural rendering technology is converted into optimized compute shader code.” Rather than running the trained network on dedicated AI cores at runtime, ML2CODE turns it into native shader kernels. That’s a clever shift: the heavy lifting of training still happens offline, but the game only needs standard shader hardware to execute the result.
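AMD hasn’t published how ML2CODE works internally, but the general pattern it describes, baking a trained network’s weights into generated shader source instead of dispatching it through a runtime inference library, can be sketched in a few lines. Everything below (the toy two-layer network, the HLSL-flavored template, the emit_shader helper) is a hypothetical illustration of that idea, not AMD’s tooling:

```python
# Conceptual sketch of "trained model -> compute shader" code generation.
# The toy MLP, the HLSL-style template, and emit_shader are all hypothetical
# illustrations of the approach, not AMD's ML2CODE.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4)).astype(np.float32)  # toy hidden layer
W2 = rng.standard_normal((4, 3)).astype(np.float32)  # toy output layer

def fmt(mat: np.ndarray) -> str:
    """Inline a weight matrix as a flat float constant array."""
    vals = ", ".join(f"{v:.6f}f" for v in mat.flatten())
    return f"{{ {vals} }}"

def emit_shader() -> str:
    """Emit HLSL-style compute shader source with the weights baked in as
    constants, so runtime execution needs only ordinary shader ALUs."""
    return f"""
static const float W1[{W1.size}] = {fmt(W1)};
static const float W2[{W2.size}] = {fmt(W2)};

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{{
    // Load this pixel's feature vector, run the baked-in MLP,
    // and write the reconstructed color, all on regular compute units.
}}
"""

print(emit_shader())  # preview the generated kernel source
```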
On paper, that unlocks a universal feature suite that mirrors what Nvidia currently gates behind Tensor cores: temporal upscaling, frame generation, and even a neural radiance cache and ray-generation assists. If AMD can package this inside the FidelityFX SDK, with solid Unreal and Unity plugins, developers could target one solution that reaches Radeon, GeForce, and even integrated graphics. That’s the dream AMD has chased since FSR 1, except this time they’re saying “neural” without needing dedicated AI blocks.
There’s no magic compute. If you run neural models on shader cores, you’re carving time out of the same pipeline that handles lighting, post-process, and often ray tracing denoisers. Nvidia gets around that by offloading to Tensor cores and specialized hardware like the Optical Flow Accelerator for frame gen. AMD’s bet is that modern shader arrays have enough headroom—and that ML2CODE can schedule and optimize kernels tightly enough—to make the hit negligible in practice.
That’s the make-or-break. On a beefy card with unused compute, fine. On older GPUs already compute-bound, adding neural upscaling plus frame gen could be a zero-sum trade: a higher “fps” number but more hitching or reduced RT quality because you had to drop effects to pay the shader bill. AMD will need to prove this works across the Steam hardware survey’s middle class—think GTX 1660/RTX 2060 up through RX 5700 and console-class RDNA 2/3—without turning every game’s settings menu into a compromise simulator.
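You can put rough numbers on that trade with back-of-the-envelope math. Every millisecond figure below is a hypothetical placeholder, not a measurement of FSR or Redstone; the point is only that the win holds as long as the neural passes cost less than the resolution savings:

```python
# Back-of-the-envelope frame-budget math. Every millisecond value here is a
# hypothetical placeholder, not a measurement of FSR/Redstone.
def fps(frame_ms: float) -> float:
    return 1000.0 / frame_ms

native_ms = 20.0           # assumed cost of rendering a frame at native resolution
upscaled_render_ms = 12.0  # assumed cost at the lower internal resolution

for neural_ms in (1.0, 2.5, 5.0):  # assumed shader-time cost of the neural passes
    total = upscaled_render_ms + neural_ms
    print(f"neural passes {neural_ms:>3.1f} ms: "
          f"{fps(total):5.1f} fps upscaled vs {fps(native_ms):.1f} fps native")

# The upscaling win only holds while the neural passes cost less than the
# resolution savings; on a GPU that is already compute-bound, that margin
# shrinks fast.
```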
Quality still matters more than any slide. FSR’s early versions were easy to enable but struggled with shimmering, ghosting, and temporal instability. FSR 4, in AMD’s telling, fixes a lot of that. If Redstone matches or beats that image quality while running anywhere, the shader tax is a price many players will happily pay. If it regresses into edge crawl and disocclusion smearing, no amount of universality will save it.
DLSS has been Nvidia’s stickiest moat: it’s good, and the best parts (frame gen) are limited to newer RTX cards. Intel’s XeSS already demonstrated that AI upscaling can fall back to shaders (DP4a) and still look decent, but its adoption lags DLSS. AMD going all-in on a shader-based neural pipeline feels like a second attempt at the “works everywhere” promise, only now with ML-driven components beyond upscaling, including a neural radiance cache that could make global illumination and ray tracing more viable on mid-range hardware.
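Worth a quick aside on what that DP4a fallback actually is: a single instruction that takes the dot product of four packed 8-bit values and adds it into a 32-bit accumulator. The snippet below just emulates that semantics in Python (the dp4a helper and the sample values are illustrative) to show why quantized int8 inference maps tolerably well onto ordinary shader ALUs:

```python
import numpy as np

def dp4a(a_bytes, b_bytes, acc: int) -> int:
    """Emulate the DP4a operation: dot product of four signed 8-bit values,
    accumulated into a 32-bit integer. One instruction on supporting GPUs."""
    a = np.asarray(a_bytes, dtype=np.int8).astype(np.int32)
    b = np.asarray(b_bytes, dtype=np.int8).astype(np.int32)
    return int(acc + np.dot(a, b))

# Four int8 weights times four int8 activations in a single accumulate step;
# a quantized network layer is just many of these chained together.
print(dp4a([12, -7, 3, 100], [5, 9, -2, 1], acc=0))  # 60 - 63 - 6 + 100 = 91
```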
The console angle is massive. PS5 and Xbox Series systems run AMD silicon without dedicated AI blocks exposed for games. If Redstone’s compute path delivers stable image quality and sensible performance on those boxes, developers get a unified feature set across PC and console. That could accelerate adoption more than any PC-only tech ever could—and it would undercut the notion that you need Tensor cores to ship modern reconstruction and frame gen.
So what should we watch for? First: real footage. Side-by-sides with motion stress tests (fine foliage, specular highlights, rapid camera pans) will tell us whether Redstone stabilizes the classic FSR artifacts. Second: latency (rough model below). Frame generation lives and dies by input response; if shader-based FG adds delay without good frame pacing, competitive players will nuke it from orbit. Third: developer adoption. Clean Unreal/Unity plugins and minimal integration labor are crucial; FSR 3’s frame generation arrived, but it was nowhere near an “in every game” rollout.
Finally, timeline. “By end of 2025” means this is AMD planting a flag, not something you’ll toggle next quarter. Nvidia won’t sit still, and Intel will keep iterating on XeSS. By the time Redstone ships, DLSS may have new tricks and broader support. The bar keeps rising.
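Back to that latency point for a second. Interpolation-style frame generation has to hold the newest rendered frame back so the generated in-between frame can be shown first, and that hold plus the cost of generating the frame is extra input delay. A crude model follows; the frame-hold assumption and the millisecond costs are illustrative placeholders, not AMD’s or Nvidia’s numbers:

```python
# Simplified latency model for interpolation-based frame generation.
# Assumes the newest rendered frame waits roughly half a rendered-frame
# interval plus the generation cost; all constants are illustrative.
def added_latency_ms(render_fps: float, gen_cost_ms: float) -> float:
    render_interval = 1000.0 / render_fps
    return render_interval / 2.0 + gen_cost_ms

for render_fps in (40, 60, 90):
    extra = added_latency_ms(render_fps, gen_cost_ms=1.5)
    print(f"{render_fps} rendered fps -> ~{extra:.1f} ms extra input latency "
          f"(assuming 1.5 ms to generate each frame)")
```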
If AMD really turns trained neural renderers into fast compute shaders, Redstone could bring quality upscaling, frame gen, and more to almost any GPU—GeForce included. The idea is huge; the proof will be motion stability, latency, and how painful the shader tax is on mid-tier hardware. End of 2025 can’t come fast enough.