
When Mark Cerny sat down alongside AMD engineers and dropped three new keywords—Neural Arrays, Radiance Cores, Universal Compression—my first thought wasn’t “Great, more ray tracing.” It was: “Finally, they’re tackling bandwidth and ditching brute-force hacks.” After a PS5 generation spent toggling between 60 fps Performance modes and 30 fps Quality modes (with timid RT reflections and FSR shimmering on my 65″ OLED), this is the fight I’ve been waiting for.
It took me a few minutes to see why this announcement feels different from the usual marketing spin. The trigger was the trio: a smarter light pipeline (Radiance Cores), a cooperative compute fabric (Neural Arrays), and aggressive data compression/decompression (Universal Compression). It’s the holy trinity of this generation: credible lighting, practical AI, and memory efficiency. If you’ve ever tweaked your upscaler settings to hide aliasing, you get the idea.
(Specs are tentative and final details are TBD; the focus is on memory efficiency, smarter RT, and real-time AI.)
Everyone will gush about Radiance Cores enabling more path tracing on a console. That’s expected. Oddly, the sleeper hit for me is Universal Compression. The ugly truth of PS5/Series bottlenecks isn’t a lack of raw TFLOPS; it’s data bandwidth and locality. Developers scramble with meshlets, aggressive streaming, hand-tuned BVHs, custom denoisers, and upscalers like FSR just to keep frames from tanking. When VRAM traffic spikes, frame-times spike. L2/L3 cache misses blow budgets. When 4K textures don’t fit, we slash anisotropy or cull draw distance, and RT budgets go out the window.
If Universal Compression delivers, it means stripping out “fat” from textures, normals, G-buffers, maybe even BVH trees or vertex streams—and decompressing them in dedicated hardware without stealing GPU cycles. The payoff: higher “effective” bandwidth without the usual energy penalty. Invisible magic that can flip a “near-stable” 40 fps mode into locked 60 fps.
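To make “effective bandwidth” concrete, here is back-of-the-envelope math: if some share of memory traffic compresses at a given average ratio, the same physical bus moves more logical bytes. Every number in this sketch (raw bandwidth, compressible share, ratio) is an illustrative assumption of mine, not a leaked spec.

```cpp
// Back-of-the-envelope: how transparent compression on part of the memory
// traffic translates into "effective" bandwidth. All numbers are
// illustrative assumptions, not PS6 specs.
#include <cstdio>

int main() {
    const double raw_bandwidth_gbps = 576.0; // assumed raw GDDR bandwidth, GB/s (hypothetical)
    const double compressible_share = 0.70;  // fraction of traffic that compresses (assumption)
    const double compression_ratio  = 2.0;   // average ratio on that share (assumption)

    // Traffic that used to cost 1 byte now costs 1/ratio bytes on the bus,
    // so the same physical bandwidth carries more logical data.
    const double bus_cost_per_logical_byte =
        (1.0 - compressible_share) + compressible_share / compression_ratio;
    const double effective_bandwidth = raw_bandwidth_gbps / bus_cost_per_logical_byte;

    std::printf("Effective bandwidth: %.0f GB/s (%.2fx the raw %.0f GB/s)\n",
                effective_bandwidth,
                effective_bandwidth / raw_bandwidth_gbps,
                raw_bandwidth_gbps);
    return 0;
}
```

With these placeholder numbers, a 2:1 ratio on 70% of the traffic turns 576 GB/s of raw bandwidth into roughly 1.5x that in effective throughput, which is exactly the kind of invisible headroom that separates a wobbly 40 fps mode from a locked 60.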
I game on a 4K 120 Hz TV with VRR. I crave 60 fps, but I still switch to 30 fps RT modes for atmosphere, then back to 60 when micro-hitches and blur kill immersion. It’s not that we can’t trace rays—it’s that we can’t trace them fast enough with enough samples, then denoise cleanly without destroying detail or input responsiveness. It’s a chain: RT acceleration, temporal accumulation, denoising, upscaling—any weak link, and the whole pipeline wobbles.
PS5 Pro clawed back GPU time with PSSR upscaling and higher clocks, but with only incremental RT gains and no leap in memory efficiency, you hit diminishing returns. That’s why this announced trinity resonates: Radiance Cores (next-gen RT well beyond AMD’s previous Ray Accelerators), Neural Arrays (ML denoisers, upscalers, and animation that fit inside a ~16 ms frame budget), and Universal Compression (to stop choking the data pipes).

“Ray tracing” alone means little; the crux is BVH traversal, shader handling (miss, hit, any-hit), intersection tests, and amortizing cost across complex scenes. AMD’s original console RT hardware did okay for first-gen, but lagged PC competition in perf/watt. Radiance Cores hint at a deeper redesign—more than just extra units.
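For a feel of the workload those cores would be accelerating, here is a toy, CPU-side sketch of stack-based BVH traversal with the classic slab test. It mirrors the algorithmic shape of the problem, not any actual AMD hardware; the structures and the tiny hand-built tree are mine.

```cpp
// Toy illustration of BVH traversal: walk axis-aligned bounding boxes with an
// explicit stack, pruning whole subtrees whose boxes the ray misses, and only
// reaching the (expensive) primitive test at the leaves.
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, inv_dir; };   // inv_dir = 1 / direction, precomputed per ray
struct AABB { Vec3 lo, hi; };

struct BVHNode {
    AABB box;
    int  left, right;   // child node indices, or -1 for none
    int  primitive;     // primitive index on leaves, -1 on interior nodes
};

// Classic slab test: clip the ray against the box's three axis-aligned slabs.
static bool hit_aabb(const Ray& r, const AABB& b) {
    const float lo[3]  = { b.lo.x, b.lo.y, b.lo.z };
    const float hi[3]  = { b.hi.x, b.hi.y, b.hi.z };
    const float org[3] = { r.origin.x, r.origin.y, r.origin.z };
    const float inv[3] = { r.inv_dir.x, r.inv_dir.y, r.inv_dir.z };
    float t0 = 0.0f, t1 = std::numeric_limits<float>::max();
    for (int a = 0; a < 3; ++a) {
        float tn = (lo[a] - org[a]) * inv[a];
        float tf = (hi[a] - org[a]) * inv[a];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;   // slabs don't overlap: the box is missed
    }
    return true;
}

// Iterative traversal with an explicit stack. RT hardware does the same job
// with dedicated box/triangle testers and its own scheduling.
static int traverse(const std::vector<BVHNode>& nodes, const Ray& ray) {
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                 // start at the root
    int last_hit = -1;
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!hit_aabb(ray, n.box)) continue;   // prune the whole subtree
        if (n.primitive >= 0) {                // leaf: a real tracer would run the
            last_hit = n.primitive;            // triangle test and track closest-t here
            continue;
        }
        if (n.left  >= 0) stack[sp++] = n.left;
        if (n.right >= 0) stack[sp++] = n.right;
    }
    return last_hit;
}

int main() {
    // Tiny hand-built BVH: one root box split into two leaf boxes.
    std::vector<BVHNode> nodes = {
        { { {-2, -2, -2}, { 2, 2, 2} },  1,  2, -1 },  // root
        { { {-2, -2, -2}, { 0, 2, 2} }, -1, -1,  0 },  // left leaf, primitive 0
        { { { 0, -2, -2}, { 2, 2, 2} }, -1, -1,  1 },  // right leaf, primitive 1
    };
    Ray r{ {-5, 0, 0}, {1.0f, 1e8f, 1e8f} };  // ray along +X (near-zero Y/Z direction)
    std::printf("leaf primitive hit: %d\n", traverse(nodes, r));
    return 0;
}
```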
Here’s what to watch for: how much BVH traversal throughput each Radiance Core actually delivers, how miss/hit/any-hit shaders get scheduled, and whether perf/watt finally closes the gap with PC parts.
The real payoff? Hybrid path tracing: not 4–5 brute-force bounces at 4K 60 fps, but clean direct lighting, one or two importance-sampled indirect bounces, plus ML-powered gap-filling. Feasible if Radiance Cores slash the per-ray cost and the denoisers deliver true real-time latency.
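Here is a structural sketch of that budget with every expensive step stubbed out. The function names (trace_ray, sample_bounce, denoise) are placeholders of mine, not any real API; the point is only where the work lands: primary visibility, direct lighting, a couple of importance-sampled bounces, then the denoiser.

```cpp
// Structural sketch of the hybrid budget: direct lighting, a small fixed
// number of importance-sampled indirect bounces, and a denoiser pass to fill
// in what the low sample count misses. Everything below is a stub.
#include <cstdio>

struct Color { float r, g, b; };
struct Hit   { bool valid; /* position, normal, material, ... */ };

// --- stubs standing in for the expensive hardware-accelerated steps ---
static Hit   trace_ray()                 { return {true}; }             // BVH traversal + intersection
static Color shade_direct(const Hit&)    { return {0.6f, 0.5f, 0.4f}; } // one shadow ray to a light
static Hit   sample_bounce(const Hit&)   { return {true}; }             // importance-sampled BRDF direction
static Color shade_bounce(const Hit&)    { return {0.1f, 0.1f, 0.1f}; }
static Color denoise(const Color& noisy) { return noisy; }              // ML denoiser/upscaler stand-in

static Color shade_pixel(int max_bounces) {
    Hit hit = trace_ray();                    // primary visibility
    if (!hit.valid) return {0, 0, 0};
    Color c = shade_direct(hit);              // clean direct lighting
    for (int b = 0; b < max_bounces; ++b) {   // 1–2 importance-sampled bounces
        hit = sample_bounce(hit);
        if (!hit.valid) break;
        Color ind = shade_bounce(hit);
        c = { c.r + ind.r, c.g + ind.g, c.b + ind.b };
    }
    return denoise(c);                        // ML gap-filling covers the rest
}

int main() {
    Color c = shade_pixel(/*max_bounces=*/2);
    std::printf("pixel: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```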
“AI” is an umbrella term. The clue is in “arrays” and “cooperative.” I interpret this as a woven fabric of specialized compute blocks (matrix-multiply, convolutions) that work shoulder-to-shoulder with the GPU and share data without round-tripping through main memory. The standout use cases: denoising and upscaling.
On PC, ML upscalers and denoisers hog GPU cycles or force a CPU offload. On PS6, Neural Arrays are peer-level compute resources that can run denoising, upscaling, and ML-driven animation alongside the GPU’s raster and RT work.
The magic is low latency and high bandwidth between the GPU and the Neural Arrays. No more stalling the GPU while an ML kernel crunches numbers in isolation.
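A toy way to see the scheduling win: let a “denoiser” worker trail the “GPU” by one frame so the two overlap instead of serializing. Threads and sleeps stand in for hardware queues here; nothing in this sketch is a real console API, and the millisecond figures are arbitrary.

```cpp
// Toy pipelining example: denoise frame N-1 while rendering frame N, so the
// steady-state cost is max(render, denoise) instead of their sum.
#include <chrono>
#include <cstdio>
#include <thread>

using namespace std::chrono_literals;

static void render_frame(int n)  { std::printf("GPU : render frame %d\n", n);  std::this_thread::sleep_for(8ms); }
static void denoise_frame(int n) { std::printf("ML  : denoise frame %d\n", n); std::this_thread::sleep_for(4ms); }

int main() {
    std::thread ml;                            // denoiser worker for the previous frame
    for (int frame = 0; frame < 4; ++frame) {
        if (ml.joinable()) ml.join();          // previous frame's denoise must be done
        if (frame > 0)                         // kick off denoise of frame N-1 ...
            ml = std::thread(denoise_frame, frame - 1);
        render_frame(frame);                   // ... while the GPU renders frame N
    }
    if (ml.joinable()) ml.join();
    denoise_frame(3);                          // tail: denoise the final frame
    return 0;
}
```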

Think of Universal Compression as a universal ZIP for GPU assets: textures, vertex data, BVH nodes, maybe even G-buffer layers. But it’s loss-tuned for visual fidelity and handled by dedicated logic, so decompression costs a few cycles while cutting memory traffic by a big factor.
Practical gains include higher effective bandwidth, smaller footprints for 4K textures and BVH data, and faster streaming and load times.
Given that even modern GPUs strain under uncompressed 4K assets and heavyweight denoisers, this compression layer could be the single biggest efficiency boost under the hood.
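Block compression that GPUs already sample natively gives a sense of scale: BC7 stores 16 bytes per 4×4 texel block, i.e. 1 byte per texel versus 4 bytes for uncompressed RGBA8. Presumably Universal Compression generalizes that idea to more data types; the arithmetic below is plain math, not a PS6 figure.

```cpp
// Rough arithmetic for why compressing GPU assets pays off: a single "4K"
// texture with its mip chain, uncompressed RGBA8 versus BC7.
#include <cstdio>

int main() {
    const long long w = 4096, h = 4096;         // a "4K" texture
    const double mip_chain_factor = 4.0 / 3.0;  // a full mip chain adds ~33%

    const double rgba8_bytes = double(w * h) * 4.0 * mip_chain_factor; // 4 bytes/texel
    const double bc7_bytes   = double(w * h) * 1.0 * mip_chain_factor; // 1 byte/texel

    std::printf("RGBA8: %.1f MiB   BC7: %.1f MiB   (%.0fx smaller)\n",
                rgba8_bytes / (1024.0 * 1024.0),
                bc7_bytes   / (1024.0 * 1024.0),
                rgba8_bytes / bc7_bytes);
    return 0;
}
```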
Here’s the practical picture for teams gearing up to target PS6: budget around 4K at 60 fps as the baseline; for 120 Hz modes, cut RT ray counts or drop to a 1440p internal resolution and let the Neural Array upscaler carry the rest.
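The arithmetic behind that caveat: 120 Hz halves the frame budget, while dropping from 4K to a 1440p internal resolution cuts shaded pixels by 2.25x, so the two roughly cancel only if the upscaler itself stays cheap. A quick check:

```cpp
// Frame-budget arithmetic for 4K/60 versus 1440p-internal/120. Pure math,
// no platform specifics.
#include <cstdio>

int main() {
    struct Mode { const char* name; double fps; long long w, h; };
    const Mode modes[] = {
        { "4K native @ 60",        60.0, 3840, 2160 },
        { "1440p internal @ 120", 120.0, 2560, 1440 },
    };
    for (const Mode& m : modes) {
        const double frame_ms = 1000.0 / m.fps;          // time budget per frame
        const double mpix     = double(m.w * m.h) / 1e6; // pixels shaded per frame
        std::printf("%-22s %5.2f ms/frame, %5.2f Mpixels shaded per frame\n",
                    m.name, frame_ms, mpix);
    }
    // 8.29 Mpixels vs 3.69 Mpixels is a 2.25x reduction; 60 -> 120 Hz halves
    // the time budget, so the savings only hold if upscaling is near-free.
    return 0;
}
```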
Project Amethyst’s reveal isn’t just another GPU clock-speed bump. Radiance Cores promise hardware-level ray tracing tuned for common console paths. Neural Arrays add real-time AI horsepower without stalling the GPU. Universal Compression slashes data bloat, boosting effective bandwidth. Together, they target the true weakness of modern consoles: data starvation. For players, that means stable 4K at 60 fps (even with RT), sharper visuals, and faster load and streaming times. For developers, it’s a new set of pipelines to balance RT, AI, and compression budgets. If these pillars deliver on their promises, PS6 could feel like a quantum leap in console graphics efficiency, finally chasing the right bottleneck instead of just raw brute force.