
AMD FSR Redstone in detail – milestone and real counterpoint to NVIDIA’s DLSS?

Radiance Caching is one of the most ambitious elements within AMD’s FSR Redstone technology and the most forward-looking component of the entire render pipeline. It is not just an image enhancement tool like upscaling or frame generation, but a deep intervention in the calculation of global illumination. Radiance Caching addresses the most costly component of path tracing, namely the repetition of complex lighting calculations across multiple bounces. This is AMD’s first foray into an area that has traditionally been considered one of the main bottlenecks in computer graphics.

Functional principle of radiance caching

The basic principle of a radiance cache is not new, but the ML-based implementation is a decisive step. In a classic path tracer, indirect light contributions are calculated per pixel by emitting new rays for each bounce and collecting their interactions with the scene. With every additional bounce, the number of rays, and with it the computational effort, grows exponentially, and complex scenes compound the problem. Radiance caching serves as an acceleration layer here: instead of fully calculating each indirect contribution, representative light information is stored in a cache, which can then serve as an approximation for many similar pixels.
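To make the cost argument tangible, here is a toy Python sketch, not AMD's implementation, of how a cache lookup short-circuits the exponential growth in ray count. The branching factor, recursion depth and cache key are purely illustrative; a real renderer would key the cache on a spatial hash of the hit point, not the bounce depth:

```python
def trace_path(depth, max_bounces, cache, rays_counter):
    """Toy path-trace recursion: without a cache the ray count grows
    exponentially with every bounce; a cache hit terminates the path early."""
    rays_counter[0] += 1                   # count every ray we "trace"
    if depth >= max_bounces:
        return 0.1                         # schematic ambient term
    key = depth                            # stand-in for a spatial hash of the hit
    if cache is not None and key in cache:
        return cache[key]                  # reuse stored radiance, no further rays
    # Spawn follow-up rays for the next bounce (branching factor 2 here)
    radiance = 0.5 * sum(trace_path(depth + 1, max_bounces, cache, rays_counter)
                         for _ in range(2))
    if cache is not None:
        cache[key] = radiance              # store the approximation for later paths
    return radiance

# Compare ray counts with and without a cache over many camera paths
no_cache = [0]
for _ in range(100):
    trace_path(0, 4, None, no_cache)

cached, cache = [0], {}
for _ in range(100):
    trace_path(0, 4, cache, cached)

print(no_cache[0], cached[0])  # the cached run issues far fewer rays
```

In this deliberately simplified model the cached run issues only a small fraction of the rays of the uncached one, which is exactly the trade the real technique makes: stored approximations instead of freshly traced bounces.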

AMD now replaces the heuristic radiance cache with a neural model. This model learns the spatial and photometric relationships of a scene during training and is therefore able to predict correct, or at least very plausible, radiance values after just a few ray intersections. While a classic radiance cache requires hand-tuned rules, filters and averages, the ML approach relies exclusively on statistically learned relationships between material, geometry and light distribution.

According to AMD, the cache can be activated as early as the second intersection point. The ML model thus recognizes early on what kind of indirect light is to be expected, without having to trace the complete path. As inputs, the model uses quantities such as normals, depth and relevant material channels, which the inference combines into a global illumination value that would otherwise only be available after several bounces.
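Schematically, the handover to the cache after the second intersection could look like the following sketch. The feature packing and the fixed linear "model" are stand-ins, since AMD has not published the actual network architecture or feature set; only the listed inputs (normals, depth, material channels) come from the article:

```python
# Hypothetical stand-in for the learned radiance cache: a fixed linear map
# from per-hit G-buffer features to an RGB indirect-light estimate.
W = [[0.05, 0.01, 0.02],   # normal.x
     [0.02, 0.06, 0.01],   # normal.y
     [0.01, 0.02, 0.05],   # normal.z
     [0.03, 0.03, 0.03],   # depth
     [0.04, 0.00, 0.00],   # albedo.r
     [0.00, 0.04, 0.00],   # albedo.g
     [0.00, 0.00, 0.04]]   # albedo.b

def features(normal, depth, albedo):
    # Pack the inputs AMD names (normals, depth, material channels) into one vector
    return list(normal) + [depth] + list(albedo)

def cache_query(feat):
    # feat (7) x W (7x3) -> clamped RGB radiance estimate
    rgb = [sum(f * W[i][c] for i, f in enumerate(feat)) for c in range(3)]
    return [max(v, 0.0) for v in rgb]

def shade_path(hits):
    """Trace until the second intersection, then query the cache instead of
    spawning further bounce rays."""
    radiance = [0.0, 0.0, 0.0]
    for i, (normal, depth, albedo) in enumerate(hits):
        if i >= 1:  # second intersection: hand the rest of the path to the cache
            est = cache_query(features(normal, depth, albedo))
            radiance = [r + e for r, e in zip(radiance, est)]
            break
        radiance = [r + 0.1 * a for r, a in zip(radiance, albedo)]  # direct term
    return radiance

hit_a = ((0.0, 1.0, 0.0), 2.5, (0.8, 0.2, 0.2))
hit_b = ((1.0, 0.0, 0.0), 3.1, (0.3, 0.3, 0.9))
out = shade_path([hit_a, hit_b])
print(out)  # an RGB estimate after only two intersections
```

The point of the sketch is the control flow, not the numbers: after the second hit, no further rays are spawned, because the learned map supplies the remaining bounces' contribution in one inference step.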

The main advantage is the drastic reduction in the number of ray tracing samples required. Path tracing with fully calculated bounces is still expensive, even on the latest hardware. If an ML model reliably estimates the light flow of a scene, entire calculation stages can be skipped. The benefit is not only a higher frame rate, but also more stable image reproduction, since the cache can remain consistent across many frames.

Another aspect is the consistency of the lighting data. Because the ML model has already learned the relevant structures, temporal fluctuations and noise can be damped more effectively. The radiance cache therefore serves not only as a performance optimizer, but also as a quality stabilizer.
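One common way such frame-to-frame stabilization works in renderers generally, independent of AMD's undisclosed method, is exponential accumulation of the cached values over time, so that per-frame noise is averaged away. A minimal sketch:

```python
def temporal_accumulate(history, current, alpha=0.1):
    """Exponential moving average across frames: a schematic illustration of
    how a cache that persists over many frames damps per-frame noise."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Noisy per-frame estimates of the same radiance value (true mean: 1.0)
frames = [[1.2], [0.8], [1.1], [0.9]] * 50
state = frames[0]
for f in frames[1:]:
    state = temporal_accumulate(state, f)
print(round(state[0], 2))  # settles close to the mean of 1.0
```

The fluctuation of the raw inputs (0.8 to 1.2) is reduced to a few percent in the accumulated value, which is the "quality stabilizer" effect described above.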

Comparison with NVIDIA’s technologies

NVIDIA pursues similar goals, but with different strategies. The relevant counterparts are:

  1. NVIDIA RTX Direct Illumination (RTXDI)

    RTXDI uses reservoir sampling and importance sampling to calculate direct illumination with many light sources more efficiently. It is not a radiance cache in the strict sense, but an acceleration method that optimizes the direct-lighting stage of the light transport chain.

  2. NVIDIA ReSTIR GI

    This is the actual point of comparison. ReSTIR GI accelerates path tracing by reusing samples from previous pixels and frames, drastically sharpening the importance sampling. This, too, yields an approximation of global illumination without recalculating all rays from scratch. However, ReSTIR GI remains an algorithmic, non-ML method: the calculations follow deterministic rules built on stochastic optimization rather than learned structures.

  3. NVIDIA DLSS Ray Reconstruction

    Even though Ray Reconstruction is a denoising and detail reconstruction model and does not simulate lighting, there are functional overlaps. Both methods work with learned knowledge about light distribution. However, ray reconstruction is limited to restoring missing pixel information and does not actively intervene in the calculation of the actual light path.
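For contrast with AMD's learned approach, the algorithmic core that RTXDI and ReSTIR build on is weighted reservoir sampling: streaming over candidate samples and keeping exactly one, with probability proportional to its weight, in constant memory. A minimal sketch:

```python
import random

def reservoir_sample(candidates, weight, rng):
    """Weighted reservoir sampling as used by RIS/ReSTIR-style estimators:
    stream over candidates, keep one, chosen proportionally to its weight."""
    chosen, w_sum = None, 0.0
    for c in candidates:
        w = weight(c)
        w_sum += w
        # Replace the current pick with probability w / w_sum
        if w_sum > 0 and rng.random() < w / w_sum:
            chosen = c
    return chosen, w_sum

# Toy check: candidate 2 has twice the weight of the others and should be
# picked about twice as often (here: half of all picks).
rng = random.Random(42)
counts = [0, 0, 0]
for _ in range(30000):
    c, _ = reservoir_sample([0, 1, 2], lambda c: 2.0 if c == 2 else 1.0, rng)
    counts[c] += 1
print(counts)
```

ReSTIR then reuses these reservoirs spatially and temporally across pixels and frames, which is the sample reuse described above; the selection step itself stays this simple and fully deterministic in its rules, with no learned component.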

What they have in common is that both manufacturers are trying to bring ray tracing to performance levels fit for the mass market. The difference lies in the approach: NVIDIA optimizes and reconstructs existing data, while AMD models light before it has been fully calculated.

NVIDIA collects real data from the frame and reconstructs the missing parts.

AMD replaces parts of the calculation with a learned prediction.

This means that NVIDIA acts reactively, whereas AMD acts proactively.

This difference leads to different challenges. In addition to geometric and material consistency, an ML radiance cache must also guarantee long-term physical plausibility. NVIDIA, on the other hand, must ensure that the generalization of its denoising models does not produce exaggerated results.

Comparable or not?

AMD's Radiance Caching is a potentially deeper intervention in the render pipeline than anything NVIDIA has publicly offered to date as an ML-based GI process. While NVIDIA uses ReSTIR GI, an extremely optimized variant of classic path tracing, and DLSS Ray Reconstruction refines the final output, AMD targets an earlier stage of the pipeline. The approach could be more efficient in the long term, but requires a very robust training model that generalizes across a wide range of scenes.

The state of comparability can therefore be categorized as follows:

NVIDIA has a mature GI acceleration stack based on mathematical and stochastic methods that has already proven itself in many real-world scenes. AMD, on the other hand, uses an ML-based approach that can theoretically enable greater acceleration, but must first prove its efficiency in practice.

Radiance Caching is therefore a pioneering technology and possibly the prelude to a new generation of fully ML-based lighting models. Whether AMD can establish a long-term advantage here depends on how well the models work in different scene types and how closely they are later integrated into engines. NVIDIA currently has the greater experience in GI acceleration and ML inference, while AMD is taking a more experimental but potentially more disruptive route with radiance caching.

And where are the benchmarks?

Radiance Caching cannot currently be tested in a game the way FSR upscaling or Ray Regeneration can, because AMD has released this feature exclusively to developers and it is not scheduled to appear in games until 2026. The current Redstone generation contains the architecture for it, but no publicly usable implementation in a real title. AMD confirms that Radiance Caching is prepared in the SDK, but there is not yet an end-user path. At present it can only be tested in an engine context: you have to integrate the FidelityFX SDK v2 into your own engine or a UE fork and enable the planned Radiance Caching stages there as soon as AMD releases the corresponding ML modules. The current SDK version is "Redstone-ready", but does not yet ship a finished GI sample that end users could test.

This means that there is no game, no driver function and no benchmark scene that already uses Radiance Caching today.

Radiance Caching sits deep in the GI path and replaces parts of the indirect light after the first bounces; it is not a post-processing stage. Games must be explicitly adapted so that the renderer queries radiance cache data instead of calculating real bounces, and the feature does not work without engine customization. What you can realistically do right now: as a player or hardware tester, nothing. Engine developers can use the FidelityFX SDK v2 or the UE FSR plugin to prepare projects for Radiance Caching, but it will only become testable once AMD rolls out a public sample or an official ML GI module.

Comments

ipat66

Nice to see that AMD has taken a big step with Redstone and, while not quite on a par with NVIDIA's DLSS 4, has come within striking distance.

The challenge will be getting this technology into as many games as possible, because there AMD still lags considerably behind NVIDIA ...

There is still a little time until the next AMD GPU generation to then offer a complete package against NVIDIA ...

NVIDIA is getting tougher competition, and that is a good thing :)

LEIV

Does this actually run on RDNA 3.5 now?
Is there a free game or demo you can use to test it?

Onkel.Tom

What about the input lag, i.e. from mouse movement to action on screen?

In my experience, frame generation is window dressing. What good are 99 fps on screen if the game still runs internally with the input lag of 33 fps?

Igor Wallossek

The input lag can't really change :D NV has Reflex 2...

Igor Wallossek

Cyberpunk, for example, but that only works with RDNA4, i.e. the RX 9000 series.

eastcoast_pete

Since Redstone is currently limited to RDNA4, it would be interesting to know whether an RDNA 3 or 3.5 GPU could benefit from XeSS2 in Cyberpunk, especially if the GPU has enough compute power and VRAM. Cyberpunk and a few other games do officially support "XeSS2".
FSR has, for example, helped out my Intel GPUs; maybe it also works the other way round?

Igor Wallossek

In places it actually looks better, but it is far from being as performant.

Some criticize the frame pacing, but I think that can still be fixed quickly. Just take a look at my individual frame time comparisons (blue curve).

olligo

The progress is clearly visible in the comparison images in Cyberpunk. I'm curious how many more years it will really take until even DLSS and FSR no longer show any artifacts, ghosting or blurry textures.
I could well imagine that a level is reachable where NVIDIA's and AMD's variants work at exactly the same quality and you won't even see any major differences in comparison images anymore.
My prediction: perfected within the next 5 years :) (y)

The_Invisible

You can't compare them like that at all. With DLSS FG, Reflex is always implicitly active; with FSR FG it is missing. Reflex not only lowers latency, it also acts as an automatic FPS limiter and optimizes GPU utilization (never 100%) to improve the frame times.

AMD should simply have made Anti-Lag 2 mandatory for ML frame generation here.

donjotzloch

Can anyone help me?
I installed the latest AMD driver and wanted to test Redstone in Black Ops 7.
In Black Ops I enabled FSR4, but the driver does not recognize it and accordingly does not turn on the upscaling.
Is there anything I need to watch out for, or is it because I play BO7 via Xbox Game Pass?
The driver also detects it as BO6 and not as BO7.


About the author

Igor Wallossek

Editor-in-chief and eponym of igor'sLAB, the content successor of Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the quality demands of web content and the challenges of new media such as YouTube with his own channel.

Computer nerd since 1983, audio freak since 1979 and pretty much open to anything with a plug or battery for over 50 years.

