During the second quarter of 2022, AMD will roll out FidelityFX Super Resolution 2.0, an evolution of the existing technique rather than a radical departure. Once again, the goal is to play at higher resolutions without sacrificing frame rate and without requiring dedicated hardware units, as NVIDIA's solution does. What's more, the algorithm runs on any GPU, not just AMD's own.
However, AMD FSR 2.0 adds a number of interesting elements that take the algorithm further and make it a far more serious alternative to NVIDIA's almighty DLSS 2.0, a fully proprietary technology tied to Jensen Huang's company. Lisa Su's company, by contrast, is betting on open source: not only can anyone use FSR, but anyone can also modify the algorithm beyond what AMD itself proposes.
What is AMD FSR 2.0?
Like its predecessor, this is a super resolution algorithm: it takes a finished image and regenerates it with a greater number of pixels. Bear in mind that when we do this, the number of points on screen without color information increases, which is why algorithms are needed to fill those gaps. FSR 2.0 is one of them, oriented towards video games: at the end of each frame, the graphics card runs a series of processes that infer the information missing from the image.
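To see why gap-filling is needed at all, here is a minimal sketch (not AMD's code, and deliberately naive): doubling an image's resolution places the known pixels on a larger grid and leaves most output pixels with no color information at all.

```python
# Naive sketch: doubling resolution leaves most output pixels empty.
def upscale_with_gaps(image, factor=2):
    """Place each source pixel on a larger grid; unknown pixels stay None."""
    h, w = len(image), len(image[0])
    out = [[None] * (w * factor) for _ in range(h * factor)]
    for y in range(h):
        for x in range(w):
            out[y * factor][x * factor] = image[y][x]
    return out

small = [[10, 20],
         [30, 40]]
big = upscale_with_gaps(small)
missing = sum(px is None for row in big for px in row)
print(missing)  # 12 of the 16 output pixels have no color and must be filled
```

It is exactly those 12 empty positions that a super resolution algorithm such as FSR has to fill with plausible color values.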
However, it is the approaches based on deep learning, a discipline of artificial intelligence, that have become popular in the market, as in the case of NVIDIA's DLSS, which uses so-called convolutional networks. We will not go into what they are, beyond the fact that they require units capable of executing matrix math very quickly, which is what the Tensor Cores in RTX cards provide. AMD's graphics hardware lacks such units, so with FSR 2.0 the company has opted for a solution better suited to the internal makeup of its GPUs.
Thus, in the previous version, the creators of the Radeon opted for the Lanczos algorithm as part of the process of generating higher-resolution versions of already rendered images. The problem is that, despite achieving a higher frame rate, image quality was sacrificed in the process, which is what led AMD to create a more advanced version.
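For the curious, this is the family of filter FSR 1.0 built on. The real implementation is a highly optimized GPU shader; the sketch below is just a plain 1-D Lanczos-2 resampler in Python, to show the idea of weighting nearby source samples with a windowed sinc kernel.

```python
import math

# Hedged sketch of Lanczos resampling (a=2), not AMD's shader code.
def lanczos(x, a=2):
    """Lanczos kernel: a windowed sinc that weights nearby samples."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample(samples, out_len, a=2):
    """Upscale a 1-D signal to out_len points via Lanczos weighting."""
    n = len(samples)
    scale = n / out_len
    out = []
    for i in range(out_len):
        src = (i + 0.5) * scale - 0.5            # position in source space
        lo, hi = math.floor(src) - a + 1, math.floor(src) + a
        num = den = 0.0
        for j in range(lo, hi + 1):
            w = lanczos(src - j, a)
            num += w * samples[min(max(j, 0), n - 1)]  # clamp at the edges
            den += w
        out.append(num / den)
    return out

print(resample([0.0, 1.0, 1.0, 0.0], 8))  # 4 samples stretched to 8
```

Note that the output can slightly overshoot the input range near sharp transitions (ringing), one of the artifacts a purely spatial filter like this cannot avoid.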
Temporality is key in AMD FSR 2.0
First of all we have to define what we mean by temporality. In AMD FSR 1.0 the problem was that all the information used to generate the higher-resolution frame came exclusively from the frame just rendered, which is not enough to produce an image as close as possible to what the GPU would render natively at the output resolution.
But where can the additional information be obtained? From the image buffers of the previous frame that are still in the video memory used by the graphics card. Specifically, AMD has described three of them, albeit rather vaguely, so we are going to define them here so you have a much better understanding of how it all works.
FSR 2.0 relies in particular on motion vectors to obtain the new information that makes the algorithm more visually precise, since they tell us the position of each object relative to the previous frame.
The term can be complex, but it is explained very easily with the following steps:
- Each object on the screen is assigned an ID or identifier stored in a variable.
- One of the image buffers generated each frame stores not color, depth, albedo or other graphic information, but rather the ID of each element on the screen.
- The position of each ID in the current and previous frame is compared. The objective is to compute the derivative of distance with respect to time, that is, the velocity or motion vector. IDs that do not appear in both frames' buffers are discarded, as they have either moved out of view or have only just been rendered.
- With this information, the GPU can predict the exact position of the object in both frames, so it can retrieve the visual information needed to perform the reconstruction.
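The steps above can be sketched in a few lines. This is a hypothetical illustration, not FSR's actual implementation (which works per pixel on the GPU): each object ID's position is compared between two frames to derive a motion vector, and IDs missing from either frame are skipped.

```python
# Hypothetical sketch of the steps above: compare each object ID's
# position between consecutive frames to derive a motion vector.
# IDs absent from either frame are skipped (off-screen or newly drawn).
def motion_vectors(prev_ids, curr_ids):
    """Map object ID -> (dx, dy) displacement between two frames."""
    vectors = {}
    for obj_id, (cx, cy) in curr_ids.items():
        if obj_id in prev_ids:               # must exist in both frames
            px, py = prev_ids[obj_id]
            vectors[obj_id] = (cx - px, cy - py)
    return vectors

prev_frame = {"player": (100, 50), "crate": (200, 80)}
curr_frame = {"player": (104, 48), "rocket": (10, 10)}  # crate left view
print(motion_vectors(prev_frame, curr_frame))  # {'player': (4, -2)}
```

The "crate" produces no vector because it scrolled out of view, and the "rocket" produces none because it was just rendered, exactly the two discard cases described above.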
However, there is a catch: these motion vectors are generated during the process of rendering each frame, in the middle of the 3D pipeline. Because they are already common in many post-processing algorithms, many current games can adopt FSR 2.0 without problems, but for many others it is an additional task that requires deeper code changes.
Depth and color data
The two other frame-related buffers that FSR 2.0 takes information from are the color buffer and the depth buffer. The first needs little explanation, since it simply holds the color value of each pixel; it matters here because that value can change from one frame to the next, and the previous frame's colors form part of the history the algorithm draws on.
The other is the depth buffer, which records each object's distance from the camera along that axis. Normally it is used to decide whether one pixel has drawing priority over another. Here it is also used to triangulate the motion vectors with respect to the camera, allowing the algorithm to generate the corrected frame more accurately.
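One common way temporal algorithms combine these two buffers, sketched below with made-up numbers and not taken from FSR's actual shader, is to blend the current color with the history color unless the depths disagree, a sign the pixel was occluded or disoccluded between frames.

```python
# Illustrative sketch (not FSR's shader): use the depth buffer to
# decide whether a history color sample is still trustworthy.
def blend_with_history(curr_color, hist_color, curr_depth, hist_depth,
                       depth_tolerance=0.01, history_weight=0.9):
    """Blend current and previous colors unless depth disagrees."""
    if abs(curr_depth - hist_depth) > depth_tolerance:
        return curr_color                    # disocclusion: drop history
    return (history_weight * hist_color
            + (1.0 - history_weight) * curr_color)

# Depths match: history dominates the blended result (~0.62)
print(blend_with_history(0.8, 0.6, curr_depth=0.50, hist_depth=0.50))
# Depths diverge: history is rejected, current color wins (0.8)
print(blend_with_history(0.8, 0.6, curr_depth=0.50, hist_depth=0.90))
```

The `depth_tolerance` and `history_weight` values here are arbitrary; a real implementation tunes such thresholds per pixel.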
Quality mode and higher technical requirements for AMD FSR 2.0
The additional temporal data that AMD FSR 2.0 uses over its previous version means working with a much larger data set. So although the visual quality obtained is much higher, we are going to need a more powerful graphics card. Let's not forget that these algorithms add milliseconds to the time it takes to render the scene, in exchange for generating the image in less time than rendering it from scratch at the output resolution.
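A back-of-the-envelope calculation, with entirely made-up frame times, shows the trade-off: the upscaling pass only pays off while rendering at the internal resolution plus the algorithm's cost stays below the native render time.

```python
# Hypothetical numbers only: real frame times vary by game and GPU.
native_4k_ms = 25.0      # assumed cost of rendering natively at 4K
render_1440p_ms = 12.0   # assumed cost at the lower internal resolution
upscale_ms = 1.5         # assumed cost of the FSR 2.0 pass itself

total_fsr_ms = render_1440p_ms + upscale_ms
print(total_fsr_ms < native_4k_ms)   # True: the upscaled path is faster
print(round(1000 / total_fsr_ms))    # ~74 fps vs 40 fps native
```

On a weaker card the upscale pass takes longer, shrinking that margin, which is why a more capable GPU is needed to get the full benefit.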
For the moment AMD has presented only the Quality mode, and in a single game, Deathloop, which makes us think that the remaining modes will continue to work as in FSR 1.0 and most of them will not use motion vectors. In other words, the catch is that a good number of titles will be compatible with FSR, but very few will support the second version of the algorithm, which will be tied to the Quality mode.
Finally, these requirements will make the list of compatible games much shorter than with the current FidelityFX Super Resolution, and as with NVIDIA's DLSS, AMD will be announcing new games compatible with FidelityFX Super Resolution 2.0 with each new driver update.