
If you have an NVIDIA RTX GPU, your games may soon look better on your PC

Temporal Anti-Aliasing


In order to understand DLAA, we must first understand how Temporal Anti-Aliasing, or TAA, works, since DLAA evolves from it. And how does it work? In a way very similar to texture interpolation, but applied only to the edges that suffer from jaggies. The algorithm samples the color values of nearby pixels and builds a transition between them, so that the affected edge looks smoother and the staircase effect disappears, at least apparently.
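To make the idea concrete, here is a minimal numpy sketch of this kind of spatial smoothing. It is purely illustrative, not NVIDIA's implementation: the luminance threshold and the 3x3 box blur are assumptions chosen for simplicity.

```python
import numpy as np

def spatial_aa(img, threshold=0.1):
    """Toy spatial anti-aliasing: blend each pixel with its neighbors
    wherever a strong luminance edge (a potential jaggy) is detected.
    img: float32 array of shape (H, W, 3) with values in [0, 1]."""
    # Per-pixel luminance, used to find hard edges.
    luma = img @ np.array([0.299, 0.587, 0.114])
    # Horizontal and vertical luminance differences.
    gx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    gy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edge = (gx + gy) > threshold  # mask of jagged-edge candidates
    # 3x3 box blur acts as the "transition" color between neighbors.
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = sum(padded[y:y + img.shape[0], x:x + img.shape[1]]
                  for y in range(3) for x in range(3)) / 9.0
    # Replace only the edge pixels with the blended color.
    out = img.copy()
    out[edge] = blurred[edge]
    return out
```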

The problem is that doing this with the information of the current frame alone is not very precise, which is why information from the previous frame is used as well. For this, a temporal buffer is kept that assigns an ID to each object on screen, which then helps the GPU know the speed and direction of movement of each of them. The GPU also needs to be able to pull data from previous frames to perform the anti-aliasing process more accurately.
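The temporal half of the technique can be sketched in the same spirit. The sketch below assumes a per-pixel motion vector buffer and a stored copy of the previous frame; real TAA implementations add extra steps omitted here, such as clamping the history color against the current frame's neighborhood to reject stale data.

```python
import numpy as np

def taa_resolve(current, history, motion, alpha=0.1):
    """Blend the current frame with the reprojected previous frame.
    current, history: (H, W, 3) float32 color buffers.
    motion: (H, W, 2) per-pixel motion vectors in pixels (dy, dx),
            pointing to where each pixel was in the previous frame."""
    h, w = current.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch each pixel's color from its previous position.
    prev_y = np.clip((ys + motion[..., 0]).round().astype(int), 0, h - 1)
    prev_x = np.clip((xs + motion[..., 1]).round().astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]
    # Exponential blend: mostly history, a little of the new frame,
    # which accumulates many samples over time.
    return alpha * current + (1.0 - alpha) * reprojected
```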

So far, Temporal Anti-Aliasing is the most efficient way to avoid jaggies, but NVIDIA wanted to give it a twist with DLAA.

What is NVIDIA DLAA?


As the name suggests, DLAA is anti-aliasing through deep learning, making use of the Tensor Cores of the RTX 20 and RTX 30 series gaming graphics cards, which are designed to accelerate precisely these kinds of algorithms.

The first advantage is the ability to recognize which pixels have changed from one frame to the next, so the GPU wastes less time running an algorithm equivalent to TAA. This translates into fewer milliseconds to generate a frame of the same quality, and therefore a higher FPS rate in games. This is similar to DLSS, although with differences we will see later.
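NVIDIA has not published DLAA's internals, but the claim above, limiting the work to pixels that actually changed, can be illustrated with a simple mask:

```python
import numpy as np

def changed_pixels(current, previous, eps=1e-3):
    """Mask of pixels whose color changed between two frames, so an
    anti-aliasing pass can be limited to them (illustrative only)."""
    return np.any(np.abs(current - previous) > eps, axis=-1)
```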

But the biggest advantage of DLAA is that, being a deep learning algorithm, it can be trained to reproduce the nuances of images with higher-quality anti-aliasing. Where DLSS trains its algorithm with higher-resolution images, DLAA trains the GPU with high-quality jaggy-removal techniques, which the algorithm learns to observe and then apply at a fraction of the computing power they would normally require.
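In broad strokes, the training setup this describes is ordinary supervised learning: the network sees an aliased frame and is pushed toward a ground-truth version rendered with far more expensive anti-aliasing. Here is a heavily hedged PyTorch sketch of such a loop; the tiny network and the random tensors are placeholders, since the actual architecture and training data are not public.

```python
import torch
from torch import nn

# Hypothetical stand-in network; NVIDIA has not disclosed DLAA's architecture.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    # In reality these would come from a game engine: "aliased" is the raw
    # rendered frame, "reference" the same frame rendered with very
    # expensive anti-aliasing (e.g. heavy supersampling).
    aliased = torch.rand(8, 3, 64, 64)     # placeholder batch
    reference = torch.rand(8, 3, 64, 64)   # placeholder ground truth
    loss = loss_fn(model(aliased), reference)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```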

DLAA derives from DLSS, but it is not the same

(Image: comparison of NVIDIA DLAA, TAA, and DLSS)

The big difference between DLSS and DLAA is that the latter is not meant to generate higher-resolution images: it keeps the resolution of the original sample and focuses on improving its image quality. For the moment DLAA has been applied in very few games and is still very immature, but not every game needs a resolution boost, and for many users image quality is preferable to resolution.
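As a worked example with assumed, illustrative resolutions (not NVIDIA figures), the difference in rendered pixel counts looks like this:

```python
# DLSS renders fewer pixels and infers a larger image;
# DLAA renders and outputs at the same resolution.
dlss_render, dlss_output = (1280, 720), (2560, 1440)
dlaa_render, dlaa_output = (2560, 1440), (2560, 1440)

pixels = lambda res: res[0] * res[1]
print(f"DLSS: {pixels(dlss_render):,} px rendered -> {pixels(dlss_output):,} px shown")
print(f"DLAA: {pixels(dlaa_render):,} px rendered -> {pixels(dlaa_output):,} px shown")
```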

The question here would be: what do you prefer, more pixels or more “beautiful” pixels? Many games use image post-processing techniques, which take the final buffer before it is sent to the monitor and apply a series of filters and graphical effects to it. DLAA can learn that these exist and apply them itself to improve the look of the final image we see on the monitor.
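The post-processing pattern described here, a chain of filters applied to the final color buffer, looks something like the sketch below; the two filters are toy stand-ins, not effects from any particular game.

```python
import numpy as np

def post_process(frame, effects):
    """Run the final color buffer through a chain of filters
    before it is sent to the monitor."""
    for effect in effects:
        frame = effect(frame)
    return frame

# Two toy filters as stand-ins for real post-processing effects:
gamma = lambda f: np.clip(f, 0, 1) ** (1 / 2.2)       # gamma correction
tint  = lambda f: f * np.array([1.0, 0.98, 0.95])     # slight warm tint

frame = np.random.rand(720, 1280, 3).astype(np.float32)
frame = post_process(frame, [gamma, tint])
```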

Today’s post-processing effects are performed in games through compute shaders, but deep learning algorithms have long been used for this in graphic design and video editing programs. Anti-aliasing is a post-processing effect, so it is no surprise that NVIDIA has developed this technique.

DLAA requires training


Being a deep learning algorithm, the system has to learn a series of visual patterns from each game in order to run inference and apply DLAA correctly. Let’s not forget that each video game has its own visual style, and applying the same inference algorithm to every game could cause visual problems larger than the ones it solves.

However, most games share a number of common visual problems that DLAA could solve by learning to locate and fix them. In that case, the algorithm would not learn to copy a game’s visual style, but to correct the errors inherited from the use of certain graphic techniques, which is one of the advantages of the training.

The second advantage is the enormous computing power of the Tensor Cores, which is almost an order of magnitude above that of the SIMD ALUs, the CUDA cores. This means these kinds of algorithms are solved very quickly, and as we said before, the idea is to achieve the highest image quality and frame rate at the same time.
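A back-of-the-envelope calculation shows where that order-of-magnitude claim comes from. All figures below are assumptions loosely modeled on an Ampere-class GPU, not official specifications:

```python
# Assumed, illustrative figures for an Ampere-class GPU.
sm_count  = 68      # streaming multiprocessors
clock_ghz = 1.7     # boost clock

# CUDA cores: 128 FP32 FMA lanes per SM, 2 FLOPs per FMA.
fp32_tflops = sm_count * 128 * 2 * clock_ghz / 1000

# Tensor cores: 4 per SM, ~128 FP16 FMAs each per clock (dense).
tensor_tflops = sm_count * 4 * 128 * 2 * clock_ghz / 1000

print(f"CUDA cores : {fp32_tflops:6.1f} TFLOPS FP32")
print(f"Tensor     : {tensor_tflops:6.1f} TFLOPS FP16 (dense)")
print(f"ratio      : {tensor_tflops / fp32_tflops:.0f}x dense, "
      f"{2 * tensor_tflops / fp32_tflops:.0f}x with 2:1 sparsity")
```

With these assumed numbers the dense ratio comes out around 4x, rising to roughly 8x when NVIDIA’s 2:1 structured sparsity applies, which is what “almost an order of magnitude” refers to.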
