NVIDIA has been the first to make a move with the GeForce RTX 40, a generation of high-performance graphics cards that represents a major advance over the GeForce RTX 30 and that, as we have told you before, goes beyond a simple increase in raw power. AMD will compete with them by launching the Radeon RX 7000, a new generation based on the RDNA 3 architecture that could reach the market in November of this year.
Over the last few months we have seen numerous leaks and rumors with quite reasonable credibility, and thanks to this we have a pretty clear idea of what to expect from the Radeon RX 7000. On the other hand, after seeing the launch of the GeForce RTX 40, led for the moment by the GeForce RTX 4090, I am also fully aware of what AMD's new graphics cards will need to truly compete with NVIDIA's Ada Lovelace, and in this article I am going to explain the five things that are, in my opinion, the most important.
In each of the points below, I will explain what the Radeon RX 7000 will need to compete with the GeForce RTX 40, and I will also tell you why, since that context is essential to understanding each point. As always, if you have any questions you can leave them in the comments and I will help you resolve them. Get comfortable, let's start.
1.-The Radeon RX 7000 have to run very cool and be efficient
NVIDIA has done an excellent job with the GeForce RTX 4090. I remember reading rumors that its power consumption was going to be a disaster, since it was supposedly going to exceed 600 watts, and that its working temperatures were going to be very high. In the end, none of that came true: the card has excellent power consumption and fantastic temperatures, holding between 65 and 67 degrees on average with maximum peaks of 70 degrees. It is also much more powerful than the GeForce RTX 3090 Ti while consuming less power.
With all that in mind, it is easy to understand why I say that AMD needs a major advance in efficiency and heat output with the Radeon RX 7000. The Radeon RX 6000 were a well-rounded generation in both temperatures and efficiency (performance per watt), but in this new generation NVIDIA has not made things easy for AMD. A GeForce RTX 4090 that stays below 70 degrees with the Founders Edition cooler while generating very little noise is a hard feat to beat.
I imagine that the improvements AMD has introduced at the architectural level, and the jump to a chiplet-based design built on 5 nm, will allow it to fine-tune the consumption and temperature values of the Radeon RX 7000, but these have to be genuinely competitive with the numbers of the GeForce RTX 40, and as we have seen, that is no simple matter. We will see what AMD achieves in this regard, but the precedent set by the Radeon RX 6000 invites us to be optimistic.
2.-A significant leap in performance with ray tracing
It is, without a doubt, one of AMD's great unresolved issues. I understand and respect that there are people who keep saying they don't care about ray tracing, but the industry does care, and the technology has gone from being the sector's future to being its present. More and more games use it, and they do so in increasingly complete and attractive ways.
I could give many examples, but I will limit myself to my favorite ray tracing games: Cyberpunk 2077, Metro Exodus Enhanced Edition, Ghostwire: Tokyo, and Dying Light 2. In those four games ray tracing makes a huge difference, and enabling it for lighting, shadows, and reflections produces what we might consider a complete generational leap.
The Radeon RX 6000 were the first AMD graphics cards to feature ray tracing acceleration, but it was implemented in a very limited way, sharing resources with the texturing units and not completely freeing the shaders from the workload this technology entails. You already know the result: the Radeon RX 6000 perform far worse than the GeForce RTX 30 in ray tracing, and even Intel's Arc Alchemist has a superior architecture in this regard, since it outperforms the Radeon RX 6000 with ray tracing enabled.
AMD has to implement ray tracing acceleration units in the Radeon RX 7000 that take care of everything and completely free the shaders. This means those specialized cores must perform all the calculations associated with intersections, both the bounding-box tests and the traversal, as well as the hits, and they must be able to work asynchronously to avoid waiting times that end up creating a bottleneck.
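To make that workload concrete, here is a minimal, unoptimized Python sketch (purely illustrative, and not how AMD's or NVIDIA's hardware actually works) of the kind of ray versus bounding-box test a dedicated ray tracing core must run millions of times per frame while traversing the scene's acceleration structure:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray enter the axis-aligned bounding box?"""
    t_near, t_far = 0.0, float("inf")
    # Clip the ray against the two planes (the "slab") of each axis.
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv_d, (hi - o) * inv_d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    # The ray hits the box only if the entry point precedes the exit point.
    return t_near <= t_far

# A ray starting behind the scene, pointing straight down the Z axis.
origin = (0.0, 0.0, -5.0)
direction = (0.0, 0.0, 1.0)
inv_dir = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
print(ray_hits_aabb(origin, inv_dir, (-1, -1, -1), (1, 1, 1)))  # True
```

A GPU does this with fixed-function hardware, in parallel, across millions of rays; the point of dedicated cores is that the shaders never have to execute loops like this one.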
3.-A technology capable of competing, at least, with DLSS 2
I am the first to acknowledge that AMD did a good job with FSR 2.0, but let's face it: this technology is not up to DLSS 2's level. In fact, it is not even up to Intel's XeSS. Upscaling technologies have been working "miracles" on consoles for a long time; in fact, checkerboard rendering was what allowed the PS4 Pro to offer upscaled 4K in numerous games.
That upscaling technique was very simple, since it basically consisted of rendering half the pixels and "stretching" them to fill in the gaps, then applying temporal anti-aliasing afterward. It was normal to find artifacts and graphical glitches, especially visible in elements that are difficult to reconstruct through spatial upscaling, such as hair, but the result was more than acceptable for a system like the PS4 Pro.
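As a toy illustration (my own simplified Python sketch, not the actual checkerboard pipeline, which works on a diagonal pixel pattern), this kind of spatial upscaling boils down to shading fewer pixels and duplicating them to fill the frame:

```python
def render_half_width(width, height, shade):
    """The 'cheap' pass: shade only half the pixels in each row."""
    return [[shade(x * 2, y) for x in range(width // 2)]
            for y in range(height)]

def stretch_to_full(half_frame):
    """Duplicate each rendered pixel to fill the gaps (nearest neighbour)."""
    return [[row[x // 2] for x in range(len(row) * 2)]
            for row in half_frame]

if __name__ == "__main__":
    # Toy 'scene': a pixel's brightness is just its x coordinate.
    frame = stretch_to_full(render_half_width(8, 2, lambda x, y: x))
    print(frame[0])  # [0, 0, 2, 2, 4, 4, 6, 6]
```

The duplicated columns are exactly where fine detail (hair, thin edges) is lost, which is why these techniques lean on a temporal pass to clean up the result.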
FSR 1.0 was a version of that kind of spatial technology, and with FSR 2.0 AMD added temporal elements (previous frames) to improve upscaling quality, albeit at the cost of a smaller performance gain. Both NVIDIA and Intel use artificial intelligence and specialized hardware to achieve higher-quality image reconstruction and upscaling without sacrificing performance, and with the arrival of DLSS 3 and frame generation NVIDIA has made a significant qualitative and quantitative leap.
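The temporal ingredient can be sketched in a few lines (again, a drastic simplification of my own, not AMD's actual FSR 2.0 algorithm; real implementations also reproject the history buffer with motion vectors and reject invalid samples):

```python
def temporal_accumulate(history, current, alpha=0.1):
    """Per-pixel exponential moving average: keep 90% history, blend in 10% new."""
    return [(1.0 - alpha) * h + alpha * c for h, c in zip(history, current)]

# A single flickering pixel: each frame samples only half of the detail,
# alternating between 1.0 and 0.0 (the stable 'true' value would be 0.5).
history = [0.0]
for sample in [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]:
    history = temporal_accumulate(history, [sample])
print(history)  # drifts toward the stable average instead of flickering
```

This is why FSR 2.0 looks far better than FSR 1.0 in motion: information from previous frames fills in detail the current frame never rendered.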
Thanks to intelligent image reconstruction and upscaling we can greatly improve performance without sacrificing good image quality. I have had the opportunity to try DLSS 2 in many games at different resolutions, and recently I have also tried DLSS 3, and I can confirm that these technologies really represent great value for the user.
AMD needs to offer that value to its users as well, and to keep up it needs an FSR 3.0 as soon as possible that can genuinely compete with, at the very least, NVIDIA's DLSS 2. Rumors surfaced not long ago that the company could release this technology as an exclusive for the Radeon RX 7000, and that it would use hardware-accelerated AI to make an exponential leap in both upscaling quality and performance. I sincerely hope that rumor comes true.
4.-Greater support in triple-A games
It is useless to have good technology if it is not implemented in a large number of games. This is what was thrown in NVIDIA's face when it released the first-generation DLSS, and also when it bet on ray tracing. In 2018 both technologies had minimal support, but adoption began to take off in 2019, and today, according to NVIDIA, over 250 games and applications support ray tracing and/or DLSS.
If we look at the official list of games with FSR 1.0 support, we will see that there is a huge gap compared to NVIDIA's DLSS, and if we focus on FSR 2.0, that difference in support becomes abysmal. I have already talked a lot about this topic, and some keep telling me that it is up to the developers. If you think this way, I ask you why, when we talk about FSR, it is the developers' fault, yet when we talk about DLSS it is NVIDIA's fault.
I think there is no need to answer that question, because in the end it is a simple justification used by AMD's most die-hard fans. AMD has to work to motivate developers to implement FSR in its different versions, just as NVIDIA did with DLSS, plain and simple. Applying different standards in favor of one or the other is favoritism, and that is bad for everyone.
If AMD releases an FSR 3.0, that will undoubtedly be a very positive move, but matching DLSS 2 technically will not be enough on its own; it must also achieve a high degree of adoption in current games in order to compete. As I said at the beginning of the article, a very competitive technology is useless if it is barely used, or if its adoption is limited to a dozen games.
5.-Faster memory and wider buses to reduce dependence on the Infinity Cache
This is essential so that the Radeon RX 7000 can deliver their maximum performance and are not penalized at high resolutions. As many of our regular readers will remember, AMD used GDDR6 memory with the Radeon RX 6000 running at a maximum of 16 GHz effective, although it later launched models with 18 GHz memory. It also used buses of between 64 bits and 256 bits.
NVIDIA, by contrast, used GDDR6X memory at up to 21 GHz effective and buses of between 128 bits and 384 bits. This gave the GeForce RTX 30 much higher peak bandwidth figures than the Radeon RX 6000. To compensate, AMD used the Infinity Cache, a block of L3 cache integrated alongside the GPU that ranged from 16 MB to 128 MB and worked much like the Xbox One's eSRAM, providing a burst of bandwidth to store graphical elements that are small and change frequently.
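Peak bandwidth is simple arithmetic: the effective data rate per pin multiplied by the bus width in bytes. A quick Python check with the top figures from the text shows the gap AMD had to cover:

```python
def peak_bandwidth_gbs(effective_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return effective_gbps * bus_width_bits / 8

# Top-end Radeon RX 6000: 16 Gbps GDDR6 on a 256-bit bus
print(peak_bandwidth_gbs(16, 256))  # 512.0 GB/s
# Top-end GeForce RTX 30: 21 Gbps GDDR6X on a 384-bit bus
print(peak_bandwidth_gbs(21, 384))  # 1008.0 GB/s
```

Nearly double the raw bandwidth, which is exactly the deficit the Infinity Cache was designed to mask.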
The "invention" worked, but the Infinity Cache presents two problems. On the one hand, it takes up valuable silicon space that could be used in other ways (to introduce specialized hardware, for example), and on the other hand, it loses effectiveness as resolution rises. This is why the Radeon RX 6600 performs very well at 1080p but loses more performance than usual when moving up to 1440p, for example.
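A toy model makes the second problem intuitive. The cache size below is the Radeon RX 6600's real 32 MB, but the per-frame working-set figures are invented for illustration only: with a fixed cache, the hit rate collapses once the frame's working set outgrows it, which is what happens as resolution climbs.

```python
def hit_rate(cache_mb, working_set_mb):
    """Fraction of memory accesses served from cache in this naive model."""
    return min(1.0, cache_mb / working_set_mb)

CACHE_MB = 32  # Infinity Cache on the Radeon RX 6600
# Hypothetical per-frame working sets that grow with resolution
for res, working_set in [("1080p", 30), ("1440p", 53), ("4K", 120)]:
    print(f"{res}: {hit_rate(CACHE_MB, working_set):.0%} of accesses hit the cache")
```

Every miss falls back to the narrow memory bus, so the effective bandwidth advantage shrinks precisely at the resolutions where bandwidth matters most.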
AMD should consider abandoning the Infinity Cache on its graphics cards, although we already know that this will not happen with the Radeon RX 7000, since that generation is going to use a chiplet design precisely to move the Infinity Cache off the main graphics die (the GCD) and make room on the GPU for other elements. I think it is a smart move on AMD's part, although we still do not know how it is going to take advantage of that extra space. It may surprise us with new ray tracing and AI cores; in fact, that would be ideal.
Final Notes: AMD Needs to Really Compete with NVIDIA
And that is because, in the end, the beneficiaries are us, the consumers. We have already seen how good the arrival of Alder Lake-S was for the market, and we have also seen on other occasions how bad it is to have a single dominant generation. Competition means we can enjoy better products at more attractive prices, and in the end that is good for everyone.
If NVIDIA or AMD releases a graphics generation that completely dominates its rival's, prices will rise, and consumers will have only two options: pay much more to access the most powerful models, or pay less but settle for clearly inferior graphics solutions that will probably age very badly.
Personally, I hope the confrontation between the GeForce RTX 40 and the Radeon RX 7000 is epic, and that AMD manages to compete with NVIDIA on all fronts, and not only in raw rasterization power, which was what happened with the Radeon RX 6000, as I have told you in previous articles.