The GeForce RTX 50 will continue to use a monolithic GPU

The jump to chiplets in the GPU sector seems inevitable; it is something that will happen sooner or later. AMD has already made that leap in the professional sector, while NVIDIA has continued to bet on the monolithic core, something that, according to the most recent rumors, will not change with the GeForce RTX 50.

The GeForce RTX 50 family will be the successor to the GeForce RTX 40 series, a generation that has not yet been announced but that we already know quite well thanks to the various leaks that have surfaced. We know that it will use a monolithic core design, and that the most powerful chip for the general consumer market, provisionally known as the AD102, will have 18,432 shaders in its full version.

Well, the GeForce RTX 50 would keep the monolithic core design that we will see in the GeForce RTX 40. That design, in turn, will be an evolution of the core NVIDIA used in the GeForce RTX 30, based on the Ampere architecture, which tells us a lot if we know how to read between the lines:

  • The division into specialized cores (tensor and RT) will remain present.
  • Process node shrinks will be key to increasing the number of shaders.
  • We do not expect profound changes at the architecture level. Blackwell would be the culmination of the design that NVIDIA first used in Turing, matured in Ampere, and improved with Ada Lovelace.

Why does it make sense to keep the monolithic GPU in the GeForce RTX 50?

The most important key is easy to understand: it is not at all simple to connect two or more GPUs and make them work as one. The jump to an MCM design in the graphics sector will open the door to a new stage where development tools and games must be prepared to take optimal advantage of these designs.

On the other hand, to all of the above we must add the problems that can arise when interconnecting chips, especially in terms of latency, workload distribution, and resource management and utilization. It is not a trivial issue, and it requires deep work from developers and chip designers so that everything works as it should.

Monolithic GeForce RTX

I know what you are thinking: if an MCM design can cause all these problems, why is abandoning the monolithic core design inevitable? The answer is simple: the growing complexity of GPUs means an ever-larger number of shaders, which, together with the constant shrinking of manufacturing processes, makes transferring these designs to the wafer increasingly difficult.

In the end, it is easier and more cost-effective to make 10,000-shader GPUs and join two of them to create a 20,000-shader GPU than to make a 20,000-shader GPU directly, since there is more risk of something going wrong with the latter due to its greater complexity. The split approach is more efficient in terms of cost and wafer yield.
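To put a rough number on that intuition, here is a minimal sketch in Python using a simple Poisson yield model, where the fraction of defect-free dies is e^(-defect density × die area). The wafer size, defect density, and die areas are illustrative assumptions, not figures from NVIDIA or the leaks, and real yield models and packaging costs are considerably more involved.

```python
import math

# Illustrative assumptions only, not real process data.
WAFER_AREA_MM2 = 70_000      # ~300 mm wafer, ignoring edge losses
DEFECT_DENSITY = 0.001       # fatal defects per mm^2 (0.1 per cm^2)
BIG_DIE_MM2 = 600            # hypothetical monolithic ~20,000-shader GPU
SMALL_DIE_MM2 = 300          # hypothetical ~10,000-shader chiplet

def yield_rate(area_mm2: float) -> float:
    """Poisson model: probability a die of this area has no fatal defect."""
    return math.exp(-DEFECT_DENSITY * area_mm2)

# Monolithic: each good big die is one finished 20,000-shader product.
big_products = (WAFER_AREA_MM2 / BIG_DIE_MM2) * yield_rate(BIG_DIE_MM2)

# MCM: any two good small dies can be paired into one 20,000-shader product,
# so a fatal defect wastes only 300 mm^2 of silicon instead of 600 mm^2.
small_good = (WAFER_AREA_MM2 / SMALL_DIE_MM2) * yield_rate(SMALL_DIE_MM2)
mcm_products = small_good / 2

print(f"Monolithic products per wafer: {big_products:.1f}")  # ~64
print(f"MCM products per wafer:        {mcm_products:.1f}")  # ~86
```

Under these assumptions, the same wafer yields noticeably more 20,000-shader products when they are built from paired chiplets, because a fatal defect spoils only half as much silicon; the packaging and interconnect costs that the sketch ignores are what keep this from being a free win.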
