
“processors are very inefficient”

A modern computer needs, among other things, a processor and a graphics card to function. At the opening of this year's Computex 2023 in Taipei, NVIDIA's CEO came out swinging. Specifically, he took a swipe at Intel, saying that GPUs dramatically cut the cost and energy consumption of artificial intelligence.

For those unfamiliar, NVIDIA has positioned itself clearly and forcefully in the AI market. Jensen Huang, NVIDIA's CEO, has been a pioneer in this sector, spotting the market's potential early on and offering highly specialized solutions that deliver great computing power with great efficiency.

CPUs are highly inefficient

During his keynote, Huang directly challenged the processor industry, stressing that generative artificial intelligence and accelerated computing are the future of computing.

The real blow came when he declared that Moore's Law is outdated, insisting that future performance improvements will come from generative AI and accelerated computing. Quite a jab at Intel, given that Gordon E. Moore, who formulated Moore's Law, was one of the company's founders.

To back up this position, NVIDIA presented a cost analysis of accelerated computing. They gave figures for the total cost of a cluster of servers with 960 processors that would be used for AI training, taking the entire infrastructure into account: chassis, interconnects and other components.

According to the figures given, all of these elements add up to about 10 million dollars, and the cluster's consumption would be around 11 gigawatt-hours (GWh).

(Image: AI cluster with 960 CPUs)

Huang compared these figures with a 10-million-dollar GPU cluster that, for the same price, would be able to train up to 44 AI models. Not only that: its consumption would be about 3.2 GWh, almost four times less energy.

According to Huang, a GPU system consuming those same 11 GWh could train up to 150 AI models, although such a system would cost 34 million dollars.

Furthermore, to train just a single model, a 400,000-dollar GPU system consuming only 0.13 GWh would be enough.

The point he wants to make clear is that training a model on accelerated computing costs about 4% of what a CPU-based system costs and cuts energy consumption by roughly 98.8%.
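Those percentages follow directly from the figures quoted above. As a quick back-of-the-envelope check (a minimal sketch; the $10 million / 11 GWh CPU baseline and the $400,000 / 0.13 GWh single-model GPU figures are simply the numbers reported here):

```python
# Sanity check of Huang's percentages, using the figures quoted above.
cpu_cost, cpu_energy_gwh = 10_000_000, 11.0   # CPU cluster: cost ($) and energy (GWh)
gpu_cost, gpu_energy_gwh = 400_000, 0.13      # single-model GPU system: cost ($) and energy (GWh)

cost_ratio = gpu_cost / cpu_cost                        # 0.04  -> "costs 4%"
energy_reduction = 1 - gpu_energy_gwh / cpu_energy_gwh  # ~0.988 -> "98.8% less energy"

print(f"Cost ratio: {cost_ratio:.1%}")              # 4.0%
print(f"Energy reduction: {energy_reduction:.1%}")  # 98.8%
```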

(Image: AI cluster with 48 GPUs)

So, are processors' days numbered?

Not really. They are still needed today because, to put it very simply, they handle management and control tasks. What Huang is telling us is that a processor-based system is less efficient and more expensive than a GPU-based one for this kind of workload.

Even so, any server, cluster or advanced system requires a processor (or several). The great advantage of GPUs is massively parallel computing, which is a boon for AI workloads that demand millions of operations per second.
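As a rough illustration of why that parallelism matters (a minimal NumPy sketch, not anything shown at the keynote): a neural-network layer boils down to one huge matrix multiplication whose individual operations are all independent, which is exactly the kind of work a GPU can spread across thousands of cores.

```python
import numpy as np

# One fully connected layer is essentially a big matrix multiplication.
activations = np.random.rand(1024, 4096)  # a batch of 1024 inputs
weights = np.random.rand(4096, 4096)      # the layer's parameters

# Roughly 17 billion multiply-adds, each independent of the others,
# which is why spreading them over thousands of GPU cores pays off.
outputs = activations @ weights
```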

What Huang points out is fairly evident: x86 processors are quite inefficient for this kind of work. This is pushing many to explore RISC and RISC-V based solutions, architectures that are more power-efficient but also considerably less powerful.
