
NVIDIA and the battle of the bastards for the future of AI chips

Why does NVIDIA dominate the AI market? Are new architectures or new competitors needed to drive this booming sector? There are many open questions in an industry that is still relatively young and only began to take shape roughly ten years ago. Could we be facing a market that, like cryptocurrency mining, ends up leaving users without graphics cards?

NVIDIA and AI, or why getting there first is a clear advantage


NVIDIA was founded in 1993, and at the end of 2010, only 17 years later and already a leader in the GPU market, its CEO had an idea as brilliant as it probably seemed absurd to 99% of mortals: to bet on artificial intelligence as the weapon that would dominate a new, data-driven sector.

Huang had it clear even then: he saw that the amount of data, along with its complexity and size, would grow exponentially, and that a point would come when making sense of it would be impossible. Today even governments allocate money to deep learning, with China and the US once again at the forefront.

Analysts see this as the scenario where, after semiconductors themselves, everyone will compete to analyze as much information as possible, with algorithms working around the clock to evaluate any kind of threat or future technology. That demands ever more, and ever faster, chips: Google hinted at this in 2015 with its AI project, Amazon already has inference chips behind Alexa, Baidu has Kunlun, AMD moved late by buying Xilinx to accelerate AI alongside its GPUs, and Intel did something similar with its Xeon line in 2019 and is now getting into ASICs and GPUs for this sector and for gaming.

The future of chips is having the best ones for AI


The war of the future, then, will be over who has the best chips for artificial intelligence, and with them dominates what could be the largest market in the world within just 20 years. The obvious question is why it will grow so large and why exponentially faster chips are needed. The answer, data volume aside, is simple: math. HPC systems run so-called high-precision simulations, normally at 64 bits (and 128 bits will not take long to arrive), which is why CPUs are still needed, given their greater capacity to handle such complex operations.
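
As a rough illustration of that precision gap, the NumPy sketch below (made-up values, not a real simulation) sums the same series in 64-bit and 16-bit floats; the low-precision sum quickly stops keeping up.

```python
import numpy as np

values = np.full(100_000, 0.0001)           # 100,000 small contributions

sum64 = values.sum()                        # float64 accumulation
sum16 = np.float16(0.0)
for v in values.astype(np.float16):         # naive float16 accumulation
    sum16 = np.float16(sum16 + v)

print(f"float64 sum: {sum64:.4f}")          # close to the true value of 10
print(f"float16 sum: {float(sum16):.4f}")   # stalls far below it as rounding swallows each add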

AI, however, does not need that many bits: 16-bit or even 8-bit operations are enough to work the data, while the complex ones still fall to the CPU. A GPU currently accelerates that simpler workload many times over compared to a processor, so the two are complementary, and everything depends on software such as Google’s TensorFlow or Facebook’s PyTorch.
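
As a minimal sketch of how this looks in practice, the PyTorch snippet below (a made-up toy model, not a real workload) uses automatic mixed precision so the matrix-heavy layers run in 16-bit while the framework keeps precision-sensitive steps in 32-bit.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16  # 16-bit either way

# A made-up toy model; the layer sizes are illustrative only.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
x = torch.randn(64, 1024, device=device)

with torch.autocast(device_type=device, dtype=amp_dtype):
    logits = model(x)          # matrix-heavy layers execute in 16-bit precision

print(logits.dtype)            # torch.float16 on a GPU, torch.bfloat16 on CPU
```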

For this reason the sector was never open to everyone: it is not just a matter of building the best chip, and that is where NVIDIA also stands out, with an entire arsenal of software for its GPUs. Its libraries abstract the hardware away from the programmer and simplify how code is compiled, so that the developer barely has to touch the GPU itself because they do not need to.
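
A minimal sketch of that abstraction, assuming a CUDA-capable GPU is available: a single high-level call is dispatched to NVIDIA’s tuned libraries (cuBLAS in this case) without the programmer ever writing a GPU kernel.

```python
import torch

# Two large matrices on the GPU; the sizes are arbitrary.
a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

c = a @ b                      # one line of Python; dispatched to a tuned cuBLAS kernel
print(c.shape, c.device)       # torch.Size([4096, 4096]) cuda:0
```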


So how important will it be to have the best chip? It does not have to be a GPU; it can be a dedicated ASIC, IPU or TPU, of any size or function, as long as it is designed for deep learning. The software, meanwhile, is tending toward centralization, just as happens with APIs or operating systems, and here everything seems to flow toward the options of Google and NVIDIA itself. Designing chips for AI is therefore just as important as the software, or software family, they will run.
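
A hedged sketch of that centralization, using TensorFlow as the example stack: the same model definition runs unchanged whether the framework finds a GPU, a TPU or only a CPU underneath (the toy model and sizes are purely illustrative).

```python
import tensorflow as tf

# The framework, not the model code, decides where the math lands.
print("GPUs:", tf.config.list_physical_devices("GPU"))
print("TPUs:", tf.config.list_physical_devices("TPU"))

# A made-up toy model; the same definition runs on CPU, GPU or TPU backends.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((256, 32)), tf.random.normal((256, 10)),
          epochs=1, verbose=0)
```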

The latest move in this sector has plenty to say: the purchase of ARM by Huang’s company. If it is completed, we could be talking about new low-power, highly modular chips signed by NVIDIA and designed specifically for AI and deep learning, especially for inference, which is the simpler task. We may even see GPUs with ARM SoCs on the same PCB, each complementing the other and speeding up the whole system far better than the CPU + GPU + FPGA approach. After all, inference is not normally run inside the big deep learning servers such as DGX; it is meant for smaller, external servers.
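
As a rough illustration of why inference is the lighter task, the sketch below uses PyTorch’s dynamic quantization to turn a stand-in model’s linear layers into int8, the kind of cheap math that suits small, low-power chips (the model here is hypothetical, not a real workload).

```python
import torch
import torch.nn as nn

# A stand-in model; the sizes are illustrative only.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

# Convert the linear layers' weights to int8 for cheaper inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))   # runs with int8 weight math on the CPU
print(out.shape)                           # torch.Size([1, 10])
```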

This approach would change every sector, since ARM chips from NVIDIA would also reach the IoT, allowing the work to be done on the device instead of in the cloud, for example. In short, it is a still-untapped market that will shape the world of information and everything around it, including the devices we use every day. Hence the importance of having the most complete chip and the best software and platform for AI and deep learning: a cake that many want a piece of, including smaller companies born in NVIDIA’s wake, the so-called “bastards”, which are focusing exclusively on this market.
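
A minimal sketch of that on-device idea, using TensorFlow Lite as one possible stack: a small pre-converted model runs locally through the interpreter with no round trip to the cloud (“model.tflite” is a hypothetical file name standing in for whatever model the device ships with).

```python
import numpy as np
import tensorflow as tf

# "model.tflite" is a hypothetical file name for the model shipped on the device.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input of the right shape and run inference entirely on-device.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```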
