Origin and future of NVIDIA’s battle for Artificial Intelligence

Today, GPUs are architectures in which the "G" of the acronym is, in many cases, almost vestigial. Because their design is oriented toward parallel computation, generation after generation they have been able to execute code that is less and less graphics-specific and more general-purpose. This evolution has placed them inside the most powerful supercomputers and has also made them ideal architectures for AI.

What began in late 2006 with the appearance of the GeForce 8800 GTX and the first CUDA cores has evolved to the point where NVIDIA GPUs are found in the most advanced supercomputers, not for rendering games, but for carrying out the most complex calculations, both in high-performance computing and in artificial intelligence.
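To make the idea of "general-purpose code on a GPU" concrete, here is a minimal CUDA sketch: a vector addition in which each of thousands of GPU threads handles one element. It is the canonical introductory CUDA pattern, not NVIDIA production code, and the names (vectorAdd and so on) are purely illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element: graphics-free, general-purpose GPU work.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;             // one million elements
    const size_t bytes = n * sizeof(float);

    // Unified memory keeps the sketch short; host and device share the pointers.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);     // expected: 3.0

    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

The same pattern, scaled up to matrix multiplications instead of additions, is what makes GPUs a natural fit for the neural network workloads discussed below.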

NVIDIA dominates the AI market with an iron fist

To date, NVIDIA has built a combined software and hardware ecosystem for artificial intelligence that the competition has so far been unable to match. Not surprisingly, artificial intelligence algorithms are used by high-caliber companies such as Google, Alibaba, AWS and Microsoft with its Azure. Google has gone so far as to create its own hardware with its TPUs, yet almost 70% of supercomputers and data centers use NVIDIA hardware inside.

Such is the dominance of Jen-Hsun Huang's company in this field of computing that most algorithms and new software based on artificial intelligence have been developed with NVIDIA hardware as support. NVIDIA's presence in the AI market could be seen as simply an attempt to sell its GPUs to a market with higher margins than gaming, but the history of NVIDIA's commitment to AI goes beyond that.

It all started with a search for cats on the internet

It must be taken into account that deploying a new semiconductor technology takes years. In NVIDIA's case, the story of its bet on AI begins, according to WIRED, with a chat between NVIDIA's chief scientist, Bill Dally, and a former Stanford University colleague, Andrew Ng; Dally had been a professor of considerable renown there before joining NVIDIA.

Andrew Ng was creating an algorithm that would visually search for cats online using artificial intelligence, but his system had a problem: it required extremely complex and expensive hardware, with thousands of CPUs working in parallel. Dally told Ng that he could solve that problem with just a few GPUs, and therefore at a much lower total cost.

To solve Andrew's problem, Bill contacted Bryan Catanzaro, who, using only 12 NVIDIA GPUs, demonstrated to Dally and Ng that GPUs were better suited for AI than CPUs. This was something Catanzaro already knew, since before joining NVIDIA in 2008 he had been designing GPU architectures for artificial intelligence. NVIDIA's embrace of AI was therefore not an overnight accident, but a planned evolution.

How does the future look for NVIDIA in AI?

AI has grown so important in the market that the different companies in the sector have redirected their motivations toward it. If there is something that defines any company, it is its resources, processes and motivations, and it is the latter that determine which new resources and processes to seek out and develop. The situation? The different companies in the sector are creating their own solutions and starting to do without NVIDIA, so its customers are becoming its rivals.

Google has been creating its own chips since 2015 for its different services: not only TPUs, but also video transcoding systems that make NVIDIA or AMD GPUs unnecessary for that type of task. Amazon has created its own Inferentia chips for Alexa after buying Annapurna Labs. AMD's purchase of Xilinx points in the same direction, and we cannot forget Intel's advances in AI after buying Nervana and Habana Labs. All of these pose a threat to NVIDIA's dominance of the AI market, especially the chips developed by NVIDIA's own customers.

However, AI is going to be important for NVIDIA not only in data centers and servers, but also in the home market, where DLSS has been a great success, one that forced AMD to respond with FidelityFX Super Resolution. AI capabilities are expected to improve in the Lovelace and Hopper architectures, with special emphasis on denoising for ray tracing.
