Nvidia puts AI chips up for rent to help improve chatbots and break into AWS and Google Cloud

If Nvidia, the fabless chip designer and software company behind graphics processing units, has shown anything over the past few years, it is its ability to adapt to circumstances and seize market opportunities to launch new lines of products and services.

That is what it did at its annual developer conference, the GPU Technology Conference 2023, where it presented its latest advances for the artificial intelligence sector, which it says will mark a turning point in computing. The company’s CEO, Jensen Huang, claims that Nvidia is the engine behind the ‘iPhone moment’ of AI, and set out to demonstrate how its technology can drive AI’s progress.

This is possible thanks to its new series of chips, supercomputing services, and a string of high-profile partnerships, which companies and institutions around the world are incorporating into their corporate applications to offer more efficient services, such as chatbots and flashier graphics generators.

In fact, Microsoft has invested hundreds of millions of dollars in tens of thousands of Nvidia A100 graphics chips so that its partner, OpenAI, could train the large language models (LLMs) behind its new AI chatbots, both Bing’s and ChatGPT.

A technology within everyone’s reach

A new horizon opens: as Nvidia starts renting out AI chip packages through its new Nvidia DGX Cloud unit, anyone will be able to get remote web access to the extra capacity needed to host their own LLM project.

This virtual rental service is offered through DGX Server packages, each with eight Nvidia H100 or A100 GPUs and 640 GB of GPU memory per node. Also included are interconnects that scale up to 32,000 GPUs, plus storage, software, and direct access to help optimize code. Although prices fluctuate considerably, they start at $36,999 per month for the A100 tier.

By comparison, a physical DGX Server box with the same hardware can cost over $200,000 if purchased outright. All of this goes to show the lengths to which companies like Microsoft have gone to build their data centers around AI technology in the service of chatbots and other demanding software.
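To put those two figures in perspective, here is a rough breakeven sketch using only the prices quoted above. It is a back-of-the-envelope comparison: it ignores power, hosting, networking, and staffing costs, which in practice weigh in the rental option’s favor.

```python
# Rough breakeven between renting a DGX Cloud instance and buying a DGX box,
# using only the two figures quoted in the article.
monthly_rent = 36_999      # USD per month, entry-level A100 instance
purchase_price = 200_000   # USD, physical DGX server with comparable hardware

months_to_breakeven = purchase_price / monthly_rent
print(f"{months_to_breakeven:.1f} months")  # ~5.4 months of rent matches the sticker price
```

In other words, renting costs roughly as much as the hardware itself after about five and a half months, which presumably makes the service most attractive to teams that need bursts of training capacity rather than a permanent installation.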

Where the GPUs come from

The question now is where the GPUs that Nvidia will rent out will come from, so that any user or company can set up their own AI-powered chatbots. Some of them may be the very chips Microsoft used to let OpenAI train its models.

We already know that Microsoft Azure is one of the providers that will host DGX Cloud, although according to Nvidia, customers will have full-time reserved access to the GPUs they rent, with no need to share them with other users.

Along with Microsoft, other partners such as Oracle’s and Google’s clouds are expected to join in hosting the platform. Added to this is biotech firm Amgen, which would be using DGX Cloud to discover new drugs more quickly, while the insurance company CCC and the IT provider ServiceNow could be using it to train their AI models for claims processing and code generation.

All of this shows that the arrival of AI chips is already part of the present, and that Nvidia has spotted an unparalleled opportunity here. The company’s latest advances have also focused on establishing path tracing as the rendering technology of the future in computing, even though it remains prohibitively expensive. Building on this, AI-powered software lets development teams improve the lighting of their games, as seen in titles such as Quake II RTX and Portal with RTX.
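For readers unfamiliar with the technique, the sketch below shows the core idea of path tracing in deliberately minimal form: rays bounce randomly around a scene, and averaging the light they gather estimates realistic illumination. This is a generic textbook illustration, not Nvidia’s RTX implementation; the single-sphere scene, the sky model, and every constant and function name here are invented for the example.

```python
# A minimal Monte Carlo path tracer: one diffuse sphere under a simple sky.
# Generic textbook sketch only; real engines (and Nvidia's RTX stack) add
# ray-tracing hardware, many materials, and AI denoising on top of this idea.
import math, random

CENTER, RADIUS, ALBEDO = (0.0, 0.0, -3.0), 1.0, 0.7  # invented example scene

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sphere_hit(origin, direction):
    """Distance to the nearest sphere intersection along the ray, or None."""
    oc = tuple(o - c for o, c in zip(origin, CENTER))
    b = dot(oc, direction)
    disc = b * b - (dot(oc, oc) - RADIUS * RADIUS)
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def cosine_sample(n):
    """Random direction in the hemisphere around n, weighted by cos(theta)."""
    u = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(u, n))            # tangent
    b = cross(n, t)                       # bitangent
    r1, r2 = random.random(), random.random()
    phi, r = 2 * math.pi * r1, math.sqrt(r2)
    x, y, z = math.cos(phi) * r, math.sin(phi) * r, math.sqrt(1 - r2)
    return normalize(tuple(x*t[i] + y*b[i] + z*n[i] for i in range(3)))

def radiance(origin, direction, depth=4):
    """Estimate light arriving along a ray by following random bounces."""
    if depth == 0:
        return 0.0
    t = sphere_hit(origin, direction)
    if t is None:
        return max(direction[1], 0.0)     # sky: brighter looking up, dark below
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = normalize(tuple(h - c for h, c in zip(hit, CENTER)))
    # With cosine-weighted sampling, the Lambertian estimator reduces to
    # albedo * (radiance arriving from the sampled bounce direction).
    return ALBEDO * radiance(hit, cosine_sample(normal), depth - 1)

# Average many random paths through one pixel; more samples mean less noise.
samples = 2000
pixel = sum(radiance((0.0, 0.0, 0.0), normalize((0.0, 0.1, -1.0)))
            for _ in range(samples)) / samples
print(f"estimated radiance: {pixel:.3f}")
```

The averaging step is why the technique has been so expensive: sharp images need thousands of samples per pixel, which is exactly the cost that Nvidia’s dedicated ray-tracing hardware and AI denoising are designed to bring down.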
