Meta prepares the future of AI with a titanic supercomputer

To build the future of AI and make the most of its new Data2vec platform, Meta has unveiled the Research SuperCluster, a supercomputer that should soon rival the world's elite.

A few days ago, we told you about Data2vec, Meta's remarkable new AI. With the algorithmic foundations of this technology now well in place, Meta is tackling the hardware side: the firm has just unveiled its AI Research SuperCluster (RSC), a supercomputer designed specifically to push the limits of artificial intelligence.

In terms of hardware, it is a hulking machine built around Nvidia's flagship DGX A100 systems. If you are unfamiliar with high-performance computing (HPC), these are simply among the workhorses of today's HPC, and each unit sells for roughly $200,000. Meta thought big: its RSC currently packs 760 of these modules into a single cluster, for a mind-blowing total of 6,080 GPUs.

All of these components communicate over Nvidia's Quantum InfiniBand technology, a fabric whose throughput can reach 1.6 Tbps. As for storage, the beast is entitled to 175 petabytes of living space, i.e. 175,000,000 GB of FlashArray capacity, the whole served by a frankly incredible cache of… 46 petabytes.

The Nvidia A100s are small monsters dedicated to high performance computing. © Nvidia

A monster that will continue to grow

This machine is reportedly already capable of performance that places it among the best supercomputers of the moment. CNET, for example, draws a comparison with Perlmutter, currently fifth in the world rankings, whose arsenal is comparable to that of the RSC.

And yet, this is only the beginning. Zuckerberg intends to turn this already impressive machine into a real behemoth: by the end of the year, Meta will keep adding DGX nodes until the cluster reaches a total of 16,000 GPUs, a gain of roughly +160% in raw GPU count. The whole will be connected by a transfer and caching system capable of serving 16 TB (16,000 GB) of data per second to a storage system on the order of the exabyte (one billion GB).
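As a back-of-the-envelope check of the figures above, here is a quick sketch, assuming (per Nvidia's published specs) 8 A100 GPUs per DGX A100 node:

```python
# Sanity-check of the RSC figures quoted in the article.
GPUS_PER_DGX = 8          # an Nvidia DGX A100 node houses 8 A100 GPUs

dgx_nodes = 760           # current node count
current_gpus = dgx_nodes * GPUS_PER_DGX
print(current_gpus)       # 6080, matching the article

target_gpus = 16_000      # planned end-of-year total
gain = (target_gpus - current_gpus) / current_gpus
print(f"+{gain:.0%}")     # +163%, i.e. roughly +160%

storage_pb = 175          # 1 petabyte = 1,000,000 GB (decimal units)
print(f"{storage_pb * 1_000_000:,} GB")  # 175,000,000 GB
```

The node and GPU counts come straight from the article; the per-node GPU figure is the standard DGX A100 configuration.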

Mark Zuckerberg explains that the RSC will then become the most powerful supercomputer in the world. For that, everything will have to go as planned, and no other machine will have to beat it to the punch in the meantime. And even if all those stars align, it will still have to jostle for the title. In particular, it will have to face Fugaku, the Japanese monster that reigns supreme over the HPC kingdom today, and perhaps even the University of Florida's future titan, which could be ready by then.

A physical medium of choice for an AI revolution

But at the end of the day, it matters little whether the RSC inherits this crown or not; the title is above all symbolic, given the speed at which the sector is evolving. What really matters for Zuckerberg and the Facebook ecosystem is to stake out ground very aggressively in the field of AI.

It is difficult to draw up a precise inventory of the leaders in this area, since we sorely lack information on the projects of some of the main players, China first among them. But with its Data2vec platform, which already promises to be revolutionary, it seems certain that Meta will be one of the key players. This system, presented as the “first self-supervised multimodal AI” (more details in the dedicated article), could well mark the beginning of a new era in artificial intelligence. All it lacked was a solid physical foundation, and that is precisely what Meta will have by the end of the year. A case to follow, because by all accounts the combination of these two factors is likely to generate spectacular advances in the field.
