Meta has created MTIA, a chip for bespoke AI tasks

Meta has presented its first Artificial Intelligence chip, custom designed for running AI programs. It is called MTIA (Meta Training and Inference Accelerator) and is made up of a mesh of circuit blocks that work in parallel. It also runs software that optimizes programs built with PyTorch, Meta's own open source development framework.

As stated by the company, the chip is tailored to a specific type of Artificial Intelligence program: deep learning recommendation models. These programs identify a pattern in a user's activity, predict related material, and recommend it on the basis that it is likely to be relevant to the user whose activity matched that pattern.
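To make the idea concrete, here is a minimal sketch of the kind of deep learning recommendation model the article describes: user and item embeddings whose dot product scores how relevant an item is likely to be to a user. This is an illustrative toy model in PyTorch, not Meta's actual architecture; the class name and dimensions are invented for the example.

```python
import torch
import torch.nn as nn


class DotProductRecommender(nn.Module):
    """Toy two-tower recommendation model: learn an embedding per user
    and per item, and score a (user, item) pair by their dot product."""

    def __init__(self, num_users: int, num_items: int, dim: int = 16):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Higher score -> the item is predicted to be more relevant to the user.
        u = self.user_emb(user_ids)
        i = self.item_emb(item_ids)
        return (u * i).sum(dim=-1)


# Score two candidate items (10 and 42) for user 0.
model = DotProductRecommender(num_users=100, num_items=500)
scores = model(torch.tensor([0, 0]), torch.tensor([10, 42]))
print(scores.shape)  # one score per (user, item) pair
```

In production, models like this are trained on engagement signals and scaled to billions of embeddings, which is precisely the workload MTIA is meant to accelerate.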

This chip is the first version of what Meta expects to become a family of chips, whose development began in 2020. The company has not said how many more chip models it will launch in the future, nor has it given release dates.

The MTIA resembles the chips that various startups are developing. At its core is a mesh of 64 processing elements arranged in an 8×8 grid, a format many AI chip designs adopt so that data can move between components as quickly as possible.

Its design is not a common one, since it is built to handle both of the main phases of Artificial Intelligence programs: training and inference. The two have different computational requirements and are usually served by specialized chip designs.

According to Meta, the chip can be up to three times more efficient than GPUs, measured in floating point operations per second per watt of power consumed. When given more complex neural network tasks, however, it still lags behind GPUs, which suggests Meta will work on improving its handling of complex workloads in future versions.

In its presentation, Meta highlighted the advantages the MTIA has gained from joint hardware and software design, with hardware engineers in constant communication and collaboration with the company's PyTorch developers.

Developers can write code to run on the chip in PyTorch or C++, and also in a chip-specific language called KNYFE, which Meta says «takes a short, high-level description of a machine learning operator as input and generates low-level, optimized C++ kernel code that is the MTIA implementation of that operator».
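As an illustration of what "a short, high-level description of a machine learning operator" might look like, here is a toy operator expressed in plain eager PyTorch. This is only a hedged sketch: the function name and operation are invented for the example, and KNYFE's actual input format is not documented in the article; the point is that a tool like KNYFE would take such a description and emit the optimized C++ kernel for MTIA, so the developer never writes that low-level code by hand.

```python
import torch


def fused_scale_relu(x: torch.Tensor, scale: float) -> torch.Tensor:
    """Toy ML operator: scale the input, then apply ReLU.

    A high-level description of an operator like this is the kind of
    input a kernel generator would lower to optimized C++ code; here
    it is simply expressed as eager PyTorch.
    """
    return torch.relu(x * scale)


x = torch.tensor([-2.0, -0.5, 1.0, 3.0])
print(fused_scale_relu(x, 2.0))  # negatives clamp to 0, positives are doubled
```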

But this has not been Meta's only AI-related announcement this week. At the same event where the MTIA was presented, AI Infra@Scale, its executives also discussed the development and construction of a next-generation data center, designed and optimized for AI, with liquid-cooled Artificial Intelligence hardware and a high-performance AI network connecting thousands of AI chips into data-center-scale AI training clusters.

In addition, they unveiled a new custom chip for video encoding, which they have called the Meta Scalable Video Processor. It is designed to compress and decompress video more efficiently, and to encode it in the different formats Facebook users need to upload and view it.

This chip, according to Meta, «can deliver peak transcode performance of 4K at 15 frames per second at its highest quality setting with one stream in and five streams out, and scale up to 4K at 60 frames per second at standard quality settings».
