Google uses deep learning to design smaller, faster AI chips

A group of academics from the University of California, Berkeley and researchers from Google claim to have found a way to use deep learning to design smaller, faster chips for artificial intelligence workloads. The researchers say they have developed a deep learning approach, which they call Prime, that generates AI chip architectures from existing performance data and designs.

The researchers say their approach can generate designs with lower latency that take up less space than the EdgeTPU accelerator Google has in production, as well as designs produced with traditional tools. Beyond faster and more efficient designs, the Prime approach matters because it addresses two of the main problems of the simulation-based chip design process that is typically used: the amount of time the design takes and the amount of computing resources it requires.

In addition, the researchers state that designing chips with simulation software can lead to infeasible designs when trying to optimize for specific goals, such as lower power consumption or lower latency. Designs produced with the Prime system, by contrast, had 50% lower latency than those generated with simulation-based methods, and the Prime approach reduced the time required to produce a design by 99%.

The researchers, from both Berkeley and Google, also compared the performance of chip designs generated with Prime against simulation-produced EdgeTPU designs across nine Artificial Intelligence applications, including the image classification models MobileNetV2 and MobileNetEdge. The Prime designs were optimized for each application and, according to the researchers, improved latency 2.7 times and reduced die area usage 1.5 times.

This last result surprised the researchers, because they had not trained Prime to reduce die size, something that can cut manufacturing costs as well as lower energy consumption. Other models tested achieved even better results in terms of latency and die area.

The researchers also used Prime to design chips optimized to work well with several applications at the same time, and found that these Prime designs likewise delivered better latency than those produced by simulation. This held even for applications for which there was no training data, and performance improved as more applications were included.

The group also used Prime to design a chip that would offer the best possible performance across all nine applications mentioned. In the tests, only three Prime designs had higher latency than the simulator-based designs. The researchers found this was because Prime favors designs with more on-chip memory, which leaves less area for processing power.

To develop Prime, the researchers created what is known as a robust prediction model, which learns to generate optimized chip designs from the AI chip design data fed to it. It learns even from chip designs that don't work. And to avoid the pitfalls associated with supervised machine learning in this setting, the researchers designed Prime so it would not be misled by what are known as adversarial examples.
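The article does not give the actual training objective, but the general idea can be sketched in a few lines. The following PyTorch snippet, a minimal illustration rather than Google's implementation, trains a surrogate that predicts latency from an encoded design vector, learns feasibility from failed designs as well, and adds a crude conservatism term so the model is not fooled by over-optimistic out-of-distribution inputs. All class names, layer sizes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Hypothetical surrogate: predicts latency from a vector encoding of
    accelerator parameters (e.g. compute units, on-chip memory size), plus
    a feasibility head so the model can learn from designs that failed."""
    def __init__(self, n_params: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.latency = nn.Linear(hidden, 1)   # predicted latency
        self.feasible = nn.Linear(hidden, 1)  # logit: is the design valid?

    def forward(self, x):
        h = self.body(x)
        return self.latency(h).squeeze(-1), self.feasible(h).squeeze(-1)

def train_step(model, opt, designs, latencies, feasible_mask):
    """One training step on logged offline data."""
    opt.zero_grad()
    pred_lat, pred_feas = model(designs)
    # Fit latency only on designs that actually worked...
    lat_loss = ((pred_lat - latencies) ** 2 * feasible_mask).mean()
    # ...but learn feasibility from every logged design, failures included.
    feas_loss = nn.functional.binary_cross_entropy_with_logits(
        pred_feas, feasible_mask)
    # Conservatism (a crude stand-in for the adversarial-example mitigation
    # described in the article): penalize the model when nearby off-data
    # designs get lower predicted latency than the data itself.
    adv = (designs + 0.1 * torch.randn_like(designs)).detach()
    adv_lat, _ = model(adv)
    conservative = torch.relu(pred_lat.detach().mean() - adv_lat.mean())
    loss = lat_loss + feas_loss + conservative
    loss.backward()
    opt.step()
    return loss.item()
```

The key design choice this sketch tries to capture is that a purely supervised predictor would happily assign very low latency to designs it has never seen, and an optimizer would then exploit exactly those predictions; the conservatism term keeps off-data predictions pessimistic.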

In view of the results, the researchers say the approach makes it possible to optimize the model for specific applications and thereby obtain better results. Prime can also be applied to applications for which there is no training data: it is enough to train a single large model on design data from applications for which data is available.
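Continuing the illustrative sketch above: once a surrogate is trained (hypothetically on pooled data from many applications), a new design can be searched against the model instead of a simulator. Gradient search over a relaxed continuous encoding of the design is one simple, assumed choice; the article does not say which search method Prime actually uses.

```python
def optimize_design(model, n_params: int, steps: int = 200, lr: float = 0.05):
    """Search the design space against the trained surrogate (no simulator)."""
    design = torch.zeros(1, n_params, requires_grad=True)
    opt = torch.optim.Adam([design], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        lat, feas_logit = model(design)
        # Minimize predicted latency while staying in the region the
        # model believes is feasible.
        loss = lat.mean() - torch.sigmoid(feas_logit).mean()
        loss.backward()
        opt.step()
    return design.detach()
```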

Prime won’t change Google’s chip engineering overnight, but the researchers say it holds promise for a number of applications. These include creating chips for applications that require solving complex optimization problems, as well as using data from low-performing chip designs to help drive hardware design. They also hope to use Prime for hardware-software co-design.
