Bear in mind that the objective AMD has set applies to its EPYC CPUs and AMD Instinct accelerators, and only for Artificial Intelligence training and High Performance Computing (HPC) applications. According to the manufacturer itself, achieving this goal will require AMD to improve the efficiency of a compute node at 2.5 times the rate at which the entire industry improved it over the last 5 years.
Efficiency and AMD’s ambitious goal
What we have previously called a “compute node” refers to the world’s most advanced and powerful computer systems (sometimes known as supercomputers), which currently power large-scale scientific research and simulators (e.g., financial ones). These computing systems or nodes are essential because they enable major research advances in many fields, such as the investigation of new materials, climate prediction, genetics, pharmacy, medicine and much more.
In this regard, accelerators like AMD Instinct are also an important part, since Artificial Intelligence neural networks are now used for almost everything. Multiplying the efficiency of these nodes by 30 would mean saving billions of kWh, greatly reducing electricity costs and the carbon footprint they generate.
How does AMD intend to reach this milestone?
In addition to calculating node performance by measuring the watts it consumes against its theoretical performance, AMD will begin using a methodology based on PUE (Power Usage Effectiveness) with specific measuring equipment. The power consumption baseline assumes the same improvement rate the rest of the industry achieved between 2015 and 2020, and AMD has extrapolated that data to 2025 to arrive at its estimate of multiplying efficiency by 30.
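The arithmetic behind this kind of baseline comparison can be sketched briefly. The snippet below is illustrative only: the assumed 5x industry improvement over 2015–2020 is a placeholder, not AMD's published figure; only the 30x-in-5-years target comes from the article.

```python
def annual_rate(total_gain: float, years: int) -> float:
    """Compound annual efficiency improvement implied by a total gain."""
    return total_gain ** (1 / years) - 1

# AMD's stated target: 30x node efficiency over the 5 years to 2025.
amd_rate = annual_rate(30, 5)

# Illustrative assumption: the industry improved ~5x over 2015-2020.
industry_rate = annual_rate(5, 5)

# How much faster AMD would have to improve than the industry baseline:
print(f"AMD: {amd_rate:.1%}/year, industry: {industry_rate:.1%}/year")
print(f"ratio: {amd_rate / industry_rate:.2f}x")
```

Under that assumed baseline, the required annual rate works out to roughly 2.5 times the industry's, which is consistent with the figure quoted earlier in the article.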
The measure of energy improvement per operation (which is how efficiency in compute nodes is calculated: the energy spent per operation performed) is weighted by projected (that is, estimated) global volumes multiplied by the typical energy consumption (TEC) of each computing segment, arriving at a meaningful metric of the real improvement in energy use worldwide.
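A weighting scheme like the one described can be sketched as follows. The segment names, volumes, TEC values and per-segment gains below are made-up placeholders for illustration, not AMD's data; only the idea of weighting each segment's gain by its projected volume times its TEC comes from the article.

```python
# Each segment: (projected unit volume, typical energy consumption
# in kWh/year, per-segment efficiency gain). Placeholder values.
segments = {
    "ai_training": (100_000, 12_000, 35.0),
    "hpc":         (50_000, 15_000, 25.0),
}

def weighted_efficiency_gain(segments: dict) -> float:
    """Weight each segment's gain by its share of total energy use
    (projected volume x TEC), yielding one fleet-wide figure."""
    total_energy = sum(vol * tec for vol, tec, _ in segments.values())
    return sum(gain * (vol * tec) / total_energy
               for vol, tec, gain in segments.values())

print(f"{weighted_efficiency_gain(segments):.2f}x")
```

Segments that ship in higher volumes and draw more energy thus dominate the aggregate figure, which is what makes the metric reflect real-world energy use rather than a simple average of per-product gains.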
In other words, AMD has announced its objective but has only explained that it will change how the calculations are performed, without mentioning at any point how it will modify the hardware of its EPYC CPUs or Instinct accelerators to achieve such an improvement.
What do industry experts say about it?
Dr. Jonathan Koomey, President of Koomey Analytics, said the following: “AMD’s energy efficiency goal for AI-accelerated computing nodes and high-performance computing fully reflects the latest workloads, the most representative operational behaviors, and the most accurate benchmarking methodology.”
For his part, Mark Papermaster (executive vice president and CTO of AMD) said: “Achieving gains in processor power efficiency is a long-term design priority for AMD. Focused on these important segments and on the value proposition for leading companies to improve their environmental stewardship, AMD’s goal of increasing the industry’s energy efficiency performance by 30 times is 150% higher than what we set out to do during the preceding five-year period.”
Addison Snell, CEO of Intersect360 Research, said: “With computing becoming more ubiquitous, from Edge Computing to Cloud Computing, AMD has taken a very bold position on the energy efficiency of its processors and accelerators. Future gains are difficult to predict now, as the historical advantages that come with Moore’s Law have greatly diminished. A 30-fold improvement in efficiency five years from now will be an impressive technical achievement that demonstrates the strength of AMD’s technology and its emphasis on environmental sustainability.”