Intel includes HBM memory in its new CPUs and threatens AMD’s throne

With the growth of cloud computing and the adoption of more and more supercomputers, server processors have to evolve to meet the needs of the moment. Those needs are not limited to offering a large number of cores; these chips must also provide a set of technologies that keep them competitive and functional with the demands of current software.

Sapphire Rapids, Intel Xeon CPU designed for AI

Through a press release written by Lisa Spelman, corporate vice president and general manager of the Xeon and Memory Group, Intel has given new details about its fourth-generation Intel Xeon Scalable processors, confirming several of the rumors about this chip.

Intel’s strategy for its future Intel Xeon processors against AMD EPYC is clear: go after a market that AMD has not bet on, namely artificial intelligence. And yes, Intel has confirmed the rumors about Sapphire Rapids, starting with the implementation of AMX, or Advanced Matrix Extensions. This adds Tensor units for AI to the CPU cores, a qualitative leap over the AVX-512 instructions Intel has relied on until now. On the other hand, since AI workloads demand bandwidth, the processor incorporates HBM memory, which will be used in conjunction with the DDR5 memory that this server CPU will also support.
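As an illustration of the kind of operation those Tensor units accelerate: AMX works on small matrix "tiles" of int8 values and accumulates the products into int32 results, rather than the vector-at-a-time math of AVX-512. The following is a minimal pure-Python sketch of that tile dot-product semantics, not real AMX code (the actual hardware is driven through compiler intrinsics); the function name and tiny matrices are made up for the example.

```python
def tile_dp_int8(a, b, c):
    """Sketch of an AMX-style tile operation: multiply an MxK int8 tile
    by a KxN int8 tile and accumulate into an MxN int32 tile."""
    m, k, n = len(a), len(b), len(b[0])
    for i in range(m):
        for j in range(n):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]  # int8 x int8 accumulated in int32
    return c

a = [[1, 2], [3, 4]]          # int8 tile
b = [[5, 6], [7, 8]]          # int8 tile
c = [[0, 0], [0, 0]]          # int32 accumulator tile
print(tile_dp_int8(a, b, c))  # [[19, 22], [43, 50]]
```

A single AMX instruction performs an entire tile-times-tile step like this in hardware, which is why it represents such a jump over issuing many AVX-512 vector instructions for the same matrix math.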

Intel Sapphire Rapids Die

But this is not the only novelty of Intel Sapphire Rapids, since Intel has also announced the implementation of the DSA, or Data Streaming Accelerator: a unit in charge of moving data between the different components of the processor, offloading those copy operations from the CPU cores. In plain language, it plays a role similar to a smart network adapter or SmartNIC, but built into the processor. This kind of hardware is going to become common in designs over the coming years, and not only from Intel, nor exclusively in the world of CPUs.

Apart from the new technologies described, Sapphire Rapids will have PCI Express 5.0 support with CXL technology, which also represents a paradigm shift in how the CPU communicates with the components on this bus, allowing the CPU and the graphics card to share fully coherent memory access automatically.

We will have to wait until well into 2022

Super Computer Aurora

Sapphire Rapids will not launch until well into 2022, as production of this server CPU will begin in Q1 2022 and peak in Q2. The first servers and data centers with Sapphire Rapids will therefore not be ready until well into the year, although most units are already spoken for, since it is common in that market to sign contracts 12 months in advance.

We have known since 2019 that Sapphire Rapids will be the CPU of Intel’s Aurora supercomputer, paired with the Intel Xe-HPC GPU, Ponte Vecchio, making it the first supercomputer in years built purely on Intel hardware for both CPU and GPU. The delay in manufacturing Sapphire Rapids may be due to Intel preferring to devote its 10 nm SuperFin wafers to its Alder Lake desktop and laptop CPUs for now.
