More than 14,000 cores and 128 GB of memory: this is the AMD MI200 GPU

Last year AMD launched its first AMD Instinct accelerator, dropping the Radeon name to make clear that it has no relationship with the company's gaming graphics cards. To that end, AMD based it on its CDNA architecture, an evolution of Vega, which in turn descends from GCN, while for gaming AMD opted for a new architecture, RDNA.

The next generation of AMD Instinct is called the AMD Instinct MI200, and it brings a dual-GPU configuration inside. Some details have already leaked, such as a 128 GB memory configuration spread across 8 HBM2e stacks, but until now we did not know the number of Compute Units in each of the GPUs of the MCM configuration.

This is the number of Compute Units the MI200 has

The Instinct MI200 brings with it a dual-GPU MCM, and it seems that each of the chips will have 110 Compute Units inside. That is down from the 120 Compute Units of the AMD Instinct MI100, which is based on first-generation CDNA, codenamed Arcturus (the MI200 uses the Aldebaran design). However, this does not seem to be the full configuration of the GPU: it would actually be 112 Compute Units per chip, 224 in total, but two CUs per chip are deactivated to improve yields when manufacturing the chips.
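As a quick sanity check of those figures, here is a minimal sketch that works out the CU and core counts implied above, assuming the usual 64 stream processors per Compute Unit, as on the MI100:

```python
# CU and core counts implied by the leaked MI200 figures.
# Assumes 64 stream processors per Compute Unit (as on CDNA 1 / MI100).

CUS_PER_DIE_FULL = 112      # full silicon per chip
CUS_DISABLED_PER_DIE = 2    # disabled for yield
DIES = 2                    # dual-GPU MCM
SP_PER_CU = 64              # stream processors per Compute Unit

cus_full = CUS_PER_DIE_FULL * DIES                              # 224
cus_active = (CUS_PER_DIE_FULL - CUS_DISABLED_PER_DIE) * DIES   # 220
cores = cus_active * SP_PER_CU                                  # 14,080 "cores"

print(cus_full, cus_active, cores)  # 224 220 14080
```

Those 14,080 stream processors are where the "more than 14,000 cores" figure in the headline comes from.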

Being an HPC GPU, the CDNA architecture and its successors completely lack the fixed-function units for graphics and are purely compute-oriented. They are not graphics cards for games, despite the fact that the architecture derives from gaming GPUs; rather, they are accelerator cards for parallel computing. That is why they also add units that are very important for this purpose: on the one hand, the Matrix Cores, which are the equivalent of NVIDIA's Tensor Cores and Intel's XMX units; on the other, FP64 units, which are not needed in gaming but are essential in scientific computing.
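To illustrate why FP64 matters for scientific workloads and not for games, here is a small, purely illustrative example (it says nothing about the MI200 itself) showing how 32-bit floats run out of precision on values that 64-bit floats handle exactly:

```python
import numpy as np

# FP32 has a 24-bit mantissa, FP64 a 53-bit mantissa.
print(np.finfo(np.float32).eps)  # ~1.19e-07
print(np.finfo(np.float64).eps)  # ~2.22e-16

# Adding 1 to 1e8 is lost entirely in FP32 but exact in FP64.
print(np.float32(1e8) + np.float32(1) - np.float32(1e8))  # 0.0
print(np.float64(1e8) + np.float64(1) - np.float64(1e8))  # 1.0
```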

AMD Instinct MI200

The AMD Instinct MI200 is due to launch at the end of this year, and if the Compute Units have not changed compared to the MI100, we would be talking about some 42.2 TFLOPS of 32-bit floating point thanks to its 220 Compute Units. But the key, as we have said, is the FP64 floating-point performance, which is very important in the sectors this card is aimed at, and it is where CDNA 2 seems set to take a significant leap over the first generation.
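The 42.2 TFLOPS figure can be reproduced with simple arithmetic. The sketch below assumes 64 FP32 lanes per Compute Unit, 2 FLOPs per lane per clock (one fused multiply-add), and an MI100-class clock of roughly 1.5 GHz, since the premise is that the per-CU configuration is unchanged:

```python
# Rough FP32 throughput estimate for 220 active Compute Units,
# assuming MI100-like per-CU resources and clock (~1.5 GHz).

cus_active = 220
fp32_lanes_per_cu = 64     # stream processors per CU
flops_per_lane = 2         # one fused multiply-add per clock
clock_hz = 1.5e9           # assumed MI100-class clock

tflops_fp32 = cus_active * fp32_lanes_per_cu * flops_per_lane * clock_hz / 1e12
print(f"{tflops_fp32:.1f} TFLOPS FP32")  # ~42.2 TFLOPS
```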

AMD is going to beat both Intel and NVIDIA to market with its AMD Instinct MI200, since Ponte Vecchio is not expected until well into 2022 and NVIDIA's response (Hopper) is nowhere to be seen, at least for now. In any case, the battle in the HPC GPU market is going to be interesting in 2022: instead of NVIDIA alone, all three big players will be competing, and we will see who manages to place more of these GPUs in the supercomputers around the world.
