
HPE and AMD: The Present of Supercomputing

Though not widely known to the general public, supercomputing plays a key role in many human activities, and all kinds of research depend on its performance. From medical research to scientific exploration, every field that needs to perform calculations whose sheer volume puts them beyond the reach of standard systems and infrastructures relies on these electronic brains to obtain results.

And the choice of the word brain is not accidental. Little by little we are entering the era of exascale computing, with systems capable of performing 10^18 or more floating-point operations per second, a figure that is expected to eventually exceed the estimated processing power of the human brain at the neuronal level.

HPE has been, and remains, a key player in the development of supercomputing, not only in driving its evolution but also in the search for solutions that let companies and research centers access this valuable resource and benefit from the countless advantages it offers for certain activities.

A key factor in this model is the strategic collaboration between HPE and AMD, which, since the arrival of the first generation of AMD EPYC, has allowed both companies to advance in the search for maximum performance.

Adriano Galano, HPE AI & HPC Sales Specialist for Southern Europe, details the advantages of supercomputing for companies that want to be at the forefront of their sectors.

As a result of this collaboration, and of the goal of bringing supercomputing to many more potential clients, HPE has deployed HPE GreenLake for HPC, HPE’s infrastructure-as-a-service model that gives companies the IT infrastructure they need without having to deal with the enormous cost of provisioning and deploying all of those resources themselves. With HPE GreenLake for HPC, HPE replicates the model that has been so successful with other types of infrastructure, bringing supercomputing to where it is needed… no matter how far.


AMD and Supercomputing

AMD has, for some time now, established itself as the manufacturer capable of developing the most powerful chips on the market. Successive evolutions of its Zen architecture have put all of its chip families at the forefront, and the eagerly awaited, ever closer Zen 4, built on a 5-nanometer process, promises to be a huge qualitative leap, with performance improvements that will make a big difference, such as an expected increase in IPC (instructions per cycle) of 25%.

The third generation of AMD EPYC, with up to 64 cores and 128 threads and built on AMD’s chiplet architecture, does not focus only on performance, which is obviously a key factor in supercomputing; it also focuses on security, a key concern in these times.

For certain classes of operations, and not only graphical ones, GPUs also play a key role. Their ability to process certain kinds of data at high speed makes their performance superior to that offered by CPUs alone. Thus, AMD’s proposal would not be complete without CDNA 2 and the third generation of the AMD Infinity Architecture, with a design already adapted to the needs of exascale supercomputing.

AMD EPYC performance is sustained by innovative technologies that complement one another. For example, thanks to their chiplet architecture, third-generation EPYC processors can employ 3D V-Cache, which can triple the amount of L3 cache compared to their predecessors, for a total of up to 804 megabytes of cache per socket.

And this is just the beginning: the future of EPYC is called Genoa, its fourth generation, based on Zen 4. According to the data known so far (it will hit the market in 2022), these processors will reach up to 96 cores and 192 threads, built from 12 compute dies with 8 cores and 16 threads each. And, of course, they will add support for DDR5 and PCIe 5.0.


Public Cloud vs GreenLake for HPC

When talking about supercomputing, it is impossible not to think of extremely complex systems in which any mismatch, no matter how small, can lead to performance drops, calculation errors or even a total system crash. And, of course, we are talking about systems in which losing minutes or even hours of work can carry enormous costs, economic and otherwise, especially when the infrastructure is needed for short- or very short-term calculations.

The public cloud has proven to be an excellent option for a great many usage scenarios, but when we talk about supercomputing, the data tells us that the HPE GreenLake offering for HPC with AMD EPYC processors delivers far superior performance to various public cloud services. In that regard we can look at the report published by InsideHPC, in which we see that, in an OpenFOAM test running on single-node servers, the HPE GreenLake platform for HPC delivered more than twice the performance of comparable AWS and Oracle cloud solutions, at very similar cost.

Thus, for supercomputing needs, HPE GreenLake builds on the superb performance of third-generation EPYC processors, offering a platform that allows comprehensive and much simpler management of the resources used and, even more important, a ratio of investment to results that public cloud services simply cannot match.

And all of this while awaiting the imminent arrival of fourth-generation EPYC processors, in which AMD will take even greater advantage of the 3D possibilities offered by chiplet designs. Undoubtedly, the present and future of accessible and efficient exascale supercomputing lie in the convergence of HPE and AMD in the pursuit of maximum performance.
