AMD this week surprised the industry when it published detailed performance numbers for its Instinct MI250 accelerator compared against Nvidia’s A100 compute GPU. According to AMD's figures, its offering outperforms Nvidia’s board in every test shown, in some cases by more than two times.
It is common for hardware companies to demonstrate their advantages over rivals, but publishing detailed head-to-head performance numbers against the competition on an official website is rare.
That AMD has done exactly that can mean only one thing: the company is very confident in what it has produced.
AMD has aimed its Instinct MI200 primarily at HPC and AI workloads, and it has tailored its CDNA 2 architecture more for HPC and supercomputers than for AI. The chip manufacturer tested the competing accelerators in various HPC applications and benchmarks covering algebra, physics, cosmology, molecular dynamics, and particle interaction.
Several widely used physics and molecular dynamics HPC applications, such as LAMMPS and OpenMM, have industry-recognized benchmarks that closely resemble real-world workloads. In these tests, AMD’s MI250X outperforms Nvidia’s A100 by roughly 1.4 – 2.4 times.
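To give a sense of what an OpenMM throughput test actually measures, here is a minimal sketch of a GPU benchmark run, not AMD's exact test script (vendors typically quote OpenMM's official benchmark suite). The input file name 'input.pdb' and the platform choice are assumptions for illustration; on AMD Instinct hardware OpenMM would typically use the OpenCL backend or the separate HIP plugin, while Nvidia GPUs use the CUDA platform.

```python
import time
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm import LangevinMiddleIntegrator, Platform
from openmm.unit import nanometer, kelvin, picosecond, femtoseconds

# Hypothetical input structure; real benchmarks use standard systems
# such as solvated proteins with tens of thousands of atoms.
pdb = PDBFile('input.pdb')
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1*nanometer,
                                 constraints=HBonds)
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond,
                                      2*femtoseconds)

# 'OpenCL' is assumed here for portability; swap in 'CUDA' on Nvidia
# hardware or 'HIP' where the AMD plugin is installed.
platform = Platform.getPlatformByName('OpenCL')
simulation = Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()

# Time a fixed number of 2 fs steps and report simulated ns per day,
# the figure of merit molecular dynamics benchmarks usually quote.
steps = 10_000
start = time.time()
simulation.step(steps)
elapsed = time.time() - start
ns_per_day = steps * 2e-6 / elapsed * 86400
print(f'Throughput: {ns_per_day:.1f} ns/day')
```

The reported ns/day figure is what allows a direct comparison between accelerators: the same system and step count are run on each GPU, and the ratio of throughputs gives the kind of 1.4x – 2.4x advantage AMD cites.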
AMD’s Instinct MI200 accelerators are powered by the company’s latest CDNA 2 architecture, which is optimized for high-performance computing (HPC) and will power the upcoming Frontier supercomputer, a machine expected to deliver about 1.5 ExaFLOPS of sustained FP64 performance.
The MI200-series comes with two graphics compute dies (GCDs), each packing 29.1 billion transistors, slightly more than the 26.8 billion transistors inside Navi 21. AMD’s flagship Instinct MI250X accelerator features 14,080 stream processors and comes equipped with 128GB of HBM2E memory.
AMD has produced a true beast; now it is time to see what Nvidia produces in answer to AMD’s taunts.