Nvidia on Tuesday unveiled a redesigned version of its next-generation Grace Hopper Superchip platform, now equipped with HBM3e memory for artificial intelligence and high-performance computing workloads. The new GH200 Grace Hopper pairs the same Grace CPU and GH100 Hopper compute GPU as the original with HBM3e memory of higher capacity and bandwidth.
The new GH200 Grace Hopper Superchip is built around a 72-core Grace CPU with 480 GB of ECC LPDDR5X memory and a GH100 compute GPU with 141 GB of HBM3e memory in six 24 GB stacks on a 6,144-bit memory interface. Although Nvidia physically installs 144 GB of memory, only 141 GB is accessible, a margin that improves yields.
Nvidia's current GH200 Grace Hopper Superchip carries 96 GB of HBM3 memory with a bandwidth of just under 4 TB/s. The new model therefore raises memory capacity by roughly 50% and bandwidth by more than 25%, enough to run larger AI models than the original and deliver measurable performance gains.
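The capacity figures above can be sanity-checked with a quick back-of-the-envelope calculation. This is a minimal sketch using only the numbers quoted in the article; the new chip's exact bandwidth is not stated, so only the capacity side is computed.

```python
# Back-of-the-envelope check of the GH200 memory figures quoted above.
# All inputs come from the article; nothing here is official Nvidia data.

HBM3E_STACKS = 6
STACK_CAPACITY_GB = 24

physical_gb = HBM3E_STACKS * STACK_CAPACITY_GB   # 144 GB physically installed
accessible_gb = 141                              # usable capacity after yield margin

old_capacity_gb = 96                             # current GH200 with HBM3
capacity_gain = accessible_gb / old_capacity_gb - 1

print(f"Physically installed: {physical_gb} GB")
print(f"Capacity gain vs. HBM3 model: {capacity_gain:.0%}")  # ~47%, i.e. "roughly 50%"
```

The ~47% result matches the article's "roughly 50%" capacity claim.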
Nvidia's GH200 Grace Hopper platform with HBM3 is currently in production and will become commercially available next month, according to the company. The GH200 platform with HBM3e, by contrast, is currently sampling and is expected to ship in the second quarter of 2024. Nvidia emphasised that the new GH200 uses the same Grace CPU and GH100 GPU silicon as the original, so no new revisions or steppings are required.
The original GH200 with HBM3 and the enhanced GH200 with HBM3e will coexist on the market, with the latter expected to command a premium for the higher performance afforded by the more modern memory.
Nvidia’s next-generation Grace Hopper Superchip platform with HBM3e is completely compatible with Nvidia’s MGX server specification, making it a drop-in replacement for existing server designs.