Nvidia will soon bring its dual-GPU H100 NVL to data centers

During its GTC Spring 2023 keynote, Nvidia announced a new dual-GPU product, the H100 NVL. This isn’t going to bring back SLI or multi-GPU gaming, and it won’t be one of the best graphics cards for gamers; instead, it’s aimed at the growing AI market. According to Nvidia’s information and images, the H100 NVL (H100 NVLink) will have three NVLink connectors on the top, with the two adjacent cards slotting into separate PCIe slots.

Although three NVLink connectors were already present on the H100 PCIe, the H100 NVL introduces some additional changes and will only be offered as a paired-card solution. It’s an intriguing change of pace, with a focus on inference performance rather than training, perhaps to accommodate servers that don’t support Nvidia’s SXM option. There are some other obvious differences as well, and the NVLink connections should help make up for the bandwidth that NVSwitch provides on the SXM solutions but that is missing here.

Previous H100 SXM and PCIe solutions included 80GB of memory (HBM3 for SXM, HBM2e for PCIe), but the actual package includes six stacks of 16GB each, for 96GB of physical capacity. It’s unclear whether one stack is completely disabled, or whether it’s used for ECC or something else.

What we do know is that the Nvidia H100 NVL will have 94GB per GPU and a total of 188GB HBM3.
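The memory math above can be sanity-checked in a few lines. A minimal sketch, using the figures from the article; how the per-stack capacity is actually partitioned is Nvidia's call, and the split shown here is just arithmetic, not an official spec:

```python
# H100 NVL memory figures as reported in the article.
STACKS_PER_GPU = 6    # HBM3 stacks on the package
GB_PER_STACK = 16     # capacity per stack
USABLE_PER_GPU = 94   # GB exposed per GPU on the H100 NVL
GPUS_PER_PAIR = 2     # the NVL is sold as a paired-card solution

physical_per_gpu = STACKS_PER_GPU * GB_PER_STACK      # 96 GB on the package
reserved_per_gpu = physical_per_gpu - USABLE_PER_GPU  # the "missing" 2 GB
total_usable = USABLE_PER_GPU * GPUS_PER_PAIR         # 188 GB per pair

print(physical_per_gpu, reserved_per_gpu, total_usable)  # 96 2 188
```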

[Image: Nvidia H100 NVL. Credit: Tom's Hardware]

We assume the “missing” 2GB per GPU is for ECC or has something to do with yields, though the latter seems a little strange. Power is slightly higher than the H100 PCIe, at 350-400 watts per GPU (configurable), representing a 50W increase. Meanwhile, total performance is effectively double that of the H100 SXM: 134 teraflops of FP64, 1,979 teraflops of TF32, and 7,916 teraflops of FP8 (as well as 7,916 teraops of INT8).
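Since those throughput numbers are for the two-card pair, halving them gives the per-GPU figures. A quick sketch using the dual-card totals quoted above; the per-GPU values are simple division, not numbers from an official Nvidia spec sheet:

```python
# Dual-card H100 NVL throughput totals from the article
# (teraflops, except INT8 which is teraops).
dual_totals = {"FP64": 134, "TF32": 1979, "FP8": 7916, "INT8": 7916}

# Per-GPU figures: each card contributes half of the pair's total.
per_gpu = {precision: total / 2 for precision, total in dual_totals.items()}

print(per_gpu)  # {'FP64': 67.0, 'TF32': 989.5, 'FP8': 3958.0, 'INT8': 3958.0}
```

The 67 teraflops of FP64 per GPU lines up with the "double the H100 SXM" framing in the article.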

Essentially, this appears to be the same core design as the H100 PCIe, which also supports NVLink, but with more GPU cores enabled and 17.5% more memory. Because of the switch to HBM3, the memory bandwidth is also significantly higher than on the H100 PCIe. H100 NVL has 3.9 TB/s per GPU and a total of 7.8 TB/s.
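The bandwidth and capacity comparison can be sketched the same way. The NVL figures are from the article; the 80GB / 2.0 TB/s numbers used for the H100 PCIe are the commonly quoted specs for that card and should be treated as assumptions here:

```python
# H100 NVL per-GPU figures from the article.
nvl_bw_per_gpu = 3.9   # TB/s, HBM3
nvl_mem = 94           # GB per GPU

# H100 PCIe figures (assumed, commonly quoted specs).
pcie_bw = 2.0          # TB/s, HBM2e
pcie_mem = 80          # GB

total_bw = nvl_bw_per_gpu * 2                # 7.8 TB/s for the pair
mem_uplift = (nvl_mem / pcie_mem - 1) * 100  # ~17.5% more memory per GPU

print(round(total_bw, 1), round(mem_uplift, 1))  # 7.8 17.5
```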

Because this is a dual-card solution and each card takes up two slots, Nvidia only supports 2 to 4 pairs of H100 NVL cards in its partner and certified systems. Pricing remains to be seen, though a single H100 PCIe can occasionally be found for about $28,000.

Nivedita Bangari
