Nvidia will soon bring its dual-GPU H100 NVL to data centers

During its GTC Spring 2023 keynote, Nvidia announced a new dual-GPU product, the H100 NVL. This isn’t going to bring back SLI or multi-GPU gaming, nor will it be one of the best graphics cards for gamers; instead, it is aimed at the growing AI market. According to Nvidia’s information and images, the H100 NVL (H100 NVLink) will have three NVLink connectors on top, with the two adjacent cards slotting into separate PCIe slots.

Although three NVLink connectors were already available on the H100 PCIe, the H100 NVL introduces some additional changes and will only be offered as a paired-card solution. It’s an intriguing change of pace, with a focus on inference performance rather than training, perhaps to accommodate servers that don’t support Nvidia’s SXM option. There are some other notable differences as well, and the NVLink connections should help make up for the bandwidth that NVSwitch provides on the SXM solutions.

Previous H100 SXM and PCIe solutions included 80GB of memory (HBM3 for SXM, HBM2e for PCIe), but the actual package includes six stacks, each with 16GB of memory. It’s unclear whether one stack is completely disabled, or if it’s used for ECC or something else.

What we do know is that the Nvidia H100 NVL will have 94GB per GPU and a total of 188GB HBM3.

(Image credit: Nvidia via Tom's Hardware)

We assume the “missing” 2GB per GPU is for ECC or has something to do with yields, though the latter seems a little strange. Power is slightly higher than the H100 PCIe, at 350–400 watts per GPU (configurable), an increase of up to 50W per GPU. Meanwhile, total performance is effectively double that of the H100 SXM: 134 teraflops of FP64, 1,979 teraflops of TF32, and 7,916 teraflops of FP8 (as well as 7,916 teraops of INT8).
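As a back-of-the-envelope check, the memory figures line up if we assume six 16GB HBM3 stacks per GPU, as on the earlier H100 packages, with 2GB held back per GPU:

```python
# Sanity check of the H100 NVL memory figures (assumes six 16GB HBM3
# stacks per GPU, as on previous H100 packages).
STACKS_PER_GPU = 6      # physical HBM stacks on the H100 package
GB_PER_STACK = 16       # capacity of each stack

physical_gb = STACKS_PER_GPU * GB_PER_STACK   # 96 GB on the package
enabled_gb = 94                               # per-GPU capacity Nvidia quotes
reserved_gb = physical_gb - enabled_gb        # the "missing" 2 GB (ECC? yields?)
total_gb = 2 * enabled_gb                     # dual-card total

print(physical_gb, reserved_gb, total_gb)     # 96 2 188
```

The 188GB total quoted above is simply twice the 94GB enabled per GPU.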


Essentially, this appears to be the same core design as the H100 PCIe, which also supports NVLink, but with more GPU cores enabled and 17.5% more memory. Because of the switch to HBM3, the memory bandwidth is also significantly higher than on the H100 PCIe. H100 NVL has 3.9 TB/s per GPU and a total of 7.8 TB/s.
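The 17.5% figure and the aggregate bandwidth follow directly from the per-GPU numbers (a quick sketch, using the H100 PCIe's 80GB as the baseline):

```python
# H100 NVL vs. H100 PCIe: memory capacity and bandwidth arithmetic.
pcie_gb = 80      # H100 PCIe memory per GPU
nvl_gb = 94       # H100 NVL memory per GPU

extra_pct = (nvl_gb / pcie_gb - 1) * 100
print(f"{extra_pct:.1f}% more memory")        # 17.5% more memory

per_gpu_tbs = 3.9  # HBM3 bandwidth per GPU
print(f"{2 * per_gpu_tbs:.1f} TB/s total")    # 7.8 TB/s total
```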

Because this is a dual-card solution and each card takes up two slots, Nvidia only supports 2 to 4 pairs of H100 NVL cards in its partner and certified systems. Pricing remains to be seen; for reference, a single H100 PCIe can occasionally be found for around $28,000.


Nivedita Bangari
I am a software engineer by profession and technology is my love, learning and playing with new technologies is my passion.