
Nvidia will soon bring its dual-GPU H100 NVL to data centers

During its GTC Spring 2023 keynote, Nvidia announced a new dual-GPU product, the H100 NVL. This isn’t going to bring back SLI or multi-GPU gaming, and it’s not going to be one of the best graphics cards for gaming, but it is aimed at the growing AI market. According to Nvidia’s information and images, the H100 NVL (H100 NVLink) will have three NVLink connectors on the top, with the two adjacent cards slotting into separate PCIe slots.

Although the three NVLink connectors were already present on the H100 PCIe, the H100 NVL introduces some additional changes and will only be offered as a paired-card solution. It's an intriguing change of pace, with a focus on inference performance rather than training, perhaps to accommodate servers that don't support Nvidia's SXM option. There are some other obvious differences as well, and the NVLink connections should help fill in the missing bandwidth that NVSwitch provides on the SXM solutions.

Previous H100 SXM and PCIe solutions included 80GB of memory (HBM3 for SXM, HBM2e for PCIe), but the actual package includes six stacks, each with 16GB of memory. It’s unclear whether one stack is completely disabled, or if it’s used for ECC or something else.

What we do know is that the Nvidia H100 NVL will have 94GB per GPU and a total of 188GB HBM3.

(Image: Nvidia; credit: Tom's Hardware)

We assume the “missing” 2GB per GPU is for ECC or has something to do with yields, though the latter seems a little strange. Power is slightly higher than the H100 PCIe, at 350-400 watts per GPU (configurable), representing a 50W increase. Meanwhile, total performance is effectively double that of the H100 SXM: 134 teraflops of FP64, 1,979 teraflops of TF32, and 7,916 teraflops of FP8 (as well as 7,916 teraops INT8).

Essentially, this appears to be the same core design as the H100 PCIe, which also supports NVLink, but with more GPU cores enabled and 17.5% more memory. Because of the switch to HBM3, the memory bandwidth is also significantly higher than on the H100 PCIe. H100 NVL has 3.9 TB/s per GPU and a total of 7.8 TB/s.
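The memory and bandwidth figures above are easy to sanity-check. A minimal sketch (variable names are illustrative; all figures come from the article itself):

```python
# Back-of-the-envelope check of the H100 NVL figures quoted above.
STACKS = 6          # HBM3 stacks per GPU package
GB_PER_STACK = 16   # capacity of each stack in GB

full_capacity = STACKS * GB_PER_STACK        # 96 GB physically on the package
enabled_per_gpu = 94                         # GB usable per NVL GPU
missing = full_capacity - enabled_per_gpu    # the "missing" 2 GB (ECC/yield?)

total_memory = 2 * enabled_per_gpu           # 188 GB across the card pair
total_bandwidth = 2 * 3.9                    # 7.8 TB/s across the card pair

print(full_capacity, missing, total_memory, total_bandwidth)
```

The same doubling applies to the quoted compute throughput, since the NVL is simply two GPUs presented as one product.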

For partner and certified systems, Nvidia supports only 2 to 4 pairs of H100 NVL cards, because this is a dual-card solution and each card occupies two slots. Pricing remains to be seen, though a single H100 PCIe can occasionally be found for about $28,000.

Nivedita Bangari
