
NVIDIA H100 80 GB PCIe Accelerator is selling for over $30,000 US in Japan

NVIDIA's H100 80 GB PCIe accelerator, based on the Hopper GPU architecture, has gone on sale in Japan. It is the second accelerator to be listed with pricing in the Japanese market, following the AMD MI210 PCIe, which appeared just a few days ago.

The H100 PCIe carries a reduced set of specifications compared to the H100 SXM5, with 114 SMs enabled versus the full 144 SMs of the GH100 GPU and the 132 SMs of the H100 SXM5. The chip's compute horsepower is rated at 3200 TFLOPs of FP8, 1600 TFLOPs of FP16, 800 TFLOPs of FP32 (Tensor) and 48 TFLOPs of FP64. It also packs 456 Tensor Cores and 456 Texture Units.

With its 350W TDP, half the 700W of the SXM5 model, the H100 PCIe should operate at lower clock frequencies and therefore delivers lower peak compute horsepower. The PCIe card does, however, retain the 80 GB memory capacity and the 5120-bit bus interface, albeit with HBM2e (over 2 TB/s of bandwidth).
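
As a quick sanity check on that figure, here is a minimal sketch of the bandwidth math, assuming an HBM2e data rate of roughly 3.2 Gbps per pin (the pin speed is an assumption, not something stated in the listing):

```python
# Rough HBM2e bandwidth estimate for the H100 PCIe's 5120-bit memory bus.
# The ~3.2 Gbps per-pin data rate is an assumed figure, not taken from the listing.
BUS_WIDTH_BITS = 5120   # memory interface width quoted above
PIN_RATE_GBPS = 3.2     # assumed HBM2e data rate per pin

bandwidth_gb_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8  # bits/s -> bytes/s
print(f"~{bandwidth_gb_s:.0f} GB/s (~{bandwidth_gb_s / 1000:.1f} TB/s)")
# prints: ~2048 GB/s (~2.0 TB/s), in line with the ">2 TB/s" figure above
```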

NVIDIA H100 80 GB PCIe pricing touches the sky, partly due to its larger memory

According to gdm-or-jp, the Japanese distribution company gdep-co-jp has listed the NVIDIA H100 80 GB PCIe accelerator at ¥4,313,000 (about $33,120 US), or ¥4,745,950 (about $36,445 US) including sales tax. The accelerator will ship in the regular dual-slot, passively cooled configuration in the second half of 2022. NVLINK bridges are also said to be provided free of charge to customers who purchase multiple cards, though these may arrive at a later date.
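
For reference, here is a minimal sketch of how those numbers line up, assuming Japan's 10% consumption tax and an exchange rate of roughly ¥130.2 per US dollar (both assumptions, since the listing does not state them):

```python
# Rough reconstruction of the listed H100 PCIe pricing in USD.
# The ~130.2 JPY/USD exchange rate and the 10% consumption tax are assumptions,
# not figures taken from the listing itself.
PRICE_JPY = 4_313_000           # listed price before tax (yen)
PRICE_JPY_WITH_TAX = 4_745_950  # listed price including sales tax (yen)
JPY_PER_USD = 130.2             # assumed exchange rate at the time of the listing

print(f"Before tax: ~${PRICE_JPY / JPY_PER_USD:,.0f} US")            # ~$33,126
print(f"With tax:   ~${PRICE_JPY_WITH_TAX / JPY_PER_USD:,.0f} US")   # ~$36,451
print(f"Implied tax rate: {PRICE_JPY_WITH_TAX / PRICE_JPY - 1:.1%}") # ~10.0%
```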

The NVIDIA H100 costs more than twice as much as the AMD Instinct MI210, which sells for roughly $16,500 US in the same market. The NVIDIA solution also posts some extremely high GPU performance figures compared to the AMD HPC accelerator, while drawing 50W more power. The H100 delivers 48 TFLOPs of non-tensor FP32 compute, whereas the MI210 peaks at 45.3 TFLOPs of FP32.

Using its Tensor Cores with sparsity, the H100 can provide up to 800 TFLOPs of FP32 horsepower. It also offers a larger memory capacity of 80 GB versus the MI210's 64 GB. NVIDIA appears to be charging a premium for its superior AI/ML capabilities.
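
To put the price gap in perspective, here is a minimal sketch of cost per non-tensor FP32 TFLOP using the figures quoted above (the USD prices are the approximate listing conversions, not official MSRPs):

```python
# Rough price-per-TFLOP comparison based on the figures quoted in this article.
cards = {
    "NVIDIA H100 80GB PCIe": {"price_usd": 33_120, "fp32_tflops": 48.0},
    "AMD Instinct MI210":    {"price_usd": 16_500, "fp32_tflops": 45.3},
}

for name, spec in cards.items():
    usd_per_tflop = spec["price_usd"] / spec["fp32_tflops"]
    print(f"{name}: ~${usd_per_tflop:,.0f} per FP32 TFLOP")
# H100: ~$690/TFLOP vs MI210: ~$364/TFLOP -- the premium buys Tensor Core
# throughput and the extra 16 GB of memory rather than raw FP32.
```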

Nivedita Bangari
I am a software engineer by profession, and technology is my love; learning and playing with new technologies is my passion.
