Sunday, May 22, 2022

NVIDIA H100 80 GB PCIe Accelerator is selling for over $30,000 US In Japan


The H100 80 GB PCIe accelerator from NVIDIA, which is based on the Hopper GPU architecture, has been offered for sale in Japan. This is the second accelerator to be listed in the Japanese market along with its pricing, the first being the AMD MI210 PCIe, which was also listed just a few days ago.

The H100 PCIe, unlike the H100 SXM5, has a reduced set of specifications, with 114 SMs enabled versus the full 144 SMs of the GH100 GPU and the 132 SMs of the H100 SXM5. The chip's compute throughput is rated at 3200 TFLOPs of FP8, 1600 TFLOPs of FP16, 800 TFLOPs of FP32, and 48 TFLOPs of FP64. There are additionally 456 Tensor and Texture Units.


Because of its lower peak compute, the H100 PCIe should operate at lower clocks, and so carries a 350W TDP, half the 700W TDP of the SXM5 model. The PCIe card nevertheless keeps the 80 GB memory capacity and 5120-bit bus interface, though it uses HBM2e, with over 2 TB/s of bandwidth.

NVIDIA H100 80 GB PCIe priced sky-high, thanks in part to its larger memory


According to gdm.or.jp, the Japanese distributor gdep.co.jp has listed the NVIDIA H100 80 GB PCIe accelerator for ¥4,313,000 (about $33,120 US), or ¥4,745,950 ($36,445 US) including sales tax. The accelerator will ship in the familiar dual-slot, passively cooled configuration in the second half of 2022. NVLINK bridges are reportedly provided free of charge to customers who purchase multiple cards, though these may arrive at a later date.
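As a quick sanity check on the listing arithmetic, the figures above can be reproduced with Japan's 10% consumption tax and an exchange rate of roughly ¥130 per dollar. Both values are assumptions inferred from the article's own numbers, not official rates, and the listed tax-inclusive total differs slightly from a flat 10% computation (likely rounding or fees):

```python
# Sketch of the listing arithmetic. JPY_PER_USD is an assumption inferred
# from the article's figures (¥4,313,000 ≈ $33,120), not an official rate.
JPY_PER_USD = 130.2   # approximate May 2022 exchange rate (assumption)
SALES_TAX = 0.10      # Japan's consumption tax

base_jpy = 4_313_000                   # listed price before tax
total_jpy = base_jpy * (1 + SALES_TAX) # ≈ ¥4,744,300; listing says ¥4,745,950

print(f"Base:  ¥{base_jpy:,} ≈ ${base_jpy / JPY_PER_USD:,.0f} US")
print(f"Total: ¥{total_jpy:,.0f} ≈ ${total_jpy / JPY_PER_USD:,.0f} US")
```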


The NVIDIA H100 costs more than twice as much as the AMD Instinct MI210, which sells for roughly $16,500 US in the same market. Compared to the AMD HPC accelerator, the NVIDIA solution draws 50W more but posts some extremely strong GPU performance figures. The H100 offers 48 TFLOPs of non-tensor FP32 compute, whereas the MI210 peaks at 45.3 TFLOPs of FP32.

Using sparsity and Tensor operations, the H100 can deliver up to 800 TFLOPs of FP32 horsepower. It also carries a larger 80 GB memory pool versus the MI210's 64 GB. NVIDIA appears to be charging a premium for its superior AI/ML capabilities.
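The comparison above boils down to a couple of ratios, sketched here from the figures quoted in the article (prices and TFLOPs are the article's numbers, not independently verified):

```python
# Side-by-side of the figures quoted in the article (not official spec sheets).
h100 = {"price_usd": 33_120, "fp32_tflops": 48.0, "memory_gb": 80}
mi210 = {"price_usd": 16_500, "fp32_tflops": 45.3, "memory_gb": 64}

price_ratio = h100["price_usd"] / mi210["price_usd"]     # ≈ 2.01x the cost
fp32_ratio = h100["fp32_tflops"] / mi210["fp32_tflops"]  # ≈ 1.06x non-tensor FP32

print(f"H100 costs {price_ratio:.2f}x the MI210 "
      f"for {fp32_ratio:.2f}x the non-tensor FP32 throughput")
```

The gap between a ~2x price and a ~1.06x raw FP32 uplift is what points to the tensor/sparsity throughput and extra memory as the real premium.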



Nivedita Bangari
