NVIDIA Hopper H100 GPU broke every record on MLPerf AI Benchmark


In its first appearance on the MLPerf AI benchmarks, NVIDIA's Hopper H100 GPU broke every record set by the Ampere A100. The Ampere A100 GPUs continue to lead in the mainstream AI application suite, and Jetson AGX Orin leads in edge computing, even as the Hopper Tensor Core GPUs pave the path for the next major AI revolution.

NVIDIA A100 Tensor Core GPUs and the NVIDIA Jetson AGX Orin module for AI-powered robotics also continued to deliver leading overall inference performance across all MLPerf tests: image and audio recognition, natural language processing, and recommender systems.

The H100, also known as Hopper, set new per-accelerator records on all six neural networks in this round. It showed superior throughput in both the server and offline scenarios, outperforming the NVIDIA Ampere architecture GPUs, which continue to lead in overall MLPerf results, by up to 4.5 times.
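The server and offline scenarios mentioned above measure different things: offline reports maximum batch throughput, while server measures latency on individual queries. A minimal sketch of that distinction, using a dummy workload rather than the real MLPerf LoadGen harness (all names and numbers here are illustrative):

```python
import time

def dummy_model(batch):
    # Placeholder "inference": sum each sample (stands in for a real network).
    return [sum(sample) for sample in batch]

def offline_throughput(samples, batch_size=64):
    """Offline scenario: process all samples in large batches, report samples/second."""
    start = time.perf_counter()
    for i in range(0, len(samples), batch_size):
        dummy_model(samples[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return len(samples) / elapsed

def server_latencies(samples):
    """Server scenario: one query at a time, record per-query latency."""
    latencies = []
    for sample in samples:
        start = time.perf_counter()
        dummy_model([sample])
        latencies.append(time.perf_counter() - start)
    return latencies

samples = [[1.0] * 128 for _ in range(1024)]
print(f"offline throughput: {offline_throughput(samples):.0f} samples/s")
lat = sorted(server_latencies(samples))
print(f"server p99 latency: {lat[int(0.99 * len(lat))] * 1e6:.1f} us")
```

A chip can excel at one scenario and not the other, which is why MLPerf reports them separately; the H100 led in both.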

[Image credit: wccftech]

Hopper performed exceptionally well on the popular BERT model for natural language processing, in part because of its Transformer Engine. BERT is among the largest and most compute-intensive of the MLPerf AI models. These inference benchmarks mark the first public showing of the H100 GPUs, which will be released later this year. The H100 GPUs will take part in future MLPerf training rounds.

The most recent testing showed that NVIDIA A100 GPUs, now offered by major cloud service providers and system manufacturers, maintained their overall performance lead in the mainstream category for AI inference.

A100 GPUs won more tests across data centre and edge computing scenarios than any other submission. The A100 also demonstrated overall leadership in the MLPerf training benchmarks in June, showcasing its capabilities across the full AI workflow.

NVIDIA Orin ran every MLPerf benchmark for edge computing and outperformed every other low-power system-on-a-chip. Compared with its MLPerf debut in April, it demonstrated an increase in energy efficiency of up to 50%. In the previous round, Orin delivered on average 2x higher energy efficiency while running up to 5x faster than the previous-generation Jetson AGX Xavier module.
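Energy efficiency in MLPerf's power results is throughput per watt, so the "5x faster, 2x more efficient" claim above also implies a power ratio. A quick worked illustration of how those ratios compose; the baseline throughput and wattage figures are made up for illustration only, it is the ratios that come from the article:

```python
# Hypothetical Jetson AGX Xavier baseline (illustrative numbers, not measurements)
xavier_throughput = 1000.0   # inferences/second
xavier_power = 30.0          # watts

# Per the article: Orin ran up to 5x faster at 2x the energy efficiency
orin_throughput = 5 * xavier_throughput
orin_efficiency = 2 * (xavier_throughput / xavier_power)  # inferences/joule

# Power implied by those two ratios: 5x the work at 2x the efficiency
# means 5/2 = 2.5x the power draw, regardless of the baseline chosen.
orin_power = orin_throughput / orin_efficiency
print(orin_power / xavier_power)  # -> 2.5
```

The point of the arithmetic is that "faster" and "more efficient" are independent axes: a chip can raise throughput while still drawing more total power, as long as it does more work per joule.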

[Image credit: wccftech]

Orin combines a cluster of powerful Arm CPU cores with an NVIDIA Ampere architecture GPU on a single chip. It supports the full NVIDIA AI software stack and is now available in the NVIDIA Jetson AGX Orin developer kit and in production modules for robotics and autonomous systems. These platforms include autonomous vehicles (NVIDIA Hyperion), medical devices (Clara Holoscan), and robotics (Isaac).
