NVIDIA claims ARM-based A100 servers are capable of beating x86

NVIDIA’s deal to acquire ARM has been stuck for a whole year now, and there is still no sign of it closing. Even so, the company has already started promoting the architecture in benchmarks. Recently, NVIDIA showed that an A100 GPU-equipped server delivers very similar performance whether it is paired with an ARM or an x86 CPU.

However, the fact remains that while ARM beats the socks off of x86 in low-power, high-efficiency scenarios, the architecture has not been able to scale that efficiency up to high clock speeds. The primary reason for this is leakage, and it is also why Apple’s new A15 chips have been a relative disappointment so far.

Servers, on the other hand, are all about high-performance compute and are an area where x86 has typically reigned supreme. Yet the ARM-based A100 server managed to beat its x86 counterpart in the niche 3D-UNet workload.

“Arm, as a founding member of MLCommons, is committed to the process of creating standards and benchmarks to better address challenges and inspire innovation in the accelerated computing industry,” said David Lecomber, a senior director of HPC and tools at Arm.

“The latest inference results demonstrate the readiness of Arm-based systems powered by Arm-based CPUs and NVIDIA GPUs for tackling a broad array of AI workloads in the data centre.”

However, when it comes to inference, GPUs remain king. And the green team didn’t pull any punches: the company pointed out that an A100 GPU is up to 104x faster than a CPU in MLPerf benchmarks.
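To put that kind of comparison in perspective, here is a toy sketch, not NVIDIA’s MLPerf methodology, that times the same large matrix multiplication (the core operation behind neural-network inference) on a CPU and a CUDA GPU using PyTorch. The actual speedup you see will depend entirely on the hardware involved.

```python
# Toy illustration only: times a large matrix multiplication on CPU and GPU.
# The 104x figure quoted above comes from NVIDIA's full MLPerf submissions,
# not from a micro-benchmark like this one. Assumes PyTorch with a CUDA GPU.
import time
import torch

def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)                 # warm-up so one-time setup cost is excluded
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()       # wait for all queued GPU work to finish
    return (time.perf_counter() - start) / repeats

cpu_time = time_matmul("cpu")
print(f"CPU: {cpu_time:.4f} s per matmul")

if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"GPU: {gpu_time:.4f} s per matmul ({cpu_time / gpu_time:.0f}x faster)")
```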

Inference is what happens when a computer runs AI software to recognize an object or make a prediction. It’s a process that uses a trained deep learning model to filter data, finding results no human could capture.
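As a rough illustration of what such an inference pass looks like in code, here is a minimal sketch using a pretrained ResNet-50, the same model family behind MLPerf’s image-classification test. It assumes PyTorch and torchvision 0.13 or newer, and uses random tensors in place of real, preprocessed images.

```python
# Minimal GPU inference sketch with a pretrained ResNet-50.
# Random input data stands in for a real, preprocessed image batch.
import torch
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load ImageNet-pretrained weights and switch the model to inference mode.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).to(device).eval()

# A batch of 8 "images" (3 channels, 224x224) -- placeholder for real inputs.
batch = torch.randn(8, 3, 224, 224, device=device)

with torch.inference_mode():            # no gradients are needed for inference
    logits = model(batch)               # forward pass only
    predictions = logits.argmax(dim=1)  # predicted ImageNet class per image

print(predictions.tolist())
```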

MLPerf’s inference benchmarks are based on today’s most popular AI workloads and scenarios, covering computer vision, medical imaging, natural language processing, recommendation systems, reinforcement learning, and more.

The A100 GPU was put through the popular ResNet-50 image-classification benchmark as well as the natural language processing tests, and it reigned supreme in every one of them. Meanwhile, NVIDIA’s deal to acquire ARM remains stuck over regulatory issues; the European Union appears to have no desire to let the UK-based chip designer fall into the hands of an American company.

source
