Intel’s AI hardware accelerators are better than any GPU currently available on the market


Since the rise of ChatGPT, AI has moved to the forefront as more and more users adopt it for mainstream tasks. Such AI workloads demand high-performance computing: while ChatGPT relies extensively on NVIDIA's GPUs, Intel has stepped up to meet these high-end computing demands.

Intel’s democratization of AI and support for an open ecosystem will meet the computing needs for generative AI

As generative AI models grow larger, power efficiency becomes a critical factor in driving productivity across a wide range of complex AI workloads, from data pre-processing to training and inference. Developers need a build-once-and-deploy-everywhere approach with flexible, open, energy-efficient, and more sustainable solutions that allow all forms of AI, including generative AI, to reach their full potential.

Figure: Automatic evaluation of generated language output by BLOOMZ models (up to 176B parameters) on 100K LMentry prompts, using Habana Gaudi accelerators

Intel is taking steps to ensure it is the obvious choice for enabling generative AI, optimizing popular open-source frameworks, libraries, and tools to extract the best hardware performance while removing complexity. Intel's AI hardware accelerators, along with the built-in accelerators in 4th Gen Intel® Xeon® Scalable processors, deliver performance and performance-per-watt gains that address the performance, price, and sustainability needs of generative AI.

Recently, Hugging Face, the leading open-source platform for machine learning, published results showing that inference runs faster on Intel's AI hardware accelerators than on any GPU currently available on the market, with Habana® Gaudi®2 running inference on a 176-billion-parameter model 20% faster than NVIDIA's A100.


Gaudi2 has also demonstrated power efficiency when running a popular computer vision workload, showing a 1.8x advantage in throughput-per-watt over a comparable A100 server. So, as the AI market heats up, Intel, a silicon giant for decades, is ready to challenge NVIDIA and offer its computational bandwidth to customers who want to push AI even further.
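The 1.8x figure above is a ratio of throughput-per-watt values. As a minimal sketch of how such a comparison is computed, here is the arithmetic with hypothetical server figures (the article reports only the ratio, not the underlying measurements):

```python
# Hypothetical numbers for illustration only -- the article publishes the
# 1.8x ratio, not the raw throughput or power measurements.

def throughput_per_watt(samples_per_sec: float, power_watts: float) -> float:
    """Efficiency metric: inference work done per unit of power drawn."""
    return samples_per_sec / power_watts

# Assumed (made-up) figures chosen to reproduce the reported ratio:
gaudi2 = throughput_per_watt(samples_per_sec=5400.0, power_watts=600.0)  # 9.0
a100 = throughput_per_watt(samples_per_sec=3000.0, power_watts=600.0)    # 5.0

advantage = gaudi2 / a100
print(f"Gaudi2 throughput-per-watt advantage: {advantage:.1f}x")  # 1.8x
```

Note that the metric rewards either higher throughput or lower power draw; two servers with identical throughput can still differ sharply in efficiency if one draws less power under load.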

Read more about this announcement here.

