Since the ChatGPT craze began, AI has moved to the forefront as more and more users adopt it for everyday tasks. Such AI workloads, however, demand high-performance computing hardware: while ChatGPT relies extensively on NVIDIA's GPUs, Intel has stepped up to meet these high-end computing demands.
Intel’s democratization of AI and support for an open ecosystem will meet the computing needs for generative AI
As generative AI models grow larger, power efficiency becomes a critical factor across the full range of complex AI workloads, from data pre-processing to training and inference. Developers need a build-once, deploy-everywhere approach built on flexible, open, energy-efficient and more sustainable solutions that allow all forms of AI, including generative AI, to reach their full potential.
Intel is positioning itself as the obvious choice for enabling generative AI by optimizing popular open-source frameworks, libraries, and tools to extract the best hardware performance while reducing complexity. Its dedicated AI accelerators, together with the accelerators built into 4th Gen Intel® Xeon® Scalable processors, deliver performance and performance-per-watt gains that address the performance, price, and sustainability requirements of generative AI.
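To give a flavor of what this framework-level optimization looks like in practice, here is a minimal sketch using Intel Extension for PyTorch, one of the open-source tools Intel maintains for its hardware. The model choice, input shape, and bfloat16 settings below are illustrative assumptions for this sketch, not a configuration from the announcement.

# Illustrative sketch: applying Intel Extension for PyTorch (ipex) optimizations
# to an off-the-shelf model for CPU inference. The model (resnet50) and the
# bfloat16 setting are assumptions chosen for demonstration purposes.
import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import resnet50

model = resnet50(weights=None)  # any torch.nn.Module works; resnet50 is just an example
model.eval()

# ipex.optimize applies graph and operator optimizations; with bfloat16 it can
# exploit the accelerators built into 4th Gen Xeon processors (e.g. Intel AMX).
model = ipex.optimize(model, dtype=torch.bfloat16)

sample = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(sample)
print(output.shape)

The appeal of this approach is that existing PyTorch code changes very little: the optimization is a drop-in call rather than a rewrite for new hardware.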
Recently, Hugging Face, one of the leading open-source machine learning platforms, published results showing that inference on Intel's AI hardware accelerators runs faster than on any GPU currently available on the market, with Habana® Gaudi®2 running inference on a 176-billion-parameter model 20% faster than NVIDIA's A100.
Intel has also demonstrated power efficiency on a popular computer vision workload, with a Gaudi2 server showing a 1.8x advantage in throughput-per-watt over a comparable A100 server. As the AI market heats up, Intel, a silicon giant for decades, is ready to challenge NVIDIA and offer its computational muscle to customers looking to push AI even further.
Read more about this announcement here.