AMD has long aimed to capture a share of the datacenter GPU market from Nvidia but has faced challenges, mainly due to poor software support for its Instinct GPUs. However, the situation is now improving, and AMD may have found its breakthrough product. AMD’s CEO, Lisa Su, anticipates that the upcoming Instinct MI300-series will become the company’s fastest product to achieve $1 billion in sales.
The All New AMD Instinct MI300
Lisa Su stated, “We now expect datacenter GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year. This growth would make MI300 the fastest product to ramp to $1 billion in sales in AMD history.”
The reason behind the expected success of AMD’s Instinct MI300 series lies in its broader target audience. The new product family is designed not only for supercomputers and select data centers but also for cloud service providers (CSPs) planning to use these processors for AI training and inference. AMD is already shipping Instinct MI300A accelerated processing units for the El Capitan supercomputer, one of the first machines expected to exceed 2 ExaFLOPS of performance. The MI300A features a multi-chiplet design that combines Zen 4 CPU chiplets with CDNA3 GPU chiplets.
Additionally, AMD plans to begin shipments of its Instinct MI300X processor to cloud service providers in the coming weeks. Unlike the MI300A, the MI300X relies solely on CDNA3 GPU chiplets and is aimed squarely at AI and HPC workloads.
On the hardware side, AMD’s Instinct MI300A and MI300X accelerators are progressing as expected, meeting or surpassing internal performance expectations. In terms of software, AMD has expanded its AI software ecosystem, improving the performance and features of its ROCm platform. ROCm has been integrated into mainstream PyTorch and TensorFlow ecosystems, and Hugging Face models are consistently updated and validated to run on AMD hardware, including Instinct accelerators.
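To illustrate what that PyTorch integration looks like in practice, the sketch below shows one common way to sanity-check a ROCm build of PyTorch on an AMD accelerator. This is a minimal, illustrative example rather than an official AMD or PyTorch snippet; it assumes a ROCm build of PyTorch, which exposes the familiar torch.cuda API backed by HIP.

```python
# Minimal sketch (illustrative, not an official AMD example): verify that a
# ROCm build of PyTorch can see and use an AMD Instinct accelerator.
import torch

# torch.version.hip is populated only on ROCm (HIP-backed) builds of PyTorch.
print("HIP runtime:", torch.version.hip)
print("Accelerator available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # On ROCm builds, AMD GPUs are addressed through the torch.cuda API.
    print("Device:", torch.cuda.get_device_name(0))

    # Small matrix multiply as a smoke test of the GPU path.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print("Matmul OK, result shape:", tuple(c.shape))
```

Because ROCm reuses PyTorch’s existing CUDA-style device API, most existing training and inference code can target Instinct hardware without source changes, which is the point of the upstream integration AMD highlights.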