AMD has officially released the specifications for its Instinct MI300 ‘CDNA 3’ accelerator, which combines Zen 4 CPU cores in a 5nm 3D chiplet package. The most recent AMD Instinct MI300 accelerator specifications show that this exascale APU will be a monster of a chiplet design. The chip will be made up of multiple 5nm 3D chiplet packages, totaling 146 billion transistors.
Among those transistors are numerous core IPs, memory interfaces, interconnects, and other components. The CDNA 3 architecture is the foundation of the Instinct MI300, but the APU also includes 24 Zen 4 Data Center CPU cores and 128 GB of next-generation HBM3 memory running in an 8192-bit wide bus configuration, which is simply mind-blowing.
During its Financial Analyst Day 2022, AMD confirmed that the MI300 will be a multi-chip, multi-IP Instinct accelerator that includes not only the next-generation CDNA 3 GPU cores but also the next-generation Zen 4 CPU cores.
To enable greater than 2 exaflops of double precision processing power, the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE have teamed up with AMD to design El Capitan, expected to be the world’s fastest supercomputer with delivery anticipated in early 2023. El Capitan will leverage next generation products that incorporate improvements from the custom processor design in Frontier.
- Next generation AMD EPYC processors, codenamed “Genoa”, will feature the “Zen 4” processor core to support next generation memory and I/O subsystems for AI and HPC workloads
- Next generation AMD Instinct GPUs based on new compute-optimized architecture for HPC and AI workloads will use next generation high bandwidth memory for optimum deep learning performance
- This design will excel at AI and machine-learning data analysis to create models that are faster, more accurate, and capable of quantifying the uncertainty of their predictions.
— via AMD
In its most recent performance tests, AMD showed the Instinct MI300 outperforming the Instinct MI250X by 8x in AI performance (TFLOPs) and 5x in AI performance per watt (TFLOPs/watt).
AMD’s Instinct MI300 ‘CDNA 3’ APUs will be built on both 5nm and 6nm process nodes. The chip will include the next version of Infinity Cache as well as the 4th Generation Infinity architecture, which will enable CXL 3.0 ecosystem support. The Instinct MI300 accelerator will feature a unified memory APU architecture and new math formats, allowing for a tremendous 5x performance-per-watt increase over CDNA 2.
AMD also predicts that AI performance will be more than 8 times that of the CDNA 2-based Instinct MI250X accelerators. The unified memory APU architecture (UMAA) on the CDNA 3 GPU will connect the CPU and GPU to a single shared HBM memory pool, eliminating redundant memory copies and delivering reduced TCO.
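It is worth noting what these two multipliers imply when taken together. A quick back-of-the-envelope sketch (the MI250X baseline of ~383 TFLOPS peak FP16 is an assumption drawn from AMD's public spec sheet, not a figure stated here):

```python
# Back-of-the-envelope arithmetic on AMD's claimed MI300 multipliers.
# ASSUMPTION: the MI250X baseline (~383 TFLOPS peak FP16) comes from AMD's
# public spec sheet, not from this article; the multipliers are AMD's claims.
mi250x_ai_tflops = 383.0        # assumed MI250X peak FP16 throughput
perf_multiplier = 8.0           # AMD's claimed AI performance gain
perf_per_watt_multiplier = 5.0  # AMD's claimed efficiency gain

implied_mi300_tflops = mi250x_ai_tflops * perf_multiplier
print(f"Implied MI300 AI peak: {implied_mi300_tflops:.0f} TFLOPS")  # ~3064

# If raw performance rises 8x but performance per watt rises only 5x,
# the implied power draw rises by the ratio of the two: 8 / 5 = 1.6x.
power_multiplier = perf_multiplier / perf_per_watt_multiplier
print(f"Implied power increase vs MI250X: {power_multiplier:.1f}x")  # 1.6x
```

In other words, the gap between the two multipliers suggests the MI300 trades some additional board power for its large absolute performance jump.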