AMD Unveils AI Platform Strategy, Introduces Advanced Accelerator for Generative AI

Today, AMD revealed its AI Platform strategy, offering customers a comprehensive portfolio of hardware products for cloud, edge, and endpoint applications, supported by deep industry software collaboration.
The highlight is the introduction of the AMD Instinct™ MI300 Series accelerator family, featuring the world’s most advanced accelerator for generative AI, the AMD Instinct MI300X. Powered by the next-gen AMD CDNA™ 3 architecture and supporting up to 192 GB of HBM3 memory, the MI300X delivers exceptional compute and memory efficiency for large language model training and inference.
AMD also introduced the AMD Instinct Platform, which combines eight MI300X accelerators into an industry-standard design for ultimate AI inference and training solutions. Sampling to key customers begins in Q3. Furthermore, the AMD Instinct MI300A, the world’s first APU accelerator for HPC and AI workloads, is now sampling to customers.
Empowering an Open AI Software Ecosystem

AMD showcased the ROCm™ software ecosystem for data center accelerators, emphasizing collaborations with industry leaders. Notably, AMD and the PyTorch Foundation have upstreamed the ROCm software stack, enabling immediate support for PyTorch 2.0 with ROCm release 5.4.2 on all AMD Instinct accelerators.
This integration provides developers with a wide range of AI models powered by PyTorch, ready to use on AMD accelerators. Hugging Face, the leading open platform for AI builders, announced optimization plans for thousands of Hugging Face models on AMD platforms, including AMD Instinct accelerators, Ryzen™ and EPYC processors, Radeon™ GPUs, and Versal™ and Alveo™ adaptive processors.
Robust Networking Portfolio for Cloud and Enterprise

AMD showcased its robust networking portfolio, featuring the AMD Pensando™ DPU, AMD Ultra Low Latency NICs, and AMD Adaptive NICs. The AMD Pensando DPU combines a powerful software stack, “zero trust security,” and a programmable packet processor, making it the most intelligent and performant DPU available.
Deployed at scale by cloud partners like IBM Cloud, Microsoft Azure, and Oracle Compute Infrastructure, the AMD Pensando DPU is also utilized by enterprises such as HPE Aruba and DXC, and within VMware vSphere® Distributed Services Engine™, to accelerate application performance. AMD previewed the next-generation DPU roadmap, codenamed “Giglio,” which promises enhanced performance and power efficiency, expected to be available by the end of 2023.
AMD Pensando Software-in-Silicon Developer Kit (SSDK)

Additionally, AMD announced the AMD Pensando SSDK, enabling customers to rapidly develop or migrate services to deploy on the AMD Pensando P4 programmable DPU. This kit empowers customers to leverage the capabilities of the AMD Pensando DPU for network virtualization and security features within their infrastructure.