AMD Delivers More Options for AI Developers by Extending PyTorch Support to AMD Radeon™ RX 7900 XT
AMD recently released AMD ROCm™ 5.7 support for the Radeon™ RX 7900 XTX and Radeon™ PRO W7900 GPUs, tailored for Machine Learning (ML) development workflows with PyTorch. The company has now broadened that support to the Radeon™ RX 7900 XT GPU, giving AI developers and researchers an expanded range of options.
The Radeon RX 7900 XT GPU is built on the same RDNA™ 3 GPU architecture, equipped with 20GB of fast onboard memory and 168 AI accelerators, making it another robust solution for accelerating ML training and inference workflows on a local desktop.
PyTorch users seeking a local client solution now have additional choices for harnessing the parallel computing power of desktop GPUs, reducing their dependence on cloud-based offerings.
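For readers who want to confirm that their desktop GPU is visible to PyTorch, the following minimal sketch assumes a ROCm-enabled PyTorch build installed per AMD's documentation; on ROCm builds the standard torch.cuda API maps to the HIP backend, so existing CUDA-style code runs unchanged on a Radeon GPU.

    import torch

    # On ROCm builds of PyTorch, torch.cuda calls are routed to the HIP backend,
    # so the usual device checks work as-is on a Radeon GPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print(f"Detected GPU: {torch.cuda.get_device_name(0)}")

        # Run a small matrix multiplication on the GPU as a quick sanity check.
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        c = a @ b
        print(f"Result tensor on {c.device}, shape {tuple(c.shape)}")
    else:
        print("No ROCm-capable GPU detected; falling back to CPU.")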
Erik Hultgren, Software Product Manager at AMD, expressed his enthusiasm about the new addition to their portfolio. “We’re thrilled with this latest expansion of our product line. Together with ROCm, these high-end GPUs make AI more accessible from both a software and hardware standpoint, allowing developers to select the solution that best aligns with their requirements.”
The Red team remains committed to giving the AI community a growing array of solution options through AMD ROCm software, which now supports three high-end RDNA™ 3 architecture-based GPUs for Machine Learning development with PyTorch.