AMD Expands Support for ML Development to Additional Radeon GPUs & Announces Support for ONNX Runtime with AMD ROCm 6.0

Building on our earlier announcement of support for the AMD Radeon™ RX 7900 XT, XTX and Radeon PRO W7900 GPUs with AMD ROCm 5.7 and PyTorch, we are now expanding our client-based ML development offering on both the hardware and software fronts with AMD ROCm 6.0.
To start, AI researchers and ML engineers can now also use Radeon PRO W7800 and Radeon RX 7900 GRE GPUs for development. By supporting this broader product range, AMD is giving the AI community access to desktop graphics cards at a wider array of price points and performance levels.
In addition, our solution stack now supports ONNX Runtime. ONNX (Open Neural Network Exchange) is an open format for representing machine learning models, enabling models to be exchanged between different ML frameworks. As a result, users can now run inference on models from a wider range of source frameworks on local AMD hardware. This also introduces INT8 support via MIGraphX – AMD's own graph inference engine – alongside the existing FP32 and FP16 data types.
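As a rough sketch of what this looks like in practice, the snippet below selects AMD's MIGraphX execution provider in ONNX Runtime when it is available and falls back to the CPU otherwise. The model path "model.onnx" is a placeholder, and a ROCm build of onnxruntime is assumed; this is illustrative, not an official recipe.

```python
# Sketch: preferring AMD's MIGraphX execution provider in ONNX Runtime.
# "model.onnx" is a hypothetical placeholder path.

def pick_provider(available):
    """Prefer MIGraphX, then the generic ROCm EP, then CPU."""
    for ep in ("MIGraphXExecutionProvider",
               "ROCMExecutionProvider",
               "CPUExecutionProvider"):
        if ep in available:
            return ep
    return "CPUExecutionProvider"

try:
    import onnxruntime as ort

    ep = pick_provider(ort.get_available_providers())
    session = ort.InferenceSession("model.onnx", providers=[ep])
    # outputs = session.run(None, {"input": input_array})
except Exception:
    pass  # onnxruntime (ROCm build) not installed, or no model file present
```

The explicit fallback chain matters because a stock (non-ROCm) onnxruntime wheel only exposes the CPU provider.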
With the introduction of AMD ROCm 6.0, we are maintaining our support for the PyTorch framework and bringing mixed-precision FP32/FP16 training to Machine Learning workflows.
These are thrilling times for anyone venturing into AI. ROCm for AMD Radeon desktop GPUs presents an excellent solution for AI engineers, ML researchers, and enthusiasts alike, and is no longer confined to those with substantial budgets. AMD is committed to continually extending hardware support and introducing more features to our Machine Learning Development solution stack over time.
via AMD