AMD filed a patent application for a unique CPU design with a machine learning (ML) accelerator vertically stacked on the I/O die, or IOD, on September 25, 2020. AMD could be working on data-centre system-on-chips (SoCs) with embedded FPGAs or specialised GPUs for machine learning acceleration. Similar to how AMD stacks extra cache on top of its newest CPUs, the company may place an FPGA or GPU on top of its processor's I/O die.
The technology is significant because it would allow the company to include more classes of accelerator in future processor SoCs. A patent does not guarantee that the newly designed chips will ever reach consumers, but it does hint at where AMD's research and development is headed. AMD has not commented on the patent, so we can only guess what the company has planned for the new designs.
AMD's patent, titled 'Direct-connected machine learning accelerator', explains the probable applications for an ML accelerator stacked onto the processor's IOD. An FPGA or compute GPU would process ML workloads while stacked on an IOD fitted with a dedicated accelerator port. The accelerator could work from local memory, either memory attached to the IOD or a separate memory pool of its own.
The term "machine learning" is frequently associated with data centres, and with this technology the chip maker could accelerate ML workloads directly on its server chips. AMD's invention would speed up those workloads without requiring expensive, bespoke silicon in system CPUs, with further gains in power efficiency, data transfer, and capacity.
Because the patent was filed so close to AMD's acquisition of Xilinx, the timing appears strategic. A little over a year and a half after the filing, the patent was finally published at the end of March 2022; if the designs come to fruition, we could see them as early as 2023. AMD Fellow Maxim V. Kazakov is identified as the patent's inventor.
AMD working on new EPYC processors
AMD is working on new EPYC processors, codenamed Genoa and Bergamo, that use a design combining the I/O die with an accelerator. Under the Genoa and Bergamo series, AMD may be able to create AI-focused processors with machine learning accelerators.
Within the EPYC line, the company is targeting a 600W configurable thermal design power (cTDP) for its fifth-generation EPYC Turin processors, roughly double the cTDP of the current EPYC 7003 Milan series. In addition, the company's fourth- and fifth-generation EPYC processors on the SP5 socket may draw up to 700W in short bursts. Adding an ML accelerator to the package would push the power consumption of the Genoa and Bergamo processors higher still. Vertically stacked accelerators, such as the ML-accelerated CPU architecture AMD recently patented, would suit these future server chips.
Thanks to Xilinx technology, the company can now offer compute-focused GPU designs, powerful FPGA designs, the Pensando programmable processor series, and a solid x86 microarchitecture. Multi-chiplet designs are already a reality for the chip maker through its Infinity Fabric interconnect technology. By combining multi-tile datacenter APUs built on TSMC's performance-oriented N4X node with a graphics processor or FPGA accelerator on the enhanced N3E process, vertically stacked datacenter processors would give enterprises more options.