Nvidia is one of the top manufacturers of chips for accelerating artificial intelligence (AI) and machine learning (ML), so it is fitting that it is also among the forerunners in applying AI to chip design itself. In a paper and blog post, the company has shown how its AutoDMP system uses GPU-accelerated AI/ML optimisation to speed up modern chip floor-planning by 30X over existing techniques.
AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is intended to plug into the Electronic Design Automation (EDA) systems chip designers use, speeding up and optimising the laborious process of determining the best locations for a processor's building blocks.
In one of Nvidia’s demonstrations of AutoDMP in action, the program used artificial intelligence to find the best configuration for a design with 256 RISC-V cores, taking into account 2.7 million standard cells and 320 memory macros.
Running on a single Nvidia DGX Station A100, AutoDMP needed just 3.5 hours to produce an optimised layout.
Macro placement has a significant influence on the chip's landscape, "directly affecting many design metrics, such as area and power consumption," according to Nvidia. Optimising placement is therefore a crucial design task for maximising the chip's performance and efficiency, which directly benefits the customer.
GPU-accelerated placement algorithms can run up to 30 times faster than earlier placement techniques, and AutoDMP also supports cells of various sizes. In Nvidia's animation, AutoDMP places standard cells (grey) and macros (red) so as to minimise wire length within a limited area.
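To make the wire-length objective concrete, here is a minimal sketch of half-perimeter wirelength (HPWL), a standard proxy metric that placement tools in this family minimise. The function and example data are illustrative assumptions for this article, not code from Nvidia's AutoDMP.

```python
# Hedged sketch: half-perimeter wirelength (HPWL), a common proxy for
# routed wire length in placement. All names and data are illustrative.

def hpwl(net_pins):
    """Sum the half-perimeter of each net's pin bounding box.

    net_pins: list of nets, where each net is a list of (x, y)
    pin coordinates for the cells and macros it connects.
    """
    total = 0.0
    for pins in net_pins:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        # Half-perimeter of the axis-aligned bounding box of the pins.
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example: two small nets.
nets = [
    [(0, 0), (2, 3)],          # bounding box 2 x 3 -> HPWL 5
    [(1, 1), (4, 1), (1, 5)],  # bounding box 3 x 4 -> HPWL 7
]
print(hpwl(nets))  # 12.0
```

A placer moves cells and macros to reduce this total while keeping the layout legal (no overlaps, within the die area); GPU acceleration lets such an objective be evaluated and differentiated across millions of cells at once.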