NVIDIA has revealed that they are leveraging AI to optimize and accelerate next-generation chip designs by up to 30x.
NVIDIA is using AI to accelerate chip design and to improve performance, power consumption, and overall cost per chip
On the NVIDIA Developer blog, authors and developers Anthony Agnesina and Mark Ren posted a technical walkthrough of how AutoDMP, short for Automated DREAMPlace-based Macro Placement, uses GPUs and artificial intelligence to assist with chip design. A companion paper (PDF), “AutoDMP: Automated DREAMPlace-based Macro Placement,” was published a day earlier by Agnesina and co-authors for this year’s International Symposium on Physical Design. Their research showed that AutoDMP could optimize the placement of 2.7 million cells and 320 macros in three hours on an NVIDIA DGX Station A100.
AutoDMP is designed to plug into the Electronic Design Automation (EDA) tools that chip makers already use. Working in tandem with the EDA flow, it accelerates a step on which older systems spend considerable time: finding suitable locations for the blocks in an initial chip design. In one demonstration of AutoDMP's capability, the placement tool laid out a 256-core RISC-V design incorporating 320 memory macros and 2.7 million standard cells, saving the development team an immense amount of time by solving the problem in roughly three hours.
Macro placement has a tremendous impact on the landscape of the chip, directly affecting many design metrics, such as area and power consumption. Thus, improving the placement of these macros is critical to optimizing each chip’s performance and efficiency.
— NVIDIA Developer blog post, “AutoDMP Optimizes Macro Placement for Chip Design with AI and GPUs”
What is fascinating is that this process was traditionally done by hand, with macros placed according to previous designs that had mostly worked. The most significant hurdle was finding the best-suited macro placement, which was a major time sink.
Although the concurrent cell and macro placement method achieves promising results, we believe it can be further improved. Numerical algorithms have many algorithm parameters, constituting a large design space. The final quality of the placement depends on which parameter configuration is chosen. Expanding this design space with concurrent cell and macro placement can further increase the suboptimality gap.
In addition, conventional placement algorithms combine multiple design objectives into a single objective for optimization. A multi-objective optimization framework could extend the search space and reduce the optimality gap.
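Since the blog frames the placer's numerical parameters as a large design space judged on several competing metrics, a rough sketch may help illustrate what a multi-objective parameter sweep looks like. The sketch below uses plain random sampling and a synthetic stand-in for the placement run; AutoDMP itself drives real DREAMPlace runs with multi-objective Bayesian optimization, and the parameter names and the run_placement function here are illustrative assumptions, not the actual AutoDMP interface.

```python
import random

# Illustrative knobs a DREAMPlace-style placer might expose; these names are
# assumptions for this sketch, not the actual AutoDMP parameter set.
PARAM_SPACE = {
    "density_weight": (1e-4, 1e-1),   # weight of the density penalty
    "learning_rate": (1e-3, 1e-1),    # step size of the numerical optimizer
    "init_x": (0.0, 1.0),             # normalized initial placement center (x)
    "init_y": (0.0, 1.0),             # normalized initial placement center (y)
}

def sample_config():
    """Draw one parameter configuration uniformly from the search space."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_SPACE.items()}

def run_placement(config):
    """Synthetic stand-in for a full GPU placement run. In AutoDMP this step
    would launch DREAMPlace and read metrics back from the EDA flow."""
    w = config["density_weight"]
    return {
        # Toy trade-off: a stronger density penalty lowers congestion but
        # lengthens wires, so no single configuration wins on both metrics.
        "wirelength": 1.0 / (w + 0.01) + random.gauss(0, 0.5),
        "congestion": 10.0 * w + abs(random.gauss(0, 0.05)),
    }

def dominates(a, b):
    """True if result a is at least as good as b on every metric (lower is
    better) and strictly better on at least one."""
    keys = ("wirelength", "congestion")
    return all(a[k] <= b[k] for k in keys) and any(a[k] < b[k] for k in keys)

def pareto_front(results):
    """Keep only results that no other result dominates."""
    return [r for r in results
            if not any(dominates(o, r) for o in results if o is not r)]

# Evaluate a batch of configurations and keep the Pareto-optimal trade-offs.
results = []
for _ in range(32):
    cfg = sample_config()
    metrics = run_placement(cfg)
    results.append({**metrics, "config": cfg})

for r in pareto_front(results):
    print(round(r["wirelength"], 2), round(r["congestion"], 3), r["config"])
```

Swapping the random sampling for a Bayesian optimizer and the synthetic metrics for real post-placement reports is the part AutoDMP automates at scale on GPUs.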
Using the DREAMPlace engine for macro and cell placement, AutoDMP formulates the task as a wirelength optimization problem under density constraints and solves it numerically on the GPU to arrive at an optimized layout (the macros appear as the red blocks in the animation below).
The global placement animation. Macros (in red) and standard cells (in gray) are spread together inside the floor plan outline during optimization to minimize the wire length under density constraints.
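For readers unfamiliar with that formulation, here is a deliberately tiny sketch of the underlying idea: treat cell coordinates as continuous variables, minimize total wirelength, and add a penalty that discourages cells from piling up in one region. This is only an illustration under simplified assumptions; DREAMPlace itself uses smooth wirelength models, an electrostatics-based density function, and GPU kernels tuned for millions of cells.

```python
import torch

torch.manual_seed(0)

# Toy instance: 200 movable cells in a unit floor plan and 80 two-pin nets,
# each connecting a random pair of cells. Sizes and weights are illustrative.
num_cells, num_nets = 200, 80
pins = torch.randint(0, num_cells, (num_nets, 2))
pos = torch.rand(num_cells, 2, requires_grad=True)   # (x, y) of each cell

def wirelength(pos):
    """Total half-perimeter wirelength of the two-pin nets."""
    a, b = pos[pins[:, 0]], pos[pins[:, 1]]
    return (a - b).abs().sum()

def density_penalty(pos, sigma=0.05):
    """Smooth stand-in for a density constraint: penalize pairs of cells that
    sit within roughly sigma of each other. Real placers use bin-based,
    electrostatics-style density models rather than this pairwise proxy."""
    diff = pos.unsqueeze(1) - pos.unsqueeze(0)   # pairwise (dx, dy)
    d2 = (diff ** 2).sum(dim=-1)                 # squared distances
    return torch.exp(-d2 / (2 * sigma ** 2)).sum()

# Gradient-based minimization of the weighted objective, conceptually similar
# to how GPU placers update millions of coordinates in parallel. No floor plan
# boundary or macro handling is modeled in this toy.
opt = torch.optim.Adam([pos], lr=0.01)
for step in range(300):
    opt.zero_grad()
    loss = wirelength(pos) + 0.02 * density_penalty(pos)
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step}: wirelength {wirelength(pos).item():.2f}")
```

Folding the density term into a single weighted loss is exactly the kind of combined objective the quoted passage says a multi-objective framework could improve on.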
Ultimately, NVIDIA would love to see how this new “methodology can unlock new prospective design space exploration techniques.”
Modifying the initial location of the cells from the center to the upper right results in two very different final placement landscapes.
NVIDIA has uploaded AutoDMP to GitHub as an open-source project for anyone to test and utilize themselves.
Last week, it was reported that NVIDIA, in collaboration with TSMC, ASML, and Synopsys, would pool their collective knowledge and resources to accelerate next-gen chip manufacturing, with NVIDIA's cuLitho library speeding up computational lithography by as much as 40x.
Last year, Synopsys' DSO.ai, an autonomous toolkit, reached a milestone of over one hundred commercial tape-outs with its AI-driven design flow.