With so many startups and large semiconductor firms racing to get new AI chips to market, electronic design automation (EDA) and design service firms like Synopsys, Cadence and Mentor Graphics are looking for new approaches to help designers speed their time to market. Ironically, one of the approaches being taken is to use AI to help build better AI chips. The back-end of the design process, called physical design, is especially ripe for AI-enabled tools, and early adopters are realizing excellent results. For readers interested in more details, please see my research paper on this topic here.
For those unfamiliar with chip building, allow me to set up the problem. Once a chip’s logic is finalized, which can take months or years, the physical design process begins, wherein engineers must determine where to put each block of transistors and how to interconnect them. This process is called place-and-route. With billions of transistors on a modern chip, this design layout and testing typically takes several engineers some 20 to 30 weeks to complete. If they get it wrong, the chip could be slower than designed, consume more power, cost more than planned and/or simply not work at all. But there is not one “right” way to lay out the chip; there are a gazillion possible options, involving trade-offs on the chip’s three primary design goals: performance, power, and area (or PPA).
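To make the search problem concrete, here is a deliberately tiny sketch of place-and-route: four invented blocks arranged in a single row, scored only by total wire length. Everything in it (the block names, widths, and netlist) is made up for illustration; real tools weigh timing, power, and area with far richer models.

```python
import itertools

# Toy sketch of place-and-route as a search problem. The blocks, widths,
# and netlist below are invented; real designs involve billions of
# transistors and far richer cost models.
BLOCK_WIDTHS = {"cpu": 4, "cache": 3, "io": 2, "dsp": 3}
NETS = [("cpu", "cache"), ("cpu", "dsp"), ("dsp", "io")]  # wired block pairs

def wire_length(order):
    """Total center-to-center wire distance when blocks sit in one row."""
    centers, x = {}, 0.0
    for name in order:
        centers[name] = x + BLOCK_WIDTHS[name] / 2
        x += BLOCK_WIDTHS[name]
    return sum(abs(centers[a] - centers[b]) for a, b in NETS)

# A real cost function would mix all three PPA goals; here wire length
# stands in for them, since long wires hurt timing and power alike.
best = min(itertools.permutations(BLOCK_WIDTHS), key=wire_length)
print(best, wire_length(best))
```

With only four blocks, all 24 orderings can be checked exhaustively; the point of the article is precisely that real chips make this brute-force approach impossible.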
In effect, the design team faces a massive “search” problem: floorplan exploration alone can encompass a staggering 10^90,000 possibilities. To put that into perspective, the game of chess has “only” about 10^123 states, and the game of Go comprises some 10^360 states. The gaming analogy is useful, since physical design and gaming can now both be “played” by AI software. While AI can require tremendous computational resources, it can sort through unimaginably large sets of alternatives, optimizing parameters to achieve a set of goals, which in the case of chip design is some optimal mix of PPA.
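A quick back-of-the-envelope calculation shows how fast these search spaces explode. The grid size and macro count below are invented for illustration, and counting orderings of macros on grid sites is a crude simplification of floorplanning, but it gives a feel for the numbers:

```python
from math import lgamma, log

# Rough sizing of a placement search space. The grid size and macro
# count are invented; this counts only orderings of distinct macros
# on distinct sites, a crude simplification of real floorplanning.
def log10_placements(sites, blocks):
    """log10 of sites! / (sites - blocks)!: the number of ways to drop
    `blocks` distinct macros onto `sites` distinct grid locations."""
    return (lgamma(sites + 1) - lgamma(sites - blocks + 1)) / log(10)

# Even 1,000 macros on a modest 100 x 100 grid yields a count with
# nearly 4,000 digits, and real floorplans have far more freedom.
print(round(log10_placements(10_000, 1_000)))
```

No computer can enumerate a space like that, which is why the problem calls for learned search strategies rather than brute force.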
Winning the game with reinforcement learning
There is a branch of machine learning called reinforcement learning (RL) which has been used to solve these sorts of gaming problems through trial-and-error learning: let the computer “try” a solution, observe whether the result gets better or worse, and reinforce the parameters of that solution accordingly. Then repeat a few trillion times until the solution converges, or “wins.”
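To ground the idea, here is a bare-bones reinforcement loop on a toy placement problem. Everything in it is invented for illustration, and it is far closer to a simple multi-armed bandit than to the deep RL used in production EDA tools: the “actions” are swaps of two blocks in an ordering, and each action’s value estimate is nudged toward the reward (wire-length improvement) it earns.

```python
import random

random.seed(0)

# Toy reinforcement loop: learn which block swaps tend to shorten wires.
# Block widths and connectivity are invented for illustration.
WIDTHS = [4, 3, 2, 3, 5, 1]              # widths of six blocks in a row
NETS = [(0, 1), (0, 3), (3, 2), (4, 5)]  # pairs of blocks joined by wires

def wirelength(order):
    centers, x = {}, 0.0
    for b in order:
        centers[b] = x + WIDTHS[b] / 2
        x += WIDTHS[b]
    return sum(abs(centers[a] - centers[b]) for a, b in NETS)

actions = [(i, j) for i in range(6) for j in range(i + 1, 6)]
value = {a: 0.0 for a in actions}        # learned value of each swap
order = list(range(6))
EPS, ALPHA = 0.2, 0.1                    # exploration rate, learning rate

for _ in range(2000):
    # Epsilon-greedy: usually exploit the best-valued swap, sometimes explore.
    if random.random() < EPS:
        i, j = random.choice(actions)
    else:
        i, j = max(actions, key=value.get)
    before = wirelength(order)
    order[i], order[j] = order[j], order[i]
    reward = before - wirelength(order)
    if reward < 0:                        # undo moves that made things worse
        order[i], order[j] = order[j], order[i]
    value[(i, j)] += ALPHA * (reward - value[(i, j)])

print(order, wirelength(order))
```

Real chip-design RL replaces this lookup table with a deep neural network and the wire-length score with full PPA estimates, but the try-score-reinforce loop is the same shape.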
The EDA company Synopsys has been experimenting with this approach with its clients and the results have been, well, staggering.
Figure 2 summarizes four projects across a range of complex chip designs undertaken by Synopsys and its clients. On average, these projects finished 86% sooner, were staffed by a single data scientist instead of 4-5 engineers, and all met or exceeded the projects’ PPA objectives. Interestingly, some of the designs produced by the AI were somewhat counter-intuitive, spreading blocks of transistors in unconventional shapes that a design team would be very unlikely to try. But the results speak for themselves; the resulting chips can be faster and more power-efficient, and they can come to market much more quickly.
Speaking with the team at Synopsys, I get the clear sense that using RL in physical design is just the tip of the iceberg, and that AI and machine learning can be applied across many workflows common in designing integrated circuits. I am also reminded of some of NVIDIA CEO Jensen Huang’s comments when he first announced Saturn V, NVIDIA’s in-house GPU-powered supercomputer, in 2016, when it ranked among the top 30 supercomputers in the world. Mr. Huang predicted then that Saturn V would become a powerful differentiator for his company, helping NVIDIA design engineers become more productive and produce superior products. Seeing Synopsys’ early project results with RL, I can begin to understand why Mr. Huang was so excited!