Is AI Reinventing the Way We Invent? A Conversation with Thomas Andersen, Synopsys VP, AI & ML

By Patrick Moorhead - July 18, 2023

On this episode of the Moor Insights & Strategy Insider Podcast, host Patrick Moorhead is joined by Dr. Thomas Andersen, Vice President AI and Machine Learning at Synopsys.

AI touches just about every facet of today’s business world—from infrastructure to tools and solutions. Dr. Thomas Andersen heads the artificial intelligence (AI) and machine learning (ML) solutions group at Synopsys, where he focuses on developing new technologies in the AI and ML space to automate the future of chip design.

During their conversation, Patrick and Thomas discuss how AI is changing the way we do business, how that is reflected in the semiconductor industry, how AI is being used to help design complex chips and more.

This is a fascinating conversation you don’t want to miss!

TRANSCRIPT

Patrick Moorhead: Hi, this is Pat Moorhead with Moor Insights and Strategy, and we are here for another Moor Insights and Strategy Insider podcast where we have the top level executives from the most relevant companies in the technology space. And today we’re going to talk about what has been my favorite topic for the last six months, and that’s AI. And I am joined by Dr. Thomas Andersen. Welcome to the show.

Thomas Andersen: Thank you. Very exciting to be here.

Patrick Moorhead: Yeah, so excited that we can talk about AI, particularly as it’s related to big picture all the way down to Synopsys and what you’re seeing. But what I’d love for you to do is we can see in the lower thirds your title, and I gave you a little bit of an introduction, but can you talk about what you do for Synopsys, but also talk a little bit about what Synopsys does as well. You’re a household name in the chip and systems industry, but not everybody knows who the company is and what they do.

Thomas Andersen: Oh, absolutely. Synopsys is a leader in providing silicon to software solutions with the world’s most advanced technologies for chip design, verification, IP integration, but also for software security and quality testing. Essentially, we help our customers innovate so they can bring amazing new products to life, whether that’s the latest generation of cell phones, IoT devices, or self-driving cars. Anything that has a chip inside, we are behind it, providing the silicon to software solutions to make it happen. And personally, my favorite topic is also AI, obviously. I’ve been working on ML and AI strategy at Synopsys for the last, I would say, five years, and I’m very excited to be in this space as there are a lot of new and changing things coming up.

Patrick Moorhead: Yeah, I’m really excited about AI. I am excited about your space though. I mean, 30 years ago, the people who could do chips were all IDMs, meaning they manufactured their own stuff. The whole notion of disaggregation and specialization wasn’t even a thing. What I really love is how you’ve enabled companies that aren’t even looked at as classic chip makers to be able to work on their designs, not only of their core IP, but of the SoCs, all the way to helping them with their systems. So, I’m super excited about this space, but hey, let’s dive into AI. I’d like to start high level. What are you seeing in terms of how AI is changing the way businesses do business, and how even any large enterprise, like a government, is getting value out of this very transformative technology?

Thomas Andersen: Yeah, absolutely. I agree. It’s a very transformative technology. It really touches every piece of our lives. I think in the consumer world, you can see things like smartphones where you take photos, you get the perfect lighting when you take your selfies to self-driving cars. For example, in San Francisco, you can now drive with Waymo cars. I think Mercedes is getting their first level three cars on the road later this year. But there’s also usage of AI in many other areas that you don’t always think about. Things like cancer research, COVID vaccine research, climate modeling, and so on.

So, you can say it touches every part of our lives and you don’t always see it. And in the business world, of course, also, there’s many, many aspects starting with human resources, finance, marketing, sales, R&D, where you can essentially use the power of AI to automate tasks that traditionally you would’ve done manually and they could be tedious. They could be simple human tasks that touch aspects of your business, like your sales report or your marketing collateral or writing code when you write software. This can have a significant impact on the product development, whether it’s software development, hardware or manufacturing. There’s just so many opportunities that are still out there that we can pursue to make this transformative.

Patrick Moorhead: But let’s talk about chips. I mean, I love chips. I’ve spent so much time in the semiconductor business. I love software too. Okay? Half the research we do is on software. I’d say half of it is on infrastructure and hardware. There was a phase for a while that people forgot that hardware was really, really important. Software is eating the world, but I love to say, well, software doesn’t run on air. Let’s get that really clear. And I think semiconductor has got a lot of respect, particularly around the pandemic time. And as we check out what’s going on in the AI space, I mean, they’re literally the picks and shovels that make all of these fancy capabilities real. But how are you seeing AI reflected in semiconductors?

Thomas Andersen: Yeah, it’s a very good question. Semiconductor is obviously a different domain than, I would say, most consumer applications. One of the reasons is the complexity of the problems. Designing a chip, for example, you need an expert, versus, say, the mundane task of driving a car. And if you think about it, with driving a car, we’re still not at level five. We’re at level three, maybe starting at level four automation. And you could argue that, well, everybody can drive a car. Can everybody write software or design a chip? No.

So, obviously, number one, it’s a much harder problem. And, I think we’ll talk more about this later, data availability is obviously another issue, because I’m not mining endless amounts of public data; I have a very limited amount of data. Compare the semiconductor industry to the breakthroughs that have happened in the AI space in areas like the ones you just mentioned. On YouTube, you get video recommendations. I remember when Netflix came out, I was amazed at how great they were at suggesting the next movie for me. That was more than 10 years ago, but I remember thinking that was cool.

In the semiconductor industry, of course, our problems are harder, and therefore I think it took a little bit longer to get to the point where we really see applications of AI that touch things. But we do see a significant shift happening in the entire chip development flow. I think there are multiple drivers of it. Number one, traditionally chip development is a very, very long process. And Pat, what you said about chip making and how hardware is suddenly cool again, that is totally true. I think for the last, I would say, maybe 20 or 30 years, it was just a thing and it wasn’t really visible.

The software is what mattered, but now hardware matters because also the type of hardware architectures that you create to again, accelerate these applications is extremely important. And then when you do the chip development itself, waiting 12 to 18 months to come out with the next chip and needing hundreds of people and millions of dollars to build a new chip is very labor-intensive. It’s very expensive. So, essentially it screams for AI to help me. Help me with automating this, help me with reuse and help me essentially build it faster and better.

Patrick Moorhead: And for the chip folks out there, I mean, this results in smaller geometries and an easier way to handle all that complexity. Big trend now: chiplets, multi-die type of arrangements. And quite frankly, as we’ve seen with every other good implementation of AI, there’s the ability to help with the engineering talent shortage that we have here.

I talk to probably a lot of the same chip makers you do, and the expense of verification and test is going way up. We’re often just thinking about getting onto the bleeding-edge next node and the cost of that. Yeah, that takes a lot more folks, and the cost per transistor is going up too. But all of the other costs that lead us down that direction, verification and even test, are skyrocketing as well. So, with those challenges in mind, how specifically is AI helping to meet them?

Thomas Andersen: Yeah, very good point. Let’s talk a little bit more about those challenges. I think you mentioned a number of them. An important one is essentially the smaller nodes, smaller geometries, and therefore the complexity. And it’s funny because I remember when I wrote my PhD thesis more than 20 years ago, every paper would start with the shrinking nodes and the complexity of the design. And I was thinking, back then that was nothing compared to today. So, the challenges that we’re facing now are way, way harder.

The chips are just significantly larger, so even implementing them with traditional tools is very, very hard. If you think about chip design, it’s often a very iterative process, and you simply can’t iterate that much when you have such super large designs. People are moving to multi-die, for example, because they need to stack chips, because otherwise chips are getting too big. Again, you need solutions to essentially partition up those chips and figure out how they communicate most efficiently. So, there’s a lot of complexity, in a nutshell.

And there is an engineering talent shortage. In fact, there was a recent study from Boston Consulting Group that forecast a 35% shortage of talent by 2030. And we hear this from all of our customers. Everybody is of course pushing for the highest performance, but they also want to get their chips to market quicker and with fewer people. The other thing that I hear a lot is that, I would say, there are fewer experts. I oftentimes hear from our customers, they say, “Well, we have all these new grads and all these new people, but they don’t really know how to run the tools anymore.” And I think part of that is true. To me, actually, this is where AI can help, because AI can make everybody an expert. And that’s one of the beauties of artificial intelligence.

Patrick Moorhead: Yeah, it was funny. I really disliked the word copilot at first when I heard it, but it’s actually really grown on me because that’s exactly what these tools can do. I mean, we’re not talking about wholesale removing people from the loop; it’s human in the loop, but getting them out of some of this drudgery that, quite frankly, they just don’t want to do anymore. I’ve seen this argument in desktop publishing and creative tools for years, and even with programmers. When we went from machine language to BASIC or Fortran, it was, oh my gosh, the death of the programmer, right?

And then we went to languages like C and C++, and it was the death of the programmer. And then we went to integrated development environments, and every time, the market kept getting bigger and bigger, and we have more and more programmers. And the same thing happened with creativity tools, which was, oh my gosh, people used to lay out and cut physical pieces of color and arrange them to create some advertising, and then we would take a picture of it. My gosh, what’s going to happen to these folks?

But essentially what we’re doing is we’re democratizing all of these different areas, whether it’s publishing, whether it’s creativity, and now with chip design verification and testing. So, can you dive down a little bit into what you’re doing specifically to help meet a lot of the challenges that you illustrated here?

Thomas Andersen: Oh, absolutely, Pat. And I really like your analogy of people that used to write maybe simpler code, and then when a real programming language came out, they said, “Oh my God, I’m going to lose my job.” I think that’s human nature, unfortunately. Every time something new comes up, they say, “Oh, but I used to be really good at that.” It’s funny because we actually see the same traits as we’re rolling out AI technology. One of the things we have done, for example, is we’ve looked at the whole iterative process in the chip design flow.

For example, when you do design optimization and you want to push for lowest power, highest performance, smallest die size, you spend a lot of time tuning essentially your chip workflow until you reach these optimal targets. And traditionally that’s done by humans and they just run lots of experiments and they tune their flow and then they say, “Oh, I’m the expert guy. I know exactly how to operate and to squeeze out the most power.” And that’s actually a task you can really automate very well with AI, such as reinforcement learning, where you can essentially look at this problem space and take sample points and learn how a design behaves and optimize for the optimal power performance and area metrics.
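To make that concrete, the tuning loop Thomas describes can be sketched as a toy search over flow parameters. To be clear, this is not Synopsys’s actual algorithm (which applies reinforcement learning to real tool runs); the parameter names, ranges, and cost function below are invented stand-ins purely for illustration:

```python
import random

# Toy stand-in for evaluating one chip-design flow configuration:
# maps a parameter setting to a single PPA (power/performance/area)
# cost. The knobs and the formula are invented for illustration; a
# real flow would run synthesis/place-and-route and read back metrics.
def evaluate_flow(params):
    power = params["vdd"] ** 2 * 10          # dynamic power grows with voltage
    delay = 5.0 / (params["vdd"] * params["effort"])  # speed improves with both knobs
    area = 100 * params["effort"]            # higher effort costs area
    return power + delay + 0.01 * area       # combined cost; lower is better

def tune(iterations=200, seed=0):
    """Sample the design space and keep the best configuration seen."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(iterations):
        # Sample a candidate point in the (hypothetical) design space.
        candidate = {
            "vdd": rng.uniform(0.6, 1.2),     # supply voltage (V)
            "effort": rng.uniform(0.5, 2.0),  # synthesis effort knob
        }
        cost = evaluate_flow(candidate)
        if cost < best_cost:
            best_params, best_cost = candidate, cost
    return best_params, best_cost
```

A production system would replace `evaluate_flow` with actual tool runs and use a sample-efficient learner rather than blind random search, since every real evaluation takes hours of compute.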

And you can oftentimes, actually not oftentimes, I would say always, get a better result than the human designer would. And the same approach applies to other areas. We’ve applied this at Synopsys to essentially the full chip design stack: design implementation, what we call synthesis and place-and-route; things like improving verification coverage, which is a very similar iterative process where essentially you tune your flow until you meet your coverage targets; test optimization to improve your pattern count for the tester; all the way to analog design.

We have tools essentially across the entire stack, we call this Synopsys.ai, AI tools across the entire chip design stack that help you design, verify, test, and so on, your chips. Another important aspect, beyond the optimization and getting better results, is actually the learning and reuse aspect. Traditionally, you have a lot of experts in your company and all their knowledge is in their heads, and maybe they talk at a water cooler with their colleagues and say, “Hey, I found this really nice recipe, and if you do this, then you get a better result.”

Now, if you think about AI algorithms, you can essentially build a learning system where all this information of how you get a better design is captured in a database and then this information can be shared with other parts of the company, and it isn’t just in somebody’s head. Or if say a person leaves or retires, today the knowledge goes with them. And with AI, essentially you have this and you have it stored in a database and it can be reused. To me, that’s extremely powerful because it helps you scale these types of solutions across the company. So, to me, these are the two key aspects of using AI for chip design.
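As a sketch of that reuse idea, imagine a hypothetical `RecipeStore` (the name, the design features, and the distance metric are all invented here): tuned flow settings are saved together with a signature of the design they came from, and a new design warm-starts from the closest prior match instead of an expert’s memory:

```python
# Minimal sketch of the "learning database" idea: tuned flow recipes
# are stored alongside a simple design signature so a later design can
# start from the closest prior result. Fields and distance metric are
# invented stand-ins, not Synopsys's actual representation.

class RecipeStore:
    def __init__(self):
        self._entries = []  # list of (signature, recipe) pairs

    def save(self, signature, recipe):
        """signature: dict of design features; recipe: tuned flow settings."""
        self._entries.append((signature, recipe))

    def closest(self, signature):
        """Return the stored recipe whose signature is nearest (L1 distance)."""
        if not self._entries:
            return None
        def distance(stored):
            return sum(abs(stored[k] - signature[k]) for k in signature)
        return min(self._entries, key=lambda e: distance(e[0]))[1]

store = RecipeStore()
store.save({"gates_m": 12, "clock_ghz": 2.0}, {"effort": "high", "vdd": 0.9})
store.save({"gates_m": 2, "clock_ghz": 1.0}, {"effort": "medium", "vdd": 0.8})

# A new 10M-gate, 1.8 GHz design warm-starts from the nearest prior recipe.
warm_start = store.closest({"gates_m": 10, "clock_ghz": 1.8})
```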

Patrick Moorhead: And Synopsys was really doing AI before it was cool. I mean, we’re kind of using this upswell in interest to hop on video and discuss this, but you’ve been doing this for a very long time, and I wanted to point that out to everybody. There are companies who are just doing it now because it’s part of being cool. And there are other companies, like Synopsys, that saw the trend early, really honed in on “Hey, how can I provide a better experience for my customers,” and went and executed.

So, we’ve talked a lot about verifying designs. We’ve talked about testing optimization, but I want to hone in on where you’re using AI to actually invent new technologies: the creation or design part of the flow, probably right at the very front. How is AI being used to design chips better?

Thomas Andersen: Now, for generative AI to create things, though, it needs a large amount of data. And if you look at ChatGPT, it has information from pretty much everything that’s publicly available, but even that information is old. It’s likely from 2021, because building these models with all this information takes a huge amount of effort and time. Now, if you look at the semiconductor industry, of course, our world is a little different. Our data is highly proprietary, and of course every customer has their own data that they will not share. And that’s all compartmentalized essentially across many different companies.

If you want to build a system that essentially can create, for example, new designs, let’s say RTL when it comes to chip design languages, you need to have a lot of data to actually build a system that can do that, and you need to make sure that that data is yours, so there are no copyright violations out there. The other thing I would say is challenging is the data needs to be… I mean, the output of the system needs to be 100% correct, because I cannot afford to build a chip that then doesn’t function properly because somebody said, “Well, sorry, it made a little mistake there and it wasn’t right.”

So, you need assurance that what you’re creating is actually good; that quality is key. We’ve seen already that ChatGPT cannot actually guarantee accuracy. They’ve introduced things like fine-tuning with reinforcement learning, that’s called RLHF, reinforcement learning from human feedback, where essentially a human gives feedback on whether the answer is correct or not. But as many of you have probably read, even in ChatGPT you may get an answer that sounds very nice and sounds very good, but it may not actually be accurate.

These are some of the challenges that I think apply to our domain, but it doesn’t mean that it’s not possible to come up with solutions. We personally think there’s quite a lot of opportunity in creation, and we think that’s the next big thing: the creation of designs, the automation of workflows. I think these are some of the tasks that can be done with generative AI. If we overcome all the concerns that I brought up, the data, the amount of data, the correctness and so on, then I think that would be an extremely viable solution for chip design as well.

Patrick Moorhead: How are you dealing with the sharing of the data out there in the industry? Do you serve a lot of customers? How do you fence that data off? How are you thinking about this right now?

Thomas Andersen: Yeah, that’s a very good question, and a question we get very often from our customers. I mean, data security is absolutely key, and it was one of the main considerations from the beginning in building this. In my dream world, I would love for everybody to share data, because then as a community we become better. In reality, of course, we have competing companies and they all want to be the best, so everybody wants to make sure that their data does not leave their side. The AI applications that we’re shipping to customers come pre-trained with some public information or with Synopsys IP, for example. We may also train on things that both the customer and we have access to. But other than that, the majority of the training actually happens at the customer site.

You train on the customer designs, and the training data that results from this training remains at the customer site. It’s just as secure as any other data that they have on their secure disks, so there is nothing to worry about. The unfortunate part, of course, is that this doesn’t allow for any sharing, and therefore, again, the compartmentalization remains. Honestly, I knew from the very beginning when we started this journey many years ago that this would be a big challenge, because unfortunately I’m not Facebook or Google, where I just mine everybody’s data that they give me freely, knowingly or unknowingly. Or Tesla, which can just drive its cars around on the public roads and collect millions of miles of data.

Unfortunately, our world is much more challenging. But having said that, when you train on very specific data at the customer site, it actually works quite well. While in an ideal world data sharing would be great, at the same time I can argue that the solution I provide to customer A, which they then essentially train and tune on their particular chips, is actually very effective. Just like you mentioned, ChatGPT has all the information that’s out there in the world, but I don’t need all the information out there. I need very specific information. And when I build essentially a tuned application for customer A or customer B, I can make that happen at the customer site. From that perspective, I think this is something that we have at least partially overcome. Of course, in my ideal world, it would still be nice if it were possible to create shared trained models, for sure.

Patrick Moorhead: By the way, I think that is the exact route; I’m seeing that in different industries. What some people aren’t aware of is that when a lot of this data gets vectorized, it’s basically vectors, and you can’t actually determine what they are. You can train it on site and even move it up to the cloud, and nobody in the cloud can actually even understand what all this data is.

I do predict, and this is one prediction I’ll make, that within three to five years we’ll have a multi-tier model that says, “Hey, if you let other companies share and you provide your data, you’ll have a better outcome.” And by the way, I’m not the only one who makes predictions as an industry analyst. In fact, you made a prediction this year that generative AI will speed application development. So, hey, we’re halfway through the year. I hate to put you on the spot, but I’m going to put you on the spot. Where are we? Has progress been made on one of your big proclamations?

Thomas Andersen: Progress is always being made, of course. Yeah, absolutely. Earlier this year, I think we wrote about some of the directions this is taking. Number one, we’ve expanded essentially our offering for optimization of complex workflows, what we call Synopsys.ai, across the entire suite of offerings. But number two, we are pursuing many opportunities in the generative AI design space. Earlier I talked about some of the challenges that are out there, like the data quality, the amount of data, copyright issues, the need for an essentially one hundred percent accurate solution. But there are actually quite a few opportunities.

Things we’re working on include, for example, creating RTL from natural language. You can think of it as a copilot. GitHub Copilot is a copilot for professional software development, right? Java, Python, C++ and so on. You can think of the same for chip design, essentially SystemVerilog creation through a copilot. I think initially this would be a system where the human still writes the software, but the system just autocompletes. And maybe also, from existing Verilog, it can give you a summary of what a function is doing so you can help improve it.

Ultimately, I mean, the dream would be that somebody writes natural language and the system implements it. I think that’s a long road to get to that point, but it’s absolutely doable. And then, moving away from the language application: everybody talks about language applications because of ChatGPT, but generative AI is many more things. You can do things like images, audio, video and so on. When you think about that, there are of course other things, like physical aspects, for example floor plans, power networks, chip architectures. These could all be created, maybe initially through a copilot and then through natural language applications.

And then the other thing, so that was the creation side of chip design, the other thing is what I would call workflow automation. You can start with things like a knowledge base. Today, you have documentation and somebody needs to read online, “Okay, this command does that.” And then there is another database somewhere that has user experience, where a user said, “Hey, if you encounter this problem, here’s a solution to that.” And people today essentially search it with keywords. Sounds like 1980, but I think with GPT-type applications you can think of a chat bot, and the chat bot knows everything about EDA workflows. Initially I can simply ask it, “Hey, how do I do this? How do I do that?” Ultimately it can create workflows for you automatically. You could say, “Hey, I want a workflow where you implement and verify this Arm processor, and here are my specs.”
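The step from keyword search to a GPT-style assistant typically starts with retrieval: score documentation snippets against a natural-language question and hand the best match to the model as context. A minimal sketch of that retrieval step, with an invented three-line “corpus” standing in for real EDA documentation (this is an illustration, not Synopsys’s implementation):

```python
import math
from collections import Counter

# Tiny stand-in for an EDA documentation corpus.
DOCS = [
    "set_max_delay constrains the longest path between two points",
    "report_timing prints the critical paths of the current design",
    "create_clock defines a clock with a given period on a port",
]

def bow(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def retrieve(question):
    """Return the snippet most similar to the question."""
    q = bow(question)
    return max(DOCS, key=lambda d: cosine(q, bow(d)))

# The retrieved snippet would then be passed to the LLM as context
# alongside the user's question.
context = retrieve("how do I print the critical timing paths")
```

A real system would use learned embeddings instead of word counts, but the pipeline shape, embed, retrieve, then generate, is the same.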

I mean, ultimately that’s where I think this is going: the way you interact with tools. If you think about it, the users of our tools are obviously engineers, and they’re used to writing things like code as inputs, and they’re used to looking at log files and timing reports or congestion maps as their output. But if you think about it, this could be done much better, in a more human, interactive way, where maybe the system tells you, “Hey, I looked, and I think there’s this and this thing you should be doing to improve it.” I call this essentially human-machine interaction that can be much improved to make it more human-like.

Patrick Moorhead: Yes.

Thomas Andersen: And then ultimately there’s another concept in generative AI, and these are LLM agents. So, ultimately you can combine large language models that essentially look up in your knowledge base or give you a suggestion based on a particular problem. You can combine this with reinforcement learning agents that then become automation of your workflow. That means you run a tool, you get a particular output, and the system will actually not just interpret the output, it will actually take an action and make your design better and ultimately solve all the problems.
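That loop, run a tool, interpret the report, take an action, repeat, can be sketched in miniature. Everything below is invented for illustration: the “tool,” the violation names, and the fixes are toys, and in a real agent the interpretation step would be an LLM and the action selection a trained policy:

```python
# Toy agent loop: run a tool, read its report, apply a fix, repeat
# until the design is clean or we run out of steps.

def run_tool(design):
    """Pretend EDA tool: reports the remaining violations in the design."""
    return list(design["violations"])

# Hypothetical fix for each violation type (a policy/LLM in practice).
ACTIONS = {
    "setup_timing": lambda d: d["violations"].remove("setup_timing"),
    "drc_spacing": lambda d: d["violations"].remove("drc_spacing"),
}

def agent_loop(design, max_steps=10):
    """Iterate tool runs and fixes; return the number of steps taken."""
    for step in range(max_steps):
        report = run_tool(design)
        if not report:
            return step  # design is clean
        # Interpretation + action selection (an LLM agent in practice):
        ACTIONS[report[0]](design)
    return max_steps

steps = agent_loop({"violations": ["setup_timing", "drc_spacing"]})
```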

Because again, we often talk about things like highest frequency, but there are so many mundane tasks that the designer has to do, whether it’s cleaning up timing constraints or looking at DRC violations. A lot of human time is spent on these essentially mundane, repetitive tasks. I think for things like that, an LLM agent could be extremely powerful for automation. So, these are the areas we are pursuing, and I think you can look forward to more announcements from us pretty soon as we go down this route of essentially using generative AI for chip design.

Patrick Moorhead: Thomas, first of all, I really enjoyed this conversation, and I also like that we covered so much ground. I mean, from the top of the funnel, how enterprises are using AI and can use AI into the future, all the way down to how chip makers and designers use this, and what you’re seeing in the future. There was a little tease at the end, which I appreciate. I’m really looking forward to hearing about what you are going to bring out, because, again, this isn’t new for you. You came out with your first AI-based tool years ago, so I can’t wait to see the enhancements that you’re doing now to really benefit chip makers.

Reduce time to market, reduce the expense of test, validation and design. I’m hopeful that a lot of chip makers are going to just jump on this. What I know for certain is that we don’t have enough of the resources we need in the chip design, test and validation phase, and the expenses are getting so out of control, particularly on the newer complex designs. These things really sound like music to people’s ears. So, thanks for coming on the show, Thomas.

Thomas Andersen: Absolutely, very exciting to be here. Thank you.

Patrick Moorhead: This is Pat Moorhead with Moor Insights and Strategy signing off. What a great discussion. Let us know what you thought about the video. You know where to find me on social media. I’m on it way too much, but if you like the video, you should subscribe to the channel. Just hit that subscribe button wherever you are on the planet. Good morning, good afternoon, good evening. Have a great day. Take care.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences with the understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high tech companies (NCR, AT&T, Compaq (now HP), and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.