Infrastructure Silicon for Accelerated Computing – Six Five Insider at Marvell Industry Analyst Day 2023

By Patrick Moorhead - December 15, 2023

On this episode of The Six Five – Insider, hosts Daniel Newman and Patrick Moorhead welcome Chris Koopmans, Chief Operations Officer at Marvell Technology, for a conversation on the growing performance demands of AI and infrastructure silicon for accelerated computing.

Their discussion covers:

  • How Marvell is reimagining data center architectures, from the ground up, under the growing performance demands of AI
  • The key technologies Marvell is focusing on to enable the changes to data center architectures
  • Marvell’s developments in chip IP, innovations in switching and optical technologies, and other solutions for accelerated computing

Be sure to subscribe to The Six Five Webcast, so you never miss an episode.


Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.


Patrick Moorhead: The Six Five is on the road here at Marvell on the beautiful bay, and we are talking connectivity. We are talking AI. It is incredible, Daniel, how many conversations and how much research both of our companies have done. And here we are on The Six Five, talking about the importance of connectivity as it relates to AI.

Daniel Newman: Yeah, look, that data has to move, and you’re talking about massive, massive workloads nowadays. And I think we take for granted at times how this stuff all happens. You’ll sometimes hear, maybe the media will refer to it as “picks and axes.” They’ll say things like that, but the truth is this stuff’s really hard. These are very, very complicated problems, and there are companies solving them. If we really want AI to work the way we see it, this ubiquitous AI experience, there’s some plumbing that has to take place.

Patrick Moorhead: Totally. And it’s interesting how, over the last 30 years, with compute, moving the data, and storing the data, there seem to be architectural limitations. And what’s pretty clear now is we are getting a ton of compute, let’s say on the GPU side, but we’re seeing that compute move to an ASIC model that’s even more efficient, particularly on the inference side. Sure, there’s some training done on ASICs, but then to be able to move that data not only between the compute units, but between the racks, between the fleets, and then in between the data centers, is going to become, and in some cases is, the bottleneck today. I think the perfect person to talk about AI and connectivity is Chris Koopmans with Marvell. Chris, great to have you back on The Six Five. It’s awesome to chronicle your journey, not only on The Six Five, but in the stuff we write. Both Dan and I tune into your earnings, and we typically, on The Six Five on our Friday show, check out the score.

Chris Koopmans: Excellent. Glad to be here. Looking forward to the conversation.

Patrick Moorhead: Yeah.

Daniel Newman: Yeah. I think it was six months ago, and six months, by the way, Chris, is forever right now. You laugh, but I watched the recent keynote you gave to a number of industry analysts, and Chris, you put up a slide and it was Sam Altman. Now, thankfully it was Sam Altman and it was something legitimate, not some crazy story going on about board governance issues. And you put up something along the lines of, it was late November last year, and it was check out ChatGPT. And by the way, it only had 1,600 likes or something. It was a lot, but it was…

Patrick Moorhead: I ignored it.

Daniel Newman: Yeah, no, it was awesome. But I thought it was a great way to start, Chris. And what I thought about is, it was probably a week later, we were in town and you were doing an analyst presentation and there wasn’t a lot of mention of AI. And I love how you set the tone and I was thinking maybe you could start there. It’s like, a year is forever, six months since you came on and did the Six Five Summit with us. You’ve made a ton of progress, but just talk a little bit about the journey that Marvell’s been on over the last 12 months since maybe that day and that slide you started off your presentation with.

Chris Koopmans: Sure. Yeah. I think you’re referring to about a year ago. Marvell’s Industry Analyst Day 2022 was on the 6th of December, and ChatGPT launched on the 30th of November. It was like six days after launch. And yeah, ultimately at that industry analyst event, we talked about Marvell’s data infrastructure strategy, and we set our strategy about seven years ago to focus on data infrastructure, with the big parts of data infrastructure being Cloud, 5G, and automotive. And last year, in particular in Cloud, we talked a lot about where we saw the Cloud business model and the Cloud silicon opportunity going. And there was a lot of discussion about AI. There wasn’t any discussion about ChatGPT, because of course nobody really knew much about it at that point. And we talked about the great connectivity needed for AI, and we talked about the silicon needed for AI as well on the compute side of things.

But of course at that time the data center was going through a big inventory correction. There was the big supply crunch that drove a huge purchase cycle that then went into an inventory correction cycle. What I don’t think anybody really foresaw at that point was what was going to happen some number of months later: this huge demand for AI infrastructure, or really more broadly the accelerated computing infrastructure to drive AI. And you’re right, we in the past have viewed AI as a sub-opportunity within Cloud, and in our data center segment it’s really grown to be the opportunity within Cloud. In fact, as you mentioned, earlier this year we had to start breaking out how much revenue we were getting from AI. It got so big. And back in May, I think it was in our earnings, we said, “Hey, the prior year, calendar ’22, it had been about $200 million in revenue, and then in ’23 it would be more than double, and in ’24 more than double again.”

Daniel Newman: Which, by the way, to give you a little credit, you were one of the first, and so far one of the only, it’s starting to happen, but you were actually able to show that somewhat clearly to the market. I remember you came out with those earnings and the market loved it. They absolutely loved it, and you’ve had a pretty good jaunt up. Sorry to interrupt, but I think that was worth pointing out.

Chris Koopmans: No, I agree. And all it’s done since then is continue to grow. In fact, now mostly the questions I get are: how long can this keep growing? How fast can it grow? How high can it go? And, of course, it’s anybody’s guess. Ultimately right now we’re certainly responding to the tremendous demand, and we don’t see any signs of it stopping anytime soon.

Patrick Moorhead: Chris, one thing that some people who don’t follow semiconductors closely don’t think about is timeframes. Not that software is easy, it’s hard, but if you want to change software, you can change it overnight. Not complete architectures, not a billion lines of code, but you can change it. In semiconductors you have to make bets and lay gates three, four years out, and you need to do architectures even farther out. You’ve talked about new data center architectures, expanding that, where you’re striving to set the future data center architecture, because it’s one thing to connect stuff, but there’s compute, there’s storage, and how that data is moving all the way from the data center to the edge and even to a device and everybody in between. What is the data center architecture of the future for AI? I know it’s broad, pick your timeframe and we can go from there.

Chris Koopmans: Sure. No, it’s evolving very rapidly. I think in the past what you had was a Cloud architecture that was designed for multi-tenancy, designed for a multitude of applications, to be able to drive efficiency and to be able to expand and collapse the demand for each of those applications. Now you’re seeing individual applications, like training large language models, that are getting so big that they have to have dedicated architectures. And I would say that there really is no one architecture for AI. Actually, I think we’re at the beginning of a pretty tremendous innovation cycle around computer architecture and data center architecture, one the likes of which we probably haven’t seen in decades. Just a couple of decades ago they formed the data center and server architecture, and that lived on for 20 years. Now we’re starting over again, and you’re going to see a lot of experimentation, a lot of innovation. And I guess the good thing for Marvell is lots of data. All that requires tons of bandwidth, low latency, and often high-powered optical connectivity. And Marvell is a leader in that.

Patrick Moorhead: Is it safe to say that the topology will not have as many layers? Is that what you’re expecting?

Chris Koopmans: Yeah, I think ultimately when you’re increasing bandwidth as fast as you can, you also need to reduce latency. And ultimately the way to really control costs, lower power, increase bandwidth, lower latency when possible is to collapse some of the layers and to be able to have a very high fan out to be able to bring all of these clusters together at low latency without having to go through many different hops. That is definitely one of the areas that we see happening.

Daniel Newman: Chris, talk a little bit about what you see happening across custom silicon. We’re seeing a lot more, we’re hearing from, obviously there’s a democratization that’s going on and lots more companies are getting into the game. Clearly Marvell sees this as an opportunity as well. Cloud-optimized silicon, something you talk about. And not just the silicon though, also the optical for instance, another area that you seem to be looking at very opportunistically, you’re looking at CXL and other disaggregation technologies, you seem to have a very thoughtful approach and it’s based upon your bet that there’s going to be more custom, more disaggregation and the need to move the data faster. Is that the big bet right now?

Chris Koopmans: Yeah, I’d say there are two big things that Marvell’s focused on, and you talked about how everything needs to change in the architecture, and we call this really infrastructure for accelerated computing. If you look at the old data center architecture, it was racks of servers running general purpose computing. Accelerated computing has changed all that. It’s accelerated all the connectivity needs. It’s accelerated the cadence, and it’s actually expanded the innovation. And Marvell’s focused on two big parts of infrastructure for accelerated computing. One is the connectivity, and there’s all kinds of connectivity, optics, CXL, memory, all these… And then there’s the compute itself, which, our focus anyway, is custom. And to bring it back to what you said, Pat, about the time cycles, it was October of 2021 that Marvell coined the term “cloud-optimized silicon,” and we talked about our cloud-optimized silicon opportunity. Most of the questions we got were, “Why would they do that? Why would they actually do their own silicon? And are you really sure that’s really going to happen?” Of course, now fast forward two-plus years, and just in the past few months you’ve seen a whole bunch of announcements from hyperscale data center operators for their own optimized silicon that they’re delivering. Those are the timeframes that it takes. And I don’t think anyone right now questions that… and I wouldn’t even say it’s moving from GPUs to optimized silicon. I would say it’s expanding. There’s such a growth and such a myriad of opportunities to optimize your silicon for different use cases that there’s really just innovation creating new opportunities across the board. And yeah, that’s our focus in custom.

Daniel Newman: And I just want to point this out, because I don’t know that everybody out there necessarily appreciates this. I always look at Marvell as a very humble company, probably for two reasons. One is because you’ve been a great partner to many OEMs, helping them build. And also because you can’t always say, even if you wanted to. But you were alluding to something. You talk about all these announcements, all these companies that are coming out. Marvell is oftentimes a partner and works very closely with many companies, some of which you’re maybe hearing from, and that’s something you’ve really built an impressive business around: being an enabler for this Cloud-optimized silicon. And like I say, we can’t name them, but countless companies are depending on Marvell, and Marvell’s at the core.

Chris Koopmans: That’s right. I think ultimately what gets missed sometimes is that when these OEMs or data center operators are building custom chips and announcing their own individual silicon, they’re almost always relying on some sort of a partner in the background. And the reason for that is that there’s a tremendous amount of work to get ready to build a new product at a new generation, or a new set of nanometers, if you will. It’s often years of work. You talked about the time cycles; even before you start working on the chip, there’s three years’ worth of development of IP that has to happen. And if you’re a company like Marvell, you might be doing 30 or 40 tape-outs at one node. If you’re only doing two, a processor and an AI chip, or maybe three if you want to do one for training and one for inference, it’s very hard to actually invest that much up front. And you end up wanting to find partners in packaging, partners in IP, partners in layout and physical design, and partners to really help you bring your dream to reality. And that’s what we call cloud-optimized silicon. We have a very flexible business model. We’re working with pretty much all of those companies to help them achieve what they need to do. And by the way, all those companies are buying or building a combination of custom and off-the-shelf silicon to meet what they’re actually trying to accomplish in terms of the service that they offer.

Patrick Moorhead: Chris, a lot of the things that come with custom, the goodness is the efficiency and the specificity of what it does. Sometimes there’s this counterargument that says, “Hey, it puts the burden on software, or it puts the burden on somebody else.” I’m curious, what is Marvell doing to soften that? I don’t want to call it a blow, but it is a challenge, because you can bet on one of 500 use cases, let’s say for custom compute. Are you developing special software around that and simplifying this for customers?

Chris Koopmans: Yeah, it’s a great question. Ultimately, this is just another aspect of the overall accelerated computing paradigm.

Patrick Moorhead: Okay.

Chris Koopmans: Clearly the easiest software model would be to run everything on general purpose CPUs, but you couldn’t get the work done so then you got to move to something more special purpose like a GPU that comes with its software layer to do it. Same thing, if you want to go even more special purpose and you do custom, that software layer’s got to be there. And that’s the business of our customers. When we’re building these custom silicon products for our customers, they’re building all of the software layers and capabilities to be able to enable their customers. And we see a great proliferation of these types of capabilities out there. And the good news is there’s a ton of demand. There’s so much demand for these types of AI services and all the developers that are working towards building these types of products are very TCO focused as well. If you can find a product that has a little bit better total cost of ownership, a little better price performance for your application, it’ll pay for some software engineers pretty quickly.

Patrick Moorhead: Yeah, it’s amazing. Your customers get to the size where they can do this, and they are building complete data center architectures. And that’s another way we’re making up for the slowdown in Moore’s Law, and we found the killer app, and the killer app’s AI. You had talked, and I’m glad you did, about core IP. You have to build the core IP, you have to put it together in some sort of solution, and get it on, in your case many times, the bleeding edge. Can you talk about some of the firsts and the investments that you’re continuing to make? You made a major acquisition in optical that has been successful, and your timing was really good, because again, like you said, it’s not an or, copper or light, it’s both, and they both have value. But can you talk a little bit about that core IP that you’re working on?

Chris Koopmans: That’s a great point. And we talk about copper and we talk about optical. Fundamentally, what the company called Inphi brought was high-speed connectivity. And the core of that is really the digital signal processor, the PAM4 DSP that started at 50 gig, went to 100 gig, 200 gig and beyond, and that can actually be applied to optical or electrical. That’s a matter of reach, and distance, and reliability, and things like that. And we’re seeing a proliferation of use cases for that. We’ve announced our active electrical cables using the same DSP technology. We’ve announced our line card retimers using the same type of technology, and active optical cables using the same type of technology. There’s all kinds of different applications, and what you’re seeing is that as the speed increases, you’re going to have to have this type of technology in the network to be able to process the data.

Going back to your point on the core IP, ultimately it used to be that we were constantly doubling the bandwidth. We went from 25, to 50, to 100, as I mentioned. That used to be on a three- or four-year time scale. That’s now accelerated to 18 months in which we need to be able to make the jump, driven by accelerated computing and AI. And what’s happened now is that in Marvell we have what’s called our central engineering team, which develops all this IP, tests it all, tapes it out on a test chip, makes it all work, then hands it to our business unit engineering teams to build products out of. It used to be that one team would go from one generation to the next. Now we’re working on three generations in parallel. And ultimately that means three teams working on not only the stuff that is going to be the next one to hand over, but the next one and the one after that. And one of the reasons why this is getting so complicated is you’re starting to run into boundaries of physical limitations.

Patrick Moorhead: Sure.

Chris Koopmans: How fast can you actually get data in and out of a single pin with the SerDes in a particular type of technology? And we’re investing in 400 gigabit SerDes right now, 448 gigabit SerDes. I think seven years ago when I started at Marvell, 25 gigabit was the state of the art. And now we’re talking about 448 gigabit SerDes, and who’s going to be first? It’s pretty remarkable the level of investment and the boundaries that we’re breaking.

Patrick Moorhead: It’s fascinating, the parallelization of R&D, and this is becoming one of these trends. Dan, you and I have done a few interviews where sometimes they’ll thread research with product, and then in this case it’s multiple timeframes on top. And in the end, obviously at some point somebody has to say, “Okay, this is what we’re taping out.” But no, it’s fascinating how we’re flexing as an industry to do in 18 months what used to take three years.

Daniel Newman: But, yeah, I think he made a couple of really good points though, Pat, because he brought up the partnerships, and it does take a village. We have a lot of ecosystem conversations here, but there’s this monolithic belief out there that there’s this one company that’s doing everything, and it’s just not, it’s not at all the case.

Patrick Moorhead: Everybody is licensing something.

Daniel Newman: And there’s some that do more and there’s some that do less, but it’s across the continuum. And when you see a new company announcing a brand new chip, and by the way, when you’re seeing them go generation, to generation, to generation in what seems like months, this isn’t happening alone. And that’s why I asked Chris about that, because there is a lot of partnership. Now, we’ve only got a minute left here, Chris, but I do have to ask you, because you started off with the story about what everyone’s asking you: How far does this go? How fast does it grow? And of course, I know I’m not going to get a specific answer, but my philosophy has always been that the market as a whole is growing. This isn’t a zero-sum game. Marvell can grow very fast, and others that you’re competing with can grow very fast, and other architectures can grow very fast. But you have to be looking and saying, “Gosh, what’s happened in a year? The speed of the law of diffusion of innovation, how quickly new technologies become old technologies.” This can grow for a while before you hit the wall. I don’t know if you’re going to double every quarter, but you have to feel optimistic.

Chris Koopmans: I’m very optimistic. First of all, our overall data center business is very strong. Obviously AI is driving a lot of that, but for us, it’s not just what’s happening in the market, it’s also our own individual product cycles. We’re the leader in connectivity. We’ve talked about that. That’s growing with the market, and it’s growing dramatically. On top of that, our first custom silicon for AI will start ramping next year. That’s completely in front of us. And the other thing I would say is that it’s not just around AI. The overall Cloud model’s being accelerated as well. And really, I think ultimately for the foreseeable future, you’re going to see very strong growth. I think this will probably be the fastest growing market in the semiconductor industry, and we’re super excited about it.

Daniel Newman: Hey Chris, thanks so much for joining us again.

Chris Koopmans: Absolutely. Thanks for having me. Good to see you guys.

Patrick Moorhead: Thanks.

Daniel Newman: All right everyone, we are here at Marvell by the Bay with Chris Koopmans, and we are going to sign off here now, because it is time to get on with this AI journey and adventure. But stay with us. We’re going to keep chronicling what’s going on here at Marvell and across the silicon, semiconductor, Cloud, and infrastructure industry. Pat, we do it all. Chips to SaaS, here on The Six Five.

Patrick Moorhead: That’s what it’s all about. Fully integrated, full stack, baby.

Daniel Newman: Thanks for tuning in. Hit that subscribe button, join us for all of our shows. We appreciate you very much. See you later.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and his understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.