Talking Lattice, Marvell, Google, AMD, Broadcom, IBM and Meta

By Patrick Moorhead - December 8, 2023

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. Special Guests!
  2. Lattice Development Conference
  3. Marvell Industry Analyst Day
  4. Google Gemini AI 1.0 and New TPU
  5. AMD Advancing AI Event
  6. Broadcom Q4 FY23 Earnings
  7. AI Alliance Led by IBM and Meta

For a deeper dive into each topic, please click on the links above.


Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey, everyone. Welcome to another episode of the Six Five Podcast, episode 195. I’m Daniel Newman here, your hostess with the most. I never said that before, Pat, but I just wanted to try something different on this particular show.

Patrick Moorhead: Kind of cool.

Daniel Newman: How are you doing, buddy? Welcome back to… We’re home today. This is pretty amazing. It’s been a long time since we’ve both been in the same city at the same time, doing the same show in the same place.

Patrick Moorhead: I know, it really is. It’s been an exciting week. We did three events, launched a new company together and supersized one of our companies. It’s been a super exciting week and I got a lot of people laughing at me for my social media this morning, which I’m pleased to announce that I’m not announcing anything this morning.

Daniel Newman: I actually think people out there are probably wondering that about me. I don’t have any large announcements lined up for at least a couple more days. I can’t tell you that I’m done for the year. Look, we got to keep everybody on their toes. December for some people is the month they take off, and for others it’s the month they leapfrog ahead of the competition. For us, we chose the latter. And by the way, we always choose the latter here.

For everybody out there that is wondering, we did have a really big week. Monday we announced some exciting new talent, and I want to hold that for a minute because you’ll all understand why. On Tuesday, Pat, you and I announced a new Vice President and General Manager of the Six Five. Maybe we’ll bring him by some time to meet everybody. John Schon, wonderful guy, most recently at Gartner. And we also, and probably even more excitingly, announced Six Five LIVE, which is going to be our new on-the-ground live format, bringing all-day-long analysis using the popular Six Five platform, but with some really exciting new talent.

And then you and I were like, “Well, we’ll take Wednesday off, maybe.” We didn’t actually take it off, but we went to an event. We went to the AMD launch. There was a big moment in the AI space and we’re going to talk about that here, so I won’t waste too much breath up front, but we were like, “Yeah, let’s get back to announcing stuff on Thursday.” Thursday, Pat, you and I decided to launch another venture because everything we do together turns to coffee? No, no. Turns to gold, sir.

We are going to bring on our new partner and President of the Signal65 business, Ryan Shrout, a long time in the business, almost two decades in the testing and validation business, but also most recently head of the Performance Lab for the client computing part of Intel. He’s going to help us grow the measures-of-merit business, as you like to say, Pat, or measurement of merit or whatever you always say, but I’m going to say it comes with a Good Housekeeping seal of testing. Then Friday you decided to announce that you weren’t announcing anything so we could have our show without actually having a conflict.

Patrick Moorhead: Oh, no, I am announcing that you and I are going to be racing race cars at the Circuit of the Americas this afternoon.

Daniel Newman: We are going to be racing race cars today and we’re going to take selfies and I’m going to enjoy every single-

Patrick Moorhead: No, I mean, selfie analysis is my favorite.

Daniel Newman: Self analysis.

Patrick Moorhead: Yes.

Daniel Newman: I think we trademark that, Pat. We’ll get one of our very large fleet of lawyers to do a very expensive engagement to trademark that because I would really want to protect that because selfies versus deep, thoughtful analysis seem to get more likes. I don’t know why that is. It’s silly, but it works. Anyway, so listen, one of the things we’ve been trying to do over the last several weeks has been starting to introduce new talent, and the Six Five does have a new crew for part of the live. Two of the newest talents that are going to be joining the show include two people that we announced.

One, Lisa Martin. At The Futurum Group we acquired LuccaZara, a marketing advisory to the tech industry, but Lisa is also just a terrific personality who’s been doing media and technology analysis for some time. And so Lisa is going to join us, and then Futurum announced its new Chief Research Officer, Dave Nicholson, and Dave is also going to be joining, really to lead a lot of our outbound communication as part of our Six Five LIVE crew. Pat, without further ado, why don’t we bring Dave and Lisa onto the show?

Patrick Moorhead: Boom. Okay, that was good. Who is ever back in the control room, I appreciate bringing this in. No, great to see y’all. Maybe a good place to start is talk about what you… Well, Dan kind of talked about what you did in the past, talk about what you’re doing at Futurum right now. But anyways, welcome to the show and it’s going to be great working together on the Six Five. We did a little bit of pre-gaming last week, which was a lot of fun eating sushi around the fire, and we told some stories around the fire too. But anyways, it’s really a privilege for me to work with the two of you. I’ve read your stuff, I’ve watched your stuff and I’m just super excited about what the future holds.

Daniel Newman: Why don’t we quickly give you guys each a chance to do the intro. Tell us a little bit about yourself. Us? I know about you. Tell our Six Five audience about yourself, and let’s do ladies first. Lisa, give us the background and maybe tell us why you’re super excited about what you’re going to be doing here at the Six Five, on the Six Five LIVE team and at Futurum. Oh, no.

Patrick Moorhead: Did I mute it?

Dave Nicholson: Muted her.

Lisa Martin: Oh, hey. Now I’m here… I love telling stories. No worries. That’s the control room, Pat.

Patrick Moorhead: That control room. I know.

Daniel Newman: I see it on TMBC. It’s okay. It happens here.

Lisa Martin: Okay, good. Well, we’re at that same level at the Six Five. My cheeks have been hurting from smiling all week since the news dropped on Monday. I couldn’t be more excited to be joining The Futurum Group and to be one of the hosts of the new Six Five LIVE program. I’ve been a storyteller for a very long time and didn’t realize it. I helped launch biological payloads on the NASA space shuttle, really helping to take very esoteric data and tell stories with it that are compelling and that make sense.

Patrick Moorhead: Wait, wait. Sorry. Did you say NASA?

Lisa Martin: Yes I did. I was a NASA payload scientist after getting my master’s in molecular biology and telling stories there as well in the life sciences field.

Daniel Newman: Pat, unstick your face. You look ridiculous.

Patrick Moorhead: No, I’m just like, she had me in NASA. And I had read about it, but I’ve never really heard the story behind it.

Lisa Martin: I was managing biological payloads that flew on the space shuttle really looking at physiological changes to the body, to the heart, to proteins in the brain, to bacteria that are found in sterile water. Pseudomonas aeruginosa, that went aboard the space shuttle because it grows in sterile water. We were looking at the effects of microgravity on different physiological and microbial specimens and really helping astronauts have a better experience when they’re in space and when they get back. That was the start of my career after school.

Patrick Moorhead: Appreciate that. And we go to Dave.

Daniel Newman: How do you compete with that, Dave? How do you do that?

Dave Nicholson: I saw Star Wars 27 times, I believe. But on a semi-serious note, but really actually very serious because I can’t get it out of my mind, there’s a concept when driving on racetracks, and they refer to it in terms of the number of tenths that you seek to achieve. Ten tenths means you’ve got a good chance of dying when you’re driving. Why don’t you guys keep it under, say, six tenths today?

Daniel Newman: Don’t get in the car together.

Dave Nicholson: We do not need to see any headlines from COTA.

Daniel Newman: Will I go down as a Paul Walker? I mean, I’m good-looking.

Dave Nicholson: Don’t, don’t, don’t.

Patrick Moorhead: Dave, I want to allay everybody’s fears: the max speed I’ve ever gotten to at COTA is 172 on a straightaway.

Dave Nicholson: It’s the turns I’m worried about, Patrick.

Patrick Moorhead: I got you. On that straightaway, though, you’re looking right into basically a brick wall. And brakes, we churn through more brakes than fuel. But no, I appreciate that. Let’s not do that, Dan.

Dave Nicholson: I’ve had a long career in tech working on what I would refer to loosely as the vendor side of the table, as opposed to the customer side of the table. Since the age of 5, 30 years ago, I started my career working for some of the biggest tech companies, and really a lot of the best of my experience has come from those business meetings; those real sales calls, if you will, where you’re interacting with people whose careers could be destroyed if they make the wrong decision. So, I gained a lot of insights over the years there, and that gave me an opportunity to join the team at Wharton, specifically in their CTO Academy and their Digital Transformation for Senior Executives Academy. I serve in a position that’s referred to as Success Coach. You could think of it as sort of an assistant to the professors, a bringer of actual real-life experience to the academic mix.

I’m joining as Chief Research Officer. I guess the thing I’m most excited about, frankly, is the combination of all of our analysts and research labs. All of that for me is like an endless buffet of knowledge. And especially at this unique time in history, it’s absolutely a thrill to be here. Just to riff on the motorsports theme, Dan knows that I’m a big Formula 1 fan. In a Formula 1 race, when something traumatic happens, a pace car can enter the track and the pace car resets the entire field. It doesn’t matter how far ahead one car is over the rest of the pack; the pack bunches up and there’s a restart.

In my mind, that’s what AI represents today. And so I’m talking to my students and folks in the IT community and the things we’re talking about today didn’t exist six months ago. All of us together here on the Six Five side, on the TFG side, on the Six Five LIVE side, we have this opportunity to listen to the restart, look for the restart flag, just like some of the much larger folks in this business. That’s what’s really, really exciting for me. With Lisa, I’ll be one of the hosts of Six Five LIVE. So, yeah. Super exciting.

Daniel Newman: Listen, I just want to say we’re thrilled to have you both with us. We really can’t wait to get this product off the ground. The plan is, at CES 2023 we’re going to have a buffet of content and-

Patrick Moorhead: ’24.

Daniel Newman: I keep saying ’23. I do not want to let this year go. We’re going to rewind it and we’re going to add live elements to every event we did. No. Thank you. Good. Thanks for making sure to correct me. It’s important everybody out there corrects me when I’m wrong. It’s also important to note that I will be endlessly making victory laps every time that I’m right.

Patrick Moorhead: Always.

Daniel Newman: We keep it all on balance. But Dave, Lisa, we love having you in the family. We love having you on the team as champions of what we are trying to do. Yes, couldn’t agree more, AI is changing everything and this deal, this week, this news will change everything about our trajectory together. Look forward to having you back on the show sometime soon. Lisa, enjoy New York. Dave, go back to bed, it’s early where you are. We’ll see you guys really soon.

Patrick Moorhead: Thanks, everybody.

Lisa Martin: Thanks, guys.

Patrick Moorhead: Looking forward to it. Bye-Bye.

Lisa Martin: Me, too. Bye.

Daniel Newman: All right, everybody. It’s time to get the regular show started. I hope that was entertaining, but just in case you didn’t know, the show is for information and entertainment purposes only. While we will be talking about publicly traded companies on this show, please do not take anything that I say as investment advice. I’ll let Pat do his own disclaimer. Just kidding. Don’t listen to him either, for many, many, many, many reasons. Just kidding, Pat, I think you’re super smart. All right, big week, we were on the road. Let’s just go in order. Lattice Semi had their first ever developer conference.

Patrick Moorhead: It was great to talk with the entire leadership team. I would call it day zero. We met with CEO Jim Anderson, the Head of Products, and CMO and Chief Strategy Officer Esam, who has been a great partner. But let me hit the big announcements up front and you can fill in all the fun stuff here. Two big announcements there. Avant, as we talked about on the show before, is really hitting the mid-range of FPGAs. For decades, Lattice has been focused on the lowest power and lower performance parts of the market. We’re talking about milliwatts, and as you get into the new Avant series, which I think is doubling their SAM here, it allows them a better opportunity to compete with both Intel and AMD. They brought out Avant-G, and this is a general-purpose FPGA. You could pretty much make it anything you want. You want AI, you want flexible I/O with multiple interfaces. They have a dedicated memory interface as well.

Then they also brought out the Avant-X FPGA family. This is really focused on super high bandwidth and security. These aren’t paper launches; they’re sampling today, which means you’ll probably see them in the market in end products in six months to a year. The second big announcement they made was with NVIDIA and AI. When I first heard NVIDIA and AI, I was thinking, well, NVIDIA does a little bit of AI themselves.

And they do, but the biggest challenge, let’s say in industrial robotics on the assembly line, is the ability to preprocess data at the sensor, whether it’s a camera or a vibration sensor. The two companies are working together, and I’m going to try to simplify this as best I can: Lattice has an FPGA close to the sensor and it does AI processing, and then feeds, I’ll call it, CUDA language, trying to simplify this, into NVIDIA Orin, which is their reference board for in-dashboard or industrial automation and stuff like that. Those were the three big announcements that they made, Dan, but we had some very strategic conversations as well with the senior leadership team, and you can check that out on the Six Five.

Daniel Newman: I think you hit the big notes. It was good to be sitting down with the leadership team. Jim Anderson and his crew are always very gracious and they’ve been very ambitious. Look, this is a company that had more than a dozen straight quarters of beat and raise, just incredible results. They understand their role in the kind of low-end or small FPGA space and have seen a strong opportunity, as the focus on larger FPGAs has really been what AMD with Xilinx has been leaning in on, and Intel as well, to move up the market and into the mid-tier. What they’ve been spending a lot of time hammering home to the market is that they’re going to continue to go up. They’re doubling their SAM. They’re offering some very unique capabilities in areas like AI and low power that are helping to enable their partners, which are the big chip companies in many cases that we all know and pay a lot of attention to, to create designs that are more power efficient.

There were some great demos. You can check out the Six Five videos we did on the ground to show how much lower power, how much more efficient memory they were able to deliver. Then of course they had some great AI demos too, things like face tracking, security applications, or how quickly they can get the boot up and down on the FPGA to make sure that the security on the server is as high and robust as possible. Milliseconds matter, as we heard on our video. But Pat, overall, having a developer conference is pretty baller. That’s the official analyst language, being able to get developers and that developer ecosystem to be paying attention to what you’re doing. They had thousands of people there, they had big companies on stage. This company is turning a corner. They’re in the billions now and that means a lot. They deserve a lot of respect.

It was a great conference, Pat. It’s a company to keep an eye on, even though industrial slowed a bit. They finally had their first quarter this time that didn’t guide up. The editorial here is they really are kicking butt and it’s hard not to be rooting for them. Humble team. It’s just small and scrappy and I appreciate that very much. All right, pal, let’s jump on. We moved on the next day to an industry analyst day. You and I were on the ground all day long at Marvell, and you and I did the double duty. That’s what we do here. We run our research firms and then we do a night job, as you like to say. Our night job is PG. We are on video, but it’s talking about technology. Yep, there you go. Got it right. And so we sat down with the President and Chief Operating Officer of Marvell, rising star Chris Koopmans.

Chris has kind of been like a six-month visitor to the Six Five. He did our summit, one of our main keynotes. He visited us on the ground about six months before that. Now, six months later, we were doing the recap. Look, there’s a lot there, Pat. The data infrastructure story around AI is palpable. Marvell was probably the second company that got real credit around AI, because they were the second company after NVIDIA that was able to announce a meaningful revenue number. Now, again, we’re talking different stratospheres in terms of the size of the numbers, but Marvell has seen it in its quarter-to-quarter run rate. They went from about $200 million a year of AI and data infrastructure to $200 million a quarter over a short period of time.

During that period of time, Marvell has been able to really communicate the story that yes, part of the AI story is all about compute, but the other part of the story is all about moving the data. And so Marvell has got critical infrastructure that is necessary and that the OEMs are going to build with, their modules that are going to be required to be able to move data, both optical and electrical. And that’s where Marvell’s growth story has come from. Having said that, the company didn’t just talk about data infrastructure. They did talk a little bit about the rest of their business across auto and 5G, but it was a heavy day of AI. It was a heavy day of AI, and I think they were calling it accelerated infrastructure. Accelerated data infrastructure was the term they were using to align with accelerated computing. The other thing that really hit home for me, Pat… And of course, watch the Six Five video. That’ll come out shortly. You’ll see us sit down with Chris.

The other thing that really came out to me is this is a company that’s become a really reputable and required partner to the ecosystem for cloud-optimized silicon. We’re going to see accelerators coming from Marvell, I believe, in the coming year. They’re going to have their first AI accelerator, cloud-optimized silicon. They’re a partner to many OEMs and many that are developing silicon. They’ve kind of been a critical one-stop-shop, white-glove service. That has been something that the company’s been able to lean on meaningfully to gain business. Now, this is an area where competition is growing. You’ve got companies like Arm announcing their own IP plus white glove, but Marvell was there early, they’ve been there often and they’ve been very successful. This is also a company that has seen a lot of parts of its business struggle. And so while AI has been up, everything else has kind of been sideways and down.

If we’re turning a corner, that’s going to be really good for Marvell, but otherwise they’re going to need to see that AI revenue start really quickly to offset other slower parts of their business, Pat. But it was a good day overall. Lots of good conversations, good demonstrations. What’d you see?

Patrick Moorhead: That was great analysis, Dan. Marvell hasn’t had the growth overall that I’m sure they would like, but what I always like to do is look at self-inflicted wounds versus market wounds. The reality is that carrier, automotive, and enterprise are down. They’re down for everybody, every single semiconductor company we talk to. Now, that one niche, the hyperscaler AI niche, is growing, just like you said. Marvell has had some exceptional growth in there, and they participate primarily on the network side, and they have some custom ASICs for AI that are ready to hit the market here. And what do you do as a company if you’re in a position like that? You just trudge on and you move forward, because you know the market is going to come back. It’s like the ocean. The ocean goes out and the ocean comes in, and you need to be prepared so that when it comes back in, you’re the best positioned to take advantage of it.

And the company, I believe, is investing in core intellectual property, building blocks, and also creating complete solutions to be able to address these core markets competitively. Two main announcements they made at the show. First of all, OCTEON 10 processors, two new ones. These are for network equipment folks, firewall folks. It really is a giant SoC, right? We call them DPUs. It contains Arm Neoverse N2 cores. It includes some ASICs on there to do acceleration. Two new ones out. Those are primarily used in the enterprise and also what I’ll call tier-two hyperscalers. Then they came out with quite frankly what I think is their biggest play long-term, and that’s upping the game with optical DSPs. If there’s one thing the industry agrees on, it’s that generative AI brings an explosion of data like we’ve never seen before. The reality is if you don’t connect the racks, if you don’t connect the fleets of racks together, and you don’t connect data centers to each other in the fastest, lowest-latency, lowest-power way possible, then you’re not going to have growth.

And as we talked with Chris, and I think you and I have known this, but it really is the trifecta. You can’t have a fiddler crab architecture where you’ve got a giant compute accelerator and not the right connectivity and not the right memory and storage. All of these things need to come together. Two new solutions came out. One was called Perseus, and this is optical short reach. Dan, you and I did a lab tour at AWS that we’re allowed to talk about.

Daniel Newman: Yes.

Patrick Moorhead: It’s public. What Perseus does is it replaces passive copper cables with active optical cables, so distances are 5 meters to 500 meters. And they brought out this other one, called Spica Gen 2, that is 10 kilometers. 10 kilometers. Why would you need an optical solution, a DSP, a PAM4 optical DSP, to do that? It’s for connecting data centers together or connecting you with the backhaul of the internet. This is clearly focused on the cloud operators. Those are the announcements. We talked a little bit about strategy, but please be sure to check out our conversation with Chief Operating Officer Chris Koopmans.

Daniel Newman: Yeah, great one, Pat. Thanks for covering off on the announcements and the details. Let’s move on, Pat. There’s a couple more. We’re doing them chronologically here because just ahead of AMD’s event, that morning Google came out with some big announcements, new LLM and new silicon. First of all, Pat, what are you more excited about, LLM or Silicon?

Patrick Moorhead: I have to tell you, I’m probably excited by both equally, and I know that’s a-

Daniel Newman: Cop out.

Patrick Moorhead: That’s a total cop out, but I’ve got to tell you, I see silicon and I want to dive into it. There are more LLMs coming out than there is new silicon, but Google came out with TPU v5p. And what is a TPU? A TPU is an ASIC, likely built by Broadcom, that is hyper-connected together. We talked about the bandwidth part, to do training and inference. And it’s the fifth generation, and you might argue it’s the sixth generation, but they’re calling it 5p. When it first came out, I would say the first four generations were really about internal use cases, so Google Search, Google Photos, and it was really targeted at machine learning and maybe a little bit of deep learning in there. But as it related to Google Cloud, they really didn’t open it up to people until about the fourth generation.

I would say they really opened it up on the fifth generation, because when I talked with Google about v4, there really wasn’t a lot of Google Cloud action. I think the reason for that is, excuse me, they may not have needed it. Then in the early days in the enterprise, it was really an NVIDIA show for training, and people wanted to have that CUDA compatibility. But as NVIDIA gets into 52-week lead times on their AI solutions, as we see, ASICs are absolutely more efficient. That’s not even debatable, folks. ASICs are just more efficient. They’re harder to program. You have to put in that layer.

Now, I feel like TPU v5p, I’ve just got to say it, I feel like it was rushed. Not a lot of preview time on it, not a lot of time to soak. It just hit. The news outlets that covered it first were basically consumer. It’s like, wait a second, what does this mean to the enterprise? Listen, I love consumer and I think it’s sexy and I think it’s cool, but as analysts, we don’t really impact that. I have a lot of questions. And then when I look at the, I’ll call it the raw specifications, which don’t always matter, like bandwidth and stuff like that, AMD and NVIDIA kind of run circles around it. Also, there were no competitive comparisons. You might say, well, AWS didn’t have competitive comparisons. But you know what they said? They said that Trainium 2 is the highest-performance, most efficient chip to do training and inference for LLMs that they offer. What that means by proxy is, we’re higher performance than the NVIDIA H100. I don’t know if that translated to the H200 that they announced as well, but still, it doesn’t matter.

You really don’t know at the end of the day what this can do. The second question I have, probably the final one, is how is this available to Google Cloud customers? I think it’s through Vertex AI. I could be wrong. I hope it is. But my final editorial comment is the timing was interesting, which was the morning of AMD’s Advancing AI event. Maybe that’s because the AMD event had Azure and Oracle Cloud and it didn’t include Google with AMD’s new stuff. But when I was at AMD, we were Google’s largest chip supplier in 2005. To be a partner that long, like 18, 19 years, and to drop a bomb right in front of your partner’s event was interesting.

Daniel Newman: Pat, you hit a lot of the high notes. First of all, we’re in a bit of this era of leapfrog and compete, and everybody’s showing everything and wanting to get it out to market fast. I think some of the reason we’re seeing silicon innovation coming faster than maybe would be optimal, or LLMs being launched faster than would be optimal, to me, is more about needing to continuously show progress in the public eye. Google, I think, learned its lesson, I hope so, on the first Bard announcement, where it definitely fell on its face, but I do think it recovered quite nicely. This one’s interesting, Pat, because the Gemini demo is really impressive, but there are some discrepancies, and one of the discrepancies in the market is: was the demo real? And so I’ve been reading a lot about this and I want to give a two-fold answer to this.

Pat, there’s always been a little bit of the behind the curtain of a demo at any event. The question is on a scale of, hey, we kind of optimized it a little bit into the demo video versus we pushed the truck down a hill. I think it’s a little further to the left or the right, but I do think that this is kind of one of these where there’s explanations and the problem that people don’t understand is we’re seeing progress being made in real time. And so everything the Gemini demo did, it is capable of doing, but it wasn’t capable of doing in the exact way it was presented, like the rock paper scissors demo. Right now, it couldn’t do it in real time watching the hand gestures back to back to back, but if you prompted it with an image of all three at the same time and hinted that it’s a game, it could do that.

I like the analogy of that because what we’re seeing is what we saw is the end state. This is where we’re going to be and we probably will be there in a blink of an eye, but we’re not actually quite there yet, but this stuff is being optimized in real time. I think the same thing could be said about the silicon. Look, they’re training the model, and I think the real important thing Google wanted to get out there is that it’s building silicon that can be used to both train its own models, which is a big sort of statement piece that all the hyperscalers are wanting to make right now. AWS was able to do this first with Trainium 2 and Anthropic, and of course what it’s going to do with Titan. And Google doesn’t want to be left behind, so it’s like, “Hey, we’re doing this.”

But yes, this is definitely not the last piece of silicon they’re going to develop. I’m pretty sure they probably already are taping out or working on their next two, three versions of this thing. It’s kind of like, am I happy with where it’s at right now? Did they make as much headway? Are they ready to compete with the…? I saw some analysts write about the H200 like it’s already out. We love to tell stories about things that don’t exist and then we make it look like they do. Vendors love when people do that, but the truth is everything’s a bit in motion right now.

Patrick Moorhead: Here was the sensitivity, though. So let’s dial back a year ago, I think when you and I were at the Microsoft Copilot event, and we went right from the big announcement to them letting us, with a person next to us, ask Copilot questions. Hey, I have $700 and I’m in Barcelona. What should I do? Tell me what you would recommend, and here’s where I am. It got it right most of the time. It made mistakes. Then a couple of weeks after that, Google does their first Bard event, their stock goes down like 10% and you can’t even find the replay. It was a disaster, right?

Daniel Newman: Yep.

Patrick Moorhead: But you and I, the whole time, were like, “This is a marathon and not a sprint,” and here we are. I think even with Google potentially stubbing… I mean, they stubbed their toe in the press on this in the way that they did the demo. I still don’t see any knockout blows that miraculously make Google Search lose 30 points of market share overnight, but I think it’s very important for Google to do a follow-up where they just nail it. Kind of like they did with their enterprise event that you and I attended, and then at I/O, right? They came through and it was very credible, very planned, but I just thought this was rushed.

Daniel Newman: A little bit, and like I said, every demo along the way has had just a little bit of Hollywood, and so the question is pure manipulation versus Hollywood effects.

Patrick Moorhead: The car down, that is so good. That is solid gold, Dan.

Daniel Newman: Thank you. That’s a Six Five annual highlight as we wrap up the year. All right, Pat, well listen, this is one we’re going to keep talking about. It’s onward. By the way, love all this competition. Let’s get onto our last long topic because we’re going to be hammering home five and six today. Like always, we do nothing in five minutes. I can’t even get the intro done in five minutes.

Patrick Moorhead: Do you have a hard stop in 13?

Daniel Newman: No, but I got a hard stop at 16, 18-

Patrick Moorhead: I got you.

Daniel Newman: All right, so let's talk about AMD Advancing AI. I'm going to do the highlight reel and then I'm sure you'll like to get into the mud with it, because you're always really good at that stuff. First of all, this was a big day that had been in the works for a couple of years. We've been hearing about the MI series. We've been hearing that AMD is going to have an NVIDIA-compete strategy. We've been hearing the market needs a competitive data center GPU. We were hearing that maybe it was an inference powerhouse, and then we heard maybe it would be training and inference. On this day, AMD was able to march out with one very, very competitive data center GPU for the cloud, the MI300X.

Second, they were able to march out with partners that are incredible validators of what they are doing: Meta, OpenAI, Microsoft on stage with them talking about utilizing the new MI300X as part of their go-to-market strategy for different uses. Some for all uses, some for inference uses, but nonetheless, using the AMD products. The company was able to come out and talk about higher layers of abstraction in ROCm 6, which is the critical get-it-right that AMD needed to lure in a broader part of the ecosystem to develop for their hardware, and had some very positive momentum around ROCm. They were also able to announce an on-prem-friendly offering for HPC and accelerated computing, and they had the likes of Dell and others on stage that they were partnering up with on this.

Then, Pat, they really ran the gamut and brought the PC in. They're like, “Well, let's not just make it a data center show. Let's do PCs too, while we're here,” and were able to announce the newest version of their AI PC. You and I got to talk to their entire executive team, talked to Lisa Su, had a great interview, can't wait to share that with all of you. Also talked to their head of data center, Forrest Norrod, and their CTO, Mark Papermaster, and others got to talk to their AI PC team. Very, very exciting week. Pat, here's the question. This is the question everybody asks. There's two questions. One, is AMD competitive with NVIDIA? Two, are the cloud providers going to be a partner or a competitor, and when does that happen? Those are the two questions that everybody asked me this week, on both US and international CNBC. We both did some of that, and those were the questions.

One, Pat, this is not a finite race that's been run; it's not over. Lisa mentioned that. You look at the TAM expansion, you look at the demand, you look at the supply chain, you look at the need for alternatives. AMD is in the race. They've got a multi-billion dollar pipeline, and I think that pipeline will expand. I think this announcement is validation. By the way, I still think NVIDIA's got a really big lead and they're going to continue to innovate. It's not A or B, it's A and B. The other thing is, look, I think that the cloud providers are going to continue to be good purveyors of merchant silicon to the world. They will build vertical integrations. They will partner up with all of the silicon providers, NVIDIA, AMD, Intel, and they will do that for as long as it makes sense. This is collaboration and competition at its best.

And by the way, they don't even talk to the same people about it all the time. It's just not the same thing. But the fact of the matter is that you've got companies deploying workloads in the cloud: some want Intel, some want AMD, some want NVIDIA. They are going to be successful because they're building a good product with good specs. Pat, last thing I want to say, because I could talk for a long time about this but I'm trying to keep this on time, is that I was really impressed at the boldness of AMD. They're not always the company that wants to come out and punch the competitor, but to some extent I thought they were very bold. I know there's a new GPU coming from NVIDIA, but look, you can only compare what's out and available in the market right now. AMD was able to take advantage of the moment, comparing with what's in the market, and show impressive training and really incredible inference capabilities on the new MI300X, which, let's be candid, was the star of the show.

Patrick Moorhead: Dan, there is so much to talk about here. You're right, we could do an entire show, but tune into all the interviews we did with the senior executives, Lisa Su and three-

Daniel Newman: Forrest Norrod, Mark Papermaster.

Patrick Moorhead: Exactly. Victor Peng. I was on CNBC last night, like you. It's funny, they should just bring us on at the same time. That'd be hilarious. But here's where I am. It took AMD many years to field a credible AI-focused GPU, and it's the MI300X. This is not just about NVIDIA having a 52-week lead time. This is about AMD bringing in some killer hardware, and also being on its sixth generation, I think, after 13 years. One of the first white papers we published was on ROCm. ROCm, which is essentially like NVIDIA's CUDA, is this abstraction layer that sits below, let's say, a PyTorch, a framework. You can write directly to it, but it is competitive for AI. I would say ROCm 5 was competitive for machine learning. ROCm 6, based upon what people said on stage…

And even Meta. People don't give Meta enough credit for the research, the science and the code that they delivered. They fricking invented PyTorch, folks, and they're very good. They have LLaMA models now that are open source to really shake up this industry. And for them to say anything nice about ROCm 6 I think is a huge accolade for the company. Now, they didn't say, “We're using it for sure, a hundred percent, not just going directly to PyTorch,” but for them to say anything nice about that, I thought was a big deal. You can't argue on X with Azure and OCI and Dell Technologies and Lenovo and Supermicro.

By the way, Lenovo and Supermicro especially equate to people asking for AMD. Dell doesn't sell what Dell customers don't ask for. They do not push products. That's just not their thing. Lenovo did, in the last three to four years, help create markets for AMD. In some ways, the way I look at it, they're kind of the replacement for HP and HPE as it relates to partnering with AMD.

Final word on AI PCs. It's interesting, I really didn't even talk a whole lot about the company having this Ryzen 8040. Not a lot of people were talking about it, and quite frankly, it's because of the software. What struck me is that AMD's PC strategy is very similar to Intel's in the beginning, which is: we're going to leverage CPU, GPU and NPU to deliver the AI magic. That's a challenge, and it takes resources, because it's harder to program across CPU, GPU and NPU than it is to just go to the NPU. That's where I think Qualcomm, in the middle of the year, is going to have an advantage on this whole thing. It's going to be interesting how this parses out. But at a minimum, AMD will be competitive with AI PCs in 2024 as long as they can get some key ISVs on board with optimizations in the beginning for this CPU, GPU, NPU approach that doesn't heat up the system.

Then secondly, they didn't say in their roadmap that they were going to up the NPU, but it's just kind of an obvious thing; that's going to happen. Qualcomm's got the 40 to 45 TOPS showing up mid-year and it's just so easy to program. So, sorry, I know we're going long. I see your body language. Let's go, Pat. Stop talking. We got two more topics.

Daniel Newman: No. No, no, no. You never talk too much. Ever-

Patrick Moorhead: I do.

Daniel Newman: … in your entire life. Look, the track is waiting. The track is waiting, my friend. We got a couple of items that are worth noting too that we’re going to tie this off on, Pat. One of them is the first earnings report since the close of the deal for Hock Tan. You went on CNBC, you talked about this, you killed it, crushed it. What happened there?

Patrick Moorhead: They had an earnings beat. What’s interesting, I thought they met revenue, but people are talking about them missing.

Daniel Newman: It depends on the number. I saw a beat, like a small beat, but it was actually right above, so.

Patrick Moorhead: Exactly.

Daniel Newman: I don’t know what they’re talking about.

Patrick Moorhead: Exactly. I get on the show and it's like, “And they miss.” It's like, “No, they didn't.” Anyways, that doesn't mean anything. It's kind of exactly what I had expected, with no surprises. Semiconductors, mid-single-digit growth, EBITDA. Let's just jump into the forecast and what they talked about for the future: $50 billion in revenue. I think the street was wanting 51. EBITDA of $30 billion. My take on that is, those are incredible freaking numbers. It's going to come after a lot of re-architecture: spinning off the end-user computing group and Carbon Black, some layoffs, some consolidations. It's the classic Broadcom play of letting the business unit run its business and then minimizing shared services at the corporate layer.

It worked in semiconductors, and Broadcom learned a lot with their first two software acquisitions. Anyways, I think that they're going to nail this one. I think they're just being conservative. They're focused on the right things with VMware, which is private and hybrid cloud. I'll do a victory lap: I think a year ago I said that they were going to spin off the end-user computing group and anything that has to do with a PC. It's not strategic. It was affirmed last night that those would be divested.

Daniel Newman: That’s it?

Patrick Moorhead: That’s it, baby.

Daniel Newman: I want to know how anybody could have a single complaint about a company that generates $30 billion of EBITDA and $50 billion of revenue.

Patrick Moorhead: No, I know. It’s incredible.

Daniel Newman: This company's incredible. Pat, 60% EBITDA at $50 billion in revenue. I don't know. You can question his approach, but it's very hard to question those results. And so, good earnings report. Okay, look, single-digit growth of VMware, it's going to take a minute. I can tell you, as someone that's bought six companies in 12 months and is trying to spin them up and grow them, that there's always a bit of a… It's a slingshot when you first acquire them and you get all your synergies. At first, sometimes you even go backwards. It's like, why am I going backwards? Because there's so much minutia you have to deal with when you acquire a company. Now do this at companies that are billions instead of millions of dollars and try to figure it out. People, products, go-to-market, distribution, relationships, all this stuff takes time to sort. That 8% or so that they're forecasting will grow, I absolutely assure you, because he's going to refocus it.

He's going to focus it on the things that are important. He's going to take the focus away from everything else. By the way, Pat, I just thought it was kind of interesting, they're going to move to Palo Alto. They say that Hock is shrewd with the dollar, but he knows a good thing when he sees it. He's also told everybody to get back to work. He wants the big campus and he wants it full of people, and I thought that was pretty cool. That is an amazing campus. I look forward to seeing it full of people, Pat. It was a ghost town. We went there a couple of different times. I was there. It was eerie. It was almost dystopian how empty the place was. He will have the place buzzing and humming with people. And if their workplace is anything like our life, I don't have a lot of sympathy for that.

We’ve been on the road every day of every week for the whole year. I think people that have great jobs working for really exciting companies should occasionally go back to the office. That’s not a bad thing either. Overall though, good numbers from Broadcom. All right, let’s take this baby home. The last item was a big… In the spirit, Pat, we got this one a little out of order, but wanted to save this for the end because there’s a lot of interpretation of what’s happening here.

IBM and Meta went out together and announced the formation of an alliance called the AI Alliance, promoting open AI, looking at everything from processing to governance to security to development. They've brought 50-plus founding members into this group, including academic institutions, national laboratories, SaaS companies, and infrastructure, PC, compute and processing companies. What does it stand for, Pat? Look, I've got kind of a two-headed interpretation of what's going on here. The first is that AI is moving incredibly fast. Incredibly fast. Two is that there's a problem with few companies having too much power.

And so it's always interesting when a company like Meta, who's sort of known for data abuse… Again, Meta is an incredible company that does amazing things, but for people that use the products, Facebook and Instagram, the ads are spooky. I don't know why. But having said that, they're also building really neat silicon. Nobody really talks about it, but Meta is doing some really neat things in silicon. They're building the next generation of AR and VR. They're changing the way people are connected around the world, and they're using AI and have been for a very long time: recommender engines, filtering for data systems. And Pat, IBM, by the way, has been a bellwether of this enterprise AI movement.

You've said this often. I'm going to quote you. Since I'm not going to let you talk, I'm just going to quote you. But you've said, “They do not get enough credit for being first. They were the first company with a generally available enterprise AI solution, and they were the first company to come out with an indemnification that says, ‘If you're utilizing these technologies in your business, we will indemnify you from some of the legal risks that are being created by utilizing generative AI in your business.'” These two companies, coupled with AMD, coupled with ServiceNow… You've got some of, like I said, the biggest laboratories in the world. You've got, what, Sony in Japan? They're all coming together, saying AI is a big, important thing.

There's a lot of concern around privacy, data, and risk. There's a need to make sure no one company has too much power or control over the proliferation of AI. I like seeing this type of consortium, Pat. I'm going to leave you on this note. My only concern with consortiums like this is: what is the ethos, and then what is the mandate? The ethos I'm already pretty comfortable with, but the mandate is, how much are you going to make sure that these founding members, and the people that eventually join this thing, hold themselves and each other accountable to actually live the mission, meet the mission and drive the mission of having this open, secure AI environment that's going to advance the human and economic condition of our world?

Patrick Moorhead: Wow, that was a big ending there, dude.

Daniel Newman: Thanks, man.

Patrick Moorhead: Can I just say out loud what nobody’s saying out loud? This is an alignment of companies that are very goal aligned to have solutions that aren’t locked into NVIDIA and that aren’t locked into closed models. That is what this is about. At AMD, we used to call this the virtual gorilla. We had 1/20th or 1/10th of the resources that Intel had, so we aligned across companies to be able to pull together solutions, because nobody liked any company having 100% market share. People aren’t comfortable, supply chain wise, that NVIDIA is such a big part. We’ve got a 52-week lead time here. They love the innovation from NVIDIA and they love the foresight and they love the investment. But the fact is that people are uncomfortable with that.

It's not just the hardware. Reading between the lines, this is also a way to minimize reliance on CUDA as well. I've served on the boards of many of these alliances in my 20 years when I had a real job. The key here is you've got to get some quick wins. You have to devote specific resources that are named for the group. You need to have somebody who is orchestrating this. I think that this is what IBM and Meta are going to do. Otherwise, stuff just gets stuck in committee. Dan, there was a theory 10 years ago that OpenCL, as an industry group, an industry standard, would be able to be the proxy for CUDA. It was the open version of this.

The thing was that NVIDIA went 10x faster than OpenCL, and then everybody basically gave up, and here we are. Listen, hats off to NVIDIA. I'd be doing the same thing if I were there. Shrewd business people working 24 hours a day, investing multiple billions in R&D and throwing the 50-yard bomb every three years. I'm really interested to see where this goes. This is V1. I'd like to see networking vendors involved as well. We saw Broadcom, Arista and Cisco on stage at the AMD event talking about low-latency Ethernet. I read between the lines there, and they didn't say the whole thing. There's obviously some lower-level optimizations going on there, because it's not just about accelerators; it's about connecting accelerators together and connecting all those GPU servers together.

The second one: I'd like to see SaaS. You and I both talked about that, and I view the Oracle announcement as OCI as opposed to Fusion or NetSuite or something like that. It was great to see the SaaS providers who were in there. ServiceNow, a big company, a big value-add on the enterprise side, but we need to see more of that. So listen, hats off to IBM and Meta for the leadership, and to everybody who signed up first. If there's not something that comes out in three months, I'm going to start asking questions.

Daniel Newman: Pat, that's a great insight and a really great way to wrap up the show. There's going to be a lot more on this. This isn't the end of that topic. I think we did it, though. I think we did it. Great show, great conversation, great guests. Had David and Lisa, thanks so much for joining. Excited to have you as part of Six Five LIVE. It was a big week in AI, it was a big week for you and me, but we got stuff to go do today, so let's make that a wrap. Everyone out there, hit that subscribe button. Join Patrick and me for all of our Six Five weekly shows, and of course sign up for The Six Five. It's growing, it's expanding, it's going to be amazing. Also, check out our new testing validation, measures of merit, merits of measure company, Signal65. But for now, we're out of here. We'll see you all later.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experience and an understanding of what he wasn't getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.