Talking Apple, AMD, Samsung & Micron, Adobe, Samsung & Apple, Arm

By Patrick Moorhead - April 22, 2024

On this episode of The Six Five Webcast, hosts Patrick Moorhead and Daniel Newman discuss the tech news stories that made headlines this week. The handpicked topics for this week are:

  1. Apple Vision Pro Developers Losing Interest?
  2. US Awards Samsung & Micron Over $6B in CHIPS Act Funding
  3. Does AMD Really Have a Datacenter AI GPU Problem?
  4. Adobe’s Use of Midjourney
  5. Samsung Knocks Apple Off Of Number 1 Market Share
  6. Arm Says CPUs Can Save 15 Percent of Total Datacenter Power

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Webcast so you never miss an episode.


Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Hey, everybody, it’s Friday and we are back with another episode of The Six Five podcast. Very excited for this Friday. Big week in tech. Aren’t they all, Pat? Big week in the world, man. Weekend to weekend. Last weekend I was stewing in my man cave watching drone missiles shot across the world, and then wondering what the next week was going to look like, and it was like business as usual and everybody’s mad at Tesla, or something, and then onwards we went. But it’s Friday. It’s episode what, 223? No, 213. I gave us 10 more than we’ve deserved. We’ll be at 223 in 10 weeks, everybody. Pat, how are you doing this morning?

Patrick Moorhead: Doing well. It was a late dinner last night with my bestie. It was fun talking AI. I’ve lived in Austin over 20 years and went to a place that I had heard of but had never gone to, and it was interesting. It’s a place called the Persian Club, which is right smack dab in the middle of East Austin, and it’s like a social club. Dinner there, good conversation, chatting about AI. But it’s been a little bit of a recuperative week. Shockingly, we’re not on airplanes.

Daniel Newman: Speak for yourself, dude.

Patrick Moorhead: Oh, that’s right. Big boy went, got on an airplane to take some selfies with the CEO of a company he just bought. Dan, what did you buy this week?

Daniel Newman: So I made it the first two minutes without being self-promotional, so that was good. But let’s talk about what we did. I did a thing. We did a thing. The Futurum Group acquired Techstrong, and Techstrong is one of the market leaders in intent-based demand, live-streaming, and content. And then of course it’s the popular owner of some of the most-read websites in business and tech: 15 million readers across DevOps.com, Security Boulevard, Cloud Native Now, Digital CXO, Techstrong TV, lots of good stuff there. And what I’m so excited about for The Six Five is that this is going to become a distribution arm.

So we’re going to go OTT, you’re going to be having Six Five streamed right on your Roku, right on your Samsung Smart TV, Apple TVs. There’s going to be an app for that, buddy. And so in the morning, instead of turning on any of those other business shows, you should be turning on The Six Five. We’ll be running our Six Five Summit, which goes live in June, on there. But look, we’re going to add hundreds of annual web events, dozens of virtual events, on-the-ground live-streaming. We’ll be at RSA. We’re going to be at ServiceNow Knowledge. And of course there’s going to be so many integrations with Six Five sharing the research from Signal. Tech Field Days will be there. This was a good week.

But if I can stop for a minute and just say something that’s more important than any of this business stuff, I just want to say a quick prayer for my mom. She’s in surgery this morning; she’s dealing with cancer. This is the fourth time, a recurrence. And I’m praying for you, mom. Love you, mom. You’re the strongest woman I know, and I know everybody says that about… Every good child says that about their mother, but you really are super brave. I can’t wait to hear it was successful and to talk to you on the other end. So thanks for letting me have a minute there. But it was a crazy week. So I guess when you’re building things fast, Pat, you move fast, you buy companies, and then you stop and you reflect a little bit while all this is going on. But hey, let’s get past all this preamble crap, we got a lot to talk about today.

For anybody out there that hasn’t seen the show before, I’m going to ask you why. You should hit that subscribe button and be part of our community. Watch us every week. We are the guys that like to talk about what’s going on. A little bit unfiltered, straightforward, a little news, lots of analysis. This week we’re going to talk about Apple. We’re going to talk about some awards from CHIPS Act. We’re going to talk about AMD and what’s going on with their GPUs. We’re going to talk about Adobe, and did they do something wrong in training models? Samsung, did they overtake… We talk a lot about Apple, man, do we talk a lot about Apple. And then Arm and what’s going on with some of the power envelope and are they creating an interesting advantage?

But just a quick note, this show is for information and entertainment purposes only. And while we will be talking about publicly-traded companies, and by the way, next couple of weeks, a lot of earnings and a couple of special guests coming on the show, Pat, we’ve got a couple of special guests. We’re not going to tease it yet though. We should wait. Just surprise everybody?

Patrick Moorhead: I think we won’t tease it yet.

Daniel Newman: We won’t tease anything yet, but don’t take anything that we say as investment advice. So, let’s get after it, Pat, because although The Six Five is six topics, five minutes each, we stink at that. It’s never five minutes. So Apple Vision Pro, are they losing momentum with the developers?

Patrick Moorhead: So there was some research that came out from a company called Appfigures that showed the precipitous decline of Apple Vision Pro developers. The first week it came out, you had 150 apps that were added. The week of 3/25, there was a whopping one application added. And I compare this to the early days of the iPhone and it really was the inverse. You saw it just going up and to the right, as opposed to this giant cliff there. And by the way, let me just first say that I’m not anti-Apple. I just expect a lot from Apple given their resources, given their heritage, and I don’t like the monopolistic behavior that they’ve done. But I have to say that Apple Vision Pro does do some pretty incredible things better than its competition that costs half or a third as much, right?

Daniel Newman: No, I was just going to say, don’t you ever tell your kids, “I expect more from you?” Apple’s the kid that you expect more from.

Patrick Moorhead: Totally. And I do. So its video experience is pretty awesome. There were some really good reports from Anshel Sag about how good the PGA golf app was, but demand is falling off a cliff. And what I do is I look at the installed base, and for developers, you don’t want to just throw money in, particularly when you’re doing something dramatically different, and you can’t just take an iPhone app and slap it into AVP and have a good experience. You have to have proximal understanding. You have to have a bit of a different UI and experience.

So it looks like it takes an incredible amount of work to do that. And I think this is a statement of future demand for this model of AVP. AVP 2 will be cheaper, it will be better. And AVP 3 and AVP 4, and maybe we will see a turnaround in that. I haven’t booted up my AVP in six weeks, n=1, but there’s not the killer app to go in and do it.

Daniel Newman: Pat, I tweeted something, it was probably one of the most viral tweets in a long time when I asked the question… What you’re seeing in the data is I think what we’re feeling without needing to do a lot of assessment, and that’s lost interest. You remember the week or two after it came out, people were driving up, and getting out of their cars, and they were wearing them and you were going to coffee shops and you’re seeing people, and everybody was talking about taking their first flight with their AVP.

So all these front-end, early-adopting influencer personas were out and about and they were largely raving about how amazing it was. But then after two weeks it just started to fade. And I just asked the question and of course it was controversial, but even Robert Scoble retweeted and shared what he was seeing, and he’s always been a pretty prolific Apple fanboy.

Patrick Moorhead: In fact, he’s renamed it to the Cult, I think.

Daniel Newman: And so the long story short is that this is like a beta product that was made generally available. You have a small subset of users, and you do have some really interesting and exciting use cases, but it does not seem that the wider public is convinced that it’s a good use of $3,500. You have one, Pat. So I’d be interested. Sorry, I’m losing my voice. I don’t know what’s going on. I’d be interested, how many times have you picked it up in the last three weeks, four weeks? And feel free to share your use cases with me and the audience.

Patrick Moorhead: Video is the biggest use case.

Daniel Newman: Watching movies.

Patrick Moorhead: YouTube, even though YouTube doesn’t have a native app, watching videos on it is pretty good. I watched a couple movies on it, and it is absolutely captivating. Now it’s super annoying that when you’re literally sitting in bed watching a movie, you have to have the lights on, otherwise none of the proximal technologies work. And any time you move your hands, your hands come into the frame. So it’s super annoying.

Daniel Newman: I was in a car the other day that had that gesturing on the panel. And as someone that talks with their hands, the radio station kept changing, the volume kept going up, it kept swiping, because the idea of gesturing is a good idea until you actually meet someone like me that talks like this. All right, so bottom line is a lot of work to do for Apple. Let’s go on to the second topic today, Pat. We had a couple more CHIPS Act awards this week.

Following the award to TSMC, it seems like $6 billion is the range. Everybody’s going to get $6 billion. You get $6 billion, and you get $6 billion, and you get $6 billion. Oh, and Intel, since you’re the most important, you can have $8 billion. I don’t know, Pat. Look, I think you and I have both been pretty steadfast about this correlation that exists between memory technology and AI chips, and that building the advanced memory, mostly we’re talking about HBM right now, at scale is going to be a rate limiter for the success of AI silicon, because you need enough memory for all the throughput that you’re going to require to deliver on the promise of AI. So who’s building that?

Well, in the U.S., Micron is the primary US-based leader. And of course Samsung is another major player in this space. So both companies are getting more than $6 billion. Samsung’s going to be expanding its local presence here, Pat, both in Taylor and Austin. It’s largely going to focus on what people expect to be its four, and then later two, nanometer process and its most advanced process for memory.

Now, Pat, interestingly enough, all the news keeps banging on the fact that US-based manufacturing of silicon has dropped from 37% to 12% over the last three decades. Interestingly enough though, by the way, this is just a reminder, you and I both know this, but for everybody out there, the actual percentage of leading edge built here in the US is…

Patrick Moorhead: Very small.

Daniel Newman: It’s like zero. It’s negligible, basically. And so even when Reuters tries to say it’s 12%, that’s mostly older nodes above seven nanometer. So, the fact is that what we’re trying to solve, for those that have forgotten because it took a couple of years from the time that the CHIPS Act was announced to the time that these grants started going out, we’re trying to solve the fact that we have no resiliency outside of Taiwan for anything seven nanometer or below. So all these grants are heavily focused on those processes, and it’s obviously logic and memory.

So with TSMC you’re seeing the focus on logic, with Intel logic, and now we’re starting to see the memory companies, Samsung and Micron, getting substantial awards. And there are others, some of you have probably heard of SK Hynix, which so far has not been granted any dollars here. But for Micron, it’s largely expansion in the Idaho and New York markets. They have a big four-fab project in New York. So they probably got a lot of support from Senator Chuck Schumer for that.

We know that New York has an interesting semiconductor hub out in Albany. We’ve been to the IBM Research Center there. We spent some time, and they’re really trying to be another hub of innovation there. So, this is going to obviously expand it. We know that GlobalFoundries also has a pretty substantial footprint out in that part of the world. But Samsung, it’s all Texas, baby. It’s all here in Texas. This is where it’s all happening. This is where you should move right now, viewers. You should be down here in Austin, as long as you tend to agree with Pat’s and my viewpoints. But look, it’s moving forward. Awards are being granted. You’re talking about some different timelines with each of them.

And so it’s going to be several years, though, before we’re starting to see the byproducts of these. And by the way, for both companies, I think Samsung’s $40 billion is what they’re planning to spend all in. And Micron, I think it’s a hundred billion, over two decades, though. So, good progress on the awards. I’m still wondering at what point we’re going to start to see real, meaningful production here in the US. It’s at least a few years out.

Patrick Moorhead: So the memory market is back. And for those who have not been following for the last 40 years, memory is hit-and-miss. You’re either in an oversupply or an undersupply. When you’re in oversupply, your customers are driving rapid price cuts and it’s ridiculous. And we saw negative gross margins for Hynix, Samsung, and Micron. But memory’s back, baby. Why? Because all of the platforms that it goes into are gaining demand, and all three of these companies pulled back on their capital expenditure, and even pulled back on the amount of hours that they run their fabs.

So prices are going up. You may have seen that Micron basically announced they were sold out of HBM3E for the year, and it was March when they made that announcement. We saw Samsung come in strong from a P&L standpoint on semiconductors. Now they don’t break out memory versus logic, or sorry, memory versus things like processors, but you just know that most of that was memory because most of the processors they manufacture, they consume themselves.

So it’s great to see memory back. And oh by the way, wait until AI PC and AI smartphone hit with all these on-device models. It’s going to be absolutely bonkers, because you need more memory to do the processing of it, and you need more storage to store the 10, 20, 30 models that are going to be sitting on your phone. And then from a Samsung standpoint, here in Taylor, it’s super, super exciting. And essentially what that means is that you’ll have Intel coming out with 18A first here in the United States.

You’ll have two nanometer coming out of Taylor by 2026. And then I think TSMC said by 2028, they would have two nanometer running in Arizona. I’m still piecing together if TSMC is being conservative on their dates. I had originally thought that to get the money they would’ve had to do bleeding edge here. And 2028 for TSMC, two nanometer is anything but bleeding edge, it would be one or one and a half nodes…

Daniel Newman: It’d be like seven right now, right? It’d be like saying we’re at the leading edge with seven?

Patrick Moorhead: Probably more like five. Three nanometer is leading edge. And by the way, this nanometer stuff is fake. I am tired of doing air quotes around it, but if somebody can use an electron microscope and show me a two nanometer gate width, then I will retract. I will retract that.

Daniel Newman: 0.1.

Patrick Moorhead: Macro… Sorry, what’s that?

Daniel Newman: I was joking like 220, 221.

Patrick Moorhead: So macro here, the US by 2030… they want to do 20% of chips here in the United States, and that’s how this all comes together.

Daniel Newman: Some good adds there Pat. And thanks for filling in some blanks.

Patrick Moorhead: Well, except I didn’t move the chyron forward like an idiot.

Daniel Newman: We can. Well hey, we should get upset at the producers back in the production shop.

Patrick Moorhead: Producers, I am going to fire you. You are on notice. In fact, you’re fired. Please leave.

Daniel Newman: Fired. I should leave now. All right, so because we don’t like to talk a lot about semiconductors, let’s talk some more about semiconductors. Pat, by the way, the tweet of the week was “Hardware’s back, baby!”

Patrick Moorhead: Saw that. Dude, you totally dangled that in front of me and you knew that was red meat for me.

Daniel Newman: I put that juicy bit on there and I just laid it on the thing. I even @’d you, I put it out there and I tagged you. I just knew you were going to write something and make that blow up a little bit more. But also, look, how long were you worried that you were getting hidden in the closet? We expanded our business for years focusing more and more on software, but it wasn’t about software instead of hardware, it was software needs hardware, and it’s become symbiotic. And now having some hardware chops is actually cool again.

And Google, and Microsoft, and other companies are actually leaning in more to hardware engineers, because the fact is that if the hardware’s not right, all the software development in the world isn’t going to fix that. So anyways, sidebar, another red meat tweet that came out this week was that Google apparently has turned away from going forward with the MI300 as part of its offering, Pat. And that seemed to get buzz and create some concern and some interesting conversations on Twitter.

Patrick Moorhead: It did. And I love when I get attacked on Twitter, it’s pretty good. But so this came from a Dan Nystedt tweet, by the way, Dan, if you ever watch the show, you should get on the show. You should talk about this. Anyways, he cited The Information article. Essentially, Dan, as you said, Google Cloud and/or Google are saying they’re not doing that. So a couple of things going on here.

First of all, Google didn’t say they would never use AMD MI, they just said, “We’re always looking for the best technology out there.” And for this round they’re using their own homegrown and NVIDIA. The other thing that I think was missed in the context is literally one of the largest hyperscalers with the largest data estates, Meta, was not included in the conversation, which made it incomplete. So when I step back, AMD actually has two out of the four US-based hyperscalers that are adopting it.

And by the way, I said something. I said that they’re sold out, which got a lot of people going. Let me give you the double-click of what I mean by that. What is spoken for, reserved. You can still buy an MI300 through the channel if you want. And those were reserved for the channel. But when you look at the giant volumes and the ramp that AMD had to do for its two out of four hyperscalers, and, by the way, Dell, Lenovo, and HPE, I think that the company is taking a measured approach on who it makes commitments to.

Having worked at AMD for 11 years, I saw sometimes that AMD would get into the situation where they would give a little bit of processors to a lot of different people, and that left a lot of them dissatisfied. Now in CPUs it’s easier, because the work that goes into the software isn’t zero, but it’s nothing compared to the software work you have to do to stand up a new data center AI GPU.

So I stand by my belief that MI300, for the most part, is all reserved. You still can buy it. And I think when you look at the commitments, and I’m going to read right off the transcript from the last earnings, they said that from a supply standpoint, Lisa thinks that her supply chain partners can ship more than $3.5 billion. And if we map it: AMD’s sales for the previous year were $400 million, then there was a forecast that went to $2 billion for this year, and then $3.5 billion. I believe, when it’s all said and done, AMD will probably be around $5 billion in MI for the year. And a reminder that MI300 is the fastest ramping product to a billion in the history of AMD, pretty amazing when you look at what Opteron and EPYC did. So just trying to clarify that, and hey, hit me up in the comments if you think I’m completely out to lunch there.

Daniel Newman: Well, there’s a sell in, sell out, sell through aspect of all of this. And when you have channel partners that are buying and building servers based on an architecture, they want to put some on the shelf. So AMD sold them. So it’s all a technicality. But the thing about Twitter, X, is that for those bullies and toxic people, it’s an absolute heaven for them. It’s heaven for toxicity. But it can also be a good debate. So that was somewhere in between. Anyways, let’s talk. Let me just add a little bit. There’s not a lot to add on this particular topic here. The long and the short of it is that AMD… by the way, you know who ramped Opteron? Were you around, anybody? Is there anybody?

Patrick Moorhead: Well, gee, I don’t know. I hired the first product manager for Opteron in 2001 and I was there and then I ran corporate marketing when we did the announcement.

Daniel Newman: Okay, just making sure I could give you a little victory lap here.

Patrick Moorhead: Thank you.

Daniel Newman: Where I could squeeze one in. Look, AMD, there are two, three different forces they’re fighting right now. You’re fighting the NVIDIA force, which is palpable. Everybody’s feeling the pressure. And if you’re selling your AI solutions right now, everybody’s going to get all the NVIDIA they can and they’re going to buy. AMD was the first viable merchant silicon GPU in the market right now, programmable and offered with the software, ROCm and frameworks, and it had strong demand.

But the cloud providers are doing their own thing. The cloud providers are all building their own AI silicon. And let’s not mistake for a minute, they’re going to offer a variety, but they do have to focus. They’re making big bets, big investments now, too. AWS has been doing this for a while, but Microsoft and Google have now turned up the temperature. They are adding more. And so you got to be thinking that they’re going to put some rails up on how many things they want to offer, because you give the lowest common denominator to sales, and they may not sell the in-house stuff.

So I think there’s an impact on that. They’ll have NVIDIA because that’s what everyone’s asking for, but on the secondary offering side, if it’s going to be any other platform, if it’s not CUDA, if it’s not NVIDIA, they’re building and selling their own, which is higher margin for them. So I think that might be the bigger force. And let’s not kid ourselves, with what we’re seeing Intel pull off with the AI PCs, you can be sure they’re in the background. Intel is still very instrumental to all these cloud companies and their CPU business. And with what they’ve offered with Gaudi 3, while it’s not the same as a full-on GPU, I believe there are conversations about reserve capacity there, too.

And if you’re one of these giant cloud companies, you are thinking not just about what you’re going to offer right now, you’re also thinking about what you’re going to be supporting over the next few years to come. So they’re balancing homegrown, the demands of NVIDIA, demands, I believe, coming from Intel, and having the alternative of AMD. And AMD doesn’t always have the same scale to market, and to push, and to create demand as some of the other companies that have bigger marketing budgets and, in NVIDIA’s case, a lot more immediate demand.

All right, so let’s talk about another controversy, one that you didn’t create with a tweet this time. So Adobe took some heat this week. I think the story first broke on Bloomberg, but basically Adobe’s whole line was, “We’re really ethical. We train everything 100% using an ethical framework on our own stock imagery.” It turns out that someone was able to deconstruct the model and find that Midjourney, another model that’s used, not an LLM, it’s an image generation AI tool, some of its imagery was used in Adobe’s Firefly. Pat, okay, so can I be a little bit, I’m going to be a little callous about this.

Patrick Moorhead: Do it.

Daniel Newman: Feels to me like, really? Really? Everybody’s loving on some Sora, and here you got OpenAI’s CTO: “What did you train your model on?” “Publicly available data.” “Did you use YouTube?” “Uh…” And then, by the way, did I do a pretty good impression of the interview? I’m going to get the angle right. You did. All right. So anyways, they came back and basically nobody cares. In the end it’s like, so basically she didn’t say we trained it on something else, but she didn’t really say they didn’t train it on YouTube data. So here we have a model that’s what, 5% apparently, that was trained on Midjourney, I think lower prioritized data in the overall model framework.

I think if we actually unpack all the models that have been trained, we would be super disappointed to find out that everybody’s telling us something that’s not exactly correct. This doesn’t give Adobe a free pass. They tried to take the high road: we’re doing it, and we’re better in how we do things. And it’s never good when it comes back that you didn’t do what you said you did. But having said that, I also think there’s just such a huge gamut. The vast majority, almost 95%, was trained exactly as they prescribed. I do think that this is a little bit of clickbait, it’s a little bit of oh-my-god-ism. And even the titles made it sound like Adobe used all Midjourney. It was a very small amount. I think we either need full transparency or we don’t. Meaning if we’re going to do this and roast companies for what they’re doing, then we should look at all the models and how they’re trained.

I don’t think we want to do that though. I don’t think people want to know how this happened. I don’t think a lot of people want to know how much of their personal and private data has probably been used, anonymized or not, to train these models. I think it’s a little eerie out there. Having said that, I do think Adobe is trying very hard to hold the line, be a bit more above board, be a bit more transparent in what they’re doing. Never works well when you say that, and then it comes out that you didn’t do exactly what you say. But I wouldn’t be surprised, Pat, and that’s why I’m callous about it, is if we unpack the training data sets for almost all these models that we look at to find out that a lot of data from a lot of sources that surprise us was actually used in the making of these models. So Adobe’s got a little cleanup to do, but not… I don’t think this is as severe as the headlines.

Patrick Moorhead: Dan, I don’t even know what to take to the bank anymore. This was not something that I expected from Adobe at all. And particularly, the company has a page called “Adobe Firefly versus Midjourney.” And the last thing on the page talks about community first, and compensating Adobe Stock contributors, and being commercially safe for individuals and enterprise creative teams. And now I believe that they’re doing all of those, but it was a little bit of a surprise, and in my head, Adobe was the most pristine on that because, at least the way I interpreted it, it was black and white.

“We’re using Adobe Stock footage that we’re paying contributors for and you don’t have to worry about getting sued,” or something like this. So are we finding out that pretty much everybody is doing this? I think so. And unlike you, Dan, I would like to see what’s under the hood of all of these models. In fact, one of the things I gave a lot of credit to Salesforce, and IBM, for is that they gave their sources and their methods. So for instance, what’s the data they used, how did they prune it, and what was the method of the output? And to me that’s really good. I will bet you that Adobe’s getting some indemnification requests at this point. But we’re going to have to see.

And by the way, in comparison to DALL-E 3, you have no idea what they trained it on. Remarkably, when you try to create a Yoda character, it comes back and it looks like a freaking Yoda character. Did they totally hoover up the entire Disney catalog? I don’t know. But if we’re looking at it from that angle, Adobe looks pretty darn clean. You cannot get Firefly to produce anything that looks like it came from licensed content, Disney content, or something like that.

Daniel Newman: That’s a great point, Pat. That’s why I said, these are trillion parameter models though, billion even. This isn’t being QA’d by some fact-checker. It’s too big. There’s literally no… The only thing you can build is more AI to actually check the AI. I don’t know. There are going to be mistakes made, but if this is what we’re holding the world accountable for, which we should, I just hope we have some higher standards up the chain in other areas.

All right, so let’s get to topic five, Pat. Let’s hold Apple accountable once again. Apple’s back on our radar. Apple lost the number one market share spot. God, what a bad, terrible, really sour couple of weeks for Apple. Now they’re not even number one in market share for handsets.

Patrick Moorhead: No, listen, historically we’ve seen Apple and Samsung go back and forth for number one market share. From a market perspective, Apple drives most of their business from premium. Samsung has premium, plus they do a lot of business in the middle of the pack. This is units not revenue. When you look at revenue, Apple would have the clear advantage here. So I guess Samsung has number one unit share. Apple has number one revenue share. It appears, and this is from the IDC data, that China is where Apple lost a considerable amount of market share.

IDC says that smartphone shipments increased about 8% year over year, but Apple shipments dropped 10%. The biggest gainers here were companies like Xiaomi that essentially blew by them. Now interestingly enough, Xiaomi is using semiconductors from folks like Qualcomm, and Qualcomm’s competitors, coming from TSMC, as compared to some of these homegrown Huawei handsets, or Huawei sub-branded handsets.

So anyways, Apple’s got to pull it together. I think what we’re going to see at WWDC, potentially, is Apple’s solution to this. Samsung and the Chinese manufacturers got the jump on Apple with AI smartphones, though the current crop of AI smartphones do most of it from the cloud, not on the device. But I think what we will see at WWDC is how Apple is using on-device AI to drive these experiences.

It’s interesting, I respect Ben Bajarin a lot. He doesn’t believe that there’s going to be a super cycle for AI smartphones, and I find that interesting. We can agree to disagree. I believe that Android and iOS will come to the table and put in on-device capabilities that essentially will, all things equal on model size, be a lot faster than AI you do in the cloud. It’s going to be a lot more private, because you’re doing that on the device itself, and I think you could even make the case that it’s safer.

Daniel Newman: I’ll lean into the private. I don’t know about the latency… It’s going to have to be pretty noticeable for people to really push hard on it.

Patrick Moorhead: Here, Moot Point agrees! There we go.

Daniel Newman: That’s a moot point. So this is the same conversation, Pat, we’re having about the PCs. It’s very exciting. I think we’re going to sell more. I think there’s a pent-up demand between the initial cycle that came with the 2020 and what went on with stay at home and everything. And then the whole, now we’re into AI, but I still think we’re fighting. You and I are on a lot of the same calls. “Tell us what the app is.” And then of course the whole idea of what’s local and what goes to the cloud. What data are you giving to the big cloud providers, to the big search providers? What data do you want to keep on your device, and keep private, and is it ever truly, fully on your device and private to you? Because it’s always at least going back to Apple or Android. It’s there, but who you share your data with.

But I do think privacy, on-device, local, that could be a selling point. Apple’s leaned in hard over the years to being the more privacy-centric company. So that’s a potential, too. I’ve tracked what they’re doing with ReALM, and Apple is certainly showing the ability to run these models on-device with smaller parameter counts, I think 250 million-ish parameters up to about a billion, rather than these multi, multi-billion, or trillion, parameter models. I think that’s going to be important if you can condense these model sizes down but still get a very high-performance outcome without the latency. So that’s going to be something to keep an eye on as well.

As for Samsung, congratulations. As for Xiaomi, look, you can build phones and cars; apparently one of these companies is able to pull that off. And Huawei, I guess reports of their death were premature. They have found a little bit of traction back. It looked like it was maybe over, Rover, but they’re coming back. I think Apple’s problem in China is not just Apple’s problem, it’s a political problem. Apple is US-based, and the US and China have tension that probably won’t be resolved. AI is going to be the battleground on which we fight for economic strength and dominance. If the US continues to make China’s life miserable when it comes to getting leading-edge innovations from US-based companies, this is one thread that China can pull on to create challenges.

Patrick Moorhead: Hey, I’m wondering with this Xiaomi market share increase, wondering what that means for Qualcomm.

Daniel Newman: Premium-tier handsets, that’s always been a strength for Qualcomm. If Xiaomi starts to get momentum, who’s it going to go to? Either Qualcomm or MediaTek. And of course Qualcomm is the undisputed leader at the premium level.
What do you think?

Patrick Moorhead: No, I think we’re going to see some surprise action on Qualcomm. A hundred percent.

Daniel Newman: You think the numbers are going to be rich?

Patrick Moorhead: And, listen, Qualcomm wins regardless. It either gets the modem in Apple, or, with OEMs like Xiaomi, it’s getting the modem and the AP, and the margins I’m sure are just freaking incredible.

Daniel Newman: I’ve said the same thing. One, Qualcomm tends to be grossly undervalued, though it’s seen some market momentum. They’re in every handset; there’s content in every handset. And of course, strength in China within its own brands is not a bad thing for that company.

Patrick Moorhead: And by the way, I did the double-click, and on the website I was looking at, every one of the phones Xiaomi had up there was Qualcomm. I don’t know if they’re just hiding the other processor makers or something like that. But I think we’re going to see some really good stuff from Qualcomm. Oh, by the way, Samsung coming in and taking the roost from Apple is going to help as well.

Daniel Newman: Well they charge extra for that. So it’s all ad placement, Pat. That’s why I put my name on the Red Bull car. What can I say?
By the way, we’re not in China.

Patrick Moorhead: When are we going to do an F1 sponsorship, Dan?

Daniel Newman: I don’t know, Six Five at F1. Gosh, they could really benefit, don’t you think?

Patrick Moorhead: Signal65. Maybe we do… We pull both businesses in.

Daniel Newman: Let’s pull them all in. Heck, we can be the live… Liberty Media. We can be the Liberty Media. We can do the tech coverage on a Six Five at F1 race. Let’s do it. All right, F1, you heard us, Liberty Media, bring us on. We’ll come on the Netflix show, Pat. We don’t mind that. So all right, final topic. Rene Haas came out talking about, effectively… okay, let’s just frame this up a little bit. Remember the $7 trillion Sam Altman said he was going to try to raise to build silicon? That was a cool story for a little while. Well, a lot of people don’t realize, or maybe they do, that electricity and power were part of what he was talking about with that amount of spend and that requirement. And the fact of the matter is we are rapidly increasing the use of power, of electricity, rapidly increasing the demand to get power to build data centers.

By the way, this isn’t like you can just pop up a building and bring in a bunch of 15-amp circuits; there’s a lot of consideration from an engineering standpoint for power. So when you start seeing this massive scale-up of all this AI, you’re going to be seeing bigger draws. We’ve talked about data centers’ share of worldwide power going from 1% to 2% since AI. I don’t know if that’s been a hundred percent validated yet, but that’s one of the claims. You’ve seen countries like Ireland with huge amounts of expanded use where there’s all these data centers, and eventually we’re going to have the challenge of where do we create enough energy? It’s not going to come from solar farms, and it’s probably not going to come from windmills. We’ve got challenges to figure this out, and there’s two ways to solve the problem.

One is we need to figure out how to create more energy, ideally clean energy, clean in different ways. That doesn’t mean just solar and wind; that could be nuclear. We need to create the power. And then on top of that, we need to find efficiencies. When you hear companies making claims about chips, generally the claims are focused on two things: performance and efficiency, or power. And so Rene Haas from ARM came out and talked about how ARM-based CPUs can save 15% of total data center power versus others. And this brought up a lot of questions: as we create more efficiency, does that necessarily create more volume? What’s the impact of that? Is this a real number? Are the newest x86 versions really this much less efficient? We’ve always known ARM has had a big focus on more efficient designs.

But, Pat, in the end, the question is, if power is the rate-limiting resource, does 15% better matter, if that claim could be validated by Signal65 or possibly by another firm that validates claims? By the way, some of these numbers have been proven out over time as these comparisons have been done. But if you could save that power, does that make a material difference? And does that tilt the scales more and more in favor of ARM, which has already seen the scales tilted in its favor over the past several years?

I think if it becomes the understanding that AI is highly tied to ARM-based or ARM-paired designs, Pat, this could be pretty compelling. I’d say 15% efficiency could look like 150% in terms of interest, because companies are trying to solve the two things: more performant, more efficient. It’s very provocative, but we know that the amount of demand, the amount of use, is going to keep going up. So I don’t know, it’s probably more of a proclamation at this moment, but this is going to be one of the most important topics: beyond new process nodes for more powerful designs, it’s also less power-hungry but still powerful designs.

Patrick Moorhead: My apologies if you had mentioned this already, but this is a blog from ARM CEO Rene Haas. I did pick at this. Listen, I’m a facts-and-details guy, and measures of merit give me a deep, deep-seated joy in my heart. I’m a product person at heart, a product marketing person second. And digging into the claims, now, like Dan said, we didn’t do the research on this, and Signal65 didn’t do the testing and validation of it, but I did ask the company where they got their figures from and what their methodology was. They didn’t send me a spreadsheet, which would’ve been nice, and they didn’t send me the sources, but they walked me through how they did it. It really started top-down: what’s the total power consumption of a set of hyperscaler data centers?

And by the way, that is public data that a lot of these companies issue in their ESG reports and also at industry conferences. Then they took the approximate percent of that power attributable to compute. And if you can imagine a rack, you have compute, you have storage, you have some sort of networking: networking that connects the trays, networking that connects the racks, and then that turns into a fleet, and you have networking that connects those fleets, and then you have to cool the whole thing.

And then they approximated an efficiency factor for ARM versus x86. They took that from their partners’ measured claims, right up to 50% savings, and then they applied it to the difference between ARM’s current market share and a target market where there would be broad adoption of ARM. I don’t know if that’s a hundred percent of it, but clearly this wasn’t just thrown out there. And when the CEO says something, and Rene is a very facts-and-details guy, I’ve known him forever, this is what they came up with.
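To make the shape of that top-down estimate concrete, here’s a minimal sketch. The specific numbers below are illustrative assumptions for the sake of the example, not ARM’s actual inputs, which weren’t disclosed in detail:

```python
# Sketch of a top-down data center power-savings estimate, as described
# in the walkthrough above. All input figures are hypothetical.

def estimated_savings_fraction(
    compute_share: float,       # fraction of total DC power drawn by CPU compute
    arm_efficiency_gain: float, # assumed ARM-vs-x86 power savings (partners claim up to ~50%)
    adoption_delta: float,      # target ARM fleet share minus current ARM share
) -> float:
    """Fraction of total data center power saved by broader ARM adoption."""
    return compute_share * arm_efficiency_gain * adoption_delta

# Example assumptions: CPUs draw ~40% of data center power, ARM parts
# save ~50% versus x86, and ARM adoption grows by 75 points of share.
savings = estimated_savings_fraction(0.40, 0.50, 0.75)
print(f"~{savings:.0%} of total data center power")  # prints "~15% of total data center power"
```

With those assumed inputs the arithmetic lands on roughly 15%, which shows how a headline figure like this can fall out of just three top-down factors.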

And essentially there’s two ways you can play this. One I like to call power sloshing, which says if you use less power on CPU compute, you can slosh that power over to a GPU, by the way, or storage, and you need to recognize the extra cooling that you might have to put against that. The other way to look at it says, hey, if all of your CPU compute were ARM, you could reduce the power footprint by 15%. So there’s two ways you can play that: more GPU, networking, storage, and cooling power from the savings, or reducing your overall power footprint.

And by the way, in today’s wackadoodle days of GPU compute, my guess is that most of these folks take it to GPU, and, by the way, not just GPU but accelerators broadly; I don’t want to be biased here. But anyways, that’s probably more detail than people were looking for, but I just wanted to get that out there.

Daniel Newman: Big topic, Pat. Look, we love talking about what’s next, and how much more powerful and bigger the models get, but the immense power required to create this, what do they call it, AGI, the future of AI, it’s not negligible. It’s pretty substantial. And so solving this problem is going to be important. We’ve got lots of great technologists out there solving it, and we’re going to keep talking about it. That’s what we do here. So, Pat, good show this week. Really appreciate everybody out there tuning in. We covered a lot of ground: talked a lot about Apple, but we talked some AMD, we talked about the CHIPS Act with Samsung and Micron, and we talked a little bit about the ethics of AI with Adobe and what’s going on there with Midjourney.

And then we talked about a more sustainable approach. This is the sustainability, Pat, that I like to talk about. It’s meaningful and it’s measurable. Measures of merit, as someone once said to me.
So, all right, we’ve got our Six Five Summit, June 11th to the 13th. We’re excited. We hope that you’ll register and be part of it. We’ve got some great speakers. We, of course, will be back next Friday with a special guest here on The Six Five. It’s going to be Earnings-Palooza week one, but there’s going to be several of those. Pat will be on TV 50, 60 times in the next week, so just keep an eye out for him. I will be retweeting it. And anyways…

Patrick Moorhead: Isn’t your team going to be on a lot?

Daniel Newman: You are my team, buddy. I’m team Pat. All right, hit that subscribe, join us for all of our shows, be part of The Six Five community because where else would you want to go for this kind of analysis? I couldn’t get that word out, analysis. But for this week, for this show, for Patrick and myself, it’s time to say goodbye. So we will see you next time.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq (now HP), and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.