Arm Says CPUs Can Save 15 Percent of Total Datacenter Power

By Patrick Moorhead - April 23, 2024

The Six Five team discusses Arm Says CPUs Can Save 15 Percent of Total Datacenter Power

If you are interested in watching the full episode you can check it out here.

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

Transcript:

Daniel Newman: Remember when Sam Altman said the $7 trillion he was going to try to raise to build Silicon? That was a cool story for a little while. Well, a lot of people don’t realize, or maybe they realize, but the electricity and power was part of what he was talking about when he was talking about that amount of spend and that requirement. And the fact of the matter is we are rapidly increasing the use of power, electricity, rapidly increasing the demand to get power to create data centers.

By the way, you can’t just pop up a building and bring in a bunch of 15 amp circuits; there’s a lot of consideration from an engineering standpoint for power. So when you start seeing this massive scale-up of all this AI, you’re going to be seeing bigger draws. We’ve talked about worldwide power requirements going from 1% to 2% since AI. I don’t know if that’s been a hundred percent validated yet, but that’s one of the claims. You’ve seen huge amounts of expanded use in countries like Ireland, where there are all these data centers, and eventually we’re going to have the challenge of where we create enough energy. It’s not going to come from solar farms, and it’s probably not going to come from windmills. We’ve got challenges to figure this out, and so there are two ways to solve the problem.

One is we need to figure out how to create more energy, ideally clean energy, and clean in different ways. That doesn’t mean just solar and wind; that could be nuclear. We need to create the power. And then on top of that, we need to try to find efficiencies. When you hear companies making claims about chips, generally the claims are focused on two things: the performance and the efficiency, or the power. And so Rene Haas from ARM came out and talked about how its ARM-based CPUs can save 15% versus others. And so this brought a lot of questions: as we create more efficiency, does that necessarily create more volume? And what’s the impact of that? And is this a real number? Are the newest x86 versions really this much less efficient? We’ve always known ARM has had a big focus on more efficient designs.

But, Pat, in the end, the question is: if power is the rate-limiting resource, does 15% better matter, if that claim could be validated by Signal65 or possibly by another firm that validates claims? By the way, some of these numbers have been proven over time as these comparisons have been done. But if you could save that power, does that make a material difference? And does that tilt the scales more and more in favor of ARM, which has already seen the scales tilted in its favor over the past several years?

I think if the understanding becomes that AI is highly tied to ARM-based or ARM-paired designs, Pat, this could be pretty compelling. I’d say 15% efficiency could look like 150% in terms of interest, because companies are trying to solve for two things: more performant, more efficient. It’s very provocative, but we know that the amount of demand, the amount of use, is going to keep going up. So I don’t know, it’s probably more of a proclamation at this moment, but this is going to be one of the most important topics: beyond new processes for more powerful designs, it’s also less power-hungry, more powerful designs.

Patrick Moorhead: My apologies if you had mentioned this already, but this is a blog from ARM CEO Rene Haas. I did pick at this. Listen, I’m a facts-and-details guy, and measures of merit give me a deep, deep-seated joy in my heart. I’m a product person at heart, a product marketing person second. And digging into the claims, now, like Dan said, we didn’t do the research on this, and Signal65 didn’t do the testing and validation of it, but I did ask the company where they got their figures from and what their methodology was. They didn’t send me a spreadsheet, which would’ve been nice, and they didn’t send me the sources, but they walked me through how they did it. It really started off top down: what’s the total power consumption of a set of hyperscaler data centers?

And by the way, that is public data that a lot of these companies issue in their ESG reports and also at industry conferences. Then they took the approximate percent of that power attributable to compute. If you can imagine a rack, you have compute, you have storage, you have some sort of networking, and then you have networking that connects the trays, networking that connects the racks, and then those turn into a fleet, and you have networking that connects those fleets, and then you have to cool the whole thing.

And then they approximated an efficiency factor for ARM versus x86. They took that from their partners’ measured claims, up to 50% savings, and then they applied it to the difference between ARM’s current market share and a target market where there would be broad adoption of ARM. And I don’t know if that’s a hundred percent of it, but clearly this wasn’t just thrown out there. And when the CEO says something, and Rene is a very facts-and-details guy, I’ve known him forever, this is what they come up with.
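The top-down methodology described above can be sketched as simple multiplication of a few factors. Note that the numbers below are purely illustrative placeholders, not Arm’s actual inputs, which were not disclosed in detail:

```python
# Back-of-envelope sketch of a top-down power-savings estimate.
# All figures here are hypothetical, chosen only to show the shape
# of the calculation, not Arm's actual methodology inputs.

def datacenter_savings(total_power_mw, compute_fraction,
                       efficiency_gain, adoption_delta):
    """Estimate fleet-wide power saved by migrating CPU compute
    to a more efficient architecture.

    total_power_mw   -- total datacenter fleet power (MW)
    compute_fraction -- share of that power drawn by CPU compute
    efficiency_gain  -- per-server power reduction (0.50 = 50%)
    adoption_delta   -- share of compute that newly migrates
    """
    saved = total_power_mw * compute_fraction * efficiency_gain * adoption_delta
    return saved, saved / total_power_mw

# Hypothetical inputs: a 1,000 MW fleet, 40% of power in CPU compute,
# 50% per-server savings, and 75% of that compute migrating.
saved_mw, share_of_total = datacenter_savings(1000, 0.40, 0.50, 0.75)
print(f"Saved: {saved_mw:.0f} MW ({share_of_total:.0%} of total)")
# 1000 * 0.40 * 0.50 * 0.75 = 150 MW, i.e. 15% of total fleet power
```

With these made-up inputs the arithmetic happens to land on 15% of total power, which shows how a figure like Arm’s headline number could fall out of a handful of top-down assumptions.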

And essentially there are two ways you can play this. I like to call it power sloshing, which says if you use less power in CPU compute, you can slosh that power over to a GPU, by the way, or storage, and you need to recognize the cooling that you might have to put against that. The other way to look at it says: hey, if all of your CPU compute were ARM, you could reduce the power footprint by 15%. So there are two ways you can play that: more GPU, networking, storage, and cooling from the savings, or reducing your power footprint.

And by the way, in today’s wackadoodle days of GPU compute, my guess is that most of these folks take it to GPU, and not just GPU but accelerators broadly. I don’t want to be biased here. But anyways, that’s probably more detail than people were looking for, but I just wanted to get it out there.

Daniel Newman: Big topic, Pat. Look, we love talking about what’s next and how much more powerful and bigger the models will get, but the near-infinite power required to create this, what do they call it, AGI, the future of AI, is not negligible. It’s pretty substantial. And so solving this problem is going to be important.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences with the understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience including 15 years of executive experience at high tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.