How Dell Supports Customers with AI – Six Five On the Road at Dell Technologies World

By Patrick Moorhead - May 21, 2024

On this episode of the Six Five On the Road, hosts Patrick Moorhead and Daniel Newman are joined by Dell’s Arthur Lewis, President, Infrastructure Solutions Group, for a conversation on Dell’s role in accelerating AI innovation for its customers. Amid the buzz of Dell Technologies World, Arthur offers deep insights into the current AI landscape, Dell’s recent AI announcements, and the significance of building a robust AI ecosystem.

Their discussion covers:

  • The significant buzz around AI at Dell Technologies World and the trends Arthur is observing
  • Dell’s strategies and solutions to help customers advance their AI initiatives
  • An overview of the AI-related announcements made by Dell at the conference
  • The critical role of the AI ecosystem in Dell’s approach and how it benefits customers
  • The primary message Dell wants customers to remember about its commitment to AI and innovation

Learn more at Dell.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The Six Five Webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we ask that you do not treat us as such.

TRANSCRIPT

Patrick Moorhead: And The Six Five is on the road here at Dell Technologies World 2024 in analysts’ second home, Las Vegas, Nevada. It has been all about AI: AI infrastructure, AI PCs, AI software, and AI end-to-end services pulling it all together. Daniel, we saw some big partners up on the big stage with Michael: ServiceNow’s CEO, Jensen Huang from NVIDIA. It’s been pretty awesome.

Daniel Newman: Yeah. We had Samsung. We had ServiceNow. We had, of course, NVIDIA. We were building AI factories. We were talking about AI PCs. Look, you can say it’s cliche. You can call it a trend, but when it’s driving huge economic growth, when companies are looking into it and depending upon it to reach new heights in their businesses, you call it the thing you should spend time talking about when you’re on the road. Yes, in our second home here, Pat, in Las Vegas, it is Dell Technologies World 2024, and AI is what needs to be discussed.

Patrick Moorhead: Isn’t it wild? I don’t know. Two years ago people were yawning at infrastructure, but now infrastructure is cool. None of this stuff can happen without incredible infrastructure. It is my privilege to have Arthur Lewis, who happens to run the infrastructure business for Dell Tech. Arthur, great to see you, my friend.

Arthur Lewis: Thank you for having me, Pat.

Patrick Moorhead: Absolutely.

Daniel Newman: Yeah, it’s good to be here with you. Pat, you remember when we were saying people said software would eat the world and everybody was cool if they were talking about software? You and I-

Patrick Moorhead: Hardware’s a commodity, all of it.

Daniel Newman: Hardware’s a commodity. You and I, at different times, we can argue who said it first, but said, “Silicon will eat the world.” We got that pretty right about five years ago when we made that call. Now here we are, and chips are cool. Infrastructure’s cool. Arthur, I think that makes… We had another guest on a show who said, talking about ESG is cool, and not talking about it is not cool. Infrastructure is cool, but AI… So you heard Pat and I, our little preamble: AI, AI, AI. It is a trend. It is a bit cool. It is what everybody’s focused on, but it is for good reason. You’re here at Dell Technologies World. You’re meeting with customers and partners. You’re talking to your colleagues. What’s catching your attention? What’s going on? How are you seeing this thing evolve?

Arthur Lewis: Yeah. Look, we’ve said we’re living in one of the most interesting times in human history. For years, customers have been on a digital transformation journey, and the underpinning of which has always been the data. It’s all about the data, and the advancements that we’ve seen in artificial intelligence now provide customers with the tools to actually unlock the data. What we’re seeing is an opportunity for incredible innovation and incredible productivity in the workforce. It’s just simply amazing to see the amount of creativity that this topic is stimulating in every industry in the world. We talk about… Part of our mission is that technology drives human progress. This is the next industrial revolution, and this will improve humanity.

Patrick Moorhead: Yeah, it’s amazing because we have new technology working on new technology, the rate of innovation, it’s not just on our imagination. It’s actually increasing, right? I’m curious, how are you helping your customers to bend their generative AI curves to get their time to market at quality? How are you helping them accelerate?

Arthur Lewis: Yeah. Look, from a customer’s perspective, what we hear time and again is, “I don’t have the expertise or the knowledge to really deploy AI in my environment. However, my AI strategy is going to follow my data strategy. The majority of my data sits on-prem. I need to be able to deploy AI on-prem, and I need the flexibility to have a choice in terms of how, when, and where I deploy models. Dell, how can you help me accelerate the adoption of AI?” So we have a simple four-part strategy. Well, simple to articulate, difficult to execute. Part number one is about world-class infrastructure: the compute, the network, and the storage. This is incredibly important because these are very complicated systems that must be fine-tuned, optimized, and work in unison. We have the benefit of being able to engineer all three components under one roof, under one engineering team, to ensure that the underlying system works.

Part number two is making sure that we are building out the ecosystem for AI. We’re going to talk about solutions. You can’t have solutions without ecosystem partners. We’re going to talk more tomorrow about the partnership that we have with Hugging Face, the partnership that we have with Meta, how we’re working with them on Llama 3, and all of the things that we’re doing to help customers accelerate the adoption of AI. The third component is ensuring that we have turnkey solutions. So we offer 40 turnkey solutions that have been tested, optimized, and validated for a myriad of AI use cases. And then we layer all of that with professional and consulting services, really to help customers understand how they deploy the system.
Because the conversations that we have with customers typically cover five topics. Topic number one is usually, “Hey, what use case are you guys thinking we should be thinking about?” Question number two gets into model selection. Question number three is around data preparation. Then you get to architecture, then you get to infrastructure. So having professional and consulting services in conjunction with our partner community really helps customers think about how they can deploy AI for maximum benefit.

Daniel Newman: Yeah, it’s a really significant opportunity. We had Sam Grocott up here, one of your colleagues in the business. He was talking a lot about evolving from a TCO to really an ROI model. Businesses are looking at the efficiencies and productivity gains that can come from this. We heard Bill McDermott talk about 40 years ago, how everyone thought that only executives would survive, and this was in the earliest eras of automation. Now with AI and GenAI though, everyone’s saying, “Is it going to be universal basic income? What’s going to happen in the world?”

In the end though, companies are going to get more productive. I think there’s a bit of a prune-to-grow strategy, but I think Dell is… Some of the stuff that I’ve heard is that you’re really able to show value to customers that want to implement AI, that want to not overly rotate to cloud. It’s going to be an and, not an or, but there will be situations where the data should remain on-prem. From a standpoint of compliance, governance, regulation, all those things, it’s an advantage, but it’s also an economic advantage. Arthur, I’d love for you to share a little bit about the announcements. You guys displayed a great photo of the AI factory. What are all the announcements that you made today for your business?

Arthur Lewis: So let’s go through that first layer when we talk about infrastructure. So I think we’re all familiar with the PowerEdge XE9680, right? Well, we’ve made that product significantly better with the PowerEdge XE9680L, which is a liquid-cooled solution specifically designed for NVIDIA’s Blackwell B200 GPU. There are three main highlights that I will call out with the 9680L. Number one, we improved density 33%: we are now offering eight GPUs in a 4U form factor. That is an industry first. Second, we’ve leveraged our decades of leadership in liquid-cooled solutions to improve energy efficiency two and a half times with our direct-to-chip cooling technology. Third, and maybe most importantly, we’ve greatly increased our networking capabilities by offering 12 PCIe slots, supporting full 400-gig Ethernet and InfiniBand for the highest level of throughput in the industry.

Next, not resting on that, we’re also announcing our rack-scale solutions. These will support air and liquid cooling. They will be the most dense and energy-efficient rack solutions in the market. They will be data center cooling neutral. They will be factory integrated, and they will be ready to be deployed. They’ll come in three variants. The first one is a 70-kilowatt air-cooled design supporting 62 GPUs… 64 GPUs, excuse me, for NVIDIA H100, H200, B100, AMD MI300X, and Gaudi 3. The second solution, which you heard today from Michael-

Patrick Moorhead: Nice.

Arthur Lewis: … is our 100-kilowatt liquid-cooled solution with rear-door heat exchanger, specifically for NVIDIA’s Blackwell B200 GPU. This will be the most dense, energy-efficient rack-scale solution in the industry. The third solution will be a 130-kilowatt liquid-cooled design based on the next-generation ORV3 21-inch architecture, designed to support NVIDIA’s GB200 Grace Blackwell Superchip, but also x86 variants with Intel and AMD CPUs. We build on that with our networking announcements and our partnerships with Broadcom and NVIDIA to provide enhanced fabric capabilities: 400- and 800-gig switching, 400-gig NICs and DPUs, as well as our SONiC operating system and Spectrum-X from NVIDIA.

We build on that with PowerScale. This is the first Ethernet-based storage that’s certified on NVIDIA’s SuperPOD. We’re adding significant hardware upgrades with the introduction of the new F910: DDR5, PCIe Gen5, 24 SSDs, all in a 2U form factor to deliver up to 1.47 petabytes. With these hardware upgrades and significant software modifications, we are now twice as fast as the nearest flash-only scale-out file competitor out there. Of course, we’re going to wrap this all up with an announcement on those 40 turnkey solutions. We’re going to talk about full-stack deployment automation that we’re building with NVIDIA, single- and multi-node validated designs with AMD, our collaboration with Intel on their developer cloud offering Gaudi 3 for flexible testing and reserved instances, the work that we’re doing with Red Hat on their Enterprise Linux for AI, and many, many more in the hopper.

Patrick Moorhead: Gosh.

Daniel Newman: Did they announce anything?

Patrick Moorhead: I don’t know. It’s like a full array of awesomeness. You know what I mean?

Arthur Lewis: It is so exciting. I like to say our innovation engine is firing on all cylinders and it has never run this hot.

Patrick Moorhead: I’m excited. I brought my platinum card. Does that work?

Arthur Lewis: Yes.

Patrick Moorhead: Probably, we need a few of those. Maybe I get half a rack out of that.

Daniel Newman: Maybe you can get half a chip.

Patrick Moorhead: No, probably. I don’t know.

Daniel Newman: Especially with the hot end of the-

Patrick Moorhead: One Blackwell. No?

Daniel Newman: Yeah, you know.

Arthur Lewis: Jensen’s walking around. We could ask for a discount.

Patrick Moorhead: There we go. I know where that conversation’s going. I’ve heard it said in the industry that it takes a village to pull all of this off, and I heard you talk about some of your partners, obviously Broadcom, Intel, NVIDIA, AMD, and folks like that. But can you talk about the importance of your ecosystem to what you’re trying to pull off here?

Arthur Lewis: Yeah, you can’t build… If you think about the layer-cake strategy we talked about earlier, you can’t do this by yourself. We have a rich history of engagement with many of these partners. You think about NVIDIA, AMD, Intel, Broadcom, rich relationships, but we’re also expanding into new relationships. You look at the work that we’re doing with Hugging Face, the work that we’re doing with Meta; we’re engaging deeply with Palo Alto Networks, with Run:ai, with Lamini, with Red Hat, with so many others. Our value add is that we are trusted partners for enterprises that don’t have the necessary skills to really understand how to deploy generative AI at scale. They are under a lot of pressure because of all the things that we talked about. This is now a board-level, CEO-level conversation. They’re looking for a full-stack technology partner. They’re not trying to say, “I want my compute over here, my network over here, my storage over here.” So we want to make sure that we understand the full-stack solution, and we’re building the right ecosystem of partners to deliver on behalf of customers.

Patrick Moorhead: Right. That’s great.

Daniel Newman: Yeah. No doubt the ecosystem is going to come into play. Patrick and I have spoken endlessly about this hybrid multi-architecture. You’ve talked a lot about silicon diversity here, and you also talked a lot about networking, which is one of those topics that sometimes I think is the forgotten opportunity. Of course, Dell, you seem very ready to capitalize on that.

Arthur Lewis: Well, Michael said it today. On the networking side, we talk about the fact that GPUs are hungrier, and that means that they’re going to demand more data, which means that they’re going to demand more throughput. But let’s put some context around it. When we look at AI workloads, they drive 300 times the amount of data throughput that we would see in a traditional compute server. 300 times.

Daniel Newman: Which means a lot of networking is going to be required.

Patrick Moorhead: Well, the reliability that needs to go into that too and the speed is immense.

Arthur Lewis: The speed is incredibly important. But the optimization and the tuning, this is why being able to engineer it under one roof is so important. And it’s the transmission, not only between GPUs but also between server and storage, that’s incredibly important. That’s why the F910 is so important, because that is the storage engine that’s feeding data into the compute, into the high-bandwidth memory, to process models, to generate tokens at a rate that is going to satisfy customers.

Daniel Newman: Yeah. Listen, we have only a couple of minutes left, Arthur, but I’d love to give you the opportunity to talk to the audience out there. What’s the one big takeaway, at least within your world of Dell and the infrastructure group, that you really want everyone to come away from Dell Technologies World with?

Arthur Lewis: Well, look, clearly, this is an AI fest, as we like to say. What I hope is coming across is, like I said earlier, our innovation engine is firing on all cylinders. We’re working with a broad swath of ecosystem partners. We aspire to be that trusted advisor to every enterprise customer out there that’s looking to deploy generative AI at scale. But we don’t lose sight of the fact that there are traditional workloads outside of artificial intelligence. Eventually, we will move everything to an AI factory, but that’ll take some time. We also have incredible primary storage announcements with PowerStore Prime. We have an incredible software-defined portfolio with PowerFlex, with PowerScale, with ObjectScale. We talk about the world as awash with data, and the importance of data. We have an incredibly strong data protection portfolio and a multi-cloud story. So all things infrastructure are firing on all cylinders, not just on the AI side. But again, AI is really all anybody wants to talk about today. I’m so privileged to be a participant in this space. I’m a very happy camper.

Daniel Newman: That’s great. You should be. Well, Arthur, thank you so much for joining us here on The Six Five at Dell Technologies World 2024. Let’s have you back again soon.

Arthur Lewis: Thanks, guys. I appreciate it.

Daniel Newman: Thanks.

Arthur Lewis: Thanks Pat.

Daniel Newman: For everyone out there, thank you so much for tuning in here. We appreciate you joining The Six Five on the road at Dell Technologies World 2024 in Las Vegas. We’ve had plenty of coverage, lots of great conversations. Stick with us. More to come. See you all soon.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and his understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD), leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.