Their discussion covers:
- Varun’s point of view on the rapid rise of Generative AI, and what’s real versus just hype
- The most interesting use cases of Generative AI Varun has heard recently
- A look at Project Helix
- What other AI-centric partnerships Dell Technologies has coming in the future
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
You can watch the full video here:
You can listen to the conversation here:
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Patrick Moorhead: Hi, this is Pat Moorhead and The Six Five is live at Dell Technologies World 2023 in Las Vegas. We attended an amazing in-person event. The excitement is high. I love the big stage events. Dan, it’s good to be back.
Daniel Newman: It is good to be back. It’s good to be on the road, Pat, as we like to call it. We have On The Road, we have In The Booth, we have our Insider, but these are On The Road. We are here at Dell Technologies World 2023 and Pat, we’re in the middle-to-late end of what feels like a three or four-month wave, and when you think about waves, you can’t help but think about tidal waves, and we’ve had a tidal wave of AI. And let’s just say, we’ve had some great conversations here, I hope everybody’s tuned into all of them, but we haven’t had one that’s been really dedicated to this topic. I think it’s time.
Patrick Moorhead: Oh, surprise, we’re here in order for Varun to talk about AI. Thanks for coming back on The Six Five, Varun. It’s great to see you.
Varun Chhabra: Thanks for having me, guys. It’s always a pleasure to chat with you.
Daniel Newman: He’s definitely an alumnus.
Patrick Moorhead: Oh, totally.
Varun Chhabra: Yes. Yes.
Patrick Moorhead: Totally. He’s been on the show before and it’s fun because of the diversity of topics you can cover: telco, cloud, multicloud, AI. I think we’ve had a different topic every single time you’ve been on the show, which is fun. By the way, that was a compliment.
Varun Chhabra: Oh, I know.
Patrick Moorhead: Okay. Was that not clear?
Varun Chhabra: That was pretty clear.
Patrick Moorhead: I just wanted to make sure. All right, yeah, that was a compliment.
Varun Chhabra: Yeah.
Daniel Newman: So I want to start off, though, because I made a joke to Sam Brokaw. I said they peppered in AI. No, they didn’t pepper in AI. AI was prevalent throughout the whole show.
Varun Chhabra: Yes.
Daniel Newman: But we are in this sort of era and every session we’ve been in, there’s this side of the continuum, people are like yes, yes, yes, AI, AI, AI and you get this other side of the continuum and they’re like a little bit more skeptical, a little bit more uncertain, but the genie’s not going back in the bottle here.
Varun Chhabra: Yes.
Daniel Newman: We’re moving forward with this. What’s your take though? What’s real, what’s hype? What do you think about this generative AI movement?
Varun Chhabra: Look, it’s a great question. There is a lot of hype. We can debate whether this is too frothy or not frothy enough. I don’t remember who it was, Bill Gates or someone, who said that when there’s transformative technology and you are at that inflection point, as human beings we get very excited, and we often overestimate the impact in the short term but drastically underestimate the impact in the long term. And I’ve often reminded myself of that over the last few months. I think you’re right, Daniel, that where customers are today with generative AI is a spectrum. There are customers who are saying hey, it’s great that you guys are doing this, where do I get started? The concept makes sense, what use cases should I be putting this in? There are customers who are saying oh, I’ve taken an open source model off of Hugging Face, I’m training it with my data, and I need to figure out how to scale this.
And then there’s an even smaller subset of customers, so mature, so cutting edge I would say, that are trying to train their own model from scratch. So there is a gamut. I think, as with all things, the vast majority of people are somewhere in the middle. It’s very clear, though, that we’re on the precipice of something that will be truly transformative. I don’t possess the vision to say oh, it’s going to go this way or that way. It just feels like it’s really, really big.
Patrick Moorhead: There are a lot of different things that can be done with generative AI. I know people honed in early on the text form of it, but it is truly multimodal. It’s text, it’s video, it’s images, and all the things you can do with them, and you can train it on world data, you can train it on a narrow set of vertical enterprise data, and everything in between. What are some of the more interesting generative AI use cases that you’re seeing? And here at Dell Tech World, I’m sure you’re having conversations with your customers about this too.
Varun Chhabra: Yeah, I think I will start with some examples of what we’re doing at Dell, and maybe talk a little bit about what we’re seeing with some of our customers as well. It’s really hard to put a cap on these areas, because by the time we finish this conversation there will likely be a new use case happening somewhere, probably in this building, but I think you’re right. LLMs are the place where a lot of people are starting. Pat, you’re absolutely right. On the show floor here at Dell Tech World, one of the demos that we’re doing in our modern data infrastructure space is a real-life demo that we’ve built with Jen Felch’s digital organization. We’ve taken an open source model called BLOOM, one of the more popular models on Hugging Face, and we’ve tuned it with publicly available data that we have in our Dell knowledge base. We have a lot of customers who go to our knowledge base and say well, how do I deploy this? I’m having trouble with this, what’s the configuration error I’m making here, etc.
And we know internally, based on search patterns, that it takes customers five or six searches to find exactly what they need. So how do we simplify that? This is a use case that’s built for an LLM or, as Satya would say, a Copilot. You go in, you type the problem you’re trying to solve, and you get an answer that’s curated for you, drawn from multiple articles. But even there, we’re just scratching the surface. Right now, it gives you a curated answer. You can imagine where you could take this. You could go from just getting an answer to saying oh, actually, here’s a Terraform deployment script that will help you deploy this infrastructure, based on knowledge of what infrastructure you’ve deployed, whatever information you share with us, etc. So that’s one example.
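As a rough sketch of the pattern Varun describes, collapsing five or six searches into one curated answer drawn from multiple knowledge base articles, here is a toy retrieval step in Python. The article texts and the answer-stitching stub are illustrative stand-ins, not Dell’s actual system, which fine-tunes BLOOM:

```python
# Toy sketch of the retrieval step behind a knowledge-base assistant:
# rank articles by keyword overlap with the user's question, then hand the
# top matches to an LLM (stubbed out here) to draft one curated answer.
# Article texts and the stitching stub are made up for illustration.

def rank_articles(question: str, articles: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the titles of the top_k articles sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        articles.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [title for title, _ in scored[:top_k]]

def curated_answer(question: str, articles: dict[str, str]) -> str:
    """Stand-in for the LLM step: stitch the best-matching articles into one response."""
    hits = rank_articles(question, articles)
    return " ".join(articles[t] for t in hits)

# Hypothetical knowledge-base entries, keyed by article title.
kb = {
    "storage-config": "set the storage pool configuration before you deploy the array",
    "network-setup": "check the switch firmware and network cabling first",
    "gpu-drivers": "install the gpu driver package before running workloads",
}

print(rank_articles("how do I deploy the storage configuration", kb))
print(curated_answer("how do I deploy the storage configuration", kb))
```

In a production system the keyword overlap would be replaced by embedding search, and the stitching stub by a tuned model generating the answer, but the shape of the pipeline is the same: retrieve the right articles, then synthesize one response.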
Other examples we’ve heard of: Kari Briski from Nvidia, who joined us on stage yesterday, shared that in the HR space, using this for virtual assistants on internal HR tickets can shave a massive amount of time off resolving issues, again by using knowledge base articles that are maybe more internal, not necessarily publicly available, but in the HR database, along with responses from previous interactions, and tuning on that.
The other place where we’re starting to see a lot of interest is customer-facing scenarios. Chatbots are an obvious place to go, but also virtual sales assistants. So as a customer is having a conversation with a salesperson, as long as there are no privacy issues, a sales assistant listening to the conversation can provide real-time feedback to the seller: hey, maybe you should talk about this, we know this person likes this, that kind of stuff. So LLMs are definitely where this is starting.
The other place, on the multimodal nature beyond LLMs that you’ve talked about, Pat, is schematics. Maybe I’m throwing an event and I want to figure out the schematic for how the booth should be laid out, or how I want a particular meeting to happen. So think of that as text input, with diagrams and graphics coming out.
Daniel Newman: I was just going to say, you mentioned Copilot, what they’re doing with being able to take text and generate a PowerPoint-
Varun Chhabra: That’s right. That’s right. That’s right.
Daniel Newman: … That’s pretty powerful stuff.
Varun Chhabra: It is, it is. Word documents.
Daniel Newman: We’ve played with Stable Diffusion.
Varun Chhabra: Yeah.
Daniel Newman: We saw the ability… and sorry, I didn’t mean to cut your thoughts.
Varun Chhabra: No, no, absolutely go ahead, of course.
Daniel Newman: I just was getting excited.
Varun Chhabra: Yes, exactly.
Daniel Newman: Pat, you make such a good point. When you start thinking about it, like you said, if it’s just text in the end, it really becomes a very sophisticated search engine.
Varun Chhabra: That’s right.
Daniel Newman: Because it’s like oh, now it’s simplifying, it’s condensing, it’s helping me. It’s picking the references for me instead of making me do it. Okay, that’s good, but in the long run, when you start to take all the different modalities of media you can create, it’s two-way, it’s conversational, on top of that it does video, and-
Varun Chhabra: No, you’re right.
Daniel Newman: … It gets pretty cool. And by the way, we’re democratizing it. You mentioned everybody could do it.
Varun Chhabra: I’m so glad you said that. This is one of the things that really excites me about this. One of the first things that comes up with generative AI is oh, my employees can be much more productive. And yes, of course they can. We didn’t even talk about marketing use cases. I’m in marketing. It’s easier than ever to create a first draft of a blog post, and then of course you need human intervention to make it your own voice, but we’ve internally seen how fast it is. You can compress content creation schedules from, I don’t know, two weeks to three days. So productivity is going to be a big boost.
I would argue the thing that we are not fully comprehending yet is the democratization of creativity, because if you think about it, let’s say you’re a person in HR. You see people come in and talk to you about HR issues, your ticketing system, etc., so you have the expertise about how those HR interactions go. Let’s say you have an idea: I want to build an app to completely transform the employee experience, but you don’t know how to do development. You don’t know how to build apps. Can you imagine what’s going to happen inside enterprises now that these tools are being made available to employees who previously did not have access to them?
Patrick Moorhead: Oh, it’s going to be like taking us from the typewriter to the word processor, to desktop publishing when we didn’t have it before. Again, this may sound completely bizarre, but printing press type of-
Varun Chhabra: It really is like that. It is like that. It is like that, I agree.
Daniel Newman: And it’s really interesting too, because you’re already starting to see it. We’ve seen announcements from software providers over the last several weeks, SAP, ServiceNow, Oracle, that are starting to do with AI some of the exact use cases you’re mentioning, and they can do it very, very quickly, and it’s democratizing it. And also, like I said, smaller companies, because of open source, are going to be able to adopt this very quickly. I keep saying there are going to be more small big companies than ever, big companies with small numbers of people. That used to be only hedge funds.
So you guys made an announcement, Project Helix. Love to get your point of view. What did you think about that?
Varun Chhabra: So there’s a lot to talk about there, but let’s start with the customer context. I talked a little bit about how there’s a spectrum of maturity, as with any transformative technology or technology trend. So that’s one thing we’re looking to solve. The other thing we’re finding, as we talk to customers and as enterprises start to scratch the surface of what Gen AI can do for them, is that there are unique needs. How do you make sure the data can be trusted? If you’re going to put an LLM in front of a customer in a customer-facing scenario, it could be a chatbot, the answer has to be accurate, it has to be appropriate.
It is very, very difficult to test all of those things, because you just don’t know how an LLM is going to respond to a specific input. One word in the input could change the output completely. So how do you create enterprise guardrails to make sure the answer can be trusted and is consistent with the brand voice? If you’re going to use proprietary, high-value datasets for your own business intelligence, how do you make sure there’s privacy and security? We talked about models. General purpose models aren’t going to be the be-all and end-all for these very domain-specific use cases. And then, how do you do it at scale, and where do you get the expertise? These are the unique concerns we’re finding enterprises navigating. So Helix is really an attempt to help solve that.
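One simple way to picture the enterprise guardrails Varun is describing is a screening layer that sits between the model and the customer. The rules, blocked phrases, and fallback text below are invented for this sketch; real guardrail systems are far richer, but the shape is the same:

```python
# Minimal illustration of an output guardrail: before an LLM's answer reaches
# a customer, screen it against policy rules and substitute a safe fallback
# if it fails. The specific rules and fallback text are made up for the sketch.

BLOCKED_PHRASES = {"guaranteed returns", "legal advice", "medical diagnosis"}
MAX_LENGTH = 500  # keep chatbot replies short and on-brand

FALLBACK = "I'm not able to help with that. Let me connect you with a specialist."

def passes_guardrails(answer: str) -> bool:
    """Reject answers that are too long or contain a blocked phrase."""
    lowered = answer.lower()
    if len(answer) > MAX_LENGTH:
        return False
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def guarded_reply(raw_answer: str) -> str:
    """Return the model's answer only if it clears every check."""
    return raw_answer if passes_guardrails(raw_answer) else FALLBACK

print(guarded_reply("Our storage arrays support snapshot replication."))
print(guarded_reply("These offer guaranteed returns on your investment."))
```

Because, as Varun notes, one word in the input can change the output completely, this kind of deterministic post-hoc check is one of the few places an enterprise can enforce hard constraints on an otherwise unpredictable model.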
It takes a lot of the technology we have available today, our infrastructure, our servers, our storage, our infrastructure management software, and Nvidia accelerators and Nvidia software frameworks, that customers can take advantage of, but today they have to put all of that together, build that stack up, and figure out how all these things work together. It’s really about providing full-stack solutions that are very specific to customer needs. If a customer wants to train a pre-existing model, or tune a model, I should say, using our infrastructure and Nvidia’s frameworks, we’re going to create a solution for that as part of Helix. If they want to build a model from scratch and train it with the massive compute power needed for that, we’ll have a solution that’s going to help with that.
If they’re further along the deployment lifecycle and they want to deploy this in production, what does the training infrastructure look like? What does the inferencing infrastructure at the edge look like? What do I use? How many GPUs do I need? Do I need GPUs for inferencing? The blueprints, the deployment know-how, the guidance, bringing all of that together for these very horizontal use cases, delivering a horizontal platform that customers can then build their vertical needs on top of, that’s really what we’re trying to do with Helix: reduce the friction to get started and to deploy in production.
Patrick Moorhead: Dell has always been a partner-focused company. It knew which swim lanes to stick to and which ones to partner in. Just here at Dell Tech World, we saw Microsoft on the big stage, we saw Red Hat, and with Helix we saw Nvidia. I think I know the answer to this, but I’d like to hear you answer it: can we expect more of these AI partnerships in the future?
Varun Chhabra: Yeah, absolutely, and look, as you said, Pat, and as hopefully people are seeing not just at this Dell Tech World but at previous events as well, this is going to be a vast ecosystem. It’s mind-boggling, the innovation that’s happening at different levels of the stack with generative AI. We haven’t even talked about LLMOps. We’re still figuring out MLOps, and now we’ve got to figure out LLMOps, data management. There are going to be a lot of partnerships at different levels of the ecosystem, and there are going to be partnerships with other silicon providers as well, but Helix is specific to Dell and Nvidia.
And if I may, let me talk a little bit about how this came about, because one question I get here is: is this just something you guys created because all of this Gen AI stuff is happening, or was it lucky timing, or what? Of course, it would be foolish to pretend that we knew what was going to happen in the Generative AI space and how quickly it would move, but the seeds of what Helix is were actually planted two, two and a half years ago, when we started working with Nvidia on the PowerEdge XE9680, the 8-way GPU server. Internally, when we started that effort, there was a lot of debate within Dell: who needs an 8-way GPU? Who’s ever going to use an 8-way GPU? Why are we doing all this co-engineering effort?
Patrick Moorhead: Eight H100s.
Varun Chhabra: Yes, co-engineering effort. Are people really going to need that? There were a lot of debates inside, and at that point in time our thinking was: well, yes, we do think people are going to need it. Do we think people are going to be lining up to use it two years from now? No, but we thought it was very important for us to have a flagship, really the Cadillac version, exemplifying the design, the cooling, being able to operate this at scale. And of course, that journey and the work we’d done got us to a place where, when this trend in Gen AI came up with ChatGPT, it was very easy for us to say, you know what, we’ve got to take this to the next level. So that’s the genesis of how this happened. It wasn’t yesterday, it wasn’t in November, it’s really building on the work we were already doing.
Daniel Newman: Yeah, it’s very exciting and you’re right, Pat, Dell’s partnership ethos was very on display here at Dell Technologies World.
Varun Chhabra: Yeah.
Daniel Newman: I think with AI it’s going to be very important, because the question is where AI is being led. The big challenge, Varun, as I see it, is that there’s the silicon layer, where everybody’s competing for the most compelling silicon, and then of course there are the apps, and that’s where you saw how ChatGPT took decades of work from Dell and other companies, Varun, and made it instantaneously popular in the market.
By the way, like I say, I’ve used Google Workspace; it’s been finishing my sentences for two years now.
Varun Chhabra: Yeah, that’s right. That’s right. Gmail as well as Outlook.
Daniel Newman: This is not actually all new, and we’ve seen how quickly a lot of companies came out, because a lot of companies were just waiting; someone had to move first. So we saw that happen, but this is such a great topic, Pat.
Patrick Moorhead: Yep.
Daniel Newman: I think we need more time with him, maybe a few more episodes to go deeper on this.
Varun Chhabra: We have many more hours of conversation we could have here.
Daniel Newman: Yes.
Varun Chhabra: But I really appreciate you bringing me on.
Daniel Newman: This is way too much of a flyby, but for the On The Road and for this event, Varun, we got to say thanks for joining.
Varun Chhabra: Thank you, thank you for having me. It’s always a pleasure to chat with you guys. Whatever topic it is, I learn a lot and it’s always fun to talk to two people who love technology at its very heart. So thank you.
Daniel Newman: Thank you.
Varun Chhabra: It’s always a pleasure. I’m happy to join you guys whenever you guys want me.
Daniel Newman: Thank you so much, Varun.
Patrick Moorhead: Appreciate that.
Daniel Newman: All right, everyone, thanks so much for tuning in here. This was a great session. We all know Generative AI is a red-hot topic. Getting the Dell perspective, hearing about the partnerships, what they’re doing with Project Helix and everything else, you had it here. Hit subscribe. Join us for all the episodes here at Dell Technologies World. We appreciate you tuning in. For now, for Pat, for myself, we’ll see you later.