On this episode of The Six Five – Insider, hosts Daniel Newman and Patrick Moorhead welcome Rob Thomas, SVP IBM Software and Chief Commercial Officer from IBM and Dr. Dario Gil, SVP and Director from IBM Research to continue their conversation from the IBM Think event back in May on IBM’s AI business strategy.
Their discussion covers:
- The latest in IBM’s AI business strategy
- The top concerns with enterprise AI and what IBM is doing to address these concerns for its customers
- What makes the recently released IBM Granite foundation models unique
- IBM’s thoughts on AI safety and governance and how they are addressing these areas
Learn more about IBM’s AI platform, watsonx, on the company’s website.
Be sure to subscribe to The Six Five Webcast, so you never miss an episode.
Watch the video here:
Or Listen to the full audio here:
Disclaimer: The Six Five webcast is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.
Patrick Moorhead: The Six Five is on the road here in Yorktown, New York at IBM’s AI and Research Day for Industry Analysts. Dan, it’s been a great day and quite frankly, I’ve seen some level of content and specificity and customer stories and use cases around AI that’s been very positive, and I would have to say some of the best that I’ve seen so far in the industry. You and I have been on the road probably nonstop since, I don’t know, November, when generative AI was created, just kidding folks, but when it really-
Daniel Newman: Day one.
Patrick Moorhead: Yes.
Daniel Newman: Yeah, so it’s really good to be here. Of course, it’s always great to come out to Yorktown. This is the research center. By the way, it’s not just where AI is happening, it’s where a lot of things are happening. We’re going to hit that right here, but if you follow our X or Twitter or whatever we’re calling it these days, you follow our LinkedIn, you’ll see that there was some really good information that was shared in the public domain, some really great business use case ROI stuff, you know I love that stuff. Also, there was some stuff that we can’t share just yet. But what I will say overall is between November when generative AI was invented, okay, last time on that joke, and today we have seen some really, really great progress. IBM has been in the lead taking things GA first and doing some great things. But you know what Pat? Everyone gets to hear from us.
Patrick Moorhead: Yes, let’s bring in Rob and Dario. Great to see you again and welcome back to The Six Five. It was just a few months ago that we were talking in Florida, baking in the sun together, talking about the very beginning of some of your biggest AI announcements. So welcome back to the show.
Rob Thomas: Great to be here, and glad you guys came to Yorktown, really nice to have you.
Daniel Newman: As long as I don’t try to get an Uber out of here, I’m going to be just fine. But y’all can keep me here.
Rob Thomas: We want you to stay.
Patrick Moorhead: That might be part of the plan, Dan, so I wouldn’t float that if I were you.
Daniel Newman: It’s good to be loved. So Pat, you teed this up really nicely, and Rob, I’m going to start with you, but it was about six months ago, and as the sun came over the building and you and I were sweating, we were also getting a number of meaningful updates. The advancement of your AI strategy with watsonx was really taking shape at your Think event back in May. But it’s been six months, believe it or not. It’s been six months, half a year. Half a year in GenAI time is enough for 300 more companies to come to market with some type of solution. Meanwhile, you’ve been heads down, working hard. Talk about what has happened in the last six months in terms of your AI business strategy.
Rob Thomas: So to recap what we said in May, we said we’re going to deliver AI for business, which by definition has to run on hybrid cloud because everybody has data everywhere. So the base of what we’re doing is in Red Hat OpenShift AI. On top of that, we deliver watsonx. That is our platform for generative AI, and frankly all kinds of AI. We’ve got a builder studio, we’ve got a data platform, we’ve got governance, which is coming later this year. We also talked about this family of what we call AI assistants. These are more line-of-business focused, functional-role focused. Since that time, I will tell you the client response, the partner response, has been incredible. We have settled on, I call it the big three, three big use cases that are sticking out over all others. One is digital labor. How do I automate repetitive tasks in my business? Every company’s looking to be more productive, a bit more fit in their operations. watsonx Orchestrate is really our flagship product for that, and that’s doing quite well. The next use case is customer service. How do I make serving clients better for the clients, and make my team more productive? With watsonx Assistant, we’ve had a lot of success there. Then third is around application modernization, or code. We recently made available watsonx Code Assistant, starting on mainframe and Ansible. But we’re just getting started. As I look at what’s happening with clients, and we shared a lot of client stories with you all today.
Patrick Moorhead: Yes, you did. Thank you by the way.
Rob Thomas: I think there was nearly 100 or so that we talked about. Everybody starts with one of those three, but then it quickly branches out and I think the universe of use cases is probably 20 or 30 at this point.
Patrick Moorhead: Yeah, interesting. Dario, how about you?
Dr. Dario Gil: Well, when we were chatting in May in Florida, we made all the announcements from a strategy perspective of all the things we were going to build. I think what has happened is we’ve delivered every single one of those things that we said. So on the core platform with watsonx, on watsonx.data and the data platform and the AI, those were GAed in July. Governance is coming in early December, and the assistants that Rob was just talking about, Code Assistant, that went GA in September. So from a rate and pace of commercialization of our R&D and our products and bringing them to market, I think it has been probably the best year that we’ve done from an organic innovation perspective in a long, long, long time.
Patrick Moorhead: So I’ve been in the industry a long time, in and around IBM for decades, over three, and IBM has had many firsts along the line. First with technologies, first with solutions. But I have to tell you, I was really taken aback at being literally first to enterprise AI with watsonx.ai and watsonx.data. Part of the research that I did at Think was, okay guys, how are you doing this? And, okay, is this GA real? The GA is real. I got the whole spreadsheet of features and the whole spreadsheet of countries. But I think it was a conversation with you and your team about how you’ve interleaved research and product development. That’s typically not what happens out there. But it is the big understanding and the big investment that the two of you got behind early, because this is an expensive game, and truly there are only a few companies who can do the scale of what you’re doing. So Rob, you outlined a lot of great proof points and client customer cases with ROI numbers, X efficiencies, and things like that. I’m curious though, this isn’t just flipping a switch, particularly when you’re dealing with data. What are some of the top objections or reasons people aren’t just diving into this automatically?
Rob Thomas: Like many things in technology, the one objection you always have to get over is culture. What are the implications to me in a company if I do this? How does it impact our normal practices? Employees? You name it. I’d say we see that everywhere. Let me give you a couple examples, though, to bring it to life. With Dun & Bradstreet, they announced that they’re adopting watsonx as their generative AI platform, and it’s a multidimensional relationship. On one hand, they’re building and tuning models in watsonx, which they can take to clients. They built something called Ask Procurement, which is basically an engine that they can sell to procurement officers, because they’ve got all the data about businesses, but they needed a way to tune a model to actually deliver those answers. So it’s the perfect partnership. We have the base model, like we talked about; that was three years of investment for us to get to a base model. But Dun & Bradstreet can pick it up and deliver a product in 30 days. That’s incredibly powerful. Another example is somebody like Wind Tre, the telco in Italy, where in their case, it’s just about their internal IT support. How do they provide better uptime, better mean time to resolution? That’s about adopting watsonx, building an application, if you will, on watsonx to deliver that use case. I think we’re hitting on something. Building a base model is, conservatively, $500 million. How many companies are really going to do that? Probably not that many. So, we deliver a base model. Then we work with clients so that they can tune it to their liking, what they need. In many cases, they might just use open source. That’s why we worked with Meta to bring Llama 2 into watsonx while we’re working with Hugging Face. We’re the home for any model. We want anybody that wants to play in generative AI to be a part of watsonx.
Patrick Moorhead: So I’m hearing expense, simplicity, and I know there are a lot of other factors too. Dario, what’s your point of view on this one? What are you seeing as some of the things keeping clients from just jumping on this?
Dr. Dario Gil: Obviously in this context, we operate in a lot of regulatory-constrained environments, like in finance and banking and telcos, you name it. In that context, that can be an inhibitor. That’s why we’ve been so focused on having use cases that can have the level of clarity, ROI, and transparency with regulators of delivering value in a way that they can manage risk properly. But that’s also why we’ve cared so deeply about the data component of it and the governance component of it. Because unless we can showcase and prove the lineage of the data, what we’ve done, how we filter, what has gone into the model, how they have been benchmarked, and how we monitor ongoing, it’s impossible for them to deploy at scale. So those are the structural issues that we’re dealing with. There’s also a lot of noise that has happened in the industry, obviously, with the executive order, with coming legislation in the European Union and so on. So that can be a damper, to the extent that people say, “What’s going on? How do we navigate it?” So we’re doing a lot of work, not only to engage in that process, but to help clients navigate that process.
Daniel Newman: I think you’ve identified, though, that there are some stark realities too. We had the chance a little bit during the event today to talk about it. I asked you a question, almost similar, about the adoption, but I remember when you first came out with watsonx, your CEO, Arvind Krishna, made some comments that were pretty misinterpreted about the impact to labor. I liked what you said, something along the lines of, who wants to be responsible for this? It’s a gap, meaning there’s a gap, a period of time where companies are going to be able to start to use AI to automate and create efficiencies and processes. But these skills haven’t actually grown yet, meaning we’re in this little interim period where a company can start to automate, they can start to build efficiencies, they can start to build outputs and products with generative AI solutions like those that watsonx can enable. But for the people that it’s going to potentially move, there isn’t a new role created yet. Companies know how hard it is to build talent. IBM’s a great example; it spends decades building talent, and doesn’t just want to say, oh look, we’ve automated your role. So these are stark realities, they are hard moments and decisions, but at the same time, it’s also a pretty significant competitive advantage. You showed incredibly impressive data. I’d like to maybe dig in just a little more on this last question: say you’ve come upon a process that can take seven different Oracle or SAP workloads in an ERP system, and you’ve built a full end-to-end automation with AI. There are 17 people that would’ve touched that. Now it’s three. How are companies navigating talking about that? Rob, you had a good answer, but I’d love to get a little more from you on that.
Rob Thomas: I think you always have to go back to the macro, and there is a famous equation that’s existed for a while that says GDP growth is basically population growth plus productivity plus credit or debt, being able to access financial markets. When you frame it that way to a company, is your focus on growth? Obviously most people say yes. You say, okay, your financials are fine. Population growth is probably not going to happen in all but a few countries around the world if you look out over the next decade or so. So therefore the only thing to drive growth is productivity. So then the question becomes, how are you going to get productivity growth? I think that’s where this is the sweet spot of AI. So I think for many people, it’s more about making your existing employees more productive. We shared some scenarios today of 30% increases in productivity in writing code and in customer service. So I think companies are latching onto that. So yes, it might defer some hiring they do. Or they might say, I can keep growing without having to do the normal hiring I would do. But this is really about productivity, which I think has to be at the top of the list for every company right now.
Daniel Newman: Yeah, absolutely. So I want to dig a little bit into the research work here, Dario. We’re going to geek out a little bit, but I want to talk about Granite. So the foundation models, first of all, LLMs and FMs are a very hot topic, a very hot commodity in some ways. Obviously differentiation is the word that I think a lot of companies are looking for. So, here comes another company, here comes another open source, here comes another LLM, here comes another FM. Why should the market be excited about what you’re doing with Granite? How are you differentiating to make sure that IBM rises above the noise and isn’t just one of many, but really becomes maybe the leader, or the main one that enterprises are turning to, to really build their generative AI strategies?
Dr. Dario Gil: It’s a very core and simple idea, because we are the only ones that allow enterprises to be value creators with AI, as opposed to AI users. Let me unpack what that means. You can create a great model and say, here’s my model, it’s behind a black box, here’s an API. Use it. It’s great. You can deliver productivity and improvements around that. But that is available to everybody. It’s a new baseline; it’s not a form of competitive advantage. It’s just a new basis of competitiveness. But if you’re an enterprise and you want to be on the value creation journey, meaning, in the end, foundation models are nothing but a new representation of the data, and data is a source of competitive advantage for almost every business. What we allow them to do is say, here’s a base capability that is highly performing, that is world-class, we indemnify you, we stand behind that model. Now from that, you can build your own derivative models and fine-tuned models where you have ownership and the guarantees behind how the end-to-end process happens. When that occurs, we’ve enabled enterprises to have ownership of an asset, which is their own foundation models that embody their data, in a way that they can be long-term value creators. There’s nobody else that provides them all of those guarantees.
Patrick Moorhead: Yeah, I don’t read a lot of white papers, but I did read the white paper you published on Granite. The first thing that struck me was, oh wow, they’re actually telling me the type of data and where it comes from, how they parsed it. Some equations I didn’t fully understand, I had to ask my son, who’s a data science major. But that was the first one I’ve seen, because most people don’t want to tell you where they got the information from. On one side it’s like, okay, this might be intellectual property. On the other side, maybe you don’t want me to know where you got this data. Was it copyrighted data? You’ve covered that, I feel, not only with your openness on Granite, but also with your indemnification along those lines. I did want to swivel that related thought into AI safety and governance. From what I’ve learned so far, it means so many different things to different people. It goes from AI going crazy and taking over everything that we do, to another person who interprets it as hallucinations, or another person, it’s speech that we don’t want to see out there. You have long been a provider to highly regulated industries, so I am certain that you have a point of view on AI safety and governance. Rob, maybe we’ll start with you. What is your basic position on that?
Rob Thomas: First of all, we are AI optimists, we are not AI doomsayers. We think this is going to do a lot of good for the world.
Patrick Moorhead: We’re in that camp too, so we can all be friends.
Rob Thomas: With watsonx.governance, our focus is how do you help a company understand what is happening in their four walls and in their cloud environments with AI? The simple analogy I would use is it’s a nutrition label for your AI.
Daniel Newman: Love that one.
Rob Thomas: What goes in-
Patrick Moorhead: That was good, by the way.
Rob Thomas: … what comes out? What are the pieces? What does that mean for AI? It’s things like data provenance. Where did it come from? Data lineage. Who’s had access to the data? How is the model making decisions? Is it drifting away from what you would expect the outcome to be? These are the kinds of reports, to Dario’s point on regulated industries, if you’re in a bank, one of your main jobs is to sit in front of a regulator and describe what you’re doing. The clients that are up and running with us right now, they can hand a report to the regulator that says, this is the nutrition label for our AI. That is incredibly powerful. Now, I think this is just starting with the regulated industries, and I think this will be every industry within, I can’t predict the timing, two, three, five years, I don’t know. But it’s not because there’s some existential threat, back to the doomsayer point; it’s because it’s just good business to know what you’re doing and why you’re doing it.
Patrick Moorhead: Yeah, that’s good feedback. I’m curious, is IBM using this internally?
Rob Thomas: Of course.
Patrick Moorhead: Okay. Can you talk a little bit about that?
Rob Thomas: We were customer number one for what we called OpenScale at the time, which is now evolving, and that will be one of the components in watsonx.governance. Our first use case was actually around hiring and talent. How do you actually look at and filter a lot of resumes, while making sure there’s no bias in how you’re making those decisions? That was a pretty critical use case. Now, we’re not in a regulated industry per se, so we’re not constantly sitting in front of regulators like, say, a bank, so that’s less our use case. But the technology applies in different scenarios.
Dr. Dario Gil: By the way, on this topic, on things like fairness and explainability and security of the model, there’s a very rigorous science behind this. This is not just English words around it. There’s a huge body of literature. We have been research and scientific leaders around that. You can look at the open source environments, AI Fairness 360 and AI Explainability 360, that we have crowdsourced with the community, because when you look at metrics like fairness, there are many different definitions, and we implement all the algorithms necessary so that, when you’ve selected them, we give you scientific answers to what is happening. So per Rob, as he was saying about the report, in the end, it is scores. It is quantitative scores across the different metrics. So we’re bringing science and rigor to a fundamental question, which is, what’s going on with your AI? Can you trust what is happening? But we want to answer it with rigor.
Patrick Moorhead: Yeah. Are you giving your customers the knobs? Because you have very diverse clients. I look at the industry: there could be a sports chat website or business where people are chatting back and forth, and the type of lingo they’re using, versus a bank and how they do that. Are you letting clients control some of these knobs on what’s appropriate and what’s not?
Rob Thomas: They have to, because think about it, you can’t establish a baseline unless it’s based on attributes of their business. So by definition, they have to have some knobs, to use your term. I would also say that what we’re doing with governance is not just for IBM models or watsonx.ai. We think this is an important enough topic that we’re going to make watsonx.governance available to any AI. So you’ve got to integrate it, you’ve got to expose the logs and what’s happening. But I think this is pretty critical because clients are going to be at all different levels of AI, data science, machine learning. There’s still deep learning, there’s generative AI. Governance is going to be important across any type.
Daniel Newman: I think that there’s been one thematic output from the conversations that we’ve had today. It’s been the incredibly symbiotic relationship that is taking place between the research side and the commercial side of the business. At times, that may not be clear, but what you just said, for instance, how you’re going to take governance and bring it to the mainstream across all the models, all the AI offerings out there, that’s a great instance of commercialization. But also how you’re taking the work that’s being done in research, in the lab, in the testing environment, to build governance even within your own company. There’s always been some of that. I think over the decades, one of the disconnects was that sometimes IBM Research and IBM’s commercial side didn’t have that really strong connective tissue. But having you two sitting here today and really sounding in lockstep, meaning this wasn’t rehearsed.
Dr. Dario Gil: It’s because we are in lockstep.
Patrick Moorhead: Don’t normally sit with the two sides-
Dr. Dario Gil: We don’t have to act or pretend.
Daniel Newman: But it warrants acknowledgement. On behalf of The Six Five here, I’d definitely like to congratulate you on all the progress that’s being made. We do look forward to continuing to see, to measure, to validate. That’s our job. You’re telling us to trust but verify.
Dr. Dario Gil: Yeah, exactly. But on this topic, and I’m glad you’re highlighting it, because you’re so authentic and it’s true. That is the infinitely renewable resource, because when you have that engine going well, on every topic where we choose to bring products to market, we can do it better and with quality. Verification is always good; we believe in that too, and in being held accountable to it. But something has changed, and I’m glad. Thank you for sharing that with us too.
Daniel Newman: Yeah, absolutely. Well, Rob and Dario, I want to thank you both so much for joining us here on The Six Five. I’m sure we’ll have you both back. Hopefully it won’t be six months, but either way, continue on with the great progress. We’ll continue to pay attention and let the market know what we think. But signs so far, very good.
Rob Thomas: Great. Thank you for meeting with us.
Patrick Moorhead: Appreciate it guys. Thank you.
Daniel Newman: Thank you. All right everybody, you heard it here. The Six Five is on the road. We are in Yorktown at IBM Research for the AI and Research Day for Industry Analysts. Pat, great conversation here, but I think it’s time for us to say goodbye.
Patrick Moorhead: Thanks a lot everybody.
Daniel Newman: See you later.