What To Expect At Intel AI Event: A Discussion With CEO Pat Gelsinger

By Patrick Moorhead - December 27, 2023

Intel will be in New York tomorrow to host its “AI Everywhere” event, which will be much more than a routine launch for new processors. Indeed, the company expects it to serve as a launch event for the entire AI PC category. As CEO Pat Gelsinger told me when I talked with him this morning, “We would assert this as the moment that that category truly gets underway.” (I’m sure AMD would beg to differ, and Qualcomm, for that matter, but more on that below.)

No doubt Gelsinger kept some details under his hat—there are bound to be some fun surprises during the event—but the specific products to be announced are new Intel Core Ultra processors for PCs (codenamed “Meteor Lake”) and 5th Gen Intel Xeon processors for servers (codenamed “Emerald Rapids”). You can think of these SoCs (system on a chip) as fulfilling some of Gelsinger’s strategic promises since he returned to Intel as CEO three years ago.

In this post, I’ll convey what Gelsinger told me about what people can expect from the event, why he thinks it might be another “Centrino moment” and what the AI PC could mean for the tech landscape. Along the way, I’ll share some of his choice comments about Intel’s competitors, along with my own analysis of how well Intel is performing.

Full disclosure: Intel is a client of Moor Insights & Strategy, as are all of its major competitors, but this article reflects my independent viewpoint as an analyst.

Intel’s Product Strategy For AI

The new Core Ultra processor should be Exhibit A for the effectiveness of Intel’s “chiplet” architecture for client computing. As a refresher, these days Intel creates more tailored chip designs using a combination of IP blocks called chiplets. This approach allows various CPU, GPU, neural processing and specialized ASIC elements to be included on one device, enabling customization and a broader range of functions, and it lets each kind of chiplet be manufactured on the process best suited to it. Intel has also invested heavily in AI accelerators, ASICs designed to handle specialized AI tasks. The chiplet architecture allows these accelerators and other blocks of IP to handle what they’re best at: GPUs for computationally intensive jobs such as gaming, NPUs for longer-running AI workloads, and so on. More than that, it allows many AI-specific functions to be handled by blocks other than the GPU, where Nvidia has been running rampant lately.

How big of a deal is this? Well, recent benchmarks show Intel’s Gaudi2 AI accelerator beating the ultra-popular Nvidia A100 GPU in raw performance, throughput, time to train, cost per token and power efficiency. This isn’t for all AI training workloads, but Intel has demonstrated supremacy on some specific workloads. And while Intel hasn’t overtaken the high-end Nvidia H100 on raw performance, Intel says it’s already winning that matchup on price-for-performance.

During our conversation, Gelsinger also called the 5th Gen Xeon “a very elegant upgrade” to its 4th Gen predecessor, with something like 40% better performance across a range of workloads, but especially AI workloads. And the company’s roadmap is full of successors to these new products—for both clients and servers—that will debut in 2024 and beyond.

Gelsinger calls Intel’s new approach “the biggest platform change in 20 years, [and] the biggest architectural change in 40 years.” He’s talking about “the whole chiplet design, neural processing, a major shift in the microarchitecture, the CPU and GPU.” So far, these devices’ performance and flexibility seem to be bearing out the wisdom of this major shift. By design, these advances are also closely tied to the new production nodes that Intel has introduced during the huge worldwide expansion of its manufacturing facilities and technologies in the past couple of years.

I’ll have more on this below, but for now let it suffice to say that when Intel talks about pursuing a true end-to-end strategy for “AI Everywhere,” it’s not just talk. It’s backing up its messaging with the sheer breadth and depth of its approach to the market, its design evolution and its manufacturing clout.

What The AI PC Means For Intel—And The Rest Of Us

Gelsinger sees AI changing the way people operate in settings from neighborhood restaurants to the factory floor. He promises that tomorrow’s event will include many use cases, along with demos from a “parade of ISVs.”

I asked him if tomorrow’s launch will be the “Centrino moment” for AI PCs. Two decades ago, Gelsinger and I were executives at Intel and AMD, respectively, when Intel introduced its Centrino chip. For those too young to remember it, that product included onboard Wi-Fi functionality, and it ultimately had a profound effect on how people used laptops and other portable devices. He was quick to point out that Centrino didn’t launch Wi-Fi, which had been around for three years without taking off. But Centrino “assured a use case for Wi-Fi that everybody cared about: ‘Oh, I can get to the internet wherever I am.’”

The biggest impact came in the way it changed users’ behaviors and expectations. For example, people started requesting to stay on the bottom three floors of hotels so they could access the Wi-Fi signal from the lobby. When you went to a coffee shop or got on an airplane, you wanted one with Wi-Fi. Before Wi-Fi took off with Centrino, 80% of PCs were desktops, with many of the rest more accurately called “luggables” than “portables.” Ultimately, Gelsinger said, Centrino “ushered in an entire redefinition of the form factor of the PC.” (My riff on this: think about all the downstream effects this has had on remote work, even to the present day.)

He believes that something similar is in the works with the AI PC. “They’re not going to appear overnight,” Gelsinger says of the new use cases and applications yet to come. But they are coming—and it won’t be long. Soon, he says, “I expect every video call that I’m on to be transcribed, memorized, summarized, language translation, etc.—that’s going to happen. [And] I expect that every PC that is embedded in my manufacturing lines is going to be capable of real-time AI model generation and telemetry.”

The coming changes will likely also change the shape of devices. Gelsinger pointed out that the form factor of the laptop he was using for our video call is still dictated by the size of the keyboard. But when embedded AI allows a user to interact with a PC more productively, then the user can simply touch the screen—or merely point at it—and say a few words to achieve the desired outcome. That could lead to a new generation of PCs that look different from anything we’re using today.

Gelsinger won’t make too many specific predictions about what sorts of changes will happen: “When you start these things, you just don’t know where they’re going.” He makes a fair point. That said, he also notes just how far AI performance has come at the scale of the PC. To take one easy example, a large language model that two years ago might have required something like 100 GPUs to run . . . will be demonstrated at tomorrow’s event running on Meteor Lake.

Thanks to these rapid advances, we can expect AI PCs to get much better as they help us with our work, for example making work calls more productive by summarizing past meetings, alerting users to everything from missed appointments to changes in body language and embodying many other improvements that fit more under “augmented intelligence” than “artificial intelligence.”

As Gelsinger summarizes it, the AI PC is going to bring lots of new answers to “How are you making me better?” This will lead us to a substantial reconception of what it means to be a creator, a designer—or even a human being. He believes a key part of his job now is to “usher in hundreds of new ISVs,” many of which will devise applications that don’t take off, but “a few of which will be prescient” and introduce major new innovations. I agree.

Intel’s Competitive Position In The AI Market

When I asked Gelsinger what he believes differentiates Intel from its competitors in AI, he cited three factors: volume, an open software environment and the true end-to-end approach I touched on earlier. Intel is the largest chipmaker in the world, and the volumes it will ship for Meteor Lake (starting now) and the forthcoming Lunar Lake and Arrow Lake (starting in 2024) should far outstrip anything its competitors can attempt. That said, Qualcomm would beg to differ, given its recently announced Snapdragon X Elite and its “mega TOPS,” due in PCs mid-year.

Gelsinger pointed out that Intel’s market share is high in the datacenter, at the edge and in the client, so while everyone else talks about volume, Intel actually delivers it across all parts of the computing landscape. Among other things, this makes Intel an easy choice for ISVs. As he put it, “Let me go to an ISV and say, ‘Hey, there’s a million of those [Qualcomm competing chips], and there’s 150 million of these [Intel chips]. Which do you care about?’”

To put this in more specific terms, Intel expects to ship a couple of million Meteor Lake chips before 2023 ends. In 2024, it should produce tens of millions of Meteor Lakes, Lunar Lakes and Arrow Lakes.

Besides the raw volume of chips it can produce, Intel is a strong advocate for an open software environment. “That’s what gets ISVs motivated,” Gelsinger says, because it not only gives them a big market to aim at, but also makes it easy to do so. In this connection, I think Intel has done great work with its developer tools, which allow its partners to design very specifically to make the most of the CPU, GPU, NPU and accelerator cores on its chips. NPU-focused designs are the easiest, and I think we will see ISVs concentrate on them for notebooks.

I’m reminded of what Acer COO Jerry Kao told me at Intel’s Innovation event in September: “With Meteor Lake, Intel provided not just the hardware, but software tools like OpenVINO. So with those tools . . . [we can] create a lot of features, which in the past we even don’t think [of]. Even if we think of a feature, we don’t have a tool to create it. . . . I think this is really revolutionary change by Meteor Lake, by the NPU. [Now] we have the capability to make our dreams come true.”
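To give a concrete sense of what that per-core targeting looks like in practice, here is a minimal sketch of how an ISV might pick an execution target using OpenVINO’s Python runtime. The model filename and the fallback order are my own illustrative assumptions, not anything Intel or Acer described:

```python
# Minimal sketch of targeting the NPU with OpenVINO. Assumptions: the
# OpenVINO runtime is installed, and "model.xml" is a hypothetical
# OpenVINO IR model file on disk.
try:
    from openvino.runtime import Core  # OpenVINO Python runtime
except ImportError:
    Core = None  # allow the helper below to run without OpenVINO installed


def pick_device(available):
    """Prefer the NPU for sustained AI workloads, then GPU, then CPU."""
    for dev in ("NPU", "GPU", "CPU"):
        if dev in available:
            return dev
    return "CPU"


if Core is not None:
    core = Core()
    # available_devices lists the execution units OpenVINO can see,
    # e.g. CPU, GPU and NPU on a Meteor Lake machine.
    device = pick_device(core.available_devices)
    model = core.read_model("model.xml")          # hypothetical model path
    compiled = core.compile_model(model, device)  # run on the chosen block
```

The point of the sketch is the division of labor the article describes: the same model can be compiled for whichever block suits the workload, with the NPU handling the longer-running, power-sensitive AI tasks.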

And finally we come to end-to-end characteristics, which grow out of everything discussed above: Intel’s leadership positions in the datacenter, edge and client markets; the sheer volume of chips it can produce; its commitment to open platforms; and long experience in enabling both hardware and software vendors who want to work with its chips. Gelsinger says—and I’m prone to agree—that while Intel still has a lot of work to do, neither of its main rivals, Nvidia and AMD, can claim anything like the end-to-end footprint it enjoys. That’s just math and fact, not opinion.

Intel’s Reinvention Under Gelsinger: Still Plenty To Do, But Good Work So Far

Back when Gelsinger rejoined Intel and took over the top job, I said that Intel at least had a shot at regaining its leadership in the chip market. So far, he and his team have been making me look good by delivering on most of the ambitious plans he announced. That’s not just my own viewpoint, either. I meet regularly with all the big software companies and computer makers that work with Intel, and I can tell you that they are convinced of Intel’s momentum in a way they weren’t 18 months or two years ago.

If any chip company can execute on a strategy to put AI everywhere from the client to the data center and everywhere in between, it’s Intel. Mind you, the other companies in the AI space have their strengths; Nvidia, in particular, has been absolutely crushing it in the market for AI datacenter training and inferencing chips, and my former employer AMD has tough-minded leadership and its own ambitious plans for making a difference in datacenter AI chips as well as AI processors for PCs. But if Intel executes with the precision and gusto that it has shown over the past three years, it will be harder for competitors to keep up with Intel from end to end of the AI chip market. It’s a rapidly growing market and the great thing is that there’s business for every chip company to grow.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences with the understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience including 15 years of executive experience at high tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.