I had the chance to spend a few days with Intel executives at the company's recent industry analyst summit, held at its Santa Clara headquarters, and it was a real eye-opener as Intel laid out its new technology strategy for the next decade. Net-net, I believe this strategy is right for Intel and its customers, albeit very challenging.
Financially, the company is doing great, popping off consecutive records quarter after quarter, driving double-digit datacenter growth, and getting into new, rich businesses, like memory and flash, self-driving vehicle technologies, FPGAs, and even growing revenue in PCs. I like how Intel has organically and through acquisition positioned itself to participate in much larger markets, from ~$45B in traditional PC and servers to ~$300B in those markets plus memory, networking and carrier, modems, FPGAs and IoT.
Market demands and Moore’s Law drove the change
However, things could of course be going a whole lot better if the company had executed on 10nm and had the highest-performance deep learning training capabilities in hand right now. The market is shifting as well, which I believe exacerbates the 10nm pressure: as a society, we are now creating more data than we can effectively process and do useful things with. Customers want more Intel chips, Intel is making as many as it can, and even with those stellar earnings, something needs to change.
From “tick-tock” to concurrent innovation and development
A shifting market and the realities of Moore's Law require a technology strategy evolution at Intel. Combined, these two forces mean the company needs to mature beyond its "tick-tock" methodology, where process technology leadership was king and monolithic CPU designs required staggering volume ramps to succeed. Intel plans on moving to a world where more diverse kinds of compute are comprehended, where end products aren't as reliant on leading-edge process nodes, and where leading-edge IP (from inside Intel and out) can be added more quickly to end products.
6 pillars of concurrent innovation and development
Intel’s senior vice president of Core and Visual Computing, Raja Koduri, presented the new development model, which, in my opinion, significantly expands Intel’s current strategy for the better. It just makes sense. Koduri articulated the strategy as 6 strategic pillars into which Intel will channel its future design and engineering efforts, in no specific hierarchy or priority:
Process – Process technology still matters, but through advanced 3D packaging, Intel wants to use the best process for each targeted IP block, decoupled from a monolithic design’s solitary process and geometry. Think of having a chip with the highest-performance logic on one process, I/O on another, memories on another, and analog on another, stitched together in a 3D package with minimal power and performance loss. Stacking logic chips sounds impossible, and the whole value proposition seems too good to be true.
I became much more of a believer when Intel actually demonstrated, live, a new platform called “Foveros,” which included a heterogeneous CPU design that it said would launch in a range of products in 2H 2019. With those schedules, ODMs and OEMs must already have parts in hand, but they have been very quiet; I haven’t heard a peep out of them yet.
In the future, I can see Intel extending this Foveros-like architecture to the public cloud datacenter, where custom chips are in vogue. Companies like Google, Microsoft, and Amazon AWS that are doing custom silicon would likely want the ability to integrate their secret sauce into the Foveros “system.” Long-term, I would expect industry standards to emerge that make it easier to build IP blocks that work across many different designs, designers, and fabs. However, that’s a ways off.
The process pillar sets the stage for the rest of the pillars.
Architecture – Intel has come a long way from the CPU as the only center of its universe. Its acquisitions of Altera, Nervana, Movidius, and eASIC, plus the announcement of a discrete GPU for 2020, have put some of that to rest. Intel also quietly created, and is quite successful with, ASIC accelerators for networking and carriers.
What Intel is now acknowledging is that processors that use scalar, vector, matrix, and spatial architectures are all equally important to customers, and that Intel will deploy these capabilities in CPUs, GPUs, FPGAs, and AI accelerators. While I have noticed this very rapid transition at Intel, others have not, and Intel will need to communicate it many times to reinforce it both internally and externally.
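To make the distinction between those compute styles concrete, here is a toy sketch (my own illustration, not Intel code) of the same multiply-accumulate work expressed in the scalar, vector, and matrix styles the pillar describes; the spatial (FPGA) style, where the dataflow itself is wired into hardware, doesn't map cleanly to a few lines of Python. All function names here are hypothetical.

```python
# Toy illustration (not Intel code): the same multiply-accumulate work
# expressed in scalar, vector, and matrix styles. Names are hypothetical.

def dot_scalar(a, b):
    """Scalar style: one multiply-add per loop iteration (CPU-like)."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def dot_vector(a, b, width=4):
    """Vector style: process `width` lanes per step (SIMD-like)."""
    acc = 0.0
    for i in range(0, len(a), width):
        lanes = [x * y for x, y in zip(a[i:i + width], b[i:i + width])]
        acc += sum(lanes)  # one horizontal reduction per vector chunk
    return acc

def matmul(A, B):
    """Matrix style: many dot products batched together (accelerator-like)."""
    cols = list(zip(*B))
    return [[dot_scalar(row, col) for col in cols] for row in A]

a = [1.0, 2.0, 3.0, 4.0]
b = [5.0, 6.0, 7.0, 8.0]
print(dot_scalar(a, b))  # 70.0
print(dot_vector(a, b))  # 70.0
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```

The point of the pillar is that each style is the natural fit for a different workload shape, which is why no single processor type wins everywhere.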
Memory – When you have many high-speed processors in a single package, high-speed, in-package memory becomes crucial to share all that data across processors and reduce latency outside the chip. Watch this space from Intel closely. As we have seen with Optane, Intel will go all-in if it sees an incremental opportunity by adding specialty fabs and adding the control plane to its chipsets.
Interconnect – As Intel disaggregates IP blocks, communication between processors and across packages becomes even more crucial to the overall equation. Also, as more data is being processed and stored, wireless and datacenter interconnects are crucial to moving that data between systems built on this advanced packaged silicon. Silicon photonics will be key here as well, and Intel has already shipped millions of units. I expect Intel to double down here.
Software – I finally saw an Intel acknowledgment that for every huge leap in hardware, there is twice that performance opportunity in software, and I expect Intel to devote many more resources to software to take advantage of this phenomenon. Don’t get me wrong: Intel has made great strides with OpenVINO and has added orders-of-magnitude ML performance enhancements to Xeon, but the company now says it will approach software as aggressively as it embraces hardware. I even heard sidebar talk of “turning Intel into a software company.” We will see.
The most audacious software announcement is called “One API,” which I like to call the “magic API.” This magic API, based on the learnings from OpenVINO, is planned to provide tools and libraries that abstract the CPU, GPU, FPGA, and AI accelerators. I cannot overstate how exceptionally difficult this will be to pull off, but if successful, it will be a defining differentiator for Intel versus any other company for a long time. After all, what other company could provide the hardware, the software, and hence solutions across all those compute types?
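To give a feel for what such an abstraction layer has to do, here is a deliberately simplified, hypothetical sketch — emphatically not Intel's actual "One API" design, which had not been detailed at the time — of kernels registered per device type behind a single dispatch call, so that caller code stays identical across CPU, GPU, FPGA, and AI accelerators. All names (`register`, `dispatch`, the device strings) are my own invention for illustration.

```python
# Hypothetical sketch (not Intel's actual "One API"): one abstraction layer
# over heterogeneous devices. Kernels register per (op, device); a dispatcher
# picks an implementation, so caller code never changes across device types.
from typing import Callable, Dict, Tuple

_KERNELS: Dict[Tuple[str, str], Callable] = {}

def register(op: str, device: str):
    """Decorator: register a device-specific implementation of an op."""
    def wrap(fn):
        _KERNELS[(op, device)] = fn
        return fn
    return wrap

def dispatch(op: str, device: str, *args):
    """Run `op` on `device`, falling back to the CPU implementation."""
    fn = _KERNELS.get((op, device)) or _KERNELS[(op, "cpu")]
    return fn(*args)

@register("scale", "cpu")
def scale_cpu(data, factor):
    return [x * factor for x in data]

@register("scale", "gpu")
def scale_gpu(data, factor):
    # A real backend would launch a GPU kernel; here we just simulate one.
    return [x * factor for x in data]

# The caller never changes; only the device string does.
print(dispatch("scale", "cpu", [1, 2, 3], 10))   # [10, 20, 30]
print(dispatch("scale", "fpga", [1, 2, 3], 10))  # no FPGA kernel: CPU fallback
```

The hard part Intel is signing up for is everything this sketch hides: generating genuinely fast code for four very different architectures behind that one call.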
Security – My biggest takeaway here is consistent with the promises Intel has made over the last year: security will be more deliberately designed into architectures, designs, and end products. Add to that a security group that can throw the “red flag” and hold up a product from moving to the next phase, and I like what I hear. Because Intel is committed to the four compute types plus memory, interconnects, and software, it is responsible for the security of the entire solution. Customers who want the “easy button” will like this, a lot.
If your head is spinning like mine, good; it should be, as this is the most radical shift I have seen at Intel in the nearly 30 years I have followed the company as a customer, competitor, and analyst. I think the new strategy needs to be measured under two lenses: “Is it what the market wants?” and “Is it achievable?” I believe if it passes both, the money will follow.
I think this is the right strategy for Intel right now, given the shifting world of data and the realities of Moore’s Law, which, by the way, are the new reality for every silicon provider, not just Intel. Any silicon provider that thinks it can rely on large, monolithic dies for the next five years will likely be out of business. Also, don’t be confused: Intel’s 10nm process characteristics are roughly equivalent to TSMC’s 7nm process.
From a compute standpoint, some customers like best-of-breed components in a box, are smart enough to stitch everything together themselves, and may even want to do their own chips. Other customers will want an “easier button,” where CPU, GPU, FPGA, NPU, and their own IP are fused together, with easier programming tools than they have today for larger workloads. This is where, if Intel can execute, it will do very well. This is classic “embrace and extend”: Intel is “embracing” acceleration, “extending” to every compute type, and stitching it all together with its “magic API.”
Now comes the “can they execute” part. This 6-pillar strategy is very bold and ambitious, but it isn’t as if Intel hasn’t been working on many of these areas for a decade or made acquisitions to fill the gaps. Intel came to the table with a working Foveros client demo, showing what I had never seen from anyone before, and said it would ship next year in many flavors. I have hundreds of questions on Foveros, but those answers will need to come later. The demo doesn’t guarantee high-volume, multi-segment success over time, but you can’t have that without a demo first.
From a highest-performance ML and DL training standpoint, Intel is significantly upping its game with special Xeon instructions, but it still needs to deliver Nervana training silicon as well as a high-performance GPU to “take sockets.” I am more confident in Intel’s discrete GPU opportunities, as GPUs are more of a known element than Nervana, but we will see how Nervana looks in 2019 and discrete graphics in 2020.
“One API,” which I lovingly refer to as the “magic API,” seems like the most difficult thing to pull off, even with OpenVINO as a successful precursor. Abstracting across scalar, matrix, vector, and spatial workloads seems really, really hard.
While Intel has been doing so well financially, I have sensed a disturbance in the force as people anticipated what Intel’s next big move would be. Now we know, and it is big, bold, and really hard. Over the next year, Intel has many disclosures and milestones to hit, and I will be digging deeper into the strategy and its deliverables to give you a better view of what is going on, how Intel is doing, and how competitors and customers are reacting.