I had the chance to talk with Intel CEO Bob Swan this week, which made earnings that much more interesting. We chatted about Intel’s long-term objectives and strategies, but before I dive into that, I wanted to touch on what Intel announced yesterday.
Solid Q2 earnings
Intel yesterday served up very solid Q2 2019 earnings that excited Wall Street after hours, sending the stock up as much as 6%. Intel reported Q2 revenue of $16.5 billion, down 3% year over year but above its April guidance. The best part was that Intel raised its full-year revenue outlook to $69.5 billion, up $500 million from April guidance. Overall, I believe Intel performed well in an increasingly competitive PC environment and a challenging datacenter market, where CSPs are still chewing through product and enterprise demand isn’t growing.
While I understand the focus by some is on the Intel “here and now,” my primary audiences, Intel’s OEMs, ODMs, and their customers (not the investment community), care a lot more about the future of Intel. Sure, customers have been concerned with supply consistency, and there are still some mix challenges, but many of those discussions have now shifted to strategy as customers gain more confidence in Intel’s execution.
What I really appreciate about Intel CEO Bob Swan is that he “gets” that Intel had to execute better short-term before he had “permission” to have those longer-term conversations. It has been a while since I have seen an Intel CEO be that customer-focused, and I think that matters. So let me move off the short term and get to what I believe matters more, the longer term.
Intel CEO Bob Swan and his leadership team laid out at the company’s analyst day a $288B 2020 TAM, driven by the data generated by 5G and processed by AI and ML. It’s important to digest that Intel is a lot more than the CPUs for PCs and servers that defined it historically. In the last decade, it has made, and continues to make, significant investments in non-CPU technologies like FPGAs, accelerators (ML, network, security, ADAS), discrete GPUs, and memory. Market-wise, it is aggressively attacking the carrier and enterprise network, the self-driving and ADAS space, and edge compute.
Networking doesn’t get nearly enough market attention; there, Intel has gone from nonexistent market share to double-digit share. The company may be getting out of 5G mobile modems by selling those assets to Apple, but it is rocking it in the larger 5G markets of base stations and complete network transformations. If you want to see where the big bucks will be made in 5G, check out our sizing study here.
To go after that nearly $300B TAM, Intel needed a change in technology strategy, which I covered here back in December. Intel’s “six pillars of innovation” shifted the company from focusing on process and CPUs to six elements: process and packaging, XPU (CPU, GPU, FPGA, ASIC) architectures, memory, interconnect, security, and software. While I rarely recommend a company strategically “focus on more things,” it’s a valid approach when your company, like Intel, has enough resources and that focus aligns with its core competencies, like processing.
Software a key to Intel’s future success
One of the least-covered parts of Intel’s Six Pillar strategy is the software. It’s one thing to have the right processor architecture and package, but without optimized software, in my experience, particularly with GPUs and ML accelerators, you’d be leaving 50% of the performance on the table. Raja Koduri, Chief Architect at Intel, is quoted as saying at the latest investor day, “For every order of magnitude performance from new hardware, there are over two orders of magnitude unlocked by software.”
You might be wondering, Intel in software? Yes. Here are a few disclosures the company made recently, which show the scale of its software investment:
- Over 15,000 software engineers
- #1 contributor to Linux kernel
- Over 1/2 million lines of code modified each year
- Over 100 operating systems optimized
- Top-three contributor to Chromium OS
- Over 10,000 high touch customer deployments
- Top 10 contributor to OpenStack
- Over 12 million developers
Why does this matter? I will repeat that with GPUs, FPGAs, and accelerators, software means everything to performance and optimization. Intel shared with me that it has focused a large portion of its software resources on these areas, and the early disclosures are enlightening. Intel says it has increased per-core performance using software optimizations across workloads in the following areas:
- Deep Learning 10.3X
- Content creation 2.5X
- Java 2.1X
- Networking 2.0X
- Data analytics 1.9X
- HPC 1.9X
- Web browsing 1.7X
When you use optimized hardware and software, the Intel numbers get even more impressive:
- Deep Learning Boost 28X (Xeon with DL Boost versus without)
- In-memory database 8.1X (Intel Optane)
- Java runtime 6.0X (Xeon with AVX 512)
As I hope you can see, you can get huge performance bumps through software optimization alone, and even bigger gains when you marry hardware and software together. One other thing I must note is the changed sense of purpose of the software resources at Intel. Based on conversations in the business units, one of the big changes is that software resources are now targeted at optimizing for workload diversity within the business units rather than software being a business in itself.
I want to close by discussing Intel’s most ambitious software project, oneAPI.
I like to call oneAPI the “magic API,” given its potential. oneAPI, based on the learnings from OpenVINO, is a set of tools and libraries that abstracts the CPU, GPU, FPGA, and AI accelerators. So instead of writing to four or five different APIs, you write to one. Today, if you are a company that programs for a CPU, GPU, FPGA, or ML accelerator, you are using a different toolset for every single accelerator. On the leading GPU for ML and DL, from NVIDIA, you are using NVIDIA’s effective tools, which aren’t leverageable across other GPUs or accelerators.
Intel has been making continued progress on oneAPI since announcing it last December. It will include a unified language based on Data Parallel C++ along with libraries that deliver native code performance. This will allow for greater optimization of middleware and frameworks in areas like AI. It will also include profilers and debuggers that span the software stack to simplify development across all levels of abstraction. Intel says it will make oneAPI standards-based and will also include source-to-source compatibility to assist with translation from CUDA. Intel’s timeline is ambitious, with the oneAPI beta slated for release before the end of the year.
oneAPI will be very difficult to create, but if Intel can pull it off, it will be very valuable to developers, researchers, and businesses alike, and I believe it will be a competitive advantage.
Quarterly numbers are essential to Wall Street and investors, but to Intel’s customers and their customers, what matters more is the long-term capabilities of the company, given consistent supply today. Led by CEO Bob Swan, the company has embarked on an aggressive assault on a nearly $300B TAM driven by growth in what the company defines as “data-centric.” 5G, IoT, and AI will define the new data created, how it will be processed, and what we will do with it, and Intel’s investments are squarely focused on that. With the new business strategy comes a new technical approach that dramatically broadens the view of what processing means and recognizes that software is half the game. Intel has retargeted its software resources at the right time to capitalize on its “XPU” capabilities and has one-upped the industry with its all-encompassing oneAPI.
This is going to be a great ride, and I will be following closely the whole way.