Intel’s Lisa Spellman kicks off its Data Centric Innovation Summit in Santa Clara. PATRICK MOORHEAD
I had the opportunity to attend Intel’s “Data-Centric” Innovation (DCI) Summit last week with a small group of industry analysts, financial analysts, and press, and wanted to share my quick thoughts. The event skewed a bit toward financial analysts, but there was some very good content for us industry analysts, too. One-on-one time with Intel’s highest-level executives, who were very accessible, made for an overall solid event.
Intel’s data-centric business comprises a wide variety of businesses, including Xeon processors, FPGAs (Altera), AI (Nervana and cross-product AI), memory and storage (including Optane), networking and carrier businesses, industrial IoT, and even Mobileye. I believe Intel had one thing to accomplish at the event: to communicate the breadth and size of its DC opportunity, that it has legs, and that there’s a plan to address the competition. Did the company accomplish this? There was a lot to digest, but I will hit the highlights.
Intel DC lead Navin Shenoy kicked off the day talking about data as the new oil, the fact that we’re only using 1% of data, and the huge industry opportunity. We have all seen the examples, and as an industry we agree; we have written lots on this. Data is valuable, and it needs to be stored, moved, and processed at alarming rates at the deep edge, the edge, and the cloud. Shenoy talked about the no-brainer growth drivers and enablers, too, including network transformation, 5G, edge compute, AI, and the cloud. He announced that Intel increased its forecasted TAM for its DC businesses from $160 billion in 2021 to $200 billion in 2022. Intel guided up all its segments’ TAMs, including a $20B increase in the data center, a $20B increase in storage and memory, $3B in IoT and ADAS, and a $1B increase in FPGA, all of which pass my initial smell test as a technology analyst and researcher.
Next, Intel dove into networking, an interesting choice given many people forget how much progress the company has made in this space and just how big the opportunity is, with an Intel-estimated $24B silicon TAM. Intel talked about how much internetworking is required inside the datacenter, which makes a lot of sense to me given its re-architecture from a primarily north-south configuration to an east-west configuration driven by scale-out designs. Intel quickly did a silicon photonics victory lap, announcing it has shipped 1M SP products, which, I will say, shocked me, as the last time I talked with the company the products had been pushed out a year. Full transparency: this could have been me not paying attention as closely.
Intel’s Navin Shenoy discusses Intel’s networking business. PATRICK MOORHEAD
Alexis Bjorlin then went into an expansion of its NIC business with what the company is calling a “SmartNIC,” code-named Cascade Glacier. Cascade Glacier accelerates network performance and, because it is based on Intel Arria 10 FPGAs, can even do analysis on the data. Accelerating traffic at the card limits the amount of processing done on the server host CPU, which gets slowed down as data has to cross many bus hops like PCIe. I am expecting Moor Insights & Strategy networking analyst Will Townsend to dig into the details. Net-net, I am impressed with all the progress I am seeing Intel make in datacenter networking, and I am equally impressed with what Intel is doing with the carriers, where network transformation and edge computing are happening at full tilt with 5G.
After networking, Intel dove head-long into storage, where it has been driving a ton of business in flash and redefining the storage tier with Optane DC persistent memory, which sits in between DRAM and SSDs. Optane DC is much faster but more expensive than SSDs, and much less expensive but lower performance than DRAM. Intel claims that Intel Optane DC persistent memory-based systems can achieve up to 8 times the performance versus DRAM. Intel got Bart Sano, Google’s vice president of Google Cloud Platforms, on-stage to sing Optane DC’s praises for SAP cloud workloads running on GCP, and Intel celebrated making FCS (first customer shipment) to Google that day. Optane DC requires Intel’s next generation of Xeon processors, called Cascade Lake, available in 2019. That same day, Intel made QLC 3D NAND announcements at Flash Memory Summit, which Moor Insights & Strategy storage analyst Steve McDowell and I covered in detail. You can read that blog here.
Intel’s Navin Shenoy discusses Intel’s storage business. PATRICK MOORHEAD
Last but not least, Intel talked about Xeon, which is what most people think of when they think Intel “datacenter” or “data-centric.” Both of these businesses are doing phenomenally well, with double-digit growth. Intel kicked off this discussion with a 20th-anniversary Xeon celebration victory lap, followed by some details on how Xeon Scalable has progressed since its launch in July 2017. You can see my launch analysis here. Intel announced it shipped more than 2 million units in Q2 2018, and in the first four weeks of Q3 it shipped another 1 million units. I am pretty sure Intel disclosed these numbers so that financial analysts would go ask AMD how many Epyc server processors it shipped in the same timeframe, a bit of “PR jiu jitsu.”
Intel’s Navin Shenoy discusses Intel’s Xeon Scalable progress. PATRICK MOORHEAD
Intel then did a fast fly-over of its datacenter “AI” business, including a disclosure that in 2017 it registered over $1B in revenue from “customers running AI on Intel Xeon processors in the data center.” I took away a few interesting things from this disclosure. First, I believe Intel must have confidence in growing that number a lot, as financial analysts will pester it mercilessly for it every quarter. Second, this number is a lot lower than what NVIDIA includes in its datacenter business, which is a combination of AI, ML, and DL, but also VDI and remote desktop workloads. Third, I found the number conservative, as it doesn’t include inference done on Altera FPGAs or Movidius chips. All I can make of this conservatism is that Intel intends to blow away the future number with Xeons and doesn’t need FPGAs or Movidius to do so. At this point, Intel went into how it intends to increase this “AI” revenue number.
Intel’s Navin Shenoy discusses Intel’s AI 2017 revenue. PATRICK MOORHEAD
Intel started by noting that since 2014, it has improved AI training and inference performance by 200X on specific AI workloads. These figures pass my smell test, but I want to be clear: none of this is intended to compete head-on with NVIDIA’s latest and greatest training scores. Intel intends Nervana to start doing that in 2019. Intel’s Xeon improvement has been delivered primarily through a combination of on-Xeon accelerators and a lot of software optimizations, useful mainly for inference. This was the setup for the AI and ML improvements in the next three generations of Xeons.
Intel’s Navin Shenoy discusses Intel’s AI performance on Xeon. PATRICK MOORHEAD
Intel then pulled a rabbit out of its hat related to the AI capabilities of Cascade Lake, the next Intel Xeon Scalable processor, which is built on 14nm and is the first to support Intel Optane DC persistent memory. Intel disclosed at the summit that the part has a new set of AI features called “Intel DL Boost.” While not the catchiest of names, it does exactly what it says it will do: boost inference workloads. Intel threw out an 11X improvement on ResNet-50 over the prior Xeon Scalable processor. For workloads that don’t need the fastest inference, or that want to run inference and apps on the same server or at off-hours, this solution could make a lot of sense. Datacenters will compare this to running their inference on dedicated GPU or FPGA cards in PCIe slots.
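For readers curious why integer instructions speed up inference: int8 inference works by quantizing float weights and activations into 8-bit integers, accumulating the dot products in wider integers, and dequantizing the result. A minimal Python sketch of that pattern follows; the values and scale factors are made up for illustration, and in hardware instructions like DL Boost do the multiply-accumulate step per clock rather than in software:

```python
# Illustrative sketch of int8 quantized inference arithmetic, the kind of
# operation instructions like Intel DL Boost accelerate. All numbers here
# are made-up examples, not from any real model.

def quantize(values, scale):
    """Map floats to int8 by dividing by a scale factor and rounding."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(weights, activations, w_scale, a_scale):
    """Dot product done in int8, accumulated in a wide integer,
    then dequantized back to a float at the end."""
    qw = quantize(weights, w_scale)
    qa = quantize(activations, a_scale)
    acc = sum(w * a for w, a in zip(qw, qa))  # wide-integer accumulate
    return acc * w_scale * a_scale            # dequantize

weights = [0.5, -1.25, 0.75]
activations = [2.0, 1.0, -0.5]
exact = sum(w * a for w, a in zip(weights, activations))
approx = int8_dot(weights, activations, w_scale=0.01, a_scale=0.02)
print(exact, approx)  # the int8 result closely tracks the float result
```

The accuracy cost comes from the rounding and clamping in `quantize`, which is why int8 is used far more for inference than for training.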
Intel’s Navin Shenoy discloses Intel’s Cooper Lake. PATRICK MOORHEAD
Coming surprisingly quickly after Cascade Lake, in 2019, is the new “Cooper Lake.” Cooper Lake is based on 14nm and contains a new DL Boost feature for training, called “Bfloat16.” Intel also said it contains improvements to I/O and to Intel Optane DC persistent memory. I haven’t been briefed yet, but I am guessing the “I/O improvements” will focus on Xeon Scalable’s on-die interconnect and maybe even improved single-socket bandwidth to counter AMD’s Epyc single-socket architecture and work better with Optane. While I find it hard to imagine Bfloat16 adders competing with a high-end discrete NVIDIA or AMD graphics card for the highest-performance ML and DL training, I could see Xeon attacking lighter training loads that need super-low latency or real-time learning.
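For context on why Bfloat16 matters for training: bfloat16 is essentially a float32 with the low 16 mantissa bits dropped, so it keeps float32’s 8-bit exponent (and thus its full dynamic range, which training gradients need) while halving memory and bandwidth. A minimal Python sketch, assuming simple round-to-nearest, of what storing a value as bfloat16 does to precision:

```python
import struct

def to_bfloat16(x):
    """Round a float to the nearest bfloat16-representable value.

    bfloat16 keeps float32's sign and 8-bit exponent but only 7
    mantissa bits, so we round the float32 bit pattern and mask
    off its bottom 16 bits.
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]  # float32 bit pattern
    bits = (bits + 0x8000) & 0xFFFF0000                  # round, then truncate
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(to_bfloat16(3.14159))  # 3.140625 -- only ~3 decimal digits survive
print(to_bfloat16(1e38))     # huge magnitudes still fit; float16 would overflow
```

The trade is coarse precision per value, which deep-learning training tolerates well, for the same exponent range as float32, which it does not tolerate losing.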
I see Cooper Lake as the gap filler for those who need its enhanced I/O, ML, and storage features but can’t wait for 10nm Ice Lake in 2020. Datacenters will have to get more focused on their workloads to make that decision. Some will complain about Intel slotting in Cooper Lake and pushing out Ice Lake, but I’m glad to see Intel give customers choice.
Intel ended its DCI show by reminding everyone that it doesn’t just do chips; it does platforms and full solutions, the epitome of vertical integration short of shipping its own branded engineered systems. Last year, Intel raised the bar with its Select Solutions, essentially workload-focused bundles of tested and performance-optimized Intel hardware and software. They don’t go as far as engineered systems from the big OEMs, but close. Intel is doing Select Solutions to increase the likelihood that customers buy more Intel hardware, but they also decrease the time to market of these solutions. Consequently, this also creates barriers to entry for anyone who dreams of getting into the datacenter chip space. Intel introduced three more Select Solutions, for AI, blockchain, and SAP HANA. Note that all these solutions include Xeons, Optane, and Intel SSDs.
Intel’s Navin Shenoy discusses Intel’s Select Solutions. PATRICK MOORHEAD
I believe Intel accomplished nearly everything it set out to do on the day. It showed without a doubt that its “data-centric” TAM opportunity goes beyond Xeon server processors and into incremental networking, storage, carrier, and AI opportunities. It didn’t come right out and overtly explain how it would compete with AMD’s server resurgence or NVIDIA’s training business, but I don’t think that was the time or the place, as Intel’s TAM and SAM opportunity is a superset of those. I do think the “I/O” features Intel hinted at in Cooper Lake, the special instructions, and Select Solutions will be directed at AMD’s Epyc, and we know that Nervana and a “future datacenter GPU in 2020” are planned to compete with NVIDIA’s highest-end Volta and beyond. I will wait until I see some third-party ML inference and training benchmarks on Cascade Lake and Cooper Lake before commenting on competitiveness. All in all, it was a good day for Intel.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.