I’ve been a bit remiss in getting this out, but I thought it was important to talk about Intel’s Data-Centric Day in San Francisco beyond my tweetstorm and numerous press citations. The big story from the event is that Intel is now the full datacenter technology provider to beat, with huge investments in compute, storage, and networking. This is the new Intel. It’s also important to reiterate that Intel is all-in on heterogeneous compute across CPU, GPU, FPGA, and ASICs. Let’s take a closer look at what was announced.
First, Intel announced the general availability of its 2nd generation Intel Xeon Scalable processor, which it says is optimized for AI, 5G, and data processing from the edge to the cloud. I think this is a fair characterization with the proof to back it up. In addition to improved performance, the new processors feature advancements in AI inference, network functions, persistent memory bandwidth, and security.
The 2nd generation boasts Intel’s Deep Learning Boost technology (DL Boost for short), which is geared towards AI inference workloads at the edge, in the enterprise, and in the datacenter. The most interesting thing for me about the new Xeons is the addition of DL Boost—when latency counts, it will make these processors well-suited for specific inference workloads like recommendation engines at places like Amazon and Netflix. Not many people know that CPUs already dominate ML inference usage; this just gave datacenters another reason to keep running certain workloads there (side note: this isn’t Intel’s big discrete AI accelerator play—that’s slated for the end of 2019 and 2020). While I’m at it, I will also remind everyone that DL Boost isn’t intended to compete with, say, NVIDIA’s V100, which is a training beast. The friction with NVIDIA will happen versus its lower-end inference cards like the P4.
The 2nd generation Xeons also feature support for Intel’s Optane DC persistent memory, a very big deal for scale-up database workloads, with a threefold increase in memory capacity over the first generation Xeon (up to 36TB of system-level memory, when combined with DRAM in an eight-socket system). For applications like SAP HANA, this could radically improve TCO and speed. I’m really excited about Optane DC persistent memory and its ability to disrupt the storage/memory tier. I think there’s a good chance it will become a future “no-brainer” in big data apps. Intel also says it managed to boost Xeon’s raw performance by 33% in the mid-range. This was surprising to me; I think it could put the company in a better competitive position, and I’m looking forward to third-party competitive comparisons.
In addition to the launch of the 2nd Gen Xeon processors, Intel announced over 50 (yes, 50) workload-optimized Xeons. These include the new 12-memory-channel Xeon Platinum 9200 processor, a real beast with up to 56 cores per socket, which should deliver some incredible DDR memory bandwidth for HPC, AI, and high-density infrastructure workloads. Intel may have made fun of other packaging designs previously, but here we have Intel doing a very similar thing.
Intel also announced a family of network-optimized Xeons, a joint effort between the company and communications service providers to increase subscriber capacity and reduce NFV infrastructure bottlenecks. I believe the Xeon’s new tuning and management capabilities, like Speed Select and Resource Director, will also be strongly accepted by cloud and communications service providers. This was a great way to differentiate its offerings, and I hope to see more features like this from Intel for specific apps and workloads in the future.
Additionally, Intel announced the Xeon D-1600, an SoC geared towards dense environments with limited space and power. Intel says the D-1600 will assist its customers along the way towards 5G and the intelligent edge. The D-1600 will have a lot of competition on its hands, and I am looking forward to seeing how this space grows. Our 5G research suggests a massive boost in spending on the edge, be it for 5G, smart factories, or cloud gaming.
Also announced at the event was Intel’s next generation of FPGAs, called Agilex, built on a 10nm design. Geared towards the edge, networking, and datacenters, this new family of FPGAs will provide application-specific optimization and customization for data-intensive infrastructure. Agilex is CXL-capable, which requires CPU sockets to plug into, and it features an easier ASIC ramp through the eASIC tie-in. I like the ramp to ASICs, as I was always concerned with losing the socket to ASICs from companies like Broadcom. With eASIC, I can see Intel really putting the hurt on Broadcom.
I believe the industry will be surprised at Intel’s 5G base station market share when the dust clears. Intel has been quietly gaining networking market share over the last eight years. Networks used to have five architectures but are now down to two (x86 and Arm) as they continue to “cloudify.” Intel has benefitted from cloudification and the reduction to two architectures, and I believe this will continue, in part, due to new technologies like the Agilex FPGAs, specialized NFV SKUs, Xeon SoCs (like the D-1600 announced today), and a full family of network adapters (including the new Ethernet 800 Series, also announced at the event). Capable of 100GbE, the new Ethernet 800 Series is geared towards moving huge amounts of data in communications, cloud, storage, and more.
There were a lot of announcements to digest at Intel’s Data-Centric Day, but the big story is that Intel is now the full datacenter technology provider to beat. Make no mistake—it has made all the requisite investments in compute, storage, and networking, and is now a force to be reckoned with in the sector, not just x86 compute. With its optimizations for AI, edge, and 5G, the 2nd generation Xeon Scalable Processors mark a new chapter for the industry giant. I’m looking forward to seeing what comes next.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.