Every analyst and pundit will have an opinion on the launch of Intel’s 3rd Generation Xeon Scalable Platform. Many will pull up a chart that compares Ice Lake’s top-level specifications against its competition and give a thumbs up or thumbs down. I’m going to look at this launch a little differently and explain why I think Intel’s Ice Lake launch is impressive. Hint – it’s not just about cores.
First – the datacenter infrastructure market has changed
The datacenter infrastructure market is not what it was a few short years ago. Call it cloudified. Call it software-defined. Call it whatever you want, but the era of a datacenter populated exclusively with traditional 2P/2U rack servers powered by Xeons is gone.
The cloud’s impact has been such that IT organizations look at infrastructure and expect it to “just support” the emerging workloads being relied on to drive the business, without as much regard for the performance characteristics and requirements of said workload. From traditional virtualized infrastructure to ERP to HPC and AI, IT expects its underlying server infrastructure to support every workload seamlessly and efficiently. Yes, IT has been trained to look at infrastructure as fungible resources.
Contrast this dynamic with reality. The modern workloads powering the datacenter have unique requirements. Workloads like AI and HPC benefit from compute microarchitectures that are aligned to software architectures, CPU instructions that allow modern applications to run faster and more efficiently and memory footprints that enable more data to be stored closer to compute. Furthermore, application accelerators that help offload processing for those workloads are a necessity.
Yes, the world has changed – data is generated everywhere, by everything. The successful business is an organization that can collect, analyze and transform data faster than the competition – exactly what makes Intel’s launch so interesting.
Second – Intel has positioned itself well
Before hitting on what was announced, it’s essential to make a note of a theme Intel has promoted for some time: “Move faster. Store more. Process everything.” The company has aligned its portfolio to these three pillars for the last few years, as seen in the graphic below.
This is a very clean and simple way of positioning products and solutions across Intel’s comprehensive portfolio. It is simple for IT solutions providers to position and deliver services, and simple for IT consumers to map products and solutions to their specific needs.
More importantly, the above demonstrates Intel’s understanding of my initial point – the needs of the modern IT organization go beyond datacenter racks populated by industry-standard servers. While these servers are critical, many of the workloads powering today’s business, such as AI and big data analytics, require more to run optimally.
Third – what was announced
In the following few paragraphs, I will broadly cover each of these categories before going a little deeper into Ice Lake’s details.
Move Faster: The focus of Intel’s Ethernet 800 Series Network Adapters is flexibility – the ability to prioritize traffic based on workloads, greater support for high throughput and low latency storage for workloads such as HPC and cloud.
Store More: Intel Optane Persistent Memory 200 is about performance. Using Optane, Intel claims around twice the performance of the previous generation for workloads such as graph analytics. This matters in the realms of search engines, fraud detection and social networks – the very things IT consumers worry about every day.
Optane’s SSD products can deliver on absolute best speed (P5800X) or capacity (D5-P5316). Traditional enterprise workloads that crave performance (VDI, HCI, database) can measurably benefit from the P5800X. At the other end of the spectrum, workloads that require fast access to large datasets would be good candidates for the D5-P5316 (e.g., CDN, AI, HPC). Consider this – with the D5-P5316, a 1U form factor can store up to 1PB.
Process Everything: The Agilex focus for this launch appears to be performance improvements and flexibility. Intel claims several impressive performance stats, including 2x better fabric performance per watt, 50% faster performance for processing video applications and much faster performance for enabling 5G fronthaul gateway apps.
What about Ice Lake?
For a lot of IT folks, this is the million-dollar question. All of these other products are important, but IT professionals succeed or fail based on server architectures.
First, Xeon has many improvements that bring it closer to parity with AMD’s EPYC processor. There are also areas where Intel has leapfrogged AMD, such as memory capacity.
The table above is not meant to be any real comparison illustrating Intel or AMD superiority in one area or another. Rather, it is included to demonstrate that Intel understands where its vulnerabilities have been relative to AMD and has taken steps to address those shortcomings.
One good example of this is in support of single-socket SKUs. Since the launch of EPYC, AMD has claimed significant leadership in the single-socket space. Indeed, the EPYC single-socket SKUs did not scale back on features or capabilities. These were fully-featured CPUs for organizations looking to capitalize on the economic benefits of single-socket servers. With the 3rd Gen Xeons, Intel has responded in kind. Is this a validation of AMD’s approach? Sure. More importantly, this signals a very welcome change in how Intel is listening (and responding) to the market.
One other example is Ice Lake’s improved security. This is an area where I believe the company has made substantial progress relative to its competition. In past generations of Xeon, SGX was considered a complex technology to adopt: its memory enclaves were considered too small, and applications required modification to use them. In Ice Lake, enclave capacity has been expanded considerably (up to 512GB), with relatively simple tooling to support applications and data. TME (Total Memory Encryption) encrypts physical memory, protecting against attacks that physically remove and scrape DIMMs.
In addition to these microarchitectural changes, Intel has continued to drive workload acceleration through instruction sets that are transparent to the customer. For example, AVX-512 may mean nothing to an IT person; they simply want to know that a CPU can make their HPC workload run faster. The table below does a good job of summarizing the instructions Intel has built into its 3rd Gen Xeon and how they impact specific workloads.
What this all means…
I believe the launch of the 3rd Gen Xeon Scalable processor is a significant milestone for Intel for two reasons:
- The company has strengthened its position against an immediate threat posed by EPYC in the datacenter by closing the feature gap and doubling down on crucial enterprise workloads. This is an important step in maintaining credibility with enterprise IT professionals focused on topline speeds and feeds.
- The company has positioned itself well for the future as it continues to build out a portfolio of IP and products that address today’s emerging workloads – the mainstream of tomorrow’s datacenter.
Intel specifically used the term 3rd Generation Xeon Scalable Platform in the opening of its launch. I think this is very appropriate as it takes a fully optimized and integrated platform to drive the applications and workloads that populate the datacenter – networking, storage and compute to move, store and process everything.
Strangely, despite the improvements across the portfolio in general and to Xeon in particular, my biggest takeaway from this launch is that Intel is listening to its customers and getting ready for where the market is going to be over the next few years. Anybody who has read “The Innovator’s Dilemma” understands how difficult it is for market leaders to stay ahead of the next wave of innovation. I believe Intel has figured this out.