Last week, Intel announced its 4th Gen Intel Xeon W-3400 and W-2400 processors for professional workstations. These new desktop processors come on the heels of Intel’s long-awaited overall launch of the 4th Gen Xeon processor, code-named Sapphire Rapids. You can read Moor Insights & Strategy senior analyst Matt Kimball’s coverage of Intel’s Sapphire Rapids launch for the competitive data center market here. You can also see a video I did here with Lisa Spelman, Intel Corporate Vice President and General Manager of Xeon Products.
Sapphire Rapids comes to market only after an unfortunate delay, but I also recognize that this is Intel’s next-generation distributed architecture, which includes new compute tiles, DDR5 and PCIe 5.0. I explained some of this delay, and in the bigger picture how Intel is in the middle of an architectural revolution, in my coverage of Intel’s Q4 2022 results and Q1 2023 guidance. For now, the most important thing is that Intel’s distributed architecture enables it to deploy a couple of very cool features that could make it worth the wait. Let’s look at some of these new features and the importance of these new Intel Xeon W processors for professional users.
The state of the workstation market in 2023
The workstation market has grown considerably over the past couple of years, not only because the pandemic left many remote workers with time for content creation and side projects, but also because of performance improvements stemming from advances in computing. You can read more about workstation market conditions in a recent Moor Insights & Strategy research paper by senior analyst Anshel Sag here. The gist is that the workstation market is very mature, with relatively few players. That said, the market is nonetheless changing significantly because of accelerating demand for computing power across many professional domains, along with the continued growth of work centered on content creation.
One example of a technology that’s creating huge demand for computing power is machine learning (ML). Right now we see this in all the headlines about ChatGPT and the rollout of Microsoft’s Bing AI and Google’s Bard, but in time generative AI will become ingrained in a huge number of workflows in virtually every area of business. While large language models (LLMs) and intensive AI workloads are currently on the cutting edge, data scientists and AI developers will continue to need further advancements in computing technologies to address the rapidly expanding applications of AI. Right now, these workloads live in the data center, but I see them moving to the client computer.
Meanwhile, professional content creation segments like 3-D graphic design, video effects, game development, CAD and CAE are becoming more accessible, both in the lower level of skill needed to operate the tools and in the higher level of computing performance available for them. Computers are becoming more performant, and with the assistance of AI and the availability of new software tools, the skills gap for multimedia content creation is shrinking. In these professional content creation segments, where time means money, raw performance alone does not always save the professional user time. In some cases, it is the platform features and capabilities that save the professional user time and money in creating and developing.
Intel’s innovative features and capabilities for workstations
The company’s new offering is the Intel Xeon W-3400 processor with up to 56 cores and 112 threads, 4TB of eight-channel DDR5 memory, 112 PCIe 5.0 lanes and Intel Deep Learning Boost (DLB). All 56 cores fit on a single processor, as opposed to requiring a dual-socket system. This high core count is achieved thanks to Intel’s advanced packaging, with four compute tiles interconnected by Intel’s embedded multi-die interconnect bridge (EMIB), a direct result of Intel’s new distributed architecture. Intel reports 28% higher single-threaded performance and a 120% boost in multi-threaded performance generation over generation. While this claim is not specific to any one segment or workstation for the new Sapphire Rapids desktop workstation processors, it drives the point home that there has been a significant improvement over the last generation.
I am more impressed with the platform features and capabilities that address the specific needs of professional users. These new desktop workstation processors support up to 4TB of fast error-correcting code (ECC) DDR5 RDIMM memory across eight channels (four on the W-2400 series) and up to 105MB of Intel Smart Cache. The higher memory bandwidth and higher speeds are crucial for memory-intensive applications like video editing, where an entire video project can sit in the system’s memory for real-time scrubbing. Likewise, large ML models that are memory-intensive can train faster thanks to the high memory bandwidth of DDR5. Smart Cache also cuts down on latency for complex workloads like code compilation (again, think large AI training models) and large 3-D rendering jobs.
The Sapphire Rapids workstation processors also benefit from Intel Advanced Matrix Extensions (Intel AMX) accelerators for real-time inferencing and training performance. AMX is an x86 instruction extension that uses matrices to accelerate AI and ML workloads. Because it sits inside the CPU cores and data is not required to cross PCIe to reach an external accelerator, it operates at very low latency.
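The core idea behind AMX is that the hardware multiplies whole two-dimensional tiles per instruction rather than individual scalars. A minimal pure-Python sketch of that tile-by-tile matrix multiplication is below; it is illustrative only (the tile size of 16 loosely mirrors AMX’s 16-row tile registers, and this model does not reproduce the actual instruction semantics or data types):

```python
def tiled_matmul(a, b, tile=16):
    """Multiply matrices a (MxK) and b (KxN) one tile at a time.

    Illustrative model only: AMX hardware multiplies entire 2-D tile
    registers per TMUL instruction instead of looping over scalars,
    which is part of why it avoids the latency of shipping data over
    PCIe to an external accelerator.
    """
    m, k = len(a), len(a[0])
    n = len(b[0])
    c = [[0] * n for _ in range(m)]
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for k0 in range(0, k, tile):
                # One tile multiply-accumulate, loosely analogous
                # to a single hardware tile operation.
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        acc = 0
                        for kk in range(k0, min(k0 + tile, k)):
                            acc += a[i][kk] * b[kk][j]
                        c[i][j] += acc
    return c
```

In practice, of course, no one writes this loop by hand; frameworks such as PyTorch reach AMX through optimized math libraries, so applications get the speedup transparently when the hardware supports it.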
Intel says that the 4th Gen Xeon processors with Intel AMX achieve up to 10x higher PyTorch real-time inference and training performance. PyTorch is an ML framework for applications like computer vision and natural language processing (NLP). This performance jump is massive and represents big time and cost savings when building LLMs. Considering how large LLMs are becoming, time spent training them is a huge factor; the more time developers can save, the better. Again, most of the LLM action is in the cloud today, but I fully expect this capability to make it to the client computer.
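For developers wondering whether a given machine can take advantage of this, Linux kernels that support AMX report the `amx_tile`, `amx_int8` and `amx_bf16` flags in `/proc/cpuinfo`. A small hedged helper to check for them (the parsing shown is a simple sketch, not an official Intel detection API):

```python
def has_amx(cpuinfo_text):
    """Return True if all AMX feature flags appear in /proc/cpuinfo text.

    On Linux, AMX-capable CPUs such as Sapphire Rapids expose the
    'amx_tile', 'amx_int8' and 'amx_bf16' flags. Pass in the contents
    of /proc/cpuinfo, e.g. open('/proc/cpuinfo').read().
    """
    required = {"amx_tile", "amx_int8", "amx_bf16"}
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # "flags : fpu vme ... amx_tile amx_int8 amx_bf16 ..."
            flags = set(line.split(":", 1)[1].split())
            return required <= flags
    return False
```

On a machine where this returns True, frameworks built against AMX-aware math libraries can use the accelerator without application changes.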
Intel’s 4th Gen Xeon processors also support Wi-Fi 6E and 2.5GbE wired connectivity for fast data transfers, as well as Intel vPro technology for hardware-enhanced security and manageability. These technologies are critical for professional creators and developers because they enable the secure handling of larger quantities of data. While I can’t imagine a hard-core workstation user choosing wireless connectivity over wired, I am sure a few do, and it’s good that it’s 6E. Don’t forget to buy a Wi-Fi 6E-capable router.
Intel’s Sapphire Rapids processors introduce Intel’s new distributed architecture and some incredible I/O, which enables higher performance and more features that address growing needs in content creation and other compute-hungry segments.
More memory and more bandwidth mean that professional creators and developers spend less time rendering content and training models. DLB and Intel AMX introduce better AI and ML capabilities that could yield considerable time and cost savings when the software supports them. I believe these features and capabilities will earn Intel strong support from software vendors and OEMs—and strong approval from professional end users who need serious advancements in computing power.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.