Micron, Kobayashi Maru And The Future Of The Data Center

Over the years, we have all witnessed the vendor slide predicting the explosion of data. Regardless of the numbers and timeframe, the takeaway is always the same: an unmanageable amount of data sometime in the future.

For technology providers, it appears to be a Kobayashi Maru no-win scenario. Customers want more data, faster, more securely, using less energy and, of course, at lower cost. James T. Kirk beat the Kobayashi Maru by reprogramming the simulation. In the technology industry, rock stars are "reprogramming the simulation" with innovation to delay or even avert the data doomsday scenario.

I was recently fortunate to sit down with two of those industry rock stars - Jeremy Werner, Corporate Vice President and General Manager of the Micron Storage business unit, and Raj Hazra, Senior Vice President and General Manager of the Micron Compute and Networking business unit. 

It is not just about the processors

Processors drive the future of the data center. At least, that is what we have been programmed to think for the last 30 years. And yes, there is no doubt that processors are an essential part of the data center, but without access to data, processors sit idle.

Turning exponentially growing data into insight, action, and value is the end game. The data economy depends on insight, and that priority is changing the data center. 

While the focus has been on the compute side, there is a growing opportunity for memory and storage hierarchy innovation. Micron is at the forefront of significant memory and storage innovation where data, not compute, is at the system's center.

Memory and storage hierarchy innovation

The simple truth is that system memory bandwidth cannot keep pace with CPU core growth, especially with heterogeneous computing that features a CPU coupled with accelerators including GPUs, FPGAs, and ASICs to address data-centric workloads.

A focus of innovation is to find ways to overcome the CPU-memory and memory-storage bottlenecks. 

New advances in flash

The 176-layer NAND is the most advanced memory and storage technology in production today. Micron's fourth generation of stacked design achieves capacities that can store terabytes of photos, videos, and music. The data cells are stacked in hundreds of layers, providing space for billions of bits of information on a single chip.

Micron recently announced a new portfolio of fourth-generation solid-state drives named the Micron 7400 SSD with NVMe. The PCIe Gen4 performance SSD delivers almost a million input/output operations per second (IOPS) at just six watts, doubling the throughput and IOPS per watt of the previous generation. It is a fully vertically integrated product in that Micron develops the NAND, the DRAM, the controller, and the firmware.
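A quick back-of-envelope calculation puts those figures in perspective. The sketch below uses the rounded numbers quoted above (roughly a million IOPS at about six watts); the exact figures will vary by model and workload.

```python
# Back-of-envelope power efficiency from the rounded figures quoted above.
# These are approximations for illustration, not official Micron specs.
iops = 1_000_000   # ~1 million I/O operations per second
watts = 6          # ~6 W power draw

iops_per_watt = iops / watts
print(f"{iops_per_watt:,.0f} IOPS per watt")  # ≈ 166,667
```

Efficiency per watt, rather than raw throughput alone, is the metric that matters at data center scale, where power and cooling dominate operating cost.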

The SSD solution comes in three different Enterprise and Data Center Standard Form Factor (EDSFF) variants, representing the next-generation form factor custom-designed to optimize flash in the data center. Hard drives continue to be used heavily for colder data, but as flash technology has become more affordable, many hard drive applications have moved to flash. Most high-performance applications in the data center rely on flash; optimizing the form factors for flash improves performance while reducing footprint and energy consumption.

The new SSD solutions are available in various form factors, from 400 GB to 7.68 TB of storage. The 7400 SSD is backward compatible with PCIe Gen3 and supports Open Compute Project (OCP) deployments.

Best-in-class silicon technology

In the silicon business, everything starts with best-in-class silicon technology. It has taken several years of hard work for Micron to assume the leadership position in both dynamic random-access memory (DRAM) and NAND flash process technologies simultaneously for the first time.

Micron’s 1-alpha DRAM is the smallest linewidth geometry memory available today. Producing this technology requires highly specialized tools, utilizing precision materials, and operating in an enormous ultra-clean semiconductor fabrication plant.

High Bandwidth Memory (HBM) has a faster connection to CPUs than the current socket-based scheme. HBM stacks memory dies vertically and links them to the processor through an interposer layer, a single system-on-a-chip (SoC) style design. The arrangement provides a lower latency, higher bandwidth connection than the socket-based DRAM scheme. HBM has become vital to complex, high-performance heterogeneous SoCs.

CXL is a big deal

Far memory is a tier between DRAM and flash that provides lower cost per GB than DRAM and higher performance than flash. Far memory addresses the fundamental bottleneck in CPU-centric platform design, made possible by an open industry standard called Compute Express Link (CXL) that connects shared memory to processing devices such as CPUs, GPUs, and application-specific accelerators.

CXL is garnering much industry support due partly to its capability to provide the architectures critical to running AI workloads at scale. CXL makes it possible to add both memory capacity and memory bandwidth without being constrained by what can be directly attached to a processor.

I have talked personally with the largest hyperscalers in the world and, although they won't talk about it publicly, all of them are rearchitecting their data centers to support it. Imagine having memory as fungible as compute and storage are today. Need more memory? Add a new bank of memory without requiring a CPU.
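The "fungible memory" idea above can be sketched in a few lines. The toy model below shows hosts borrowing capacity from a shared pool rather than being limited by their local DIMM slots; the class, names, and sizes are illustrative assumptions, not Micron's or the CXL consortium's design.

```python
# Toy model of a disaggregated "far memory" pool of the kind CXL enables.
# All names and capacities here are illustrative assumptions.

class MemoryPool:
    """A shared pool of memory that any host can borrow from and return to."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}

    def allocate(self, host: str, gb: int) -> bool:
        used = sum(self.allocations.values())
        if used + gb > self.capacity_gb:
            return False  # pool exhausted; grow the pool, not the CPU count
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        # Returning memory makes it immediately available to other hosts.
        self.allocations.pop(host, None)

pool = MemoryPool(capacity_gb=1024)
assert pool.allocate("host-a", 512)
assert pool.allocate("host-b", 256)
assert not pool.allocate("host-c", 512)  # only 256 GB left in the pool
pool.release("host-a")                   # host-a hands its 512 GB back
assert pool.allocate("host-c", 512)      # now it fits
```

The point of the sketch is the last three lines: capacity freed by one host is instantly usable by another, which is exactly what socketed, CPU-attached DRAM cannot do today.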

Intelligently manage data placement

Micron also recently announced a redesigned heterogeneous memory storage engine called HSE 2.0. The storage engine intelligently manages data placement across disparate memory and storage media types. Unlike traditional storage engines written for hard disk drives, HSE targets the high throughput and low latency of storage-class memory (SCM) and SSDs. It enables IT to place data on different media according to how it will be accessed, optimizing performance, cost, and data availability.
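The placement idea can be illustrated with a minimal sketch: route each object to a media class based on how hot it is. The tiers, thresholds, and function names below are assumptions for illustration only; they are not the HSE 2.0 API.

```python
# Illustrative temperature-based data placement, in the spirit of what a
# heterogeneous storage engine does. Tiers and thresholds are assumptions.

def place(accesses_per_day: int) -> str:
    """Route a data object to a media class based on its access frequency."""
    if accesses_per_day > 1000:
        return "DRAM cache"   # hottest data stays in memory
    if accesses_per_day > 10:
        return "NVMe SSD"     # warm data lives on flash
    return "HDD"              # cold data lands on hard drives

assert place(5000) == "DRAM cache"
assert place(50) == "NVMe SSD"
assert place(2) == "HDD"
```

A production engine tracks access patterns continuously and migrates data between tiers over time, but the cost/performance trade-off it optimizes is the same one this three-way decision captures.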

Wrapping up

Traditional data center systems can no longer keep up. The solution is to deploy heterogeneous compute with innovation across the memory and storage hierarchy, made possible by the new high-bandwidth, low-latency CXL interconnect.

It was awe-inspiring to feel the passion from these two industry rock stars. We are entering the golden age of memory and storage. The team at Micron is redefining the memory and storage hierarchy and overcoming bottlenecks in compute memory and storage connections.

The pace of innovation is such that five years from now, we'll look back at our current data centers and say, "wow, we did reprogram the Kobayashi Maru simulation, and the world continued." That is exciting. 

Note: Moor Insights & Strategy writers and editors may have contributed to this article. 

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn't getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of "power" (ARInsights) and "press citations" (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq (now HP), and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.