Data has been a strategic resource since the beginning of IT. The model is simple: organizations and sensors collect data, and analysts mine it, analyzing it from every conceivable dimension, all in an attempt to turn that data into something tangible that makes money or saves money. The world is producing more data than ever before, and the velocity and volume of data flooding into the data center are overwhelming. This is so evident that even I roll my eyes when the stats come in, so I won't rehash them. The reality is that this cycle isn't slowing down; it's actually speeding up, and storage systems are getting full, so something needs to happen, right? Read on…
A changing storage landscape
At the same time, the science and practice of analytics and machine learning are advancing at an exponential rate. Compute and memory are cheaper and more powerful than at any point in the history of computing. Affordable GPUs from NVIDIA and Advanced Micro Devices, along with widely available open source analytics and deep learning software, are putting the power of deep analytics into the hands of any organization that wants it. Organizations across the spectrum are leveraging these newfound capabilities to look at data that used to be either archived or discarded, finding previously unknown or hard-to-identify insights. The strategic value of data is escalating.
This revolution in processing data dramatically changes how IT organizations need to think about data center architecture. Nowhere is the impact of these new data processing models felt more strongly than in storage architecture. It's one thing to replace existing compute resources with new machines capable of mining and analyzing data; many times it just calls for a simple rack substitution. It's a completely different beast, however, to figure out how to keep those resources fed with a steady stream of data to process.
The past five years have seen a tremendous amount of innovation in the storage industry. Storage has evolved from boxes full of spinning disks into a range of solutions, topping out at super-fast flash storage with wire-speed encryption and deduplication. It's not slowing down in 2018; the rapid evolution in storage continues, with a range of new technologies emerging. You can read our senior storage analyst Steve McDowell's storage industry predictions here, if interested. I want to touch on some of the most impactful.
Where we’re at and where we’re headed
Given the increase in compute and memory capacities, along with the commoditization of virtualization technologies, storage systems have increasingly taken on a more diverse workload. Hyperconverged infrastructure (HCI) and converged infrastructure (CI) architectures have evolved out of this environment. HCI and CI allow enterprises to consolidate compute and storage in ways unheard of five years ago. This blurs the lines between IT responsibilities and requires you to think about data center architecture in a much more holistic fashion than was previously required.
This year we will see flash storage get faster as Intel expands its Optane capabilities further into SSDs and likely into server-class memory. Optane is a new memory technology, built on 3D XPoint, that provides orders of magnitude more endurance, along with lower latencies and higher throughput, than today's 3D NAND-based flash.
New interconnects designed to keep up with this massive flow of data are enabling the OEMs who build systems around technologies like Optane to fully leverage the capabilities of these devices. NVMe, a low-latency interface that attaches flash directly to the PCIe bus and bypasses traditional storage controllers, will be broadly adopted by most storage system providers in 2018. Similarly, we'll see NVMe over Fabrics (NVMe-oF) begin to emerge in offerings, bringing flash-class speeds to external storage. NVMe-oF will run across a host of transports, everything from InfiniBand to 100Gbps Ethernet. It will get confusing fast.
Optane isn't just about SSD speed. The technology is heading directly into memory slots on PCs and servers alike, in a form referred to as Storage Class Memory. Storage Class Memory brings the persistence usually associated with hard drives and SSDs, couples it with speeds that approach main memory, and gives application developers new persistence models for in-memory analytics and deep learning algorithms. Persistent storage stops being just about disk drives and SSDs.
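To make that persistence model concrete, here is a minimal sketch of the load/store style of programming that storage class memory exposes. It uses an ordinary memory-mapped file as a stand-in for a DAX-mapped persistent-memory region (the file path and size are illustrative); real persistent-memory code would typically use a library such as Intel's PMDK and explicit cache-line flushes rather than `mmap.flush()`.

```python
import mmap
import os
import tempfile

# Illustrative stand-in: a plain file instead of a file on a DAX-mounted pmem device.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
SIZE = 4096

# Create and size the backing file (a real pmem region would be pre-provisioned).
with open(path, "wb") as f:
    f.truncate(SIZE)

# Map it into the address space: reads and writes become plain loads and stores,
# which is the programming model storage class memory offers applications.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as mem:
        mem[0:5] = b"hello"  # a store into mapped memory, not a write() syscall
        mem.flush()          # on real pmem: a cache-line flush (e.g. CLWB) instead

# The data survives the mapping: persistence without going through the block-I/O path.
with open(path, "rb") as f:
    recovered = f.read(5)
os.remove(path)
```

The point of the sketch is the shape of the code: once persistent memory sits in a DIMM slot, durable state is updated with ordinary memory operations, and the application's job shifts to deciding when those stores are made durable.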
Hardware alone doesn't solve the problems of managing stored data quickly and effectively. Leveraging storage class memory and the faster NVMe and NVMe-oF flash will require significant operating system and application software support, which we're going to see more and more of in 2018. Microsoft, leveraging Intel's persistent memory reference code, began supporting storage-class memory in Windows Server 2016 and the Windows 10 Anniversary Update. That support is being extended in the next release of Windows Server. The most recent Linux kernels also support these technologies. Microsoft SQL Server takes advantage of storage class memory, as does Oracle. The in-memory compute world is rapidly garnering support.
The OEMs will give you the product perspective. It may not be obvious, but engaging an infrastructure partner, like an Intel, should be a critical part of any strategic conversation about storage. It is exactly this kind of technology partner, one who enables the underlying technology in products from virtually every storage solution and drives the work around industry standards, that will provide context and insight spanning what the OEMs can offer.
Storage matters in 2018, maybe more than in any recent year. Data is a strategic resource, and the mechanisms you leverage to take advantage of that data will become core to the success of your business. Data will drive your competitive advantage. Don't go it alone; a competitive data strategy should include broad conversations with data partners. Get perspective from the storage world as you define your future.
Note: Storage analyst Steve McDowell contributed heavily to this column.