Datacenter Storage Matters More Than Ever In 2018

By Patrick Moorhead - January 28, 2018

Data has been a strategic resource since the beginning of IT. The cycle is simple: organizations or sensors collect data, analysts mine it from every conceivable dimension, and the results are turned into something tangible that makes money or saves money. The world is producing more data than ever before, and the velocity and volume of data flooding into the data center are overwhelming. The statistics are so familiar that even I roll my eyes when they come up, so I won't rehash them. The reality is that this cycle isn't slowing down; it's speeding up, storage systems are getting full, and something needs to happen, right? Read on…

A changing storage landscape

At the same time, the science and practice around analytics and machine learning are advancing at an exponential rate. Compute and memory are cheaper and more powerful than at any point in the history of computing. Affordable GPUs from NVIDIA and Advanced Micro Devices, along with widely available open-source analytics and deep learning software, are putting the power of deep analytics into the hands of any organization that wants it. Organizations across the spectrum are leveraging these newfound capabilities to look at data that used to be either archived or discarded, and to find previously unknown or hard-to-identify insights. The strategic value of data is escalating.

This revolution in data processing dramatically changes how IT organizations need to think about data center architecture, and nowhere is its impact felt more than in storage. It's one thing to replace existing compute resources with new machines capable of mining and analyzing data; often that calls for a simple rack substitution. It's a completely different beast, however, to figure out how to keep those resources fed with a steady stream of data to process.
Traditional storage architectures evolved to meet the needs of traditional enterprise data processing workflows. When these workflows change or are supplemented with new data-intensive applications (such as analytics or processing Internet of Things (IoT) data), all of the underlying assumptions need to be challenged to ensure that your organization is deploying an architecture that will scale with its needs.
The past five years have seen a tremendous amount of innovation in the storage industry. Storage has evolved from boxes full of spinning-disk arrays into a range of solutions, topping out at super-fast flash storage with wire-speed encryption and deduplication. It's not slowing down in 2018; the rapid evolution in storage continues, with a range of new technologies emerging. You can read our senior storage analyst Steve McDowell's storage industry predictions here, if interested. I wanted to touch on some of the most impactful.

Where we're at and where we're headed

Given the increase in compute and memory capacities, along with the commoditization of virtualization technologies, storage systems have increasingly taken on a more diverse workload. Hyperconverged infrastructure (HCI) and converged infrastructure (CI) architectures have evolved out of this world. HCI and CI allow enterprises to consolidate compute and storage in ways unheard of five years ago. This blurs the lines between IT responsibilities and requires you to think about data center architecture far more holistically than was previously required.

This year we will see flash storage get faster as Intel expands its Optane capabilities further into SSDs and likely into server-class memory. Optane is a new memory technology that provides orders of magnitude more endurance, along with lower latencies and higher throughput, than today's 3D NAND-based flash. New interconnects designed to keep up with this massive flow of data are enabling the OEMs who build systems around technologies like Optane to fully leverage the capabilities of these devices. NVMe, a streamlined, low-latency interface for flash storage, will be broadly adopted by most storage system providers in 2018. Similarly, we'll see NVMe over Fabrics (NVMe-oF) begin to emerge in offerings, bringing flash speed to external storage.
NVMe-oF will run across a host of transports, everything from InfiniBand to 100Gbps Ethernet. It will get confusing fast.

Optane isn't just about SSD speed, either. The technology is heading directly into memory slots on PCs and servers alike, in a form referred to as storage class memory. Storage class memory brings the persistence usually associated with hard drives and SSDs, couples it with speeds that approach main memory, and gives application developers new models of persistence for in-memory analytics and deep learning algorithms. Persistent storage stops being just about disk drives and SSDs.

Hardware alone doesn't solve the problem of managing stored data quickly and effectively. Leveraging storage class memory and the faster NVMe and NVMe-oF flash will require massive operating system and application software support, which we're going to see more and more of in 2018. Microsoft, leveraging Intel's persistent memory reference code, began supporting storage class memory in Windows Server 2016 and the Windows 10 Anniversary Update, and that support is being updated in the next release of Windows Server. The most recent Linux kernels also support these technologies. Microsoft SQL Server takes advantage of storage class memory, as does Oracle. The in-memory compute world is rapidly garnering support.
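To make that new persistence model concrete, here is a minimal sketch using Python's standard mmap module. The file path and the single-counter layout are purely illustrative; on an ordinary disk this is just a memory-mapped file, but on a Linux filesystem mounted with DAX over persistent memory, the same load/store-style code operates directly on storage class memory with no page cache in between.

```python
import mmap
import os
import struct

PATH = "counter.dat"  # illustrative; on a DAX mount this region would be true persistent memory

# Create a small backing file once. On persistent memory, this region survives power loss.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 8)             # map the file into our address space
    (count,) = struct.unpack_from("<Q", buf, 0)
    struct.pack_into("<Q", buf, 0, count + 1)  # update in place, load/store style, no write() call
    buf.flush()                                # on real pmem, analogous to flushing CPU caches
    buf.close()
```

Each run increments the counter by writing straight into the mapped region, which is the programming model storage class memory brings to applications: persistence at memory speed, without a block-I/O path.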
Intel Optane, HCI, CI, NVMe, object storage, and other new technologies are just the building blocks that will allow your IT organization to craft a unique solution for its data needs. Defining those solutions is where it gets really interesting. A new storage tier will become prevalent in 2018, and it will revolutionize the way application architects think about performance. Top-tier storage will begin to leverage storage class memory, the second tier will leverage SSDs, and the lower tiers will stick to more traditional storage technologies. Data will move quickly between these tiers: ingested, processed, analyzed for insights, then stored for a while before it's revisited and analyzed yet again.
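That movement between tiers can be sketched as a toy placement policy. The tier names and the idle threshold below are illustrative assumptions, not any vendor's actual algorithm; the point is the shape of the logic: accesses promote data to the fastest tier, and idle data drifts down toward capacity storage.

```python
from time import monotonic

# Illustrative tier names, fastest first; a real system would map these to
# storage class memory, NVMe SSDs, and traditional capacity storage.
TIERS = ["storage-class-memory", "nvme-ssd", "capacity"]

class TieredStore:
    """Toy placement policy: hot data sits in the top tier, idle data is demoted."""

    def __init__(self, cold_after=60.0):
        self.placement = {}          # key -> (tier index, last-access timestamp)
        self.cold_after = cold_after # seconds of idleness before demotion (assumed value)

    def access(self, key):
        # Any access promotes the object back to the fastest tier.
        self.placement[key] = (0, monotonic())

    def demote_idle(self, now=None):
        # Push anything idle longer than cold_after down one tier.
        now = monotonic() if now is None else now
        for key, (tier, ts) in self.placement.items():
            if now - ts > self.cold_after and tier < len(TIERS) - 1:
                self.placement[key] = (tier + 1, ts)

    def tier_of(self, key):
        return TIERS[self.placement[key][0]]
```

A real system moves the bytes as well as the metadata, of course, but this promote-on-access, demote-on-idle shape is the core of the automated tiering described above.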
The takeaway 
What does all this mean for traditional IT departments? In short, disruption. It means rethinking storage architectures and re-examining your partners. Taking inventory of the data needs of business-critical applications, in the context of the new data tiering, will be critical to an enterprise's success. Your vendors may change. Traditional storage solutions from vendors like Dell Technologies, Hewlett Packard Enterprise, IBM, and NetApp will maintain their place, but they will be increasingly supplemented by early-moving technology companies. All-flash arrays are now mainstream, putting new players like Pure Storage on the map. HCI solutions are invading datacenters, giving rise to vendors like Nutanix, HPE's SimpliVity, and Dell Technologies' VCE group. All of this is enabled by the new technologies described in this article.
Enterprise IT can't assume it will understand all the nuances of the technology that is emerging. At the same time, you can't completely trust your traditional OEMs to keep you abreast of datacenter industry trends. Conversations across the board become critical. A smart organization has a number of external data partners spread across the spectrum of technology providers. Data partnerships should encompass your business-critical application partners (like Microsoft, Oracle, SAS, or SAP), server partners (like Hewlett Packard Enterprise and Dell), and traditional storage partners (like IBM and NetApp). They should also include emergent technology leaders (like Pure Storage and Nutanix) and core technology providers who work across the spectrum.
The OEMs will give you product perspective. It may not be obvious, but engaging an infrastructure partner, like an Intel, should be a critical part of any strategic conversation about storage. It is exactly this kind of technology partner, one who enables the underlying technology in products from virtually every storage vendor and drives the work around industry standards, that will provide context and insight beyond what the OEMs can offer.

Storage matters in 2018, maybe more than in any recent year. Data is a strategic resource, and the mechanisms leveraged to take advantage of that data will become core to the success of your business. Data will drive your competitive advantage. Don't go it alone; it's imperative that a competitive data strategy include broad conversations with data partners. Get the perspective of the storage world as you define your future.

Note: Storage analyst Steve McDowell contributed heavily to this column.
Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn't getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of "power" (ARInsights) and "press citations" (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics, including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.