This blog was written by Steve McDowell, storage and HCI practice lead for Moor Insights & Strategy.
The cloud computing world is one filled with sudden pivots, swift turns, and sharp boomerangs. The early hopes that cloud computing, with its attractive cost/benefit equations and ease of management, would replace the enterprise data center were short-lived.
As enterprises migrated workloads to the various cloud providers, lessons were learned, and a new reality set in. Workloads require data, and data has gravity. It’s not a simple matter to move an application to the cloud and hope that your existing storage architecture provides the right set of services to support it. You have to deploy storage architectures designed to bridge on-premises infrastructure with the cloud. Storage in a multi-cloud environment is not a place for the meek.
The complexity of blending on-premises and cloud has come into sharp focus over the past several months as every player in the enterprise IT value chain moves into new territory. The word “cloud” itself has become nebulous, as public cloud providers move infrastructure and services on-premises, traditional OEMs enter the capacity-on-demand business, and software increasingly becomes the defining glue tying it all together.
Google’s Cloud Next developer conference was held this week in San Francisco, where the company announced a set of software capabilities called “Anthos” to manage applications and workloads across private data centers and Google Cloud services. Anthos even promises to support workloads on its competitors’ clouds, Amazon Web Services and Microsoft’s Azure. Anthos is, not surprisingly, based on containers and Kubernetes, with a storage story that relies on vendors efficiently supporting the Container Storage Interface (CSI).
Deploying solutions such as Google’s Anthos, or even Amazon’s Outposts on-premises cloud offering, requires complex integration with a steady eye towards balancing compute and storage. These multi-cloud implementations are not turn-key, instead relying on tight coordination between partners to deploy an enterprise-ready solution.
While the cloud providers and server OEMs vacillate on what they each believe is the right balance of on-premises and cloud technologies, each protecting high-margin turf in the process, we should talk about IBM, which has emerged as an unlikely lighthouse for the data-driven multi-cloud world. IBM, if you’re not aware, is the number four public cloud provider worldwide. The company doesn’t get down in the dirt and fight for every bit of business in the same way that Amazon and Google do. IBM instead focuses its cloud efforts where it has always focused its business: servicing the needs of the enterprise.
Alone among cloud providers and storage technology vendors, IBM has never vacillated on its vision. IBM has always viewed multi-cloud as an intelligent blend of infrastructure and cloud, managed by a comprehensive blend of software and services. It has made it easy for its customers to deploy multi-cloud solutions, whether those cloud workloads are containerized or leverage more traditional virtual machine technology. Last week in Rome, IBM’s storage team continued its efforts to enable enterprise IT’s multi-cloud journey.
All about the software
The data-driven multi-cloud world revolves around the software stack, which makes the capabilities real and manageable. For IBM, that software is its Spectrum Storage suite of products.
The most eye-catching of IBM’s storage announcements is its enhancements to IBM Spectrum Virtualize, now offering support for the Amazon AWS public cloud. IBM Spectrum Virtualize for Public Cloud allows for hybrid multi-cloud mobility to and from AWS and IBM clouds, with non-disruptive migration to, from, and between clouds. IBM Spectrum Virtualize for Public Cloud is hosted on a pair of AWS EC2 compute instances, where it can virtualize and manage EBS block storage, and snapshot to and from S3 storage. The software provides data mobility from IBM’s Storwize family, FlashSystem 9100, SVC, and VersaStack. It’s an across-the-board solution play for IBM.
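IBM hasn’t published the mechanics of its non-disruptive migration in this announcement, but live volume migrations between clouds are commonly built on iterative dirty-block copying: copy every block while the application keeps writing, track which blocks were dirtied mid-copy, then re-copy only those until the remaining delta is small enough for a brief cutover. A minimal Python sketch of that general technique (the class and function names are illustrative, not IBM’s API):

```python
# Illustrative sketch of iterative dirty-block copying -- NOT IBM's actual
# Spectrum Virtualize implementation, just the general migration pattern.

class SourceVolume:
    """A toy block volume that tracks blocks written while a copy is running."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.dirty = set()  # indices written since the last copy pass

    def write(self, index, data):
        self.blocks[index] = data
        self.dirty.add(index)


def migrate(src, dst, workload=lambda: None):
    """Copy src into dst while writes continue; return convergence passes."""
    # Pass 0: full copy. The application keeps writing during this pass,
    # simulated here by invoking workload() after each block is copied.
    for i in range(len(src.blocks)):
        dst[i] = src.blocks[i]
        workload()

    # Convergence: re-copy only the blocks dirtied during the previous pass,
    # repeating until no writes are outstanding. A real system would then
    # pause I/O very briefly and cut over to the destination volume.
    passes = 0
    while src.dirty:
        dirty, src.dirty = src.dirty, set()
        for i in sorted(dirty):
            dst[i] = src.blocks[i]
        passes += 1
    return passes
```

The sketch is single-threaded for clarity; a production implementation intercepts writes in the I/O path and bounds the number of convergence passes before forcing the cutover.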
During its Rome event, IBM also updated its Spectrum Scale data-management tool to increase performance for SMB and NFS, while also supporting new levels of scalability and resiliency.
One of the more intriguing announcements from IBM is its new software support for blockchain technology within IBM storage solutions. As blockchain evolves into a critical capability for managing chains of trust, I can see many applications leveraging this technology in a multi-cloud world. I’m eager to see how this evolves and how enterprises leverage the capability.
One of the nice things about IBM’s storage offerings is the blueprints the company provides to help enterprise IT and IBM’s partners quickly deploy solutions with confidence. IBM Spectrum Virtualize for Public Cloud extends the library of blueprints, with new offerings defining workload mobility with VMware NSX, business continuity, and cyber-resiliency with “air-gapped” snapshots.
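The blueprint itself isn’t reproduced in this post, but the core idea behind cyber-resiliency with “air-gapped” snapshots is immutability: once taken, a snapshot cannot be altered or deleted until its retention window expires, which limits what ransomware or a compromised administrator can destroy. A minimal sketch of that retention policy (illustrative only; not IBM’s implementation or API):

```python
import time

# Illustrative sketch of retention-locked ("air-gapped") snapshots -- NOT
# IBM's implementation, just the immutability policy such features enforce.

class SnapshotVault:
    """A toy snapshot catalog that refuses deletes inside a retention window."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._snaps = {}  # name -> (taken_at, immutable copy of the data)

    def take(self, name, data, now=None):
        now = time.time() if now is None else now  # injectable clock for tests
        if name in self._snaps:
            raise ValueError(f"snapshot {name!r} already exists and is immutable")
        self._snaps[name] = (now, bytes(data))  # defensive copy, frozen at capture

    def restore(self, name):
        return self._snaps[name][1]

    def delete(self, name, now=None):
        now = time.time() if now is None else now
        taken_at, _ = self._snaps[name]
        if now - taken_at < self.retention:
            raise PermissionError(f"snapshot {name!r} is inside its retention window")
        del self._snaps[name]
```

In a real air-gapped design the retention lock is enforced by the storage system itself, on media or in a tenant that normal administrative credentials cannot reach, rather than by application code.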
Software is the central nervous system of the multi-cloud infrastructure, but that software can only ever be as capable as the hardware resources it’s tasked to manage. Performance in the storage world is defined by the capabilities offered by the blend of flash memory and the NVMe interconnect. IBM was a very early adopter of NVMe-based flash storage, deploying its custom FlashCore modules to deliver very high-throughput, low-latency solutions in its performance products.
The thing about multi-cloud is that it doesn’t always require the highest-performing arrays. Deploying multi-cloud solutions requires the performance needed for a given workload, with enough scalability to survive future evolutions of that workload. To that end, IBM announced upgrades to its Storwize V5000 family, bringing enhanced capabilities to the lower end of its storage offerings, and delivering end-to-end NVMe to its V5100 series.
The new IBM Storwize V5100F and V5100 bring NVMe to a previously unattainable price-point. The arrays deliver nearly 2.5x more performance than the previous V5030F, offer 9x more cache than previous iterations, and have support for server-class memory. The densities are equally compelling, with the arrays able to deliver up to 2PB of flash in only 2U. That capacity can scale up to 23PB, and scale out to 32PB with 2-way clustering enabled. The IBM Storwize V5100F redefines how you should think about affordable performance and density.
IBM also updated the entry-level members of its Storwize family, bringing new levels of scalability and density to the lower-cost range of its offerings. The updated IBM Storwize V5010E doubles the IOPS of its predecessor while scaling to 12PB. The updated IBM Storwize V5030E also offers a nice bump, delivering 20% better maximum IOPS, with scalability up to 32PB.
IBM also provided updates to its FlashSystem A9000/A9000R to provide better support for multi-tenant environments. The updated FlashSystem now allows sharing of physical storage resources among multiple virtual networks, while also supporting VLAN tagging on its iSCSI ports. These features should lead to better security and an overall reduction of costs in multi-tenant environments. These are critical enhancements for MSPs and others who share resources between disparate user groups.
Tying together all of IBM’s storage portfolio is its rich suite of Spectrum Storage software, designed to integrate IBM storage infrastructure with the multi-cloud world. The combination of IBM Spectrum Storage software and the updated arrays gives you an end-to-end solution ready for containerized, AI-driven workloads. At the same time, this set of updates gives IBM one of the broadest ranges of NVMe-enabled flash storage in the industry.
As the traditional server OEMs and the public cloud providers home in on a set of architectures for the data-driven multi-cloud world, it is apparent that the solution was right in front of us the entire time. IBM has blended cloud and infrastructure from the very early days of its cloud offerings. The company delivers the most cohesive set of services and solutions that scale on-premises and cloud-hosted workloads.
IBM’s storage team, in particular, has been aggressive in driving this vision. Its line of storage arrays is among the most competitive in the industry, and when you couple those arrays with the power of the IBM Spectrum Storage software suite, the combination becomes unbeatable. IBM stands nearly alone in offering a comprehensive range of storage solutions that span data center hardware, private cloud, and public cloud. Its recent embrace of Amazon AWS and other public cloud competitors is a strong move that benefits IBM’s enterprise customer base. Choice is always good.
Steve McDowell is a Moor Insights & Strategy Senior Analyst covering storage technologies.