IBM lives at the intersection of disruptive technology and real-world solutions. There are no two more disruptive technologies in the enterprise today than artificial intelligence and containerized applications.
Artificial intelligence is in the enterprise. Nearly every IT organization has either already deployed, or is preparing to deploy, some sort of AI solution. The problem with AI in the data center is that we're all still figuring out exactly what to do with it. Today's experiments will blossom into tomorrow's major deployments. AI requires multi-dimensional performance, scalability to meet the needs of applications over time, and sophisticated tools to manage the onslaught of data that feeds AI.
Containers live in a similar space. The technology provides the means to safely and reliably deploy applications and serverless workloads to both end-users and DevOps teams. Containers can become overwhelming, and orchestration tools such as Kubernetes and Red Hat's OpenShift help alleviate that complexity. Missing from the container ecosystem, however, is a means by which to intelligently manage the underlying storage.
IBM this past week made announcements that will help IT organizations build lasting solutions that scale and mature with AI implementations and container deployments. Let’s take a look at what the company is doing.
The appliance approach of IBM's new Elastic Storage System 3000 allows for very fast storage deployments. Instead of installing a FlashSystem array with separately installed Spectrum Scale software, there is now a single point of installation. IBM demonstrated a turnkey deployment that took less than three hours, from delivery to serving data.
The Elastic Storage System 3000 is designed from the ground up for scalability, meeting the needs of AI and analytics deployments of nearly any size. The system scales from a base 40Gbps to multiple terabits per second of bandwidth, all with the low latency that's inherent in NVMe storage.
Simplicity in the IT world is generally a good thing, and IBM delivered just that. Combining the proven high-throughput NVMe performance of IBM's FlashSystem 9100 with its highly scalable Spectrum Scale software just makes good sense. I'm not aware of another offering on the market that delivers the functionality of the Elastic Storage System 3000 in an appliance form-factor.
Managing storage for container data is a complex business. It's important to remember (and easy to forget!) that container data is enterprise data and should be treated as such. Kubernetes and its cousin, Red Hat OpenShift, require a storage strategy that relies on shared storage and integration with advanced data services. This is an often-overlooked component of many container deployments.
Kubernetes provides the Container Storage Interface (CSI) to connect external storage to containers running within a cluster. CSI itself provides only raw data services; it's up to storage vendors to supply drivers that integrate with CSI and map volumes between containers within a Kubernetes cluster and their storage arrays. Nearly all storage vendors at this point have basic CSI driver support. Integration with complex data services is a different story.
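To illustrate how that mapping works in practice, here is a minimal sketch of the Kubernetes side of the plumbing. The driver name, class name, and pool parameter below are illustrative placeholders, not IBM's actual values:

```yaml
# A StorageClass binds a vendor's CSI driver (the "provisioner")
# to a set of vendor-specific provisioning parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-gold                     # illustrative name
provisioner: block.csi.example.com     # placeholder CSI driver name
parameters:
  pool: gold                           # vendor-specific; placeholder
---
# A PersistentVolumeClaim asks Kubernetes to dynamically provision
# a volume on the backing array through that driver.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: block-gold
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts the claim like any other volume; the CSI driver handles the array-side provisioning and attachment behind the scenes.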
IBM leapfrogs its competitors by providing enterprise-grade data protection for Kubernetes and OpenShift deployments. IBM's Spectrum Protect Plus now utilizes the CSI snapshot interface to allow developers to back up, recover, and retain persistent volumes using predefined policies in Kubernetes and, soon, OpenShift environments. IBM Spectrum Protect Plus allows enterprises to treat container data as enterprise data, with the same data protection capabilities demanded for an organization's most critical data.
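The CSI snapshot interface that tooling like Spectrum Protect Plus builds on is exposed in Kubernetes roughly like this. A rough sketch using the standard VolumeSnapshot API; the class and claim names are illustrative, and the actual object names a given product generates will differ:

```yaml
# Requests a point-in-time, array-side snapshot of a PVC
# through whatever CSI driver provisioned it.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # illustrative class name
  source:
    persistentVolumeClaimName: app-data    # the claim to snapshot
```

Backup software can create these objects on a policy-driven schedule and restore a volume by provisioning a new claim from a snapshot, which is what makes container data protectable like any other enterprise data.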
The new Spectrum Protect capabilities extend IBM’s already aggressive delivery of CSI drivers to allow its storage solutions to be efficiently deployed into hybrid multi-cloud environments. IBM’s CSI work delivers security through intelligent volume mapping, dynamic provisioning, persistent data volumes, and infrastructure agility to IT teams looking to deploy Kubernetes or OpenShift.
This is just the beginning. IBM is aggressively attacking the OpenShift market, and we’re seeing rapid integrations between IBM products and OpenShift and Kubernetes. It won’t surprise me at all to see the IBM Spectrum Storage suite embracing containers across the board as the next year unfolds.
IBM's storage team likes to make big multi-part announcements, and this announcement day was no different. Beyond what I've mentioned already, the company released a new virtual tape library, an update to Spectrum Scale to support erasure coding, an update to Spectrum Discover that allows it to discover what's in your backups, a myriad of software updates, and even new storage-as-a-service offerings.
These are all solid announcements that both round out IBM's portfolio and keep it fresh, but it's IBM's new storage appliance and its embrace of containers that have me most excited. The impact of AI on storage architecture is just beginning to be understood, and IBM's Elastic Storage System 3000 is an appliance that allows IT organizations to easily and quickly scale data services as AI and analytics solutions evolve.
Containers need an intelligent storage strategy. It’s a hard, unsolved problem. IBM’s integration of CSI with its Spectrum Storage software begins to take us there with enterprise-class data services. The industry will follow, but once again IBM leads the way.