Last week Amazon Web Services (AWS) held its re:Invent conference. Amazon always delivers, which makes it one of my favorite events. The company takes the concept of "give the customer what the customer wants" to an extreme when it comes to cloud services; nothing is off-limits. This year was no exception: Amazon flooded the event with a barrage of announcements across its range of offerings. Storage is my swim-lane, so let's delve into the new and different ways the company is serving storage to its AWS customers.
For most of its existence, AWS provided only basic block and object storage. In fact, its S3 object storage is now so ubiquitous among the unstructured-data crowd that S3 semantics have become the most common interface to object storage solutions from nearly everyone in the industry.
Layering full-featured storage services atop AWS Elastic Block Store (EBS) and object storage (S3) has long been left to others. When you deploy an instance on Amazon, there is a plethora of third-party choices for file and storage services of all flavors.
IT organizations that deploy solutions on a multi-cloud architecture have a choice of options from every tier-one OEM. Hewlett Packard Enterprise continues to innovate with its HPE Cloud Volumes product, while Dell EMC matures its Isilon offerings with cloud-enabling features. IBM has very strong private cloud offerings, which it is increasingly marrying with the public cloud. Pure Storage has gone the furthest of the storage vendor community, delivering full storage array functionality on AWS.
Now, Amazon is delivering new cloud-based file services that will, in some cases, replace third-party software solutions. In other cases, the new Amazon features will serve as a solid base upon which to build differentiated solutions for Amazon cloud customers.
Amazon FSx for Windows File Server is AWS's first foray into native file services for Windows, providing a strong foothold in the lucrative and dominant Windows market. It competes directly with Microsoft's Azure Files offering. Amazon FSx for Windows File Server integrates seamlessly with Microsoft Active Directory and the rest of the Windows ecosystem. It truly should be plug-and-play.
Amazon FSx for Windows File Server complements AWS's existing Elastic File System (EFS), which supplies NFS filesystem support for Linux and other hosted clients. Expanding the EFS offering, Amazon announced a lower-cost tier it calls EFS Infrequent Access. With it, organizations can store cold data at a lower cost than with standard EFS, but without the recovery constraints that Amazon Glacier introduces.
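At the API level, the move of cold files into the Infrequent Access tier is driven by a lifecycle policy attached to the file system. Here is a minimal sketch of building that policy payload in Python; the set of allowed transition windows is my assumption about the launch-time API, and the actual boto3 call is noted in a comment rather than executed, since it requires live AWS credentials:

```python
# Sketch of the EFS lifecycle payload. With boto3 (the AWS SDK for
# Python), this list would be passed as the LifecyclePolicies argument
# to efs.put_lifecycle_configuration(FileSystemId=..., ...).
# The boto3 call is omitted so the sketch runs without AWS credentials.

def efs_ia_lifecycle(days: int) -> list:
    """Build the LifecyclePolicies payload that transitions files to
    the Infrequent Access tier after `days` days without access."""
    # Assumed set of transition windows the EFS API accepts.
    allowed = {7, 14, 30, 60, 90}
    if days not in allowed:
        raise ValueError(f"unsupported transition window: {days}")
    return [{"TransitionToIA": f"AFTER_{days}_DAYS"}]

policies = efs_ia_lifecycle(30)
```

The appeal is that tiering is declarative: set the policy once, and EFS moves idle files down to the cheaper tier on its own, with no change to how applications read them back.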
Amazon has been delving further and further into machine learning, artificial intelligence, and high-performance computing with its cloud offerings. The company makes it extremely easy to spin up almost supercomputer-class computing resources for a few days of analysis. What’s been missing, however, is storage support for high-performance computing.
Amazon's introduction of FSx for Lustre closes that gap, bringing the power of the Lustre parallel filesystem to AWS. Lustre is the dominant filesystem for high-performance computing workloads, powering 60% of the current TOP100 supercomputing sites. By delivering Lustre natively to high-performance applications, Amazon dramatically reduces the friction of placing those workloads in a cloud environment. Practitioners can focus on their applications and simply rely on the presence of the underlying filesystem.
Businesses deploying multi-cloud solutions now have a range of compatible filesystem offerings from Amazon that make the company even harder to ignore if you’re building a high-performance system in the cloud.
Glacier Deep Archive
Amazon has long offered its S3 Glacier solution to satisfy the need for long-term archival storage. Using it is simple: software treats it much like S3 objects, but with constrained read access. Data, once written, isn't expected to be read again, but it's there if you need it. Expanding the Glacier line, Amazon introduced Glacier Deep Archive for much longer-term storage.
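In practice, data typically lands in these archive tiers through an S3 lifecycle rule rather than a direct write. Here is a minimal sketch of building such a rule in Python; the prefix and day thresholds are illustrative, and with boto3 the resulting dict would go inside the `Rules` list passed to `s3.put_bucket_lifecycle_configuration` (the call itself is omitted since it requires live AWS credentials):

```python
# Sketch of an S3 lifecycle rule that tiers aging objects down to
# Glacier, then to Glacier Deep Archive. Storage class names follow
# the S3 API ("GLACIER", "DEEP_ARCHIVE").

def archive_rule(prefix: str, glacier_after: int, deep_after: int) -> dict:
    """Build a lifecycle rule: objects under `prefix` move to Glacier
    after `glacier_after` days and to Deep Archive after `deep_after`."""
    if deep_after <= glacier_after:
        raise ValueError("Deep Archive transition must come after Glacier")
    return {
        "ID": f"archive-{prefix}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after, "StorageClass": "GLACIER"},
            {"Days": deep_after, "StorageClass": "DEEP_ARCHIVE"},
        ],
    }

rule = archive_rule("backups/", 30, 180)
```

The point of the sketch is the shape of the workflow: archiving is a policy on the bucket, not a change to the application writing the data.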
The new offering makes Glacier a more attractive option for replacing or supplementing an organization's backup strategy. I think cloud-dominant workloads will find Glacier a natural home for backups. Next will come greenfield applications, where there is no existing backup infrastructure and Amazon is already part of the workflow. Given Glacier's ease of use, we will also likely see IT organizations doing more backups, including in places where tape solutions were never implemented.
IT organizations looking to switch away from traditional tape need to understand the economics of the situation. Tape is attractive and relatively cheap. While Amazon's storage service itself may be inexpensive, it still requires migrating data to the cloud, and that happens on an ongoing basis. Bandwidth costs can become a surprising issue; transfer times can be another.
IT organizations also have to ask themselves if they are willing to lock all of their data away in a vault that somebody else owns. When switching cloud vendors, moving workloads on and off the cloud is a much easier problem to solve than, say, migrating a petabyte of archived data. That’s not a problem you have with tape.
Amazon is setting the stage to dominate in cloud-based data protection. It’s an architecture that makes a lot of sense, and many IT groups will move over quickly—especially those already heavily investing in the cloud. The data protection software vendors like Veeam and Commvault already support Glacier, which eliminates much of the friction.
Bottom line: Amazon has a nice offering, but tape has a heavy legacy. It's about more than redirecting your backup software to deposit data in the cloud. Tape serves decades-old workflows, workflows that aren't really broken. Is there a significant enough operational or economic benefit for IT groups to migrate? Is it worth committing to Amazon for the very long term? These are the real questions. For some, the transition will be easy. Others will stick with tape.
Amazon continues to break down the barriers to cloud adoption. Easing the pain of delivering data to applications is a critical enabler, both to onboard new customers and to drive continued differentiation against Microsoft Azure and Google Cloud Platform.
The multi-cloud world that today’s IT organizations live in requires solutions that scale from on-prem to the public cloud. Amazon’s offerings simply allow for higher levels of abstraction for innovative storage companies to build more robust storage solutions, without having to worry about low-level details of block and object storage.
We will see data protection companies, for example, leveraging the range of Amazon Glacier offerings to provide a richer set of archive capabilities to their customers. The storage community will build similar services around FSx for Lustre and FSx for Windows File Server. The DIY community now has a greater set of primitives from which to deploy solutions.
Amazon just released a great set of storage offerings. There is plenty here to make Amazon cloud users, and the storage industry at large, happy as the company continues to look for areas to innovate and deliver true multi-cloud solutions. Amazon continues to show dominance, while IT organizations now have a richer set of features to build on.
Note: This blog contains contributions from Patrick Moorhead, President, and principal analyst, Moor Insights & Strategy.