Storage is one of the critical elements on which cloud adoption succeeds or fails. We talk about “data gravity,” the idea that a workload’s portability is constrained by the bulk of its data. It’s data gravity that keeps applications locked into an infrastructure that may not be the most effective one to meet an evolving enterprise’s needs.
Today’s multi-cloud world complicates the management of data and storage. It can be time-consuming and expensive to migrate large pools of data to the public cloud. On-prem applications can suffer unacceptable latencies when pulling data from the cloud, while cloud-based applications can be equally inefficient when relying on data that is stored elsewhere.
Cloud adoption hinges on the ability to put data where it makes the most sense. The latency between storage and applications, wherever those applications live, needs to be managed. Data protection is table stakes. All of this has to be enabled through a management interface that is simple to use yet flexible enough to integrate into API-driven automation tools.
Amazon hosted its second AWS Storage Day last week, delivering a slew of updates that impact nearly every one of its storage offerings. There’s far more than can be covered here (for more detail, you should read the official AWS blog post describing the announcements), but let’s touch on the high points.
Cloud is all about erasing complexity while delivering a push-button experience. That’s not always an easy thing to provide when dealing with storage, where the underlying properties can be complicated and automation is crucial to delivering IT services at scale.
Amazon has a long history of delivering manageability tools that offer simplicity without sacrificing flexibility. AWS’s existing tools are solid, and Amazon is now focused on features that address targeted pain-points.
A great example of this is the new capability to assume ownership of S3 objects uploaded by other (authorized) AWS accounts. This is now built into AWS CloudFormation and is available in all regions.
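To make the ownership change concrete, here is a minimal sketch of the request payload that S3’s bucket ownership controls accept, built as a plain Python dict. The bucket name is hypothetical, and the actual boto3 call is shown only in a comment; this is an illustration of the API’s shape, not a definitive implementation.

```python
# Sketch: payload for S3's PutBucketOwnershipControls API. With
# "BucketOwnerPreferred", new objects uploaded by other authorized accounts
# with the bucket-owner-full-control ACL become owned by the bucket owner.
ownership_request = {
    "Bucket": "example-shared-bucket",  # hypothetical bucket name
    "OwnershipControls": {
        "Rules": [{"ObjectOwnership": "BucketOwnerPreferred"}]
    },
}

# With boto3 installed and credentials configured, the call would be:
# import boto3
# boto3.client("s3").put_bucket_ownership_controls(**ownership_request)
```

The same ownership rule can equally be declared on an `AWS::S3::Bucket` resource in a CloudFormation template, which is where the feature surfaced in this announcement.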
Amazon FSx for Lustre gains new quota management capabilities, while Amazon FSx for Windows File Server now supports DNS aliases. AWS Data Lifecycle Manager lets you create robust policies to manage machine images and their associated EBS snapshots. You can now also attach an Amazon Elastic File System (EFS) file system to an EC2 instance directly from the launch wizard.
These features may seem small, but taken together they bring a higher level of manageability to Amazon’s storage offerings. Each addresses an existing pain point.
The ability to place data on the right storage technology can have massive implications for the cost of IT operations. Amazon has long been at the forefront of intelligent storage tiering in the cloud. Its S3 cold-storage tiers provide some of the easiest-to-use and most cost-effective long-term storage solutions available, whether cloud or on-prem.
Amazon also introduced new intelligent tiering for S3. The S3 Intelligent-Tiering storage class now supports automatic archiving to the newly introduced Archive Access and Deep Archive Access storage tiers. S3 can now automatically move objects between the various storage tiers based on access patterns, letting data live where it’s most cost-effective for the workloads accessing it.
The Archive Access storage tier provides multiple SLA levels for data retrieval, ranging from minutes to hours, depending upon the selected class of service. Amazon’s new Deep Archive Access storage tier is much colder, with service classes allowing data retrieval options ranging from twelve to forty-eight hours.
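As a sketch of how opting into the new tiers looks in practice, the configuration below expresses an Intelligent-Tiering rule as a plain Python dict. The bucket name, rule ID, and day thresholds are illustrative assumptions; with boto3, this payload would be passed to `put_bucket_intelligent_tiering_configuration` on the S3 client.

```python
# Sketch: an S3 Intelligent-Tiering configuration that opts a bucket into the
# Archive Access and Deep Archive Access tiers. Objects move down a tier after
# the given number of days without access (thresholds here are illustrative).
tiering_request = {
    "Bucket": "example-analytics-bucket",  # hypothetical bucket name
    "Id": "archive-after-inactivity",
    "IntelligentTieringConfiguration": {
        "Id": "archive-after-inactivity",
        "Status": "Enabled",
        "Tierings": [
            # Not accessed for 90 days: move to Archive Access...
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            # ...and after 180 days without access, to Deep Archive Access.
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
}

# With boto3 installed and credentials configured, the call would be:
# import boto3
# boto3.client("s3").put_bucket_intelligent_tiering_configuration(**tiering_request)
```

Once the configuration is in place, the tier transitions happen automatically; applications keep reading and writing the same S3 keys.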
The scariest part of storage in the cloud is the cost and complexity of moving data around. Amazon understands this, as the company delivers some of the most impressive data movement and management tools available anywhere. If you don’t believe that, take a look at the AWS Snowmobile, which can carry 100PB of data from your data center to the cloud in a 45-foot shipping container.
I didn’t see any big rigs during AWS Storage Day, but there were several enhancements to the Snow family. AWS Snowball devices now support importing virtual machine images as AMIs, a compelling capability for bringing on-prem workloads into the cloud. Amazon now also allows you to import Windows Server 2012 and Windows Server 2016 images and launch them directly on Snowball devices.
Beyond Snowball, AWS Storage Gateway has added new features for scalability, security, and performance. Tape and volume gateways now support caches of up to 64 terabytes, a 4X increase over previous versions. They also allow fine-grained bandwidth controls that can be scheduled to optimize network utilization.
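The scheduled bandwidth controls can be sketched as follows, again as a plain Python payload of the kind Storage Gateway’s bandwidth rate-limit schedule API accepts via boto3. The gateway ARN, the 100 Mbps cap, and the business-hours window are all hypothetical values chosen for illustration.

```python
# Sketch: a bandwidth rate-limit schedule for a tape or volume gateway,
# throttling uploads during weekday business hours so backup traffic does
# not crowd out production workloads. All values are illustrative.
MBPS = 1_000_000  # bits per second in one megabit

schedule_request = {
    # Hypothetical gateway ARN
    "GatewayARN": "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    "BandwidthRateLimitIntervals": [
        {
            # Weekdays, 08:00-17:59: cap uploads at 100 Mbps
            "StartHourOfDay": 8,
            "StartMinuteOfHour": 0,
            "EndHourOfDay": 17,
            "EndMinuteOfHour": 59,
            "DaysOfWeek": [1, 2, 3, 4, 5],  # Monday through Friday
            "AverageUploadRateLimitInBitsPerSec": 100 * MBPS,
        },
        # Hours outside any interval run unthrottled.
    ],
}

# With boto3 installed and credentials configured, the call would be:
# import boto3
# boto3.client("storagegateway").update_bandwidth_rate_limit_schedule(**schedule_request)
```

Because the schedule is plain data, it slots naturally into the API-driven automation tooling the rest of this piece keeps coming back to.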
Enterprise applications live and die by performant access to data, when and where that data is needed. AWS has impressively delivered on a cadence of capabilities and features that make cloud-based storage reliable and straightforward.
Storage doesn’t have to be an inhibitor to cloud adoption. Amazon continues to make it easier for an IT architect to leverage the cloud where and when it makes the most sense. Today’s feature-rich multi-cloud world gives IT organizations an unprecedented level of choice. Amazon continues to make AWS an easy choice when thinking about the public cloud.
Note: This analysis was edited by Moor Insights & Strategy founder and president Patrick Moorhead.