When I started my firm eight years ago, I took a lot of flak for saying the cloud world was headed hybrid. Now reality has set in; gone are the heady days when people assumed 100% of data would be processed and stored in the public cloud. The realities of low-latency applications, mission-critical workloads, and data gravity have driven the need for hybrid cloud. No announcement exemplified this better than AWS Outposts, from the company that invented the commercial public cloud. Last week AWS finally gave us more information on Outposts in an illuminating new blog post by AWS's VP of Compute Services, Matt Garman. But first, some background.
Outposts announced at re:Invent 2018
It's been a while since AWS re:Invent 2018, when Amazon made big waves with its announcement of AWS Outposts (see my initial coverage here and my colleague Rhett Dillingham's take here). The fully managed service, slated for availability by the rapidly approaching end of 2019, promises to bring AWS's custom-designed infrastructure to the enterprise datacenter, allowing businesses to utilize the same native AWS services, software, infrastructure, management tools, and deployment models they utilize within the AWS or VMware cloud.
[Image: What is an AWS Outposts rack? Credit: Anthony Liguori, AWS]
AWS's big on-prem move seeks to reduce the complexity of hybrid cloud, since customers will no longer have to manage disparate, multi-vendor IT environments. Some additional details were shared at the announcement, including that Outposts will be available in two variants: VMware Cloud on AWS running on Outposts, and a native AWS Outposts variant that allows customers to utilize the same APIs used in AWS. It was a tantalizing premise, but the announcement left me with many more questions. Without further ado, let's dissect the news.
Getting down to the nitty-gritty
Some of my lingering questions revolved around which EC2 instances would be supported. AWS revealed in the blog post that AWS Outposts can launch a number of Amazon EC2 instances:
- EC2 C5 – AWS targets this instance for high-performance web servers, high-performance computing (HPC), batch processing, ad serving, highly scalable multiplayer gaming, video encoding, scientific modeling, distributed analytics, and machine/deep learning inference.
- EC2 M5 – AWS targets this "balanced" instance for general-purpose workloads like web and application servers, backend servers for enterprise applications, gaming servers, caching fleets, and app development environments.
- EC2 R5 – AWS targets this instance for memory-intensive applications such as high-performance databases, distributed web-scale in-memory caches, mid-size in-memory databases, real-time big data analytics, and other enterprise applications.
- EC2 I3en – With the "lowest price per GB of SSD on EC2," AWS targets data-intensive workloads such as relational and NoSQL databases, distributed file systems, search engines, and data warehousing.
- EC2 G4 – Using up to eight NVIDIA T4 GPUs, this instance is targeted at machine learning training and inference, video transcoding, and "other demanding applications."
With this broad compute assortment, AWS isn’t tiptoeing into the space; it’s driving a bulldozer through the wall into the space.
These instances can be configured with or without local storage. Additionally, AWS announced that Outposts can launch Amazon EBS volumes locally.
When originally announced, AWS said that Outposts would have “the same breadth and depth of features” as AWS cloud. Given that could number somewhere in the thousands, I was skeptical of the claim and curious about just what services would be supported by Outposts. AWS shed some light on this in the blog and revealed a number of services it says will be locally supported on AWS Outposts at the time of launch. These Amazon-branded services will include:
- Amazon EKS (Elastic Kubernetes Service) and Amazon ECS (Elastic Container Service) clusters for container-based apps
- Amazon RDS instances for relational databases. This is bad news for Oracle, as many customers have been waiting for this hybrid RDS capability to replace Oracle.
- Amazon EMR (Elastic MapReduce) clusters for data analytics
- Amazon SageMaker, AWS's end-to-end solution to build, train, and deploy ML models, will be introduced "soon"
- Amazon MSK, AWS's fully managed Apache Kafka service for real-time streaming data apps, will be introduced "soon"
The blog also provided some helpful clarification on what sort of workloads Outposts is geared towards, versus its Snowball Edge offering, another hybrid offering designed to integrate on-prem resources with AWS. You can check out this great explainer on AWS Snowball here. And it’s not great just because I am in the CNBC video.
While Snowball Edge is designed for environments with little to no connectivity (think cruise ships or remote mining locations), Outposts is geared towards connected on-prem environments. Potential Outposts deployments span financial services, healthcare, manufacturing, media and entertainment, telecom, and more. These verticals often run compute-intensive, graphics-intensive, and storage-intensive applications, which stand to benefit from the ultra-low latency (single-digit milliseconds) promised by AWS Outposts.
Another potential benefit of Outposts is that they are updated and patched alongside AWS regional operations—which means there’s no need to upgrade and patch on-prem infrastructure, and no downtime for maintenance.
Real-life Outposts customer use case
To better illustrate the potential of Outposts, AWS also shared an unnamed early customer’s success story with the offering in an industrial setting.
The customer was already utilizing AWS to run centralized decision-making applications: essentially, to determine what work needs to be executed at which site. After deploying an Outposts rack at one of its sites and connecting the rack to its nearest AWS region, the customer had full control over its virtual network. According to AWS, it was then able to select its IP address range, create subnets, configure route tables and network gateways, and more, just as it does with its Amazon VPC.
Additionally, by creating a subnet and associating it with the Outpost, it was able to extend its regional VPC to the Outpost. The customer launches instances on the Outpost using the same API call it uses in the public region; those instances run in its pre-existing VPC and communicate with public-region instances over private IP addresses. AWS claims that applications running on these instances perform identically to those in the public region, since the hardware is the same as what is used in AWS's public-region datacenters. The customer can also create a local gateway within the VPC that allows the Outpost to direct traffic to its local datacenter networks. According to AWS, the customer intends to utilize Outposts to standardize tooling across on-prem and the cloud and to automate deployments and configurations across its many facilities.
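To make the workflow above concrete, here is a minimal Python sketch of the two calls AWS describes: creating a subnet anchored to an Outpost (via its ARN) and then launching an instance into that subnet with the same request shape used in the public region. This is an illustrative assumption, not AWS's published code; the Outpost ARN, VPC ID, subnet ID, CIDR block, and AMI ID are all hypothetical placeholders, and the dicts mirror the parameters one would pass to boto3's EC2 client.

```python
# Hypothetical sketch: extending a regional VPC onto an Outpost subnet,
# then launching an instance with the same API shape as a public-region
# launch. All identifiers below are placeholders, not real resources.

OUTPOST_ARN = "arn:aws:outposts:us-west-2:123456789012:outpost/op-0123456789abcdef0"


def create_outpost_subnet_params(vpc_id: str, cidr: str, outpost_arn: str) -> dict:
    """Parameters for ec2.create_subnet(); the OutpostArn is what places
    the subnet on the on-prem rack instead of in an AWS region AZ."""
    return {"VpcId": vpc_id, "CidrBlock": cidr, "OutpostArn": outpost_arn}


def run_instances_params(subnet_id: str, ami_id: str,
                         instance_type: str = "m5.xlarge") -> dict:
    """Parameters for ec2.run_instances(); identical to a public-region
    launch -- placement on the Outpost follows from the subnet chosen."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,
    }


# Build the two requests the customer's workflow would issue.
subnet_req = create_outpost_subnet_params("vpc-0abc", "10.0.128.0/24", OUTPOST_ARN)
launch_req = run_instances_params("subnet-0def", "ami-012345")

# With boto3, these would be passed through as:
#   ec2 = boto3.client("ec2")
#   ec2.create_subnet(**subnet_req)
#   ec2.run_instances(**launch_req)
```

The key design point is that nothing in the launch request itself mentions the Outpost; the subnet association carries the placement, which is why existing tooling and automation work unchanged.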
This is exactly how I had envisioned enterprises using Outposts and it’s super-exciting.
This isn't AWS's first hybrid rodeo; it already has many hybrid offerings, including the Snowball family, Direct Connect, Route 53, Storage Gateway, AWS Backup, DataSync, Transfer for SFTP, Directory Service, IAM, OpsWorks, CodeDeploy, Systems Manager, and Greengrass. I believe Outposts raises the bar monumentally: it's big compute, memory, and storage on-prem, managed via AWS.
With AWS Outposts launching soon, it was really good to hear some more details about the offering. I was glad to get clarity around the EC2 instances and impressed with the services that will be available at launch. Additionally, it was helpful to hear in detail about a real-life customer use case of this promising new offering. Like I said before, AWS isn’t tiptoeing into the space, it’s driving a bulldozer through the wall into the space.
I’ll continue to watch with interest as AWS Outposts hits the market later this year—stay tuned for further coverage.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.