AWS re:Invent 2020 is Amazon Web Services’ annual user conference, focused on cloud strategies and operations, security and developer productivity, and IT architecture and infrastructure. Typically, the conference spans seven different properties in Las Vegas and welcomes over 60,000 attendees over a few days. Of course, this year, like so many other events, it is 100% virtual, taking place over three weeks and offering a series of keynotes, product launches, and training sessions between November 30 and December 18.
AWS re:Invent is arguably the industry’s most important enterprise conference because of Amazon’s dominance in the public cloud market, its massive ecosystem, and its tendency to launch dozens (sometimes hundreds) of products and highlight its roadmap for innovation. This year was no exception: AWS unveiled a staggering number of new products and services.
In this article, I explain the most impactful non-compute (non-EC2) announcements and the reasons why you should care.
1/ AWS Proton – Automated Management for Container and Serverless Deployments
AWS Proton is a new service designed to help organizations automate and manage infrastructure provisioning and code deployments for serverless and container-based applications. Platform teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.
For the more technically minded, defining a service template involves defining cloud resources, continuous integration and continuous delivery (CI/CD) pipelines, and observability tools. AWS Proton integrates with commonly used CI/CD and observability tools such as CodePipeline and CloudWatch. It also provides curated templates that follow AWS best practices for everyday use cases such as web services running on AWS Fargate or stream processing apps built on AWS Lambda.
Why this matters
AWS Proton is number one on my list because it is a game-changer for managing microservice deployments. Maintaining hundreds, or sometimes thousands, of microservices with constantly changing infrastructure resources and CI/CD configurations is a nearly impossible task for even the most capable platform teams.
AWS Proton gives platform teams the tools to provide developers an easy way to deploy code using containers and serverless technologies.
The platform team can build a stack of templates that define and configure the AWS services a microservice needs (including identity and monitoring), along with a CI/CD pipeline template that establishes how the code is compiled, tested, and deployed. It is everything needed to deploy a microservice except the actual application code. The platform team publishes stacks covering various microservice use cases to the Proton console. When developers are ready to deploy code, they pick the template that best suits the use case, plug in the parameters, and deploy; Proton does the rest. It provisions the AWS services specified in the stack using the provided parameters, pushes the code through the CI/CD pipeline that compiles, tests, and deploys it to AWS services, and sets up the monitoring and alarms.
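To make the template idea concrete, here is a hedged sketch of what the input-schema portion of a Proton service template bundle might look like. The property names, defaults, and overall layout are illustrative assumptions, not AWS’s exact format:

```yaml
# Illustrative sketch only: the structure and names below are assumptions,
# not a verbatim Proton schema. The platform team declares the inputs a
# developer may supply; Proton fills them into the infrastructure and
# CI/CD templates at deployment time.
schema:
  format:
    openapi: "3.0.0"
  service_input_type: "FargateWebServiceInput"
  types:
    FargateWebServiceInput:
      type: object
      description: "Inputs a developer supplies when deploying the service"
      properties:
        port:
          type: number
          default: 8080
        desired_count:          # number of Fargate tasks to run
          type: number
          default: 2
        environment:
          type: string
          default: "staging"
```

The developer’s role then reduces to choosing a template, supplying these few parameters, and pointing Proton at the application code.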
2/ Babelfish for Aurora PostgreSQL
In Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy,” the Babel fish was a small, bright yellow fish which, when placed in a person’s ear, allowed its wearer to understand any language. It seems an apt name for a new capability that makes it easier for customers to migrate from SQL Server to Amazon Aurora PostgreSQL.
Babelfish for Aurora PostgreSQL, a new capability for Amazon Aurora, allows customers to run SQL Server applications directly on Amazon Aurora PostgreSQL with little to no code changes. It does that by providing a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol, so businesses can switch to AWS’ Aurora relational database at will (though they still have to migrate existing data). It provides translations for the dialect and SQL commands, cursors, catalog views, data types, triggers, stored procedures, and functions.
Also included in the announcement was the next version of Aurora Serverless. For customers that don’t want the work of self-managing database capacity, Amazon Aurora Serverless v2 scales to hundreds of thousands of transactions in a fraction of a second, delivering up to 90% cost savings compared to provisioning for peak capacity.
Finally, AWS shared its plans to open source Babelfish for PostgreSQL under the permissive Apache 2.0 license and make it available on GitHub. Together these innovations make Amazon Aurora even more attractive for a wide range of workloads and bring the benefits of Amazon Aurora and PostgreSQL to more organizations.
Why this matters
Babelfish is number two on my list because it is a disrupter. This new capability is aimed squarely at Microsoft’s SQL Server, making it easier, and cheaper, for SQL Server users to migrate to the AWS cloud. The promise is that companies won’t have to replace database drivers or rewrite and verify database requests to make this transition.
Open-source databases have become more prevalent in recent years as businesses turn away from proprietary software to reduce costs. I’m not saying that’s always the right move (it is not), but for many companies it is. PostgreSQL is one of the most popular open-source databases in the market today, and many companies want to migrate their relational databases to it, or at least use it alongside existing databases.
Babelfish is not another migration service. It enables PostgreSQL to understand database requests, both the commands and the protocol, from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements. The result is faster ‘migrations’ with minimal developer effort: applications designed to use SQL Server functionality behave the same on PostgreSQL as they do on SQL Server.
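As a concrete illustration of “without changing libraries or SQL statements,” the sketch below shows that, in principle, only the server endpoint in the application’s connection string changes; the driver, port, and database fields are untouched. The endpoint and connection-string values here are hypothetical examples, not real resources:

```python
# Hypothetical illustration: a SQL Server application keeps its existing
# TDS driver and settings under Babelfish; only the Server= host changes.

def babelfish_connection_string(original: str, babelfish_host: str) -> str:
    """Swap only the Server= host in an ODBC-style connection string."""
    fields = []
    for field in original.split(";"):
        if field.lower().startswith("server="):
            fields.append(f"Server={babelfish_host}")
        else:
            fields.append(field)
    return ";".join(fields)

sqlserver = "Driver={ODBC Driver 17 for SQL Server};Server=legacy-db;Port=1433;Database=orders"
aurora = babelfish_connection_string(
    sqlserver, "my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com"
)
print(aurora)
```

Because Babelfish speaks SQL Server’s wire protocol, the application keeps using its existing SQL Server driver against the new endpoint.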
Many customers still run legacy databases that were developed and used over decades, typically with well-trained and well-funded support staff to run and manage them. These commercial databases offer high performance and advanced availability features but are expensive, complex to operate, and come with a high degree of lock-in.
Customers often face a dilemma when managing database capacity. The choice is to over-provision and waste money or under-provision capacity and risk application downtime.
Since its launch in 2018, Amazon Aurora Serverless has been used by tens of thousands of customers as a cost-effective database option for applications that have infrequent, intermittent, or unpredictable traffic (e.g., test and development workloads). Because it is serverless, customers don’t have to worry about managing database capacity.
Amazon Aurora Serverless v2 scales database workloads to hundreds of thousands of transactions in a fraction of a second. Customers pay only for the capacity consumed, which can save them up to 90% of database costs compared to provisioning for peak capacity. Amazon Aurora delivers the performance and availability of the highest-grade commercial databases at one-tenth the cost, making it the fastest-growing service in AWS history.
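The economics behind that 90% figure are simple to sketch. The prices and capacity units below are hypothetical, chosen only to illustrate the arithmetic for a spiky workload:

```python
# Back-of-the-envelope illustration of pay-per-use vs. peak provisioning.
# Price and capacity units are hypothetical, not actual AWS pricing.

PRICE_PER_UNIT_HOUR = 0.12  # hypothetical $ per capacity unit-hour

def monthly_cost_provisioned(peak_units: int, hours: int = 730) -> float:
    """Provision for peak 24/7: pay for peak whether it is used or not."""
    return peak_units * hours * PRICE_PER_UNIT_HOUR

def monthly_cost_serverless(usage_profile) -> float:
    """Pay only for the (hours, units) actually consumed."""
    return sum(h * u for h, u in usage_profile) * PRICE_PER_UNIT_HOUR

# A workload that peaks at 64 units for 30 hours a month, idling at 4 otherwise:
profile = [(30, 64), (700, 4)]
peak = monthly_cost_provisioned(64)
flex = monthly_cost_serverless(profile)
print(peak, flex, 1 - flex / peak)
```

With usage concentrated in a few peak hours, pay-per-capacity lands near the 90% savings AWS cites; a flat, always-busy workload would see far less benefit.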
I can’t speak to the “translation accuracy” from T-SQL to PostgreSQL, but if it’s high, this could be a disrupter. When you look at the cost of SQL Server on Azure versus AWS, you just know this was coming. This won’t be the end of tools for AWS customers to migrate off of Oracle, Microsoft, and IBM; it’s just the beginning.
3/ Larger & Faster io2 Block Express EBS Volumes with Higher Throughput
Earlier this year, Amazon launched io2 volumes with 100x higher durability and 10x more IOPS/GiB, an excellent fit for I/O-hungry and latency-sensitive applications, including high-performance, business-critical workloads.
Amazon is opening up a preview of io2 Block Express volumes designed to deliver even higher performance.
Built on the new EBS Block Express architecture, the volumes will give you up to 256K IOPS & 4000 MBps of throughput, and a maximum volume size of 64 TiB, all with sub-millisecond, low-variance I/O latency.
Throughput scales proportionally at 0.256 MB/second per provisioned IOPS, up to a maximum of 4,000 MBps per volume. You can provision 1,000 IOPS per GiB of storage, twice as many as before. The increased volume size and higher throughput mean you will no longer need to stripe multiple EBS volumes together, reducing complexity and management overhead.
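As a quick sanity check on how these limits interact, the constants below come straight from the numbers above; the function is just a sketch of the per-volume caps:

```python
# Back-of-the-envelope check of the io2 Block Express limits quoted above.
# Constants are the article's figures; the function shows how the caps combine.

MAX_IOPS = 256_000              # per-volume IOPS ceiling
MAX_THROUGHPUT_MBPS = 4_000.0   # per-volume throughput ceiling
IOPS_PER_GIB = 1_000            # provisionable IOPS per GiB of storage
MBPS_PER_IOPS = 0.256           # throughput scaling per provisioned IOPS

def volume_limits(size_gib: int, requested_iops: int):
    """Return (effective_iops, throughput_MBps) for a single volume."""
    iops = min(requested_iops, size_gib * IOPS_PER_GIB, MAX_IOPS)
    throughput = min(iops * MBPS_PER_IOPS, MAX_THROUGHPUT_MBPS)
    return iops, throughput

# A 1 TiB volume requesting the per-volume maximum:
print(volume_limits(1024, 256_000))
```

Note that at 0.256 MBps per IOPS, the 4,000 MBps throughput ceiling is reached at only about 15,625 provisioned IOPS, so high-IOPS volumes saturate throughput long before the IOPS limit.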
Why this matters
These volumes are going to deliver fantastic performance for SAP HANA, Microsoft SQL Server, Oracle, and Apache Cassandra workloads. Mission-critical transaction processing applications, such as airline reservation and banking systems, that once mandated an expensive and comparatively inflexible SAN (Storage Area Network) can now live in the cloud.
By re-engineering the EBS stack, AWS has brought on-premises SAN performance levels to the cloud; traditional storage area networking vendors should rightly feel threatened.
4/ Amazon Connect – Now Smarter and More Integrated With Third-Party Tools
Amazon Connect makes it easy for customers to build contact centers in the cloud.
At re:Invent, AWS announced a new set of Connect capabilities powered by its machine learning technology.
Amazon Connect Wisdom provides contact center agents with the information they need to solve issues in real time. To give customers the best possible experience, agents need a wide range of product and service information at their fingertips. Unfortunately, this information resides in various databases and silos inside the company or in third-party software, so agents lose a great deal of time trying to access the information required to help customers. Wisdom ingests and organizes the knowledge content agents need (e.g., FAQs, help articles, PDFs) from homegrown databases and, via pre-built connectors for Salesforce and ServiceNow, from third-party knowledge repositories. Wisdom uses natural language processing (NLP) to detect customer issues during the call and then recommends relevant content stored in the knowledge repositories.
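Wisdom’s NLP is a managed AWS service, but the underlying idea can be sketched in a few lines. In this toy example (article titles and keywords are invented), issue keywords detected in a transcript are matched against a small knowledge base:

```python
# Toy illustration only: Wisdom's real NLP is an AWS-managed service.
# This sketches the idea of matching detected issue keywords in a live
# transcript against knowledge articles. All content here is invented.

ARTICLES = {
    "Resetting your router": {"router", "reset", "connection"},
    "Refund policy": {"refund", "return", "overcharge"},
    "Update billing details": {"billing", "card", "invoice"},
}

def recommend(transcript_snippet: str, top_n: int = 2):
    """Rank articles by keyword overlap with the transcript snippet."""
    words = set(transcript_snippet.lower().split())
    scored = sorted(
        ((len(words & keywords), title) for title, keywords in ARTICLES.items()),
        reverse=True,
    )
    return [title for score, title in scored[:top_n] if score > 0]

print(recommend("i want a refund for the return i sent last week"))
```

The real service replaces the keyword overlap with NLP over the live call audio, but the agent-facing outcome is the same: relevant articles surfaced without a manual search.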
Amazon Connect Customer Profiles gives agents a unified profile of each customer so they can provide more personalized service. Information about customer activity and experiences, like product and service history, is often spread across databases and user interfaces in homegrown applications and third-party services. In some cases, agents must toggle between as many as ten different applications to find customer information like contact details, purchase history, and ticket status. With more of a customer’s relevant information, and a more holistic picture, in one place, agents can provide more thoughtful guidance and service to end-users. When a customer calls, Customer Profiles scans and matches customer records across multiple applications using unique identifiers like phone numbers or account IDs.
Real-Time Contact Lens for Amazon Connect gives contact center managers a new way to influence customer interactions while a call is still in progress. It lets managers know when an interaction is going poorly so they can intervene before the brand is harmed.
Amazon Connect Tasks automates, tracks, and manages tasks for contact center agents, improving agent productivity by up to 30%. Agents often rely on memory or hand-written notes to keep track of these tasks and follow-up items, which reduces productivity and creates a risk of errors and omissions. Amazon Connect Tasks helps companies improve agent efficiency, automate repetitive work, and lower costs.
Amazon Connect Voice ID delivers real-time caller authentication using machine learning-powered voice analysis. Historically, contact centers have had to use knowledge-based authentication: callers answer multiple questions based on personal details like social security number, date of birth, and mother’s maiden name, which is time-consuming and open to fraud. Voice ID provides real-time caller authentication without disrupting the natural conversation. With Voice ID, callers can opt to authenticate themselves by voice, gaining an additional layer of security against fraud and avoiding the hassle of answering multiple questions to verify identity.
Three of these new capabilities are available today: Real-Time Contact Lens, Customer Profiles, and Tasks. You must register for the preview program to test Wisdom and Voice ID.
Why this matters
Historically, call center solutions have been hard to scale and expensive and, most notably, have missed out on the two most transformational technology advances of the last 15 years: cloud and machine learning.
A call center solution should be easy to scale and cost-effective, with capabilities that put relevant information about products and customers in front of agents in real time. Enabling agents to be more productive with tasks outside of calls lets them start calls much quicker. And using machine learning under the covers can influence calls in real time, before any harm is done to a brand.
It is no surprise that Amazon Connect is one of the fastest-growing services in the history of AWS. Also interesting: during the pandemic, over 5,000 new contact centers have been stood up on Connect remotely, helping companies deal with the fact that customer service agents now work remotely.
Amazon Connect isn’t a turnkey SaaS solution yet, but it looks formidable as a PaaS offering to build on, one that requires developers to create the end product. Stay tuned. I predict AWS either gets traction on this or turns it into a full-fledged SaaS solution where all enterprises need to do is “bring their own users.”
5/ Amazon Monitron and AWS Panorama Appliance, two applications of machine learning
Amazon Monitron is a condition monitoring service that watches equipment and signals the engineering team when that equipment may be about to break down. Knowing when equipment is failing lets industrial companies implement a predictive maintenance program.
AWS also has a new hardware device, the AWS Panorama Appliance, which, alongside the AWS Panorama SDK, can turn existing on-premises “dumb” cameras into computer-vision-powered surveillance devices.
Using computer vision models that companies can develop with Amazon SageMaker, the Panorama Appliance runs those models against video feeds from networked or network-enabled cameras.
Why this matters
These make my list as game-changers for industrial companies that want to perform predictive maintenance and save money.
Manufacturing and industrial companies could transform the customer experience and plant operations using machine learning, but they often lack the equipment or the talent to make that happen. Manufacturers know it is always less expensive to fix something before it breaks, avoiding the cost of downtime. Yet many companies don’t know how to take data from sensors, send it to the cloud, and build machine learning models.
Monitron gives those customers a gateway device to send data to AWS, which in turn builds custom machine learning models. The model learns what “normal” looks like and highlights anomalies, which are sent back via a mobile app. Companies then know when to perform maintenance. That’s a big deal.
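Monitron’s models are built and run by AWS, but the learn-normal-then-flag-deviation idea can be sketched in a few lines. The toy example below (the readings are invented, and the real service is far more sophisticated) flags vibration readings that stray far from a healthy baseline:

```python
import statistics

# Toy illustration only: Monitron's actual models are AWS-managed. This
# sketches the core idea: learn what "normal" vibration looks like from a
# baseline window, then flag readings that deviate sharply from it.

def detect_anomalies(baseline, readings, n_sigma=3.0):
    """Flag readings more than n_sigma standard deviations from baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in readings if abs(x - mean) > n_sigma * stdev]

baseline = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08]  # healthy vibration (mm/s)
readings = [1.01, 0.97, 3.4, 1.03]  # 3.4 looks like a developing bearing fault
print(detect_anomalies(baseline, readings))
```

In Monitron’s case, the flagged anomaly is what arrives as an alert in the mobile app, prompting maintenance before the failure happens.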
Panorama addresses a related challenge: computer vision. It is a new hardware appliance that lets organizations add computer vision to the cameras they already have on premises.
There are many scenarios where split-second decisions need to be made, for example on production lines, or in keeping people socially distanced. With no time to send information to the cloud for an answer, decisions must be made on the ground in real time. What is needed are cameras powerful enough to run sophisticated computer vision models at the edge. And customers don’t want to rip out the cameras they have already installed.
The AWS Panorama Appliance connects to the network and recognizes video streams from the facility’s other cameras, handling up to 20 concurrent streams per appliance. Panorama includes pre-built computer vision models optimized by industry.
Panorama is a new way for customers to inspect parts on manufacturing lines, ensure that safety protocols are followed, or analyze traffic in retail stores.
While it’s impossible to do justice to a massive event like AWS re:Invent in a single article, I hope I have helped by highlighting the most impactful non-compute announcements and why each matters. Stay tuned for my EC2 announcement analysis.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.