Alexa Live Was Amazon's Biggest Developer Launch Ever

Two weeks ago, Amazon held its virtual Alexa Live 2020 event for Alexa voice developers, device makers, and business leaders alike—anyone in the business of leveraging Amazon’s highly popular smart assistant. Alexa boasts a developer community of approximately 700,000, many of whom look to this event for a roadmap to help guide their efforts in the coming year.

At the event, Amazon plotted out a course for the future of Alexa, unveiling a whole slew of new capabilities, APIs, and tools for the intelligent platform. As a tech analyst, I believe the rise of these intelligent voice assistants is one of the more revolutionary technologies to come of age in recent memory, and these things just keep getting smarter and more capable. It’s an exciting area to cover. Notably, with 30 different announcements, this marked what I believe is the biggest basket of Alexa news ever released at the same time. Let’s take a look at some of the more impactful news items to come out of the event.

Alexa becomes a better conversationalist

The Alexa announcements can more or less be broken down into four different buckets. The first group pertains to making Alexa interactions less deterministic and more natural—conversation based, as opposed to being dependent on the strict phrasing and wording of commands. To that end, the company announced it is in the process of implementing deep neural networks, or DNNs, for the purpose of taking Alexa’s natural language comprehension to the next level. By utilizing this technology, the company says it is seeing, on average, a 15% improvement in accuracy for natural language understanding. That is a significant improvement.

On a related note, Amazon also lifted the curtain on Alexa Conversations, an AI-driven approach to dialog management for natural language that is now in beta. Businesses can utilize this solution to develop skills for their customers that allow them to converse in a more “natural, unconstrained” manner. In other words, Alexa will in theory be able to comprehend customer queries regardless of the phrasing, or order of phrases, spoken. Developers supply a handful of canonical dialogues, from which Alexa learns the different possible paths a conversation might take, with the ability for the customer to correct and clarify if the assistant misunderstands. Obviously, the end goal for these voice assistants is to be able to have totally natural, easy, essentially “human” conversations with them. Any solution that moves the ball further down the field toward that goal is a winner in my view.

Amazon also announced that its name-free interactions, or NFI, toolkit is now in preview. This offering, according to Amazon, will allow developers to give Alexa more signals to consider when launching their skills. As many as five different launch phrases can now be added to a skill.

APL, APIs and more for increased immersion

The next bucket of news falls in the category of solutions that enable developers to make Alexa experiences more immersive for customers. Amazon’s new Alexa Presentation Language (APL) for Audio falls into this group. APL, in short, lets developers create interactive visual experiences for supported Amazon devices—think videos, slideshows, animations, and things of that ilk. Amazon says APL for Audio will give organizations the ability to mix audio for their immersive experiences at runtime, including music, sound effects and more. This stands to save developers the hassle, time and cost of having to pre-mix the audio for their experiences.
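To make that concrete, here is a minimal sketch of what an APL for Audio document might look like, expressed as a JavaScript object a skill backend could return. The component names, `filters` property, and version string reflect my reading of the format, and the music URL is hypothetical—treat this as an illustration under those assumptions rather than a verified example.

```javascript
// Hedged sketch: an APL for Audio (APLA) document that mixes synthesized
// speech over quieter background music at runtime, instead of shipping a
// single pre-mixed audio file. Property names are my best understanding
// of the APLA format; the music URL is hypothetical.
function buildWelcomeMix(musicUrl) {
  return {
    type: "APLA",
    version: "0.91",
    mainTemplate: {
      parameters: ["payload"],
      item: {
        type: "Mixer", // a Mixer plays its child components simultaneously
        items: [
          { type: "Speech", content: "Welcome back! Here is today's update." },
          {
            type: "Audio",
            source: musicUrl, // hypothetical music bed
            filters: [{ type: "Volume", amount: 0.2 }] // duck the music under the speech
          }
        ]
      }
    }
  };
}
```

The point of the structure is that the ducking happens at render time: change the speech text or swap the music URL and the platform produces a fresh mix, which is precisely the pre-mixing work this feature is meant to eliminate.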

Also, in the realm of enabling more immersive experiences, Amazon unveiled its new Alexa Web API for Games. This solution purports to enable developers to create animated, multimodal games for Alexa devices with screens, using the web technologies of their choice, including HTML5, Web Audio, CSS, JavaScript and WebGL.
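As a rough sketch of how such a game might talk to its skill, the snippet below assumes the `Alexa` JavaScript object that the Alexa web host injects on supported devices. The `version` string, handler names, and message shape here are assumptions on my part, and the guard lets the code no-op outside an Alexa host.

```javascript
// Hedged sketch: connecting a browser-based game to its Alexa skill.
// Assumes an `Alexa` global injected by the Alexa web host on supported
// devices; the API names below are illustrative, not verified.
function wireAlexa(onSkillMessage) {
  if (typeof Alexa === "undefined") {
    // Not running inside an Alexa web host—degrade gracefully.
    return Promise.resolve(null);
  }
  return Alexa.create({ version: "1.1" }).then(({ alexa }) => {
    alexa.skill.onMessage(onSkillMessage);      // messages pushed from the skill backend
    alexa.skill.sendMessage({ type: "ready" }); // tell the skill the page has loaded
    return alexa;
  });
}
```

The appeal of this model is that the voice side (the skill) and the visual side (the web page) stay loosely coupled, exchanging small messages rather than sharing state directly.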

A new feature called Skill Resumption is also now in preview. This is basically what it sounds like—after a customer switches from one skill to perform another activity with Alexa, they will be able to return to the skill they were originally running. Resumption can be triggered either automatically, when new information is available, or by voice command.

Amazon also announced the latest update to its APL, designed to aid developers in building more immersive visual experiences. Version 1.4 implements drag-and-drop UI controls, editable text boxes, and the ability for users to navigate back to the previous screen.

Connectivity

The third category of solutions centers around those that make it easier to connect devices to Alexa. This category basically consists of a new module for Alexa Connect Kit (ACK), a chipset announced last year that device makers can connect to any device’s MCU to make it smart and connected via Alexa. Amazon says the new module, ACK Module with Espressif Chipset, is up to 50% cheaper than previous models. ACK is an invaluable tool in that it allows device makers to turn their products into Alexa-compatible devices without having to write an Alexa skill, manage cloud services, or deal with developing network and security firmware. 

This is a big deal, and an under-reported one, as it greatly reduces the overall, long-term cost of adding Alexa to a device.

Driving business with Alexa

The last bucket of these new technologies revolves around solutions that help developers drive their businesses. The big one here is Alexa for Apps, also in preview, which Amazon says will enable developers to merge Alexa skills with mobile iOS and Android applications. Amazon claims this will provide users with the best of both voice and touch control—the ease and convenience of hands-free interaction, with the ability to get more detailed and granular using traditional touch control if desired. There is also a new ability, currently in beta, in which developers can add links that launch their skills from their mobile apps, websites, and advertisements with the click of a button.

The last piece of news that stuck out to me was the new ability for customers to perform in-skill purchases via Echo screen devices, voice control, or on Amazon.com. Amazon says this is targeted at customers who desire a more traditional purchasing experience. I see this as a good move that will make a certain subset of users feel more comfortable using Alexa for their shopping needs. Comfortable customers are happy customers, and for the past 15 years Amazon has been better than anyone at reducing buying friction. Think “one click to purchase.”

Wrapping up

It was a good day for news at Alexa Live 2020, and I think all the new features and offerings announced are worthwhile additions to Alexa’s value proposition for end users and, hence, its developers. There’s a reason Amazon has been so successful in this category, and I believe it has everything to do with the way it continues to listen, iterate, and improve on Alexa as the underlying technology advances. Alexa continues to perfect its natural language comprehension while giving developers even more options and flexibility in how they design and implement skills. I can’t wait to see how these new capabilities are put to use.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.