Yesterday, Amazon hosted its third annual Alexa Live event for its 900,000+ developers and countless Alexa device users. Last year’s event brought many big changes that charted the future of the Alexa ecosystem. Alexa Live draws developers, device makers, startups, entrepreneurs, press, and analysts to see what Alexa has next for its community.
One statistic Amazon shared that caught my attention is that one in four Alexa Smart Home interactions is now initiated by Alexa rather than the customer. That statistic reveals where voice is heading, and I believe Amazon hit it right on the money with its vision statement. Jeff Blankenburg, Chief Technology Evangelist, stated Alexa’s vision: “to be an ambient assistant that is proactive, personal, and predictable, everywhere customers want her to be.” Alexa is meant to stay in the background, assisting customers naturally rather than becoming the next distraction. The unique challenge is routing information to the user end-to-end without compromising that ambiance. To do this, Amazon says it is working to make Alexa’s ambient experience ubiquitous, multimodal, and smarter.
Making Alexa ubiquitous
Amazon announced new interactive, customer-engaging Alexa Presentation Language (APL) features: APL Widgets and Featured Skill Cards. APL Widgets let customers interact with content on the home screen through glanceable, self-updating views of skill content. Featured Skill Cards let developers place their skills on the Echo Show home screen alongside what is already shown there. These features enhance the multimodal experience for both users and developers. Users should be able to engage with the skills they use most and discover new skills in a seamless interaction on the home screen.
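For flavor, APL documents are JSON; below is a minimal standard APL document expressed as a Python dict. The widget-specific schema was still in developer preview at the time of the announcement, so this sketch shows only the ordinary APL skeleton a widget would build on — the text content is illustrative, not from Amazon’s documentation.

```python
# Minimal Alexa Presentation Language (APL) document as a Python dict.
# Widget-specific properties were in developer preview, so only the
# standard APL skeleton is shown; the "text" value is a placeholder.
apl_document = {
    "type": "APL",
    "version": "1.8",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Text",
                "text": "Glanceable skill content goes here",
            }
        ],
    },
}

print(apl_document["type"])  # APL
```

A widget would ship a document like this plus whatever refresh and sizing metadata the preview program defines.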
Amazon announced its Name Free Interaction (NFI) Toolkit at last year’s Alexa Live event, and this year it has made notable improvements to it. The toolkit helps developers get their skills in front of users by surfacing a skill based on a user’s request. Amazon says the toolkit has boosted traffic to useful skills, doubling it in some cases.
NFI Toolkit has a new feature that lets skills be the responses to Alexa’s popular discovery-oriented utterances like “Alexa, tell me a story” or “Alexa, I need a workout.” The NFI Toolkit also has a new personalized skill suggestion feature for users to frequent skills users find most helpful. An example Amazon gave was a customer asking, “Alexa, how did the Nasdaq do today?” and it responds with, “You’ve previously used CNBC skill. Would you like to use it again?” I highlight this example because it brings a personal and ubiquitous experience to skills without being overwhelming.
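The personalization in that example boils down to remembering which skill a user has leaned on for a given kind of request. As a rough sketch — Amazon has not published how its ranking actually works, and these names are hypothetical — a frequency-based suggestion could look like:

```python
from collections import Counter

def suggest_skill(history, category):
    """Return the skill the user has invoked most often for a request
    category, or None if the user has no history in that category.
    Illustrative only -- not the NFI Toolkit's actual logic."""
    counts = Counter(skill for skill, cat in history if cat == category)
    if not counts:
        return None
    skill, _ = counts.most_common(1)[0]
    return skill

# Hypothetical usage history: (skill name, request category) pairs.
history = [
    ("CNBC", "market-news"),
    ("CNBC", "market-news"),
    ("Bloomberg", "market-news"),
    ("Sleep Sounds", "ambient-audio"),
]

print(suggest_skill(history, "market-news"))  # CNBC
print(suggest_skill(history, "weather"))      # None
```

The `None` case is where Alexa would fall back to its generic discovery behavior instead of suggesting a specific skill.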
Amazon is also extending its Name Free Interactions feature to support extra discovery of skills in interactions that can use multiple skills. I think this feature is another great way to enhance customer interaction and increase discoverability.
Another interactive feature Amazon added is Spotlight on Amazon Music, which Amazon says lets artists connect directly with fans by uploading messages that promote new music. Amazon also created Interactive Media Skill Components and Song Request Skill Components that shorten interaction times for radio, podcast, and music providers and give users extra modes of interaction. Users will either love or hate these features, given that most primarily want to listen to music, and music isn’t necessarily an interactive activity.
Making Alexa multimodal
Amazon announced new Food Skills APIs that let developers quickly create food delivery and pickup experiences. One of the toughest choices when going out to eat is deciding on a place. Local food offers and suggestions from Alexa should make the experience much easier for users and, in some cases, help restaurants, stores, and delivery services get their products and services out.
Amazon also has two new features that go hand in hand: Event-Based Triggers and Proactive Suggestions, which let developers build proactive experiences that fire a skill when an event or activity happens. Alexa has also improved Routines with Custom Tasks, which let customers fold skill actions into their routines, and added a feature that hands off experiences started on an Alexa device to a connected smartphone. These features open up Alexa’s multimodal capabilities, and I think users are going to find Alexa a crucial part of their day.
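The core idea behind event-based triggering is inversion of control: the skill subscribes to an event and runs without a user utterance. A minimal publish/subscribe sketch — all names here are illustrative, not the Alexa Skills Kit API — might look like:

```python
# Toy publish/subscribe model of event-based skill triggering.
# Event names and payloads are hypothetical, not Alexa's actual schema.
class EventBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        """Register a callback for an event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        """Invoke every handler for the event; return their responses."""
        return [h(payload) for h in self._handlers.get(event_type, [])]

bus = EventBus()

# A skill registers interest in a smart-home event...
bus.subscribe(
    "doorbell.pressed",
    lambda p: f"Proactive suggestion: show camera feed from {p['device']}",
)

# ...and is triggered when the event occurs, with no user utterance.
print(bus.publish("doorbell.pressed", {"device": "front door"}))
```

In Alexa’s real implementation, the “bus” lives in the cloud and the proactive response is delivered as a suggestion on the device, but the subscribe-then-trigger shape is the same.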
Alexa is also opening its Device Discovery feature to additional Alexa-compatible devices on the same network, allowing device makers to integrate Device Discovery into other smart home devices to create a connected home. Amazon has also upgraded Alexa Guard to connect to smart safety devices around the home, such as smoke, carbon monoxide, and water leak detectors, which can then send notifications.
Making Alexa Smarter
Amazon says customer engagement has doubled for skills built with Alexa Conversations since the feature became generally available. It is expanding Alexa Conversations to a public beta in German and all English locales, and a developer preview in Japanese. It also announced Alexa Skill Components, which help developers build skills faster by plugging foundational skill code into existing voice models and code libraries.
Amazon is also making it easier for users to connect their accounts to a product or service skill, or sign up, using Voice-Forward Account Linking and Voice-Forward Consent. Amazon said it has upgraded its Alexa Skill Design Guide, which codifies lessons learned from Amazon’s developers and the broader skill-building community.
Amazon has included other features that make creating skills and implementing services and products into the Alexa ecosystem much easier:
- Alexa Entities lets skills retrieve information from Alexa’s skill graph.
- Customized Pronunciations lets developers add custom pronunciation to skill models.
- Sample Utterance Recommendation Engine uses grammar induction, sequence-to-sequence transformers, and data filtering to recommend utterances for a developer’s skills.
- Skill A/B Testing lets developers perform A/B tests and make data-driven launch decisions.
- Service and Test-Generation Tool helps developers run consolidated batch tests of skill capabilities.
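To make the A/B testing item above concrete: the standard technique behind tools like this is deterministic bucketing, where hashing a user ID keeps each user in the same variant across sessions without storing any state. A sketch of that general technique (not Amazon’s actual implementation):

```python
import hashlib

def ab_bucket(user_id, experiment, treatment_share=0.5):
    """Deterministically assign a user to 'A' (control) or 'B' (treatment).
    Hashing user_id with the experiment name keeps assignment stable
    across sessions and independent between experiments.
    Generic sketch -- not Amazon's Skill A/B Testing internals."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "B" if fraction < treatment_share else "A"

# The same user always lands in the same bucket for a given experiment.
print(ab_bucket("user-123", "new-welcome-prompt"))
```

With stable buckets in place, a developer can compare engagement metrics between variants and make the data-driven launch decision Amazon describes.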
What’s great about these new features is that Amazon understands it does not have to do all the work of making Alexa smart. It only needs to give developers the tools and opportunity to implement smart interactions and user experiences, and I think these features do exactly that.
Ambient computing is one of the toughest things to get right, but I believe it is the most valuable in the long run. It could take another five to ten years of work to accomplish on a global scale.
Amazon’s Alexa Live Event somehow brought more to the table than last year’s event. A large portion of creating an ambient experience that is ubiquitous, multimodal, and smart is in the hands of developers, device makers, entrepreneurs, and the Alexa community. To create an ambient experience, Amazon must create the tools and opportunities for these partners to do their part.
Amazon created seamless interactions between skills and users with Featured Skill Cards and APL Widgets. It is giving skills more opportunity to be interactive and discoverable with the NFI Toolkit. It is making interactions between users and Alexa a bigger part of people’s day with the Food Skills APIs, Event-Based Triggers, and Proactive Suggestions. And Amazon is successfully making skill building easier and more accessible for developers, something the Alexa ecosystem, from end to end, can appreciate.
Ambient computing is the “win,” and based on what I saw at Alexa Live, Amazon is getting us closer to that reality. It’s a two-horse race with Google, and Amazon appears to be in the lead.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.