Amazon has introduced a new tool for companies looking to build a branded voice assistant. Here’s some insight from the “Rain” agency:
Although the voice ecosystem has long been dominated by the likes of Amazon, Google, and Apple, many brands are exploring the arena of custom owned assistants that give brands more control over data and user experience without a third-party intermediary. With Amazon’s latest announcement, the line is blurring between big tech voice assistants and brand-owned custom assistants.
A handful of companies have already created devices that allow their own assistant to coexist alongside a mainstream assistant. European telecommunications companies including Vodafone Spain, Orange, and Deutsche Telekom have all taken this approach and released devices integrated with both their custom assistant and Alexa. With Amazon’s Alexa Custom Assistant initiative, many more companies will be thinking about how they can leverage the best of both worlds, pairing a generalist assistant with a specialist.
Here’s an article from “voicebot.ai” about Amazon’s big announcement…
With voicebot.ai reporting that Clubhouse has surpassed 10 million members – I am among them – I put together this 12-minute video explaining how Clubhouse works and my two cents about whether you should try it. With a few bonus tips if you do indeed give it a “go”…
Walmart is expanding its use of voice technology. The company announced today it’s taking its employee-assistance voice technology, dubbed “Ask Sam,” and making it available to associates at over 5,000 Walmart stores nationwide. The tool allows Walmart employees to look up prices, access store maps, find products, view sales information, check email and more. In recent months, Ask Sam has also been used to access COVID-19 information, including the latest guidelines and safety videos.
Ask Sam was initially developed for use in Walmart-owned Sam’s Club stores, where it rolled out across the U.S. in 2019. Because of its use of voice tech, Ask Sam can speed up the time it takes to get to information versus typing a query on the small screen. This allows employees to better engage with customers instead of spending time on their device looking for information.
I’m a big music lover – so I was excited to see this voicebot.ai piece indicating that Spotify might be building voice activation into their service. Here’s an excerpt:
The screenshot shared by Wong shows a new Voice sub-menu in the Spotify app where users grant permission for Spotify to use their microphone. Spotify will apparently only listen for the wake word when the app is open on the screen. That’s a big hint as to how Spotify might envision people using the voice service. The only time people are likely to keep Spotify open on their device is when they can’t hold it, such as when they are driving. The voice assistant may also be tied to the in-car device for playing music and podcasts that Spotify announced it was working on a year ago.
The extent of the voice assistant isn’t known, but presumably, it will include search and playback controls. Spotify has yet to share any information about its plans for a voice assistant publicly, so there’s no timeline either, but the foundation is there in the app.
The past few days, I’ve been blogging about this podcast, in which Voicebot.ai’s Bret Kinsella talks with John Kelvie from Bespoken about how “domains” will replace voice apps. I wanted to offer one last excerpt from John’s blog, pulled from the end of the post, about how companies that are building their own voice assistants might be better served doing something else:
The devices in column one are inevitable and in some cases are already essential. Column two? Many may seem silly but some nonetheless will prove indispensable.
And these are JUST the devices with voice-capabilities embedded – the march of voice continues to be the march of IoT. Voice is our point of control for the ubiquitous computing power that exists around us. If you imagine a world in which the average cell phone owner has just ONE of each of the above items, the coming wave of voice-enabled devices looks like a tsunami. And if you factor in the devices under their control (thermostats, lights, power switches, appliances, etc.), it becomes even more staggering.
And the very good news is third parties have a huge role to play – the big guys need to provide the platforms and the device access, but they cannot do all the fulfillment. The future of the ecosystem is everyone playing nicely together in this new query-centric, domain-centric world, in which first and third-parties work together seamlessly.
For the platforms, it’s the chance to employ, at massive scale, the wisdom of the crowd – the wisdom of every brand, app builder, API and website on earth. What an amazing achievement it will be.
For third parties, it’s the opportunity to meet users, wherever they are, whatever they are doing – properly done, they will be just a short trip of the tongue away.
A few weeks ago, I blogged about this free 48-page playbook by “360i” about what you should know about voice from a marketing perspective. To help instruct their clients in how to approach voice opportunities, 360i has built a “Voice Search Monitor” over the past year to study how the major voice assistants respond to various scenarios. They’ve witnessed changes over time with this monitor – and learned that, at least as of right now, Google knows a lot more than Alexa (5x more on average). Google Assistant prefers to draw from location-based data for retail queries, whereas Alexa relies on its own top-matching product recommendations.
This excerpt from 360i’s playbook shows the kinds of questions that 360i’s “Voice Search Monitor” is asking the major voice assistants to see how they tick:
– Did the assistant have an answer or not?
– Was the answer good, bad, or incomplete?
– How do the assistants respond to commands vs. informational questions?
– Does performance and relevance differ by topic or industry vertical?
– Does the language used to express the same intent create different results?
– How does personalization and history impact the experience and results?
– How does this change over time for the same questions?
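As a thought experiment, a monitoring harness along these lines can be sketched in a few lines of Python. This is purely illustrative – 360i hasn’t published how the Voice Search Monitor actually works, and the grade labels and `summarize` function below are my own invention:

```python
from collections import Counter

def summarize(results):
    """Tally graded assistant responses.

    results: list of (assistant, query, grade) tuples, where grade is
    one of 'good', 'bad', 'incomplete', or 'no_answer' (hypothetical
    labels, loosely modeled on 360i's published question list).
    """
    by_assistant = {}
    for assistant, query, grade in results:
        by_assistant.setdefault(assistant, Counter())[grade] += 1

    summary = {}
    for assistant, counts in by_assistant.items():
        total = sum(counts.values())
        answered = total - counts["no_answer"]  # Counter returns 0 for missing keys
        summary[assistant] = {
            "answer_rate": answered / total,   # did it answer at all?
            "good_rate": counts["good"] / total,  # was the answer good?
        }
    return summary

# Toy sample – not real monitor data.
results = [
    ("google", "store hours near me", "good"),
    ("google", "return policy", "incomplete"),
    ("alexa", "store hours near me", "no_answer"),
    ("alexa", "return policy", "good"),
]
print(summarize(results))
```

Run the same question set every month and diff the summaries, and you’d have a crude way to see the “changes over time” 360i describes.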
Geez. It was only last week that I was complaining about mass media not covering voice enough. They must have heard me. Yesterday, “USA Today” ran this article entitled “Hey, Google, Siri or Alexa: Which voice assistant handles these 100 questions best?” – here’s an excerpt:
Still, survey after survey shows that Google is the smartest of the personal assistants, with Amazon’s Alexa a close No. 2, and Apple’s Siri behind. Once again, we sat down to ask a series of questions – 100 of them – to the assistants to see how they would fare. But we did it differently this time. I wondered: How would each fare if we asked a different set of questions? Namely, what if we asked Amazon’s suggested Alexa queries to Google and vice versa with Siri?
This survey had a different victor, Alexa, with only 22 wrong replies out of 100, to 25 for Google and 43 for Siri. More importantly, it showed that each voice platform has a distinct set of strengths. Apple’s maligned Siri is best for the basics (sending a text, composing an e-mail, adding calendar items), while Google is usually the smartest for math and trivia, and the best in the smart home for quick setup and ease of use. Amazon has far more skills and things you can do than the other two.
The article notes some notable voice fails – and successes – and this tip to remember when providing an utterance: “If at first you don’t succeed, keep trying. Often it’s the phrase, the diction, or, there might be a skill in the Alexa or Google Home app to enable the command, like I found for renting a car, hailing a cab and using Open Table for reservations.”
I haven’t spent any time trying to figure out how the speech recognition & other algos work for voice assistants. But I had read that Amazon’s voice devices were integrated with Wolfram|Alpha’s knowledge engine a few months ago. So I was eager to read this piece by Joe Murphy, vocalize.ai’s CEO, about whether the combination truly has enabled Alexa to answer more questions and whether it’s doing so with greater accuracy.
Here’s an excerpt from Joe’s article, revealing his finding that the claim that Alexa has gotten smarter is partially true – but there’s some memory loss:
The newly announced Alexa and Wolfram|Alpha integration raises an interesting question. Is Alexa really getting smarter? How do we even quantify the claim? At Vocalize.ai we set out to find the answer and what we discovered is actually quite surprising. It turns out that over time, Alexa was getting the capability to answer more questions, but she was also forgetting answers that she used to know. How is it possible that Alexa could forget facts and answers? To be honest, we are not yet sure, but the data is clear… Alexa is both providing new answers and at the same time, forgetting previously known answers.
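Joe’s “new answers vs. forgotten answers” comparison boils down to set differences between two snapshots of the questions an assistant could answer. Here’s a minimal sketch of that idea – the function name and the sample questions are hypothetical, not Vocalize.ai’s actual test data:

```python
def answer_drift(before, after):
    """Compare two snapshots of questions an assistant answered correctly.

    Returns (newly_answered, forgotten): questions gained since the first
    snapshot, and questions it used to answer but no longer does.
    """
    before, after = set(before), set(after)
    return after - before, before - after

# Toy snapshots from two hypothetical test runs, months apart.
jan_run = {"boiling point of water", "capital of France", "speed of light"}
jun_run = {"capital of France", "speed of light", "mass of an electron"}

new, forgotten = answer_drift(jan_run, jun_run)
```

In this toy example the assistant picked up one new answer while dropping another – exactly the “smarter, but with memory loss” pattern Joe describes.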