In the last few months, Google has rolled out several features to position Google Assistant as your go-to for quickly completing tasks on mobile devices. Memory, which is in testing with employees, lets Android users use voice commands to save anything from pictures and links to screenshots and reminders. April’s Assistant updates let users find their phones with their Nest speakers or smart displays, as well as automatically fill in payment information in the Google Android app for food pickup. Up until now, most of these voice experiences have required users to invoke the wake word “Hey Google.”
But this week, the company has been testing a new feature that lets mobile users manage alarms and timers without a wake word — functionality that Google smart display owners have already been enjoying. By eliminating the traditional wake word in these instances, tech companies open the door to questions about just how many phrases their assistants are passively listening for, and under what circumstances.
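The article doesn’t say how Google implements this, but wake-word-free features like alarm control typically work by listening for a small, closed set of permitted phrases rather than open-ended speech. Here is a minimal sketch of that idea; the phrases and intent names are invented for illustration:

```python
import re
from typing import Optional

# Hypothetical closed grammar: only these phrases are acted on.
# Everything else is ignored -- no wake word, but no open listening either.
COMMANDS = {
    r"\b(stop|silence) (the )?(alarm|timer)\b": "STOP",
    r"\bsnooze\b": "SNOOZE",
    r"\bcancel (the )?timer\b": "CANCEL_TIMER",
}

def match_command(transcript: str) -> Optional[str]:
    """Return the intent if the transcript matches a permitted phrase."""
    text = transcript.lower().strip()
    for pattern, intent in COMMANDS.items():
        if re.search(pattern, text):
            return intent
    return None

print(match_command("stop the alarm"))      # STOP
print(match_command("what's the weather"))  # None
```

Because the phrase set is tiny and fixed, it can run entirely on-device, which is one way vendors address the “what is it listening for?” question.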
With the momentum behind hands-free music streaming on smart speakers, brands like Pandora and Spotify are elevating their voice strategies through custom assistants tailored to their apps and users. Voice Mode, Pandora’s mobile voice assistant capability, was introduced in 2019 for listeners to control their music with commands.
Now this week, Spotify is officially rolling out its voice assistant within its app, along with developing a voice-activated hardware device for the car. As these two platforms’ assistants evolve, they not only enable more advanced music interaction and recommendations; advertising and emotional-sentiment capabilities are also on the horizon. As a result, voice tech is becoming more than a tool for choosing your favorite song with a simple command; it is a way to enable personalized experiences, changing how people use streaming services.
Amazon has introduced a new tool for companies looking to build a branded voice assistant. Here’s some insight from the “Rain” agency:
Although the voice ecosystem has long been dominated by the likes of Amazon, Google, and Apple, many brands are exploring the arena of custom owned assistants that give brands more control over data and user experience without a third-party intermediary. With Amazon’s latest announcement, the line is blurring between big tech voice assistants and brand-owned custom assistants.
A handful of companies have already created devices that allow for the coexistence of their own assistant alongside a mainstream assistant. European telecommunications companies including Vodafone Spain, Orange, and Deutsche Telekom have all taken this approach and released devices integrated with both their custom assistant and Alexa. By using Amazon’s Alexa Custom Assistant initiative, many more companies will be thinking about how they can leverage the best of both worlds, bringing a generalist assistant alongside a specialist.
Here’s an article from “voicebot.ai” about Amazon’s big announcement…
With voicebot.ai reporting that Clubhouse has surpassed 10 million members – I am among them – I put together this 12-minute video explaining how Clubhouse works and my two cents about whether you should try it. With a few bonus tips if you do indeed give it a “go”…
Walmart is expanding its use of voice technology. The company announced today it’s taking its employee-assistance voice technology, dubbed “Ask Sam,” and making it available to associates at over 5,000 Walmart stores nationwide. The tool allows Walmart employees to look up prices, access store maps, find products, view sales information, check email and more. In recent months, Ask Sam has also been used to access COVID-19 information, including the latest guidelines, guidance and safety videos.
Ask Sam was initially developed for use in Walmart-owned Sam’s Club stores, where it rolled out across the U.S. in 2019. Because it uses voice, Ask Sam can cut the time it takes to find information compared with typing a query on a small screen. This allows employees to better engage with customers instead of spending time on their devices looking for information.
I’m a big music lover – so I was excited to see this voicebot.ai piece indicating that Spotify might be building voice activation into their service. Here’s an excerpt:
The screenshot shared by Wong shows a new Voice sub-menu in the Spotify app where users grant permission for Spotify to use their microphone. Spotify will apparently only listen for the wake word when the app is open on the screen. That’s a big hint as to how Spotify might envision people using the voice service. The only time people are likely to keep Spotify open on their device is when they can’t hold it, such as when they are driving. The voice assistant may also be tied to the in-car device for playing music and podcasts that Spotify announced it was working on a year ago.
The extent of the voice assistant isn’t known, but presumably, it will include search and playback controls. Spotify has yet to share any information about its plans for a voice assistant publicly, so there’s no timeline either, but the foundation is there in the app.
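The foreground-only listening described in the excerpt is easy to picture as a simple gate: the wake-word detector only runs while the app is visible. This is a hypothetical sketch, not Spotify’s implementation — the class and method names are invented, and a real app would hook platform lifecycle callbacks and an on-device keyword-spotting model:

```python
class WakeWordGate:
    """Toy model of a wake-word detector that is active only in the foreground."""

    def __init__(self, wake_word: str = "hey spotify"):
        self.wake_word = wake_word
        self.foreground = False  # mic stays off until the app is visible

    def on_app_foreground(self):
        self.foreground = True

    def on_app_background(self):
        self.foreground = False

    def handle_audio(self, transcript: str) -> bool:
        """True only if the app is visible AND the wake word was heard."""
        return self.foreground and self.wake_word in transcript.lower()

gate = WakeWordGate()
print(gate.handle_audio("Hey Spotify, play jazz"))  # False: app backgrounded
gate.on_app_foreground()
print(gate.handle_audio("Hey Spotify, play jazz"))  # True
```

Gating on the foreground state keeps the mic off except in the exact scenario the article describes: the app open on screen, likely in a car.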
The past few days I’ve been blogging about this podcast, in which Voicebot.ai’s Bret Kinsella talks with John Kelvie from Bespoken about how “domains” will replace voice apps. I wanted to offer one last excerpt from John’s blog, pulled from the bottom about how companies that are building their own voice assistants might be better served doing something else:
The devices in column one are inevitable and in some cases are already essential. Column two? Many may seem silly but some nonetheless will prove indispensable.
And these are JUST the devices with voice capabilities embedded – the march of voice continues to be the march of IoT. Voice is our point of control for the ubiquitous computing power that exists around us. If you imagine a world in which the average cell phone owner has just ONE of each of the above items, the coming wave of voice-enabled devices looks like a tsunami. And if you factor in the devices under their control (thermostats, lights, power switches, appliances, etc.), it becomes even more staggering.
And the very good news is third parties have a huge role to play – the big guys need to provide the platforms and the device access, but they cannot do all the fulfillment. The future of the ecosystem is everyone playing nicely together in this new query-centric, domain-centric world, in which first and third parties work together seamlessly.
For the platforms, it’s the chance to employ, at massive scale, the wisdom of the crowd – the wisdom of every brand, app builder, API and website on earth. What an amazing achievement it will be.
For third parties, it’s the opportunity to meet users, wherever they are, whatever they are doing – properly done, they will be just a short trip of the tongue away.
A few weeks ago, I blogged about this free 48-page playbook by “360i” about what you should know about voice from a marketing perspective. To help its clients approach voice opportunities, 360i has spent the past year building a “Voice Search Monitor” to study how the major voice assistants respond to various scenarios. They’ve witnessed changes over time with this monitor – and learned that, at least as of right now, Google knows a lot more than Alexa (5x more on average). Google Assistant prefers to draw from location-based data for retail queries, whereas Alexa relies on its own top-matching product recommendations.
This excerpt from 360i’s playbook shows the kinds of questions that 360i’s “Voice Search Monitor” is asking the major voice assistants to see how they tick:
– Did the assistant have an answer or not?
– Was the answer good, bad, or incomplete?
– How do the assistants respond to commands vs. informational questions?
– Do performance and relevance differ by topic or industry vertical?
– Does the language used to express the same intent create different results?
– How do personalization and history impact the experience and results?
– How does this change over time for the same questions?
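360i hasn’t published how its monitor works internally, but a tool answering the questions above would, at minimum, log each assistant’s response to each query and score it along those axes. Here is a minimal sketch of one such axis (did the assistant have an answer?); the assistants, queries, and verdicts are invented sample data:

```python
from collections import defaultdict

# Hypothetical monitor log: (assistant, query, outcome, quality verdict).
log = [
    ("Google Assistant", "where can I buy running shoes", "answered", "good"),
    ("Google Assistant", "buy running shoes",             "answered", "incomplete"),
    ("Alexa",            "where can I buy running shoes", "answered", "incomplete"),
    ("Alexa",            "buy running shoes",             "no answer", None),
]

def answer_rate(entries):
    """Fraction of queries each assistant answered at all --
    the first question a monitor like this would track."""
    totals, hits = defaultdict(int), defaultdict(int)
    for assistant, _query, outcome, _quality in entries:
        totals[assistant] += 1
        if outcome == "answered":
            hits[assistant] += 1
    return {a: hits[a] / totals[a] for a in totals}

print(answer_rate(log))  # {'Google Assistant': 1.0, 'Alexa': 0.5}
```

Repeating the same queries over months, as 360i does, would turn snapshots like this into the over-time trends the last question asks about.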