Hat tip to Ahmed Bouzid and his wonderful “Lingofest” event. Here is a presentation by Cynthia Holcomb entitled “Why has voice shopping failed to exist?”…
Hat tip to Ahmed Bouzid of Witlingo for pointing out this podcast by voice tech and audio specialist Suze Cooper and Twitter marketing expert Madalyn Sklar, covering all you need to know about social audio…
Here’s a teaser from the “RAIN” agency’s weekly note:
Millions of calls are answered in call centers per day, generating conversations rife with insights into customer behaviors and preferences. This week, we take a look into how companies are analyzing conversations to enhance the customer experience. Voice technology companies like Observe.AI and CallMiner are deploying tools that analyze the interactions (including sentiment and even silences) between customers and call center representatives, providing recommendations on how these employees can improve their service.
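To make the idea concrete, here is a minimal sketch of the kind of conversation analysis described above: scanning a timestamped call transcript for long silences and computing a rough sentiment score. The lexicon, thresholds, and function names are invented for illustration; products like Observe.AI and CallMiner use far more sophisticated models than this word-list approach.

```python
# Toy sentiment lexicon -- purely illustrative, not any vendor's actual word list.
POSITIVE = {"great", "thanks", "resolved", "happy"}
NEGATIVE = {"frustrated", "cancel", "angry", "unacceptable"}

def analyze_call(turns, silence_threshold=4.0):
    """Analyze a call transcript.

    `turns` is a chronological list of (start_sec, end_sec, text) tuples.
    Returns a naive net sentiment score and any gaps between turns that
    exceed `silence_threshold` seconds.
    """
    score, silences = 0, []
    for i, (start, end, text) in enumerate(turns):
        words = [w.strip(".,!?") for w in text.lower().split()]
        score += sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if i > 0:
            prev_end = turns[i - 1][1]
            if start - prev_end >= silence_threshold:
                silences.append((prev_end, start))
    return {"sentiment": score, "long_silences": silences}

turns = [(0.0, 3.0, "I am frustrated and want to cancel"),
         (8.5, 10.0, "Thanks, that resolved it")]
print(analyze_call(turns))
# -> {'sentiment': 0, 'long_silences': [(3.0, 8.5)]}
```

A real system would replace the word lists with a trained sentiment model and segment audio rather than text, but the output shape, per-call scores plus flagged silences, is the kind of signal these tools surface to coach representatives.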
With information on what a customer is searching for, brands like Spotify and Amazon are hoping to personalize content in real time — setting the stage for how conversation analysis might be used in phone calls, voice experiences, and more to elevate marketing.
Here’s a note from the “RAIN” agency:
In the last few months, Google has rolled out several features to position Google Assistant as your go-to for quickly completing tasks on mobile devices. Memory, which is in testing with employees, allows Android users to use voice commands to save anything from pictures and links to screenshots and reminders. April’s Assistant updates let users find their phones with their Nest speakers or smart displays, as well as automatically fill out payment information in the Google Android app for food pickup. Up until now, most of these voice experiences have required users to invoke the wake word “Hey Google.”
But this week, the company has been testing a new feature where mobile users can manage alarms and timers without using a wake word — functionality that Google smart display owners have already been enjoying. By eliminating the traditional wake word in these instances, tech companies do open the door to questions about just how many phrases their assistants are passively listening for, and under what circumstances.
Here’s the news from the “RAIN” agency:
With the momentum behind hands-free music streaming on smart speakers, brands like Pandora and Spotify are elevating their voice strategies through custom assistants tailored to their apps and users. Voice Mode, Pandora’s mobile voice assistant capability, was introduced in 2019 for listeners to control their music with commands.
This week, Spotify is officially rolling out its voice assistant within its app, along with developing a voice-activated hardware device for the car. As these two platforms’ assistants evolve, not only are advanced music interactions and recommendations made possible, but advertising and emotional sentiment capabilities are also on the horizon. As a result, we see voice tech becoming more than just a tool to choose your favorite song with a simple command, but rather a way to enable personalized experiences, changing the way people use streaming services.
This voicebot.ai article covers something awesome: leveraging Alexa as a social tool to connect with family & friends by sharing songs. Wild.
Here is an excerpt from the piece:
Alexa users listening to a song on an Echo smart speaker or their smartphone can ask the voice assistant to share the music with any contact who owns an Echo or has the Alexa mobile app. The recipient will get a notification, and Alexa will ask if they want to hear the song and what device they want to play it on, allowing them to send a reaction back to the sender.
The music will play on the recipient’s default music streaming service if the song is available there, or use another streaming service if not. Alexa can tap an enormous catalog across Amazon Music, Apple Music, iHeartRadio, TuneIn, and Radio.com, but in case none of those have the song being shared, Alexa will offer to play a music station that might be relevant to the song title or artist.
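The fallback flow in that excerpt (default service first, then the other catalogs, then a related station) can be sketched as a simple decision function. Everything below is a hypothetical illustration of the described behavior; these function names and data structures are not Amazon’s actual API.

```python
# The catalogs named in the excerpt, checked in order after the default service.
CATALOGS = ["Amazon Music", "Apple Music", "iHeartRadio", "TuneIn", "Radio.com"]

def resolve_shared_song(song, default_service, availability):
    """Decide how to play a shared song for the recipient.

    `availability` maps a service name to the set of songs it carries
    (a stand-in for a real catalog lookup). Returns ("play", service)
    when some catalog has the track, otherwise ("offer_station", None)
    to offer a station related to the song or artist.
    """
    # First preference: the recipient's default streaming service.
    if song in availability.get(default_service, set()):
        return ("play", default_service)
    # Otherwise, fall back to any other catalog that carries the track.
    for service in CATALOGS:
        if service != default_service and song in availability.get(service, set()):
            return ("play", service)
    # No catalog has it: offer a relevant station instead.
    return ("offer_station", None)

availability = {"Amazon Music": {"Song A"}, "TuneIn": {"Song B"}}
print(resolve_shared_song("Song B", "Amazon Music", availability))
# -> ('play', 'TuneIn')
```

The point of the sketch is the ordering: the recipient’s own service wins when it can, and the experience degrades gracefully (alternate catalog, then a station) rather than failing outright.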
Here is a note from the “RAIN” agency:
Voice is a natural channel for conveying our emotions and feelings. However, voice technology is still trying to crack sentiment analysis and how those data points can inform the creation of emotionally intelligent voice experiences and assistants. We have seen voice assistants like Amazon Alexa and Google Assistant expand their speaking styles to include different emotions in certain responses, reflecting more humanlike interactions.
However, monitoring emotions on the consumer side is still a nascent technology. Amazon has taken some steps toward realizing this with its health and wellness wearable Halo, which tracks users’ tone through their voices to make them more aware of their communication styles. This week, we’ve seen a new update in emotion recognition with Spotify’s patent approval of technology that analyzes listeners’ moods. Even though the patent only points to a small number of features and a targeted use case, we are beginning to see how voice technology might leverage sentiment to provide more relevant recommendations and experiences for consumers in many contexts.
Here’s the intro from this “voicebot.ai” article:
Facebook is building a virtual assistant to digest, summarize, and read articles for users, according to a BuzzFeed report on a closed company meeting. The tool, named TL;DR (internet slang for “too long, didn’t read”), would use AI to condense articles into bullet points and read them out loud for users who want to skip reading for themselves. The social media giant also discussed other planned AI projects, including creating a neural sensor to detect thoughts as they form and turn them into commands for AI assistants.
Here’s the intro from this voicebot.ai article:
Voice assistants may soon need to pay Wikipedia to find answers to some of the questions users pose. The Wikimedia Foundation, the umbrella organization that encompasses Wikipedia and its sibling wiki-projects, is launching Wikimedia Enterprise to start packaging and selling Wikipedia’s content to Apple, Amazon, Facebook, and Google, including their respective voice assistants, as first reported by Wired.
Amazon has introduced a new tool for companies looking to build a branded voice assistant. Here’s some insight from the “RAIN” agency:
Although the voice ecosystem has long been dominated by the likes of Amazon, Google, and Apple, many brands are exploring the arena of custom owned assistants that give brands more control over data and user experience without a third-party intermediary. With Amazon’s latest announcement, the line is blurring between big tech voice assistants and brand-owned custom assistants.
A handful of companies have already created devices that allow for the coexistence of their own assistant alongside a mainstream assistant. European telecommunications companies including Vodafone Spain, Orange, and Deutsche Telekom have all taken this approach and released devices integrated with both their custom assistant and Alexa. By using Amazon’s Alexa Custom Assistant initiative, many more companies will be thinking about how they can leverage the best of both worlds, bringing a generalist assistant alongside a specialist.
Here’s an article from “voicebot.ai” about Amazon’s big announcement…