According to this Voicebot.ai article, since 2018, hearable ownership among U.S. adults has risen about 23%, and voice assistant use through hearables has grown 103%, from 21.5 million users in 2018 to 43.7 million in 2020. The data show that hearable adoption and voice assistant adoption are complementary technology trends.
Since it’s here in my backyard, I’ve got to blog about it. Here’s an excerpt from this article by Voicebot.ai:
Planet Word combines stories with technology in ten learning galleries. An interactive conversation with a wall of words relates the history and development of English, using what visitors say to pick out words to spotlight with the embedded lights. Technology is also crucial to an exhibit with smart paintbrushes for drawing words. Visitors can also practice virtual conversations with speakers of rare languages.
On the performative end, visitors can show off their own speech-giving talents in a soundproof room with a teleprompter that plays eight famous speeches or in a poetry nook in the library, as well as visit a karaoke area for learning about songwriting and performing their favorites. Outside, artist Rafael Lozano-Hemmer installed a metallic weeping willow that continually plays 364 voices in almost as many languages. The museum is inside the Franklin School, a very appropriate choice, as it was there that Alexander Graham Bell made the first-ever wireless voice transmission, over his photophone.
Here’s some commentary from the “Rain” agency:
Digital conversations with friends and colleagues have traditionally revolved around text – typing on our keyboards or phones to communicate messages. Although the pandemic has created new demand for video conferencing, screen fatigue has started setting in, leaving space for a new kind of communication platform driven by voice. Several companies have started to populate this new audio ecosystem, trying to leverage voice conversations for personal and professional use.
From Discord to Clubhouse, these kinds of voice-driven platforms are becoming more common, and now we’re seeing mainstream platforms like Twitter recognizing value here as well. As many of us continue to work remotely, audio chat is emerging as a unique way to maintain human connection and rapport.
Here’s the intro from this article from “The Verge”:
Twitter plans to take on Clubhouse, the invite-only social platform where users congregate in voice chat rooms, with a way for people to create “spaces” for voice-based conversations right on Twitter. In theory, these spaces could provide another avenue for users to have conversations on the platform — but without harassment and abuse from trolls or bad actors, thanks to tools that let creators of these spaces better control the conversation.
The company plans to start testing the feature this year, but notably, Twitter will be giving first access to some of the people who are most affected by abuse and harassment on the platform: women and people from marginalized backgrounds, the company says.
In one of these conversation spaces, you’ll be able to see who is a part of the room and who is talking at any given time. The person who makes the space will have moderation controls and can determine who can actually participate, too. Twitter says it will experiment with how these spaces are discovered on the platform, including ways to invite participants via direct messages or right from a public tweet.
Here’s the intro from this Voicebot article:
Voicebot’s biannual Smartphone Voice Assistant Consumer Adoption Report considered new questions in 2020 around consumer interest in and experience with voice interaction within mobile apps. A key finding is that consumers have strong interest in voice interactivity within mobile apps and more experience with these features than many people realize. Just over 45% of consumers said they would like voice assistant features within their favorite mobile apps “very much” or that “it would be nice” to have them. That figure compares to just 25% who said they were not interested.
I used to blog more about the challenges of building skills – and how to make it easier for skills to be discovered by folks once you launch them. Here’s a nice piece about skill discovery from voicebot.ai, along with this excerpt:
Zevenbergen’s good fortune to rise to the top of the search results for “What’s my horoscope?” would not be wasted. He had already built user retention elements into his Google Action. First, Zevenbergen wanted to fulfill the intent of the user very efficiently. He had a goal of giving the horoscope as quickly as possible. For new users, that simply required determining their birthday. Not only was there a target of delivering the full horoscope within 10 seconds; the Action also tells new users that they will receive their horoscope within 10 seconds. It sets expectations and removes a potential concern about how much the user may be committing to with this particular voice experience.
Second, he found that shorter, more concise horoscopes were leading to more completed sessions. There may be an opportunity to convey many paragraphs’ worth of horoscope goodness, but that’s often the opposite of what people want when interacting on a smart speaker. They want the facts. Ensuring users heard the entire horoscope before abandoning the session also gave him a captive audience that was still around when the Action offered to add “What’s my zodiac sign” to a routine or notification. “What’s my zodiac sign” is now getting about 5,000 opened notifications from Google Assistant each day. If you compare that to the DAUs for the Action, you will conclude that nearly 85% of daily user sessions are driven by this single technique.
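The retention pattern described above (answer fast, keep the response short, then pitch the routine or notification only after the user has heard the whole answer) can be sketched roughly in Python. To be clear, this is a hypothetical illustration, not Zevenbergen's actual Action code; the horoscope text, function names, and sign lookup are all invented for the sketch.

```python
# Hypothetical sketch of the retention pattern: fulfill the intent quickly,
# keep the answer concise, then offer the routine/notification opt-in last.

HOROSCOPES = {
    "aries": "A short day calls for short answers. Keep it concise.",
    "taurus": "Patience pays off this afternoon.",
}

def zodiac_sign(month: int, day: int) -> str:
    """Very rough sign lookup covering just two signs, for the sketch."""
    if (month == 3 and day >= 21) or (month == 4 and day <= 19):
        return "aries"
    return "taurus"

def handle_horoscope(month: int, day: int, new_user: bool) -> str:
    sign = zodiac_sign(month, day)
    parts = []
    if new_user:
        # Set expectations up front, as the article describes.
        parts.append("You'll have your horoscope in under ten seconds.")
    # Deliver the short, complete answer before anything else.
    parts.append(f"{sign.title()}: {HOROSCOPES[sign]}")
    # Pitch the routine/notification only after the full horoscope.
    parts.append("Want your sign's reading every morning? Say 'add to routine'.")
    return " ".join(parts)
```

The ordering is the whole trick: because the opt-in pitch comes after a deliberately short answer, it reaches users who are still in the session rather than ones who have already bailed out.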
The “Rain” agency just dropped this four-part report that dives deep into the “what,” “why,” and “how” of brands building owned virtual assistants (OVAs). Check it out…
Here’s the intro from this voicebot.ai article:
Amazon’s new Alexa Print feature extends the voice assistant into the physical realm with a slew of new commands that allow the AI to offer a physical response to a question or request. Alexa can print calendars, coloring books, recipes, and puzzles by voice command, adding a third dimension to the digital audio and screen responses available on smart speakers and smart displays. The update also lets voice app developers augment their Alexa skills with printing commands, a capability first promised by Amazon a year ago.
This voicebot.ai article notes that Apple has reduced its prices dramatically for smart speakers, coming out with a “HomePod Mini” for $99. As someone who spent a bundle a decade ago for a home stereo system from Sonos, it’s amazing to see how prices have dropped. Not to mention the dazzling array of features these smart speakers have. Love the intercom feature so that you can talk to others in another room. No more shouting upstairs for your partner…
You can pay for so many things by voice now that blogging about it seems a little silly. But it’s pretty cool that so many gas pumps are now Alexa-enabled – “Alexa Fuel” – as noted in this voicebot.ai article. Here’s an excerpt:
Drivers can now ask Alexa to handle fuel payments at more than 11,500 Exxon and Mobil gas stations in the U.S. The program, first previewed by Amazon at CES in January, skips the need to use a card or touchpad, relying only on voice commands and some access to the voice assistant.
Getting Alexa to pay for the gas just requires a driver to have some way of communicating with Alexa. That can include a car with Alexa built in, an Echo Auto device in the car, or just the Alexa app on a smartphone. When the driver parks at the pump and asks the voice assistant to pay for gas, Alexa determines which gas station they are at and the pump number, then activates the pump remotely; the driver simply inserts the nozzle and starts refueling. The transaction uses the customer’s existing Amazon Pay account, so there’s no extra sign-in needed, although the user can add a voice PIN if they want. Financial tech giant Fiserv supports the underlying communication between Alexa and the pump and facilitates the actual digital payment.
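Purely as an illustration, the flow the excerpt describes can be sketched as a few steps in Python. Every name here is invented; the real integration runs through Alexa, Amazon Pay, and Fiserv, not through code like this.

```python
# Schematic sketch of the voice-payment flow described in the article:
# identify the station and pump, optionally check a voice PIN, activate
# the pump, and charge the driver's existing Amazon Pay account.
# All identifiers are hypothetical.

from dataclasses import dataclass

@dataclass
class FuelRequest:
    station_id: str   # resolved from the car's location by the assistant
    pump_number: int  # spoken by the driver ("pump four")
    account_id: str   # the driver's existing Amazon Pay account

def authorize_fueling(req: FuelRequest, voice_pin_ok: bool = True) -> str:
    """Mimic the steps: verify the optional PIN, activate the pump, confirm."""
    if not voice_pin_ok:
        # The voice PIN is optional, but if set it gates the payment.
        return "Sorry, I couldn't verify your voice PIN."
    # 1. A payment hold is placed against the Amazon Pay account.
    # 2. The pump is activated remotely; the driver just lifts the nozzle.
    return (f"Pump {req.pump_number} at station {req.station_id} is ready. "
            "Your Amazon Pay account will be charged when you're done.")
```

The notable design choice, per the article, is that no new sign-up is involved: the flow reuses the Amazon Pay account the driver already has, with the voice PIN as an opt-in safeguard.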