As noted in this article from The Verge, Google is starting to “roll out gradually” a feature allowing you to customize voice detection sensitivity on Google Assistant devices. Here’s the article:
Google is starting to “roll out gradually” a feature allowing you to customize voice detection sensitivity on Google Assistant devices, a spokesperson confirmed to The Verge. Although the feature has not been widely released yet, Mishaal Rahman, editor-in-chief of XDA Developers, was able to access the feature by tinkering with the Google Home app’s code, he told The Verge.
Screenshots that Rahman posted to Twitter show the “‘Hey Google’ Sensitivity” feature displaying a slider that allows you to increase or reduce the sensitivity with which Google Assistant devices pick up the command “Hey Google.” Last September, Google confirmed there was an update coming that would let you adjust listening sensitivity. The new setting is meant to decrease accidental activations of your Assistant.
In this 11-page report, RAIN and PulseLabs looked into how over 1,400 people are using voice assistants during the pandemic. Here are the highlights:
– More People are Looking to Voice for News & Info – Voice requests for updates about the coronavirus increased by 250% in the month of March, indicating that people are increasingly looking to their voice assistants for news and a variety of facts about current events.
– Voice Searches Carry Rich Emotional Valence – Spoken searches and commands can carry more emotion and sentiment, valuable for brands in any industry. For example, we found that people confide in Alexa, asking questions like “Alexa, what are the chances I’ll be infected?,” “Alexa, I’m scared,” and “Alexa, am I going to die?”
– Spikes in At-Home Voice Use Present Big Potential Value for Brands – The conversation on voice can yield valuable insights across industries. As one key example, we found a 50% increase in the use of voice apps related to ordering and delivering food. And questions about recipes have gone up by 41%. Analysis of these utterances confirms the intuition that people are cooking and ordering food more than before, while also providing clues about which brands and experiences they prefer.
– Accuracy is Paramount for Trust – Over recent months, both Alexa and Google Assistant have taken pains to ensure that reputable, recognized sources provide answers to coronavirus-related queries through a strong emphasis on first-party experiences. The volume, variety, and seriousness of the queries seen in this report validate the importance of those efforts.
This NY Times article describes the many uses of smart speakers that often aren’t taken advantage of – here’s an excerpt:
All the major smart speakers can connect to your phone and be used as a speakerphone. Even in the most well-wired offices, it’s often hard to be heard and understood on conference calls, and your smart speaker may be able to help. Using a HomePod, Google Home speaker, or Echo device as a speakerphone has two main advantages: It likely has a louder speaker than your smartphone and, often, an array of multiple microphones designed to pick up hard-to-hear speech from different angles of a room.
Each manufacturer has instructions on how to turn its smart speaker into a speakerphone (here they are for Apple’s HomePod, Google Home and Amazon’s Echo devices). Each device has its own way of connecting to your phone and contacts. Amazon Echo and Google Home speakers connect through their apps, while Apple iPhones can connect to a HomePod over AirPlay, or automatically just by holding the phone near the top of the speaker.
This might not be ideal in a large corporate setting, but for smaller offices or remote settings, a multipurpose speaker that can play music and handle other tasks, and serve as a conference-call speakerphone at other times, might be just the ticket to beat the bad call quality that comes with other speakerphones.
This Voicebot.ai podcast provides ten short interviews from the CES conference. At the 22:01 mark, Bret talks to XAPPmedia’s Pat Higbie, who discusses how speaking to voice apps differs greatly from a human-to-human conversation. Among Pat’s comments were these:
– According to a panelist at CES, there are 3,000 ways that people have asked to set alarms. So it’s difficult to predict how humans will ask for even a simple function to be performed.
– With voice, you are giving a simple command for an area that has a complex syntax.
– Every time someone tweaks their voice app to accommodate new ways that humans can ask for something, they run the risk of breaking what they’ve built. So you have to be mindful of your existing syntax.
– Right now, there’s a lot of information out there about good design but not a lot about the engineering necessary to pull it off. In essence, there currently is a lack of engineering talent that knows how to deal with complex syntax.
– Multimodal use of voice is rising and there’s a lot of work still ahead for that too. Providers will have to account for those using screens – and those not using them – when they design.
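Pat’s point about the combinatorics of phrasing can be made concrete with a toy sketch. Everything below – the pattern list, the “SetAlarm” intent name, and the helper function – is a hypothetical illustration, not any real Alexa or Google Assistant API. A tiny pattern-based matcher shows how many phrasings must map to a single intent, and why adding a broad new pattern risks colliding with what already works:

```python
import re

# Hypothetical illustration: a toy intent matcher mapping several
# phrasings of "set an alarm" onto one intent. Real platforms use
# trained NLU models, not regexes, but the maintenance problem is similar.
INTENT_PATTERNS = {
    "SetAlarm": [
        r"set (an |the )?alarm for (?P<time>.+)",
        r"wake me (up )?at (?P<time>.+)",
        r"alarm at (?P<time>.+)",  # broad pattern: risks clashing with other intents
    ],
}

def match_intent(utterance: str):
    """Return (intent_name, slots) for the first pattern that matches, else (None, {})."""
    text = utterance.lower().strip()
    for intent, patterns in INTENT_PATTERNS.items():
        for pattern in patterns:
            m = re.fullmatch(pattern, text)
            if m:
                # Keep only the named slots that actually captured something.
                return intent, {k: v for k, v in m.groupdict().items() if v}
    return None, {}
```

The overly broad third pattern (`alarm at …`) could easily swallow utterances meant for a different intent – exactly the kind of regression Pat warns about when tweaking an existing syntax.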
As would be expected with all of us home, voice is being used more now than ever (see this “pandemic use” article). Meanwhile, the latest Voicebot.ai annual stats show that voice on smart speakers is up a third in a year. Here’s an excerpt from their article:
– The U.S. smart speaker installed user base is now 87.7 million adults, up 32% over a year earlier
– This equates to a population adoption rate of 34.4%
– Smart speaker user base growth slowed in 2019 compared to 2018, despite adding more than 20 million new users
This Voicebot.ai podcast provides ten short interviews from the CES conference. At the 4:28 mark, Bret Kinsella talks to Audioburst’s Gal Klein. Audioburst helps with the discovery problem inherent in audio by extracting the relevant bits from podcasts and talk radio in response to what you’re looking for. This TechCrunch article helps to explain how that works (as well as Bret’s interview with Gal).
This Voicebot.ai article is exciting. Amazon is expanding Alexa’s voice and speech style choices for voice app developers – its text-to-speech service, known as “Polly,” now has more than a dozen new voices and styles from which to choose. I imagine that number will continue to grow.
Even more exciting is that Alexa will sound more natural when speaking for longer periods. Here’s an excerpt from the article:
A lot of interaction with Alexa involves short responses or rote lines. That starts to sound strange when the voice assistant speaks for more than a few seconds. Alexa’s new long-form speaking style is designed to address that disconnect and make using Alexa feel as comfortable as talking to another human. Since people don’t speak the same way when uttering a sentence as they do when expounding for multiple paragraphs, the addition is likely to be popular with voice apps that read magazines, books, or transcribed conversations from a podcast out loud. For now, this style is only an option for Alexa in the United States.
“For example, you can use this speaking style for customers who want to have the content on a web page read to them or listen to a storytelling section in a game,” Alexa developer Catherine Gao explained in Amazon’s blog post about the new feature. “Powered by a deep-learning text-to-speech model, the long-form speaking style enables Alexa to speak with more natural pauses while going from one paragraph to the next or even from one dialog to another between different characters.”
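For developers, speaking styles like this are requested through SSML markup in the skill’s response. Here’s a minimal sketch, assuming the style tag is `<amazon:domain name="long-form">` (the naming follows Amazon’s other documented speaking-style tags – verify against the current Alexa SSML reference before relying on it):

```python
from xml.sax.saxutils import escape

def long_form_ssml(text: str) -> str:
    """Wrap plain text in SSML requesting Alexa's long-form speaking style.

    Assumption: the long-form style is selected with
    <amazon:domain name="long-form">, mirroring Amazon's other
    speaking-style SSML tags. Check the Alexa SSML docs to confirm.
    """
    # escape() keeps characters like & and < from breaking the markup.
    return f'<speak><amazon:domain name="long-form">{escape(text)}</amazon:domain></speak>'
```

The outer `<speak>` wrapper is required for Alexa SSML responses, and escaping the text keeps characters like `&` from producing invalid markup.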