Voice is a natural channel for conveying our emotions and feelings. However, voice technology is still working out how to analyze sentiment and how those data points can inform emotionally intelligent voice experiences and assistants. We have seen voice assistants like Amazon Alexa and Google Assistant expand their speaking styles to include different emotions in certain responses, reflecting more humanlike interactions.
However, monitoring emotions on the consumer side is still a nascent technology. Amazon has taken some steps toward realizing this with its health and wellness wearable Halo, which tracks users’ tone through their voices to make them more aware of their communication styles. This week, we’ve seen a new development in emotion recognition with Spotify being granted a patent for technology that analyzes listeners’ moods. Even though the patent only covers a small number of features and a targeted use case, we are beginning to see how voice technology might leverage sentiment to provide more relevant recommendations and experiences for consumers in many contexts.
Facebook is building a virtual assistant to digest, summarize, and read articles for users, according to a Buzzfeed report on a closed company meeting. The TL;DR tool – internet slang for “too long, didn’t read” – would use AI to condense articles into bullet points and read them out loud for users who want to skip reading them for themselves. The social media giant also discussed other planned AI projects, including creating a neural sensor that would detect thoughts as they form and turn them into commands for AI assistants.
Voice assistants may soon need to pay Wikipedia to find answers to some of the questions users pose. The Wikimedia Foundation, the umbrella organization that encompasses Wikipedia and its sibling wiki-projects, is launching Wikimedia Enterprise to start packaging and selling Wikipedia’s content to Apple, Amazon, Facebook, and Google, including their respective voice assistants, as first reported by Wired.
Amazon has introduced a new tool for companies looking to build a branded voice assistant. Here’s some insight from the Rain agency:
Although the voice ecosystem has long been dominated by the likes of Amazon, Google, and Apple, many brands are exploring custom, brand-owned assistants that offer more control over data and the user experience without a third-party intermediary. With Amazon’s latest announcement, the line is blurring between big tech voice assistants and brand-owned custom assistants.
A handful of companies have already created devices that allow their own assistant to coexist alongside a mainstream assistant. European telecommunications companies including Vodafone Spain, Orange, and Deutsche Telekom have all taken this approach and released devices integrated with both their custom assistant and Alexa. With Amazon’s Alexa Custom Assistant initiative, many more companies will be thinking about how to leverage the best of both worlds, pairing a generalist assistant with a specialist.
Here’s an article from voicebot.ai about Amazon’s big announcement…
Here’s the intro from this voicebot.ai article: “Twitter has expanded the beta for its Spaces social audio platform to Android devices. The social media giant had previously limited Spaces to iOS devices, but people using Android can now apply to try out Spaces as Twitter pushes to refine the platform for wide release.”
Meanwhile, here’s a voicebot.ai podcast about the experiences of some experts with Clubhouse – the social audio app that I blogged about recently…
To date, the voice assistant landscape has primarily been driven by smart speakers and mobile assistants, but new devices are quickly taking hold. Brands have always sought to connect with customers on the go, and assistant technologies are now enabling a new way to provide value in their everyday lives. Smart hearables and wearables – including earbuds, watches, and even glasses – have all been embedded with Alexa and Google Assistant so that customers can easily ask for information and perform tasks anytime, anywhere. The most recent example is Amazon’s new Echo Buds feature focused on collecting fitness data. However, this trend has been maturing for some time now.
With voicebot.ai reporting that Clubhouse has surpassed 10 million members – I am among them – I put together this 12-minute video explaining how Clubhouse works and my ten cents about whether you should try it. With a few bonus tips if you do indeed give it a “go”…
The voice test for COVID-19 developed by Vocalis Health accurately determined infection 81.2% of the time, according to the results of a major clinical study the Israeli startup conducted in India last year. The test, named VocalisCheck, is being pitched as a way to augment existing tests, reserving the more traditional chemical tests for those at higher risk of infection.
The Vocalis diagnostic test, which runs on a smartphone or computer, asks the user to count from 50 to 70. The audio is translated into a visual representation of the voice – a spectrogram – made up of 512 features, or vocal biomarkers. Vocalis applies artificial intelligence to compare the spectrogram to a composite image built from the voices of many people confirmed to have COVID-19. Vocalis has been gathering public voice samples since April and started coordinating with the Israeli Ministry of Defense to obtain spectrograms of those who had definitely been infected.
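To make the pipeline concrete, here’s a minimal sketch of the two steps described above – turning audio into a 512-feature spectrogram and comparing it to a composite. This is an illustration only, assuming generic FFT framing and cosine similarity; Vocalis’s actual features, model, and thresholds are proprietary and not disclosed in the article.

```python
import numpy as np

def spectrogram(audio, frame_len=1024, hop=512, n_bins=512):
    """Slice a 1-D audio signal into overlapping frames and take
    FFT magnitudes, keeping 512 frequency bins per frame
    (the article's '512 features' is the only number we know)."""
    frames = np.array([audio[i:i + frame_len]
                       for i in range(0, len(audio) - frame_len + 1, hop)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame_len)))
    return spec[:, :n_bins]

def similarity_to_composite(spec, composite_spec):
    """Cosine similarity between the time-averaged spectra of a user
    recording and a composite built from confirmed-positive voices.
    (A stand-in for Vocalis's undisclosed AI comparison.)"""
    a = spec.mean(axis=0)
    b = composite_spec.mean(axis=0)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```

In a real system the comparison step would be a trained classifier rather than a single similarity score, but the shape of the data flow – recording, fixed-size spectral features, comparison against known positives – is the same.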
Amazon recently added a new feature to its Alexa voice assistant that lets you find the nearest place to get a Covid-19 test. It works on phones and through the Amazon Echo smart speaker. I think it’s best on a phone or on an Echo with a screen since it shows you a list of the nearby locations and how far each place is.
As part of Amazon’s long-term goal to make talking to Alexa more natural, the company has built a new “infer your intent” capability. Here’s an excerpt from this article from The Verge:
Finding new ways to use Amazon’s Alexa has always been a bit of a pain. Amazon boasts that its AI assistant has more than 100,000 skills, but most are garbage and the useful ones are far from easy to discover. Today, though, Amazon announced it’s launched a new way to surface skills: by guessing what users are after when they talk to Alexa about other tasks.
The company refers to this process as “[inferring] customers’ latent goals.” By this, it means working out any questions that are implied by other queries. Amazon gives the example of a customer asking “How long does it take to steep tea?” to which Alexa will answer “five minutes” before asking the follow-up: “Would you like me to set a timer for five minutes?”
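The tea-timer exchange above can be sketched as a simple lookup from a fulfilled query to a suggested follow-up action. This toy table-driven version is purely illustrative – Amazon’s system infers latent goals with machine learning rather than a hand-written mapping, and none of these names come from Amazon:

```python
# Toy latent-goal table: a fulfilled query implies a likely follow-up.
# Entries are illustrative, not Amazon's implementation.
LATENT_GOALS = {
    "how long does it take to steep tea": (
        "five minutes",             # the direct answer
        "set a timer for five minutes",  # the inferred latent goal
    ),
}

def answer_with_followup(utterance):
    """Answer the query, then offer the follow-up action it implies."""
    entry = LATENT_GOALS.get(utterance.strip().lower().rstrip("?"))
    if entry is None:
        return None  # no latent goal inferred for this query
    answer, followup = entry
    return f"{answer}. Would you like me to {followup}?"
```

The hard part Amazon describes is deciding *when* a query implies a goal at all – offering a timer after every question would quickly become annoying – which is why the real system learns that mapping from usage patterns instead of using a static table.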