It’s wild. Just by talking to a smartphone or computer – counting up from 50 to 70 – you can find out with reasonable certainty whether you have Covid! Wild!
Here’s the excerpt from this voicebot.ai article:
The voice test for COVID-19 developed by Vocalis Health will accurately determine infection 81.2% of the time, according to the results of a major clinical study conducted by the Israeli startup in India last year. The test, named VocalisCheck, is being pitched as a way to augment the existing tests, saving the more traditional chemical tests for those who are at higher risk of infection.
The Vocalis diagnostic test, which runs on a smartphone or computer, asks the user to count from 50 to 70. The audio is translated into a visual representation of their voice, a spectrogram, made up of 512 vocal biomarker features. Vocalis applies artificial intelligence to compare the spectrogram to a composite image built from the voices of many people proven to have COVID-19. Vocalis has been gathering public voice samples since April and started coordinating with the Israeli Ministry of Defense to get spectrograms of those who definitely had been infected.
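To make the idea concrete, here's a minimal sketch of that kind of pipeline: turn audio into a 512-bin spectrogram fingerprint, then score it against a composite. The function names, window sizes, and cosine-similarity comparison are all my own illustrative assumptions – Vocalis hasn't published its actual model.

```python
# Hypothetical sketch of a spectrogram-comparison pipeline like the one
# described above. Names and parameters are invented for illustration.
import numpy as np
from scipy.signal import spectrogram

def voice_features(samples, rate=16000):
    """Turn raw audio into a fixed-length 512-feature fingerprint."""
    # nperseg=1022 yields 1022//2 + 1 = 512 frequency bins per time slice
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=1022)
    # Average energy per frequency bin -> one 512-element vector
    return sxx.mean(axis=1)

def similarity(features, composite):
    """Cosine similarity between a fingerprint and a composite reference."""
    den = float(np.linalg.norm(features) * np.linalg.norm(composite))
    return float(np.dot(features, composite)) / den if den else 0.0

# Toy check with synthetic audio: one second of a 440 Hz tone
rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
f = voice_features(tone, rate)
print(len(f), round(similarity(f, f), 3))  # 512 1.0
```

A real system would of course use a trained classifier rather than a single similarity score, but the shape of the data – audio in, fixed-length spectrogram features out – is the part the article describes.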
Here’s the intro from this CNBC piece:
Amazon recently added a new feature to its Alexa voice assistant that lets you find the nearest place to get a Covid-19 test. It works on phones and through the Amazon Echo smart speaker. I think it’s best on a phone or on an Echo with a screen since it shows you a list of the nearby locations and how far each place is.
As part of Amazon’s long-term goal of making conversations with Alexa more natural, the company has built new “infer your intent” functionality. Here’s an excerpt from this article from The Verge:
Finding new ways to use Amazon’s Alexa has always been a bit of a pain. Amazon boasts that its AI assistant has more than 100,000 skills, but most are garbage and the useful ones are far from easy to discover. Today, though, Amazon announced it’s launched a new way to surface skills: by guessing what users are after when they talk to Alexa about other tasks.
The company refers to this process as “[inferring] customers’ latent goals.” By this, it means working out any questions that are implied by other queries. Amazon gives the example of a customer asking “How long does it take to steep tea?” to which Alexa will answer “five minutes” before asking the follow-up: “Would you like me to set a timer for five minutes?”
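The tea-timer example above boils down to pairing an answer with an inferred follow-up action. Here's a toy sketch of that idea; the pattern table and function names are my own invention, not Amazon's implementation (which presumably uses learned models rather than hand-written rules).

```python
# Illustrative-only "latent goal" lookup; everything here is invented.
import re

# Map query patterns to (answer, implied follow-up) pairs
LATENT_GOALS = [
    (re.compile(r"how long .* steep tea", re.I),
     ("Five minutes.",
      "Would you like me to set a timer for five minutes?")),
]

def respond(utterance):
    """Answer the question, then surface the inferred follow-up, if any."""
    for pattern, (answer, follow_up) in LATENT_GOALS:
        if pattern.search(utterance):
            return answer, follow_up
    return "Sorry, I don't know.", None

answer, follow_up = respond("How long does it take to steep tea?")
print(answer)     # Five minutes.
print(follow_up)  # Would you like me to set a timer for five minutes?
```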
The annual list of predictions from experts for the voice industry from voicebot.ai is always one of the more fascinating reads in this space. The predictions are organized by topic – my favorite is “voice moves to mobile devices of all sorts.” The topic of “personalization, emotion recognition & context” blows my mind. Check them all out!
And this “Voice Report” from “Rain” lists the top 9 trends that agency is seeing…
Here’s a piece from voicebot.ai about how Amazon Alexa has a new functionality that allows for “taking turns” and preferences – see this excerpt:
There is turn taking today when conversing with Alexa. The user speaks, then Alexa speaks. That is followed by the user again, then back to Alexa, and so forth. It’s highly structured and doesn’t accommodate interruptions, tangents, or backtracks very well. The current model is decidedly unlike how humans interact in conversation. Natural turn taking is definitely more accommodating to the vagaries of human conversation.
As good as the natural turn taking demo was, the feature that will probably have a bigger impact is the ability to teach Alexa your preferences. This is long overdue. For Alexa to be a truly personal assistant, it needs to know personal preferences. This knowledge can help make Alexa more useful every day. Prasad demonstrated this feature as well, telling Alexa what he meant by certain phrases. However, the practical benefits of Alexa remembering your preferences are easily overshadowed in a two-minute demo by the scope of changes required to support natural turn taking.
According to this Voicebot.ai article, since 2018, hearable ownership by U.S. adults has risen about 23% and voice assistant use through hearables grew by 103% from 21.5 million in 2018 to 43.7 million in 2020. The data show that hearables and voice assistant adoption are complementary technology trends.
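The growth figure checks out – a quick back-of-the-envelope calculation from the two numbers quoted:

```python
# Verify the quoted growth rate: 21.5M (2018) -> 43.7M (2020) U.S. adults
before, after = 21.5, 43.7  # millions of voice assistant users via hearables
growth = (after - before) / before * 100
print(round(growth))  # 103
```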
Since it’s here in my backyard, I’ve got to blog about it. Here’s an excerpt from this article by Voicebot.ai:
Planet Word combines stories with technology in ten learning galleries. An interactive conversation with a wall of words relates the history and development of English, using what visitors say to pick out words to spotlight with the embedded lights. Technology is also crucial to an exhibit with smart paintbrushes for drawing words. Visitors can also practice virtual conversations with speakers of rare languages.
On the performative end, visitors can show off their own speech-giving talents in a soundproof room with a teleprompter that plays eight famous speeches or in a poetry nook in the library, as well as visit a karaoke area for learning about songwriting and performing their favorites. Outside, artist Rafael Lozano-Hemmer installed a metallic weeping willow that continually plays 364 voices in almost as many languages. The museum is inside the Franklin School, a very appropriate choice as it was there that Alexander Graham Bell completed the first-ever wireless voice transmission.
Here’s some commentary from the “Rain” agency:
Digital conversations with friends and colleagues have traditionally revolved around text – typing on our keyboards or phones to communicate messages. Although the pandemic has created new demand for video conferencing, screen fatigue has started setting in, leaving space for a new kind of communication platform driven by voice. Several companies have started to populate this new audio ecosystem, trying to leverage voice conversations for personal and professional use.
From Discord to Clubhouse, these kinds of voice-driven platforms are becoming more common, and now we’re seeing mainstream platforms like Twitter recognizing value here as well. As many of us continue to work remotely, audio chat is emerging as a unique way to maintain human connection and rapport.
Here’s the intro from this article from “The Verge”:
Twitter plans to take on Clubhouse, the invite-only social platform where users congregate in voice chat rooms, with a way for people to create “spaces” for voice-based conversations right on Twitter. In theory, these spaces could provide another avenue for users to have conversations on the platform — but without harassment and abuse from trolls or bad actors, thanks to tools that let creators of these spaces better control the conversation.
The company plans to start testing the feature this year, but notably, Twitter will be giving first access to some of the people who are most affected by abuse and harassment on the platform: women and people from marginalized backgrounds, the company says.
In one of these conversation spaces, you’ll be able to see who is a part of the room and who is talking at any given time. The person who makes the space will have moderation controls and can determine who can actually participate, too. Twitter says it will experiment with how these spaces are discovered on the platform, including ways to invite participants via direct messages or right from a public tweet.
Here’s the intro from this Voicebot article:
Voicebot’s biannual Smartphone Voice Assistant Consumer Adoption Report considered new questions in 2020 around consumer interest in and experience with voice interaction within mobile apps. A key finding is that consumers have strong interest in voice interactivity within mobile apps and more experience with these features than many people realize. Just over 45% of consumers said they would “very much” like to have voice assistant features within their favorite mobile apps, or that “it would be nice.” This figure compares to just 25% who said they were not interested.