It’s wild. Just by talking to a smartphone or computer – counting up from 50 to 70 – you can learn with reasonable certainty whether you have Covid! Wild!
Here’s an excerpt from this voicebot.ai article:
The voice test for COVID-19 developed by Vocalis Health will accurately determine infection 81.2% of the time, according to the results of a major clinical study conducted by the Israeli startup in India last year. The test, named VocalisCheck, is being pitched as a way to augment the existing tests, saving the more traditional chemical tests for those who are at higher risk of infection.
The Vocalis diagnostic test, which runs on a smartphone or computer, asks the user to count from 50 to 70. The audio is translated into a visual representation of their voice – a spectrogram – made up of 512 vocal biomarker features. Vocalis applies artificial intelligence to compare the spectrogram to a composite image built from the voices of many people proven to have COVID-19. Vocalis has been gathering public voice samples since April and has coordinated with the Israeli Ministry of Defense to get spectrograms of people confirmed to have been infected.
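The article doesn’t disclose Vocalis’s actual pipeline, but the general idea (turn a recording into a fixed-length spectrogram fingerprint, then compare it to a composite built from confirmed cases) can be sketched in a few lines of Python. Everything here is an illustrative assumption: the file names, the mel-spectrogram parameters, and the cosine-similarity comparison.

```python
import numpy as np
import librosa

# Load the user's recording of counting from 50 to 70
# (the file name is a placeholder).
audio, sr = librosa.load("counting_50_to_70.wav", sr=16000)

# Build a log-mel spectrogram with 512 mel bands, loosely mirroring the
# "512 features" the article mentions; Vocalis's real feature set isn't public.
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=4096, n_mels=512)
log_mel = librosa.power_to_db(mel)

# Average over time so each recording becomes one 512-dimensional vector.
fingerprint = log_mel.mean(axis=1)

# Compare against a composite fingerprint averaged from confirmed cases
# (assumed to be precomputed and saved elsewhere).
composite = np.load("covid_composite.npy")
similarity = np.dot(fingerprint, composite) / (
    np.linalg.norm(fingerprint) * np.linalg.norm(composite)
)
print(f"Cosine similarity to COVID-19 composite: {similarity:.3f}")
```

In practice a classifier trained on labeled spectrograms would replace the simple similarity score, but the shape of the pipeline (audio in, fixed-length vocal features out, comparison against known positives) is the same.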
Here’s the intro from this CNBC piece:
Amazon recently added a new feature to its Alexa voice assistant that lets you find the nearest place to get a Covid-19 test. It works on phones and through the Amazon Echo smart speaker. I think it’s best on a phone or on an Echo with a screen since it shows you a list of the nearby locations and how far each place is.
According to this voicebot.ai article, Amazon has done something interesting with its foray into the wearable fitness market – its new “Halo” wristband judges your tone of voice – yet it’s not powered by Alexa!
Recently, I blogged about how voice may help detect whether you have Covid-19. This Voicebot.ai article notes a new study indicating that voice may also be able to help detect heart issues. A vocal biomarker, analyzed with artificial intelligence, may be able to identify people at high risk of heart failure without requiring a physical exam. Telemedicine continues its roll…
In this 11-page report, RAIN and PulseLabs looked into how over 1,400 people are using voice assistants during the pandemic. Here are the highlights:
– More People are Looking to Voice for News & Info – Voice requests for updates about the coronavirus increased by 250% in the month of March, indicating that people are increasingly looking to their voice assistants for news and a variety of facts about current events.
– Voice Searches Carry Rich Emotional Valence – Spoken searches and commands can carry more emotion and sentiment, valuable for brands in any industry. For example, we found that people confide in Alexa, asking questions like “Alexa, what are the chances I’ll be infected?,” “Alexa, I’m scared,” and “Alexa, am I going to die?”
– Spikes in At-Home Voice Use Present Big Potential Value for Brands – The conversation on voice can yield valuable insights across industries. As one key example, we found a 50% increase in the use of voice apps related to ordering and delivering food. And questions about recipes have gone up by 41%. Analysis of these utterances confirms the intuition that people are cooking and ordering food more than before, while also providing clues about which brands and experiences they prefer.
– Accuracy is Paramount for Trust – Over recent months, both Alexa and Google Assistant have taken pains to ensure that reputable, recognized sources provide answers to coronavirus-related queries through a strong emphasis on first-party experiences. The volume, variety, and seriousness of the queries seen in this report validate the importance of those efforts.
This Amazon Alexa blog contains a host of resources related to coping with the coronavirus, including how to stay healthy, informed, connected and entertained. Here’s an excerpt about staying healthy:
– Two new Alexa routines can help you adjust to new schedules. The “Stay at Home” routine starts your day with a fun fact and reminds you to grab lunch and plan dinner. The “Work from Home” routine notifies you when it’s time to start work, when to get up and stretch, and when to start wrapping up for the day. Each routine can be easily enabled through the Alexa app.
– Using Centers for Disease Control and Prevention (CDC) guidance, our Alexa health team built a U.S. experience that lets you use Alexa to check your risk level for COVID-19 at home, using just your voice. Ask, “Alexa, what do I do if I think I have COVID-19?” or “Alexa, what do I do if I think I have coronavirus?,” and Alexa will ask a series of questions about your travel history, symptoms, and possible exposure. Based on your responses, Alexa will provide CDC guidance given your risk level and symptoms.
– In Japan, you can also use Alexa to check your risk level at home. Based on your responses, Alexa will provide Japanese Ministry of Health, Labor, and Welfare guidance matching your risk level and symptoms.
– Customers in Australia, Brazil, Canada, France, India, the UK, and the U.S. can now ask Alexa to sing a song for 20 seconds, and she’ll help you keep time while you scrub your hands with a tune.
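The risk-check experience described in that excerpt is essentially a scripted triage dialog: a fixed series of yes/no questions mapped to guidance tiers. Here’s a rough, hypothetical illustration of that pattern in Python; the questions, branching, and guidance strings are simplified stand-ins, not Amazon’s or the CDC’s actual logic.

```python
# A toy triage flow in the spirit of the Alexa COVID-19 risk check.
# All questions, branches, and guidance strings are illustrative only.

def ask(question: str) -> bool:
    """Ask a yes/no question on the console (a voice assistant would
    speak it and parse the spoken reply instead)."""
    return input(question + " (yes/no) ").strip().lower().startswith("y")

def covid_triage() -> str:
    if ask("Are you having severe trouble breathing or chest pain?"):
        return "Seek emergency medical care immediately."
    symptomatic = ask("Do you have fever, cough, or shortness of breath?")
    exposed = ask("Have you been in close contact with a confirmed case "
                  "or recently traveled to an affected area?")
    if symptomatic and exposed:
        return "Higher risk: stay home and contact a healthcare provider."
    if symptomatic or exposed:
        return ("Moderate risk: stay home, monitor your symptoms, and "
                "call ahead before visiting a provider.")
    return "Lower risk: follow everyday prevention guidance."

if __name__ == "__main__":
    print(covid_triage())
```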
Here’s the intro from this interesting article from voicebot.ai:
Identifying people infected with COVID-19 by the sound of their voice sounds far-fetched, but enterprise voice assistant developer Voca.ai has started collecting the data that could lead to such a test. The startup partnered with Carnegie Mellon University to launch Corona Voice Detect this week, soliciting people to record their voices for an eventual open-source dataset and potential voice test for the disease.
Corona Voice Detect at the moment consists mainly of a website where people can record themselves speaking a few sentences. Users fill in a few details about their location, age, how they are feeling, and if they have been diagnosed with the coronavirus. The information is then anonymized and added to a growing dataset for analysis.
“We ask people to use the platform and record themselves every day. They say if they have the virus and how they are feeling,” Voca.ai co-founder Alan Bekker told Voicebot in an interview. “In viruses like the coronavirus that harm the respiratory system, there’s a high probability we might find a pattern in the way a person speaks using voice biomarkers research. We only launched a few days ago and are getting thousands of recordings an hour from Italy, the U.S., Asia, Israel, and all over. There are 20,000 to 30,000 people who have recorded so far.”
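The article doesn’t describe how Voca.ai actually stores these submissions, but the anonymization step it mentions (keep the research-relevant fields, drop or coarsen anything identifying) might look something like this hypothetical Python sketch; every field name here is an assumption.

```python
import json
import uuid
import datetime

def anonymize_submission(raw: dict) -> dict:
    """Strip direct identifiers from a voice submission, keeping only
    research-relevant fields. The schema is hypothetical."""
    return {
        # Random ID with no link back to the user.
        "record_id": uuid.uuid4().hex,
        # Coarsen location and age to make re-identification harder.
        "region": raw["location"].split(",")[-1].strip(),
        "age_band": f"{(raw['age'] // 10) * 10}s",
        "feeling": raw["feeling"],
        "diagnosed": raw["diagnosed"],
        "submitted": datetime.date.today().isoformat(),
        "audio_file": raw["audio_path"],  # path to the stored recording
    }

sample = {"location": "Milan, Italy", "age": 47, "feeling": "mild cough",
          "diagnosed": False, "audio_path": "recordings/abc123.wav"}
print(json.dumps(anonymize_submission(sample), indent=2))
```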
This Voicebot.ai article provides the sad news that Amazon has removed all Alexa skills relating to the coronavirus because some of them had misleading information. So much misinformation out there about this critical topic…
This Voicebot.ai podcast with the Mayo Clinic’s Dr. Sandhya Pruthi and Joyce Even is interesting for those helping their organizations get into voice because the Mayo Clinic was a first mover and these speakers share some details about how they got started. The points include:
1. The Mayo Clinic is a content-driven organization. It was already involved in educating the public & medical staff through multiple mediums, including chat bots.
2. They started with a first aid skill to try it out, and they’ve been building on that ever since. They didn’t start with a concrete plan, instead generally going with the flow. Taking content built for the Web or print and converting it for voice is an art & science: voice requires shorter answers and the ability to predict how a question will be asked.
3. They conducted a pilot in which nurses, once the doctor was done, would tell patients they could ask a voice assistant about wound care upon discharge. It’s an example of how you can use a patient’s “down time,” when they are alone back in a room, to educate them about their condition. It was highly successful from both the medical staff’s and the patients’ perspectives. Now they’re planning to roll out a pilot for the emergency room.
4. The speakers noted that some patients are either loath to ask their doctor certain questions (e.g., they worry they’d look stupid asking, or they have privacy concerns) or forget their questions when the doctor comes in. Oftentimes, the family also has a lot of questions. The voice assistant can help with efficiency & education.
5. Amazon asked the Mayo Clinic to provide first-party content (i.e., content that is part of Alexa’s core; you don’t have to ask Alexa to open a Mayo Clinic skill). It took some work to convert the third-party content they had developed into first-party content.
6. A content team leads voice at the Mayo Clinic. Bret remarked that’s unusual as it typically is a team from marketing, product or IT.
7. The Mayo Clinic voice doesn’t have a persona. They may eventually adopt one – or even multiple personas depending on the type of interaction (e.g., whether the audience is a particular type of patient or their own doctors) – but they may also decide a persona is unnecessary and skip it altogether. Still early days.
8. The Mayo Clinic has a digital strategy that stretches out to 2030. A few possibilities for how voice may evolve: voice apps that are empathetic (e.g., they get to really know you & can cater to your needs); voice apps that are more proactive, reaching out & staying engaged (e.g., “did you take your meds?”); and freeing up providers to be more efficient by dramatically cutting down on the four hours they spend per day on medical records today.
In this FutureEar podcast, Dave Kemp talks with Valencell’s Ryan Kraudel about how PPG sensors in wearables & hearables are ushering in a whole new way of keeping track of your health. Here’s an excerpt from Dave’s blog about this topic:
One of the biggest shifts that these types of biometric-laden wearables will usher in is the ability for people to start assembling their own individualized, longitudinal data sets for their health. Previously, metrics such as heart rate and blood pressure were captured only during the few times a year when one visits the doctor. AirPods, hearing aids, Apple Watches, and so forth might soon be able to collect these types of metrics minute by minute, every hour that you’re wearing the device. So, rather than having two or three data points in your data set for the year, the user would have tens of thousands, painting a far more robust picture of one’s health and creating individual benchmarks that machine learning algorithms can work off of to detect abnormalities in one’s health.
For decades, we’ve largely treated our health in a reactive manner: you go see your doctor when you’re sick. Now, we’re entering a phase that offers much deeper biometric insights from massively proliferated consumer wearables, allowing for a more proactive approach. Each individual would have their own baseline of metrics, established through constant use of consumer wearables outfitted with biometric sensors. The user would then be alerted whenever there’s a deviation from that baseline.
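That baseline-and-deviation idea maps naturally onto simple anomaly detection. As a hedged illustration (not any vendor’s actual algorithm), a per-user baseline with a z-score alert might look like this; the data, threshold, and function names are all hypothetical.

```python
import numpy as np

def deviation_alert(history, new_reading, z_threshold=3.0):
    """Flag a reading that deviates from this wearer's personal baseline.
    `history` is the user's own longitudinal data, e.g. months of
    resting heart-rate samples from a wearable."""
    baseline = np.mean(history)
    spread = np.std(history)
    z = (new_reading - baseline) / spread
    return abs(z) > z_threshold, z

# Illustrative data: a year of resting heart-rate readings around 62 bpm.
rng = np.random.default_rng(0)
history = rng.normal(loc=62, scale=3, size=50_000)

flagged, z = deviation_alert(history, new_reading=84)
print(f"z-score {z:.1f}, alert: {flagged}")  # a large deviation triggers an alert
```

Real products would use more robust statistics (time of day, activity level, trend detection), but the core idea is the same: the benchmark is the individual’s own history, not a population average.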