VoiceReallyMatters.com

A layperson’s exploration of all things voice

Category Archives: Healthcare Use

June 3, 2020

Your Voice May Predict Heart Problems

Recently, I blogged about how voice may help detect whether you have Covid-19. This Voicebot.ai article notes a new study indicating that voice may also help detect heart issues. A vocal biomarker – using artificial intelligence – may be able to identify those at high risk of heart failure without requiring a physical exam. Telemedicine continues its roll…

May 21, 2020

How Consumers Are Using Voice During the Pandemic

In this 11-page report, RAIN and PulseLabs looked into how over 1,400 people are using voice assistants during the pandemic. Here are the highlights:

More People are Looking to Voice for News & Info – Voice requests for updates about the coronavirus increased by 250% in the month of March, indicating that people are increasingly looking to their voice assistants for news and a variety of facts about current events.

Voice Searches Carry Rich Emotional Valence – Spoken searches and commands can carry more emotion and sentiment, valuable for brands in any industry. For example, we found that people confide in Alexa, asking questions like “Alexa, what are the chances I’ll be infected?,” “Alexa, I’m scared,” and “Alexa, am I going to die?”

Spikes in At-Home Voice Use Present Big Potential Value for Brands – The conversation on voice can yield valuable insights across industries. As one key example, we found a 50% increase in the use of voice apps related to ordering and delivering food. And questions about recipes have gone up by 41%. Analysis of these utterances confirms the intuition that people are cooking and ordering food more than before, while also providing clues about which brands and experiences they prefer.

Accuracy is Paramount for Trust – Over recent months, both Alexa and Google Assistant have taken pains to ensure that reputable, recognized sources provide answers to coronavirus-related queries through a strong emphasis on first-party experiences. The volume, variety, and seriousness of the queries seen in this report validate the importance of those efforts.

April 14, 2020

Alexa Works Overtime on Covid-19

This Amazon Alexa blog contains a host of resources related to coping with the coronavirus, including how to stay healthy, informed, connected and entertained. Here’s an excerpt about staying healthy:

– Two new Alexa routines can help you adjust to new schedules. The “Stay at Home” routine starts your day with a fun fact and notifies you when it’s time to grab lunch and plan dinner. The “Work from Home” routine notifies you when it’s time to start work, when to get up and stretch, and when to start wrapping up for the day. Each routine can be easily enabled through the Alexa app.

– Using Centers for Disease Control and Prevention (CDC) guidance, our Alexa health team built a U.S. experience that lets you use Alexa to check your risk level for COVID-19 at home, using just your voice. Ask, “Alexa, what do I do if I think I have COVID-19?” or “Alexa, what do I do if I think I have coronavirus?,” and Alexa will ask a series of questions about your travel history, symptoms, and possible exposure. Based on your responses, Alexa will provide CDC guidance given your risk level and symptoms.

– In Japan, you can also use Alexa to check your risk level at home. Based on your responses, Alexa will provide Japanese Ministry of Health, Labor, and Welfare guidance matching your risk level and symptoms.

– Customers in Australia, Brazil, Canada, France, India, the UK, and the U.S. can now ask Alexa to sing a song for 20 seconds, and she’ll help you keep time while you scrub your hands with a tune.
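
For the technically curious, the at-home risk check described above is essentially a short question-and-answer triage: Alexa asks about symptoms, exposure, and travel, then maps the answers to a risk level and the matching guidance. Here's a minimal sketch of what that kind of triage logic might look like; the questions, risk tiers, and guidance strings are hypothetical stand-ins of my own, not Amazon's or the CDC's actual implementation.

```python
# Hypothetical sketch of a question-and-answer triage flow like the one
# described above. The questions, risk tiers, and guidance text are
# illustrative stand-ins, not Amazon's or the CDC's actual logic.

def ask_yes_no(prompt: str) -> bool:
    """Ask a yes/no question on the console (a stand-in for a voice prompt)."""
    return input(prompt + " (yes/no): ").strip().lower().startswith("y")

def covid_risk_check() -> str:
    symptoms = ask_yes_no("Do you have fever, cough, or shortness of breath?")
    exposure = ask_yes_no("Have you had close contact with a confirmed case?")
    travel = ask_yes_no("Have you recently traveled to a high-risk area?")

    # Map the answers to a coarse risk level and a guidance message.
    if symptoms and (exposure or travel):
        return "Higher risk: stay home and call your healthcare provider."
    if symptoms or exposure or travel:
        return "Moderate risk: monitor your symptoms and limit contact with others."
    return "Lower risk: keep following everyday preventive measures."

if __name__ == "__main__":
    print(covid_risk_check())
```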

April 9, 2020

Detecting Covid-19 Through Your Voice?

Here’s the intro to this interesting Voicebot.ai article:

Identifying people infected with COVID-19 by the sound of their voice sounds far-fetched, but enterprise voice assistant developer Voca.ai has started collecting the data that could lead to one. The startup partnered with Carnegie Mellon University to launch Corona Voice Detect this week, soliciting people to record their voices for an eventual open-source dataset and potential voice test for the disease.

Corona Voice Detect at the moment consists mainly of a website where people can record themselves speaking a few sentences. Users fill in a few details about their location, age, how they are feeling, and if they have been diagnosed with the coronavirus. The information is then anonymized and added to a growing dataset for analysis.

“We ask people to use the platform and record themselves every day. They say if they have the virus and how they are feeling,” Voca.ai co-founder Alan Bekker told Voicebot in an interview. “In viruses like the coronavirus that harm the respiratory system, there’s a high probability we might find a pattern in the way a person speaks using voice biomarkers research. We only launched a few days ago and are getting thousands of recordings an hour from Italy, the U.S., Asia, Israel, and all over. There are 20,000 to 30,000 people who have recorded so far.”
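
To make the data-collection step described above a bit more concrete, here's a rough sketch of what recording a submission and anonymizing it before adding it to a growing dataset might look like. The field names and the hashing-based anonymization are my own assumptions for illustration, not Voca.ai's actual pipeline.

```python
# Rough sketch of anonymizing a voice-survey submission before adding it to a
# dataset, as described in the excerpt above. Field names and the hashing step
# are assumptions for illustration, not Voca.ai's actual pipeline.
import hashlib
import json
import time

def anonymize_submission(user_id: str, location: str, age: int,
                         feeling: str, diagnosed: bool,
                         audio_path: str) -> dict:
    """Replace the user identifier with a one-way hash and keep only the
    survey fields needed for analysis."""
    return {
        "speaker": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "location": location,
        "age": age,
        "feeling": feeling,
        "diagnosed": diagnosed,
        "audio_file": audio_path,
        "submitted_at": int(time.time()),
    }

if __name__ == "__main__":
    record = anonymize_submission("user@example.com", "Milan, Italy", 42,
                                  "mild cough", False, "recordings/0001.wav")
    # Append the anonymized record to a growing dataset (one JSON object per line).
    with open("dataset.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```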

March 2, 2020

The Mayo Clinic’s Voice Experience as a First Mover

This Voicebot.ai podcast with the Mayo Clinic’s Dr. Sandhya Pruthi and Joyce Even is interesting for those helping their organizations get into voice, because the Mayo Clinic was a first mover and these speakers share some details about how they got started. The key points include:

1. The Mayo Clinic is a content-driven organization. It was already involved in educating the public & medical staff through multiple mediums, including chatbots.

2. They started with a first aid skill to try it out, and they’ve been constantly building on it since then. They didn’t start with a concrete plan, instead generally going with the flow. Taking content built for the Web or print and converting it for voice is an art & science: shorter answers are required, along with the need to predict how a question will be asked.

3. They conducted a pilot in which nurses, once the doctor was done, would tell patients that they could ask a voice assistant about wound care upon discharge. It’s an example of how you can use a patient’s “down time,” when they are alone back in a room, to get more educated about their condition. It was highly successful from both the medical staff’s and the patients’ perspectives. Now they’re planning to roll out a pilot for the emergency room.

4. The speakers noted that some patients are either loath to ask their doctor certain questions (e.g., they worry they would look stupid asking, or they have privacy concerns) or they forget their questions when the doctor comes in. Oftentimes, the family also has a lot of questions. The voice assistant can help with efficiency & education.

5. Amazon asked the Mayo Clinic to provide first-party content (i.e., content that is part of Alexa’s core; you don’t have to ask Alexa to open a Mayo Clinic skill). It took some work to convert the third-party content they had developed into first-party content.

6. A content team leads voice at the Mayo Clinic. Bret remarked that’s unusual, as it’s typically a team from marketing, product or IT.

7. The Mayo Clinic voice doesn’t have a persona. They may eventually have one – or maybe even multiple personas depending on the type of interaction (e.g., an audience of a particular type of patient, their own doctors, etc.) – or they may decide a persona is unnecessary and never add one. Still early days.

8. The Mayo Clinic has a digital strategy that stretches out to 2030. A few possibilities for how voice may evolve: interactions with a voice app that is empathetic (e.g., it will get to really know you & can cater to your needs); voice apps that are more proactive, reaching out & being more engaged (e.g., “did you take your meds?”); and freeing up providers to be more efficient by dramatically cutting down on the four hours they spend per day doing medical records today.

January 22, 2020

How Wearables Allow You to Be Proactive About Your Health

In this FutureEar podcast, Dave Kemp talks with Valencell’s Ryan Kraudel about how PPG (photoplethysmography) sensors in wearables & hearables are ushering in a whole new way of keeping track of your health. Here’s an excerpt from Dave’s blog about this topic:

One of the biggest shifts that these types of biometric-laden wearables will usher in is the ability for people to start assembling their own, individualized longitudinal data sets for their health. Previously, metrics such as heart rate and blood pressure were captured only during the few times a year when one visits the doctor. AirPods, hearing aids, Apple Watches and so forth might soon be able to collect these types of metrics by the minute, every hour that you’re wearing the device. So, rather than having two or three data points in your data set for the year, the user would have tens of thousands, painting a far more robust picture of one’s health and creating individual benchmarks that machine learning algorithms can work off of to detect abnormalities in one’s health.

For decades, we’ve largely treated our health in a reactive manner; you go see your doctor when you’re sick. Now, we’re entering a phase that offers much deeper biometric insights from massively proliferated consumer wearables, allowing for a more proactive approach. Each individual would have their own baseline of metrics, established through the constant use of consumer wearables outfitted with biometric sensors. The user would then be signaled whenever there’s a deviation from the baseline.
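
The “personal baseline plus deviation alert” idea in that excerpt is easy to picture in code. Below is a minimal sketch, assuming a list of resting heart-rate samples collected by a wearable: it summarizes the wearer's history as a baseline and flags new readings that fall well outside it. The mean/standard-deviation rule and the 3-sigma threshold are illustrative choices of mine, not any vendor's actual algorithm.

```python
# Minimal sketch of the "personal baseline + deviation alert" idea from the
# excerpt above, assuming a list of heart-rate samples (beats per minute)
# collected by a wearable. The 3-standard-deviation rule is an illustrative
# choice, not any vendor's actual algorithm.
from statistics import mean, stdev

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Summarize an individual's history as a mean and standard deviation."""
    return mean(samples), stdev(samples)

def flag_deviation(reading: float, baseline: tuple[float, float],
                   threshold: float = 3.0) -> bool:
    """Return True when a new reading sits more than `threshold` standard
    deviations away from the individual's own baseline."""
    mu, sigma = baseline
    return abs(reading - mu) > threshold * sigma

if __name__ == "__main__":
    history = [62, 65, 61, 64, 66, 63, 60, 62, 65, 64]  # resting heart rate, bpm
    baseline = build_baseline(history)
    for new_reading in (63, 67, 95):
        if flag_deviation(new_reading, baseline):
            print(f"{new_reading} bpm deviates from this user's baseline")
```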

July 31, 2019

How Voice Helps Older Adults Battle Social Isolation

I love Dave Kemp’s “FutureEar” blog. Here’s an excerpt from something he just posted:

The slide pictured above from the session illustrates why I see so much potential for voice technology, specifically for older adults. It’s becoming increasingly apparent through numerous research studies that loneliness and social isolation are severely detrimental to us as individuals, as well as to the broader economy.

The industry that I come from, the world of hearing aids and hearing loss, understands these co-morbidities all too well, as hearing loss is often correlated with social isolation. If your hearing is so diminished that you can no longer engage in social situations, you’re more likely to become withdrawn and socially isolated/lonely.

This is ultimately why I think we’ll see voice assistants become integrated into this new generation of hearing aids. It kills two birds with one stone, as it augments one’s physical sound environment by providing amplification and the ability to hear more clearly, as well as serving as an access point to a digital assistant that can be used to communicate with one’s technology. One of the best solutions on the horizon for helping to circumvent the rising demand for caregivers might be “digital caregivers” in the form of Alexa/Google housed in hearing aids or other hearable devices.

June 10, 2019

Your Robot Doctor

This isn’t even ‘new’ news. A few years back (as noted in this article – and a video), a robot from China’s iFlytek took China’s national medical licensing examination and passed. Not only did the robot pass the exam, it scored 456 points – 96 points above the passing threshold. This makes sense given that healthcare is one of the fields that has been taking AI seriously for some time. Robots aren’t actually meant to replace human doctors; they’re meant to be assistants that help human doctors improve their efficiency.