A layperson’s exploration of all things voice

Monthly Archives: March 2020

March 30, 2020

Helping Voice Assistants Understand Those With Down Syndrome

“Project Understood” is an undertaking to ensure that the voice movement doesn’t leave those with Down syndrome behind. Here’s an explanation from this page:

The future is voice-first, but not for everyone. Because of their unique speech patterns, voice technology doesn’t always understand people with Down syndrome. Project Understood is ensuring the future of voice technology includes people with Down syndrome. The Canadian Down Syndrome Society is working with Google to collect voice samples from the adult Down syndrome community to create a database that can help train Google’s technology to better understand people with Down syndrome. The more voice samples we have, the more likely Google will be able to eventually improve speech recognition for everyone.

March 26, 2020

How People Are Using Voice In Their Cars

Here’s the intro from this Voicebot.ai article:

You may hear that voice assistant use depends on context, but you rarely see data that backs up those statements. The In-car Voice Assistant Consumer Adoption Report 2020 breaks down 15 consumer use cases and shows that “making a phone call” is by far the most common activity while driving. Seventy-three percent of voice assistant users in the car say they are making phone calls by voice followed by asking for directions at 49.7%, and sending a text by 38.9%. The fourth and fifth most common use cases were “playing a streaming music service” and “playing the radio” with 27.0% and 14.0% respectively. These results are from a nationally representative sample of 1,090 U.S. adults in January 2020.

Meanwhile, this piece explains how voice plays a role in buying a car for 60% of car buyers…

March 24, 2020

Amazon Removes Coronavirus Skills (& Won’t Approve New Ones)

This Voicebot.ai article provides the sad news that Amazon has removed any Alexa skills relating to coronavirus because some of them had misleading information. So much misinformation out there about this critical topic…

March 19, 2020

The NFL’s Push Into Voice

In this podcast, Voicebot.ai’s Bret Kinsella talks with NFL Labs’ Ian Campbell and Bondad.fm’s John Gillilan about how the NFL has embraced voice. The topics included:

– The goal is engaging with fans on multiple channels, since fan expectations are higher nowadays as many are tech savvy.
– The NFL’s partners also expect more and the benefit to the NFL is additional product integration opportunities.
– The NFL started with a lot of small prototypes in voice, beginning with an Alexa skill (‘Rookie’s Guide to the NFL’) last offseason. The skill teaches new fans the rules, including an international audience (games are now played in London and Mexico City). Most of their voice endeavors so far have been on Alexa, but they do have some content on the Google Assistant too.
– You need to rethink your content for a voice platform. You can’t write for voice in a vacuum; you need to hear how it sounds. How you spell things matters because it’s part of your personality, what type of music plays behind the voice matters, etc. So it’s more than just scripting.
– Voice brings a lot of truths to your content. For example, for the ‘Rookie’s Guide’ skill, they had to consider how to explain the jargon and commentary that accompanies the rules. A unique language & nomenclature exists for every industry.
– So far, the NFL has done several types of Flash Briefings: Definitions, News, Editorials, Quizzes, Games & Storytelling.
– They have used both a synthetic Polly voice (the one offered by Amazon called “Matthew”) and real people: former player Maurice Jones-Drew and sportscaster Cole Wright. They are looking at VocalID’s service too. They have tried proto-personas to see what works – and if it works, they build on that.
– They tried an avatar of ‘Football Frank,’ which used the Polly voice of Matthew.
– They spend a lot of time trying to help fans get back on track if they make a request that “fails” – they do that with some humor to lessen the blow of a failure.
– They have a multimodal project that is just internal for now. They use a ‘hear, see, do’ principle to adjust for the differences between voice-only and screen-enabled experiences.
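The point above about spelling, sound, and music being part of your personality maps directly to SSML, the markup that Alexa (and the Polly voices like “Matthew”) use to control how a script is actually spoken. Here’s a minimal sketch of composing SSML in Python – the `<speak>`, `<audio>`, and `<say-as>` tags are standard SSML, but the helper function, the audio URL, and the sample phrases are hypothetical:

```python
def build_ssml(text, interjection=None, audio_url=None):
    """Wrap plain text in SSML, optionally leading with a short audio
    sting and a speechcon-style interjection.

    The tags are standard SSML; this helper itself is just an
    illustrative sketch, not part of any real skill.
    """
    parts = []
    if audio_url:
        # A background sting or music clip changes how the same
        # script "feels" when heard.
        parts.append('<audio src="{}"/>'.format(audio_url))
    if interjection:
        # say-as interpret-as="interjection" is how Alexa renders
        # expressive speechcons like "touchdown".
        parts.append(
            '<say-as interpret-as="interjection">{}</say-as>'.format(interjection)
        )
    parts.append(text)
    return "<speak>" + " ".join(parts) + "</speak>"

ssml = build_ssml("Clipping is a block below the waist.",
                  interjection="touchdown")
print(ssml)
```

The same sentence rendered with or without the interjection and sting sounds like a different personality, which is why “it’s more than just scripting.”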

March 16, 2020

Attending Voice Events in a Virus World

In this podcast, Modev’s Pete Erickson and Score’s Bradley Metrock talk with Voicebot.ai about how this virus could impact their voice events this year. No bueno for in-person events, but they do have some online events that will continue regardless…

March 12, 2020

Being Bearish on Brands Building Their Own Voice Assistants

The past few days I’ve been blogging about this podcast, in which Voicebot.ai’s Bret Kinsella talks with John Kelvie from Bespoken about how “domains” will replace voice apps. I wanted to offer one last excerpt from John’s blog, pulled from the bottom, about how companies that are building their own voice assistants might be better served doing something else:

The devices in column one are inevitable and in some cases are already essential. Column two? Many may seem silly but some nonetheless will prove indispensable.

And these are JUST the devices with voice-capabilities embedded – the march of voice continues to be the march of IoT. Voice is our point of control for the ubiquitous computing power that exists around us. If you imagine a world in which the average cell phone owner has just ONE of each of the above items, the coming wave of voice-enabled devices looks like a tsunami. And if you factor in the devices under their control (thermostats, lights, power switches, appliances, etc.), it becomes even more staggering.

And the very good news is third parties have a huge role to play – the big guys need to provide the platforms and the device access, but they cannot do all the fulfillment. The future of the ecosystem is everyone playing nicely together in this new query-centric, domain-centric world, in which first and third-parties work together seamlessly.

For the platforms, it’s the chance to employ, at massive scale, the wisdom of the crowd – the wisdom of every brand, app builder, API and website on earth. What an amazing achievement it will be.

For third parties, it’s the opportunity to meet users, wherever they are, whatever they are doing – properly done, they will be just a short trip of the tongue away.

March 10, 2020

The Early Days: How Samsung’s Bixby is Shaping Up

This VoiceFirst.fm podcast hosted by Bradley Metrock with three evangelists from Samsung’s Bixby explores where Bixby is headed. Here are a few nuggets:

1. The ability of Samsung televisions (and other Samsung appliances) to offer voice assistant help can be a differentiator down the road. For example, you’re watching a football game and a “clipping” penalty is called. You can ask the TV to explain what “clipping” is – and a graphic will pop up with the explanation.

2. Amazon struggles with discoverability issues since more than 100k skills are now in the library. Google’s challenge is that it only allows a limited number of third parties to make Actions for its library. For Samsung, if you make a capsule it will stand out – because Bixby is relatively new, you’ll be a first mover. Like Amazon, Samsung encourages third parties to contribute capsules.

[For those new to voice: Amazon uses the term “Skill,” Google uses “Action,” and Samsung uses “Capsule” to identify the same thing – essentially an “app,” except it runs on a voice assistant rather than a mobile phone.]

3. When it comes to privacy, Bixby has the functionality for you to go back and delete any (or all) of your “utterances” – meaning you can delete anything you asked Bixby to do.

March 4, 2020

Customer Service on Voice: “Conversational Customer Care”

I thought I would take a step back and share a fairly broad, high-level article in case I have readers who are new to voice. Most of the other articles & podcasts I have been sharing are more sophisticated. Here’s an excerpt from this high-level piece:

The trend of moving customer experience beyond the screen has been dubbed “conversational customer care.” It’s still unclear just how many channels are included under this umbrella or how the future of conversational customer care will look. Brands that are dealing with demanding customers can’t afford to sit back and wait for this to play out. Screen-free customer experiences could be the future. They could be just a single touchpoint in the broader context of customer experience strategy. Or, they could just be a passing fad.

March 2, 2020

The Mayo Clinic’s Voice Experience as a First Mover

This Voicebot.ai podcast with the Mayo Clinic’s Dr. Sandhya Pruthi and Joyce Even is interesting for those helping their organizations get into voice because the Mayo Clinic was a first mover and these speakers share some details about how they got started. The points include:

1. The Mayo Clinic is a content-driven organization. It was already involved in educating the public & medical staff through multiple mediums, including chatbots.

2. They started with a first aid skill to try it out, and since then they’ve been constantly building on that. They didn’t start with a concrete plan, just generally going with the flow. Taking content built for Web or print and converting it for voice is an art & science: shorter answers are required, and you need to predict how a question will be asked.

3. They conducted a pilot in which nurses would instruct patients, after the doctor was done with them, that they could ask a voice assistant about wound care upon discharge. It’s an example of how you can use a patient’s “down time” alone in a room to get more educated about their condition. It was highly successful from both the medical staff’s and patients’ perspectives. Now they’re planning to roll out a pilot for the emergency room.

4. The speakers noted that some patients are either loath to ask their doctor certain questions (e.g., they worry they would look stupid to ask, or they have privacy concerns) or they forget their questions when the doctor comes in. Oftentimes, the family also has a lot of questions. The voice assistant can help with efficiency & education.

5. Amazon asked the Mayo Clinic to provide first-party content (i.e., content that is part of Alexa’s core; you don’t have to ask Alexa to open a Mayo Clinic skill). It took some work to convert the third-party content they had developed into first-party content.

6. A content team leads voice at the Mayo Clinic. Bret remarked that’s unusual, as it’s typically a team from marketing, product, or IT.

7. The Mayo Clinic voice doesn’t have a persona. They may eventually have one – or maybe even multiple personas depending on the type of interaction (e.g., the audience is a particular type of patient, their own doctors, etc.) – but if it proves unnecessary, they won’t do it. Still early days.

8. The Mayo Clinic has a digital strategy that stretches out to 2030. A few possibilities for how voice may evolve: voice apps that are empathetic (e.g., getting to really know you & catering to your needs); voice apps that are more proactive, reaching out & being more engaged (e.g., “did you take your meds?”); and freeing up providers to be more efficient by dramatically cutting down on the four hours per day they currently spend on medical records.