In this podcast, Voicebot.ai’s Bret Kinsella talks with Matt Ware of FIRST (up to the 24:30 mark) to get his perspective on how voice is progressing in Asia and Australia. Here are some of the points made:
1. The Chinese government’s emphasis on becoming the leader in AI leads it to work closely with the large tech companies in that country – sharing data with them, providing financial support & taking their needs into account in international diplomacy.
2. Large Chinese companies – like Baidu and Alibaba – have made inroads into many non-Anglo countries by building out the infrastructure that AI will depend upon. This is particularly true in India and Africa.
3. By figuring out how voice works for a complex language like Chinese – which tends to have more complicated sentence structures than English – Chinese companies actually make it easier for themselves to then handle “easier” languages in voice.
4. In Australia, a significant segment of the population is Chinese. These Chinese expats help make Chinese technology the norm in that country.
5. Matt believes that the privacy initiatives in the US & Europe might only temporarily slow the development of AI & voice there, since the transparency those initiatives provide to the general public might make the lack of privacy acceptable (eg. the opt-in rates under GDPR so far). Matt acknowledges that others view the situation as more dire.
6. At the 20-minute mark, Matt & Bret discuss the challenges of discoverability – at the 23-minute mark, Matt gives his thoughts about whether China can be the first to fix that problem.
In this podcast, Voicebot.ai’s Bret Kinsella talks with Voiceflow’s Braden Ream and his theory about the rise of “intentless” voice apps. Here are some of the points made:
– Discoverability remains the big obstacle to voice commerce exploding. Not likely to be solved this year.
– There are two ways to discover: explicitly through a skill directory or implicitly through a type of audio search – which means that the creator of that content will lose some of the value (eg. the audience might not even know who created that voice content).
– When approving skills for its directory, Amazon does more of a functional than real quality control assessment. [My own experience bears that out.]
– There’s a small group that is making a living by creating skills for Amazon’s directory. The number of people doing so is low – below Amazon’s expectations.
– At a high level, it can be said that Amazon’s current voice strategy is building an ecosystem, whereas Google’s strategy is screens.
– Siri Shortcuts is underrated in some ways, but few people are using it. It’s real AI in that it provides recommendations.
As noted in this Voicebot.ai article, the growth rate of U.S. Alexa skills was 25% in 2019, compared to 120% growth in 2018 – a big drop on a relative basis, and the nominal decline was also sharp. The number of new Alexa skills launched per day was 51.3 in 2017, rose to 85.0 in 2018, but fell to only 38.2 last year. So while the Alexa user base is growing quickly, developer activity appears to be shrinking. Here’s an excerpt from the article explaining why this may be the case:
Much of this decline is likely attributed to less aggressive promotion in 2019 for new Alexa skill launches and a reduction in both the number of contests and rewards programs. Amazon has historically offered many incentives to developers ranging from t-shirts to smart displays for launching new skills. In 2019, these were curtailed compared to 2016-18 and Amazon started asking developers to focus more on quality than quantity of skills.
Amazon also didn’t run many contests with monetary prizes and began reducing payouts in the developer rewards programs in the U.S. These were both tangible incentives for developers to invest in building new skills for Alexa. The contests could deliver immediate payouts and the rewards program might lead to a recurring check for successful skills. Some Alexa developers have shared privately with Voicebot that they have reduced their activity on the platform or stopped working in the ecosystem altogether because of these changes.
Sarah Andrew Wilson of Matchbox.io thinks that hobbyists are no longer experimenting with Alexa skills, which is inevitably reducing new skill introductions.
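The relative-versus-nominal point can be checked with a bit of back-of-the-envelope arithmetic, using only the per-day figures quoted from the article (the annualized totals are my own extrapolation, not Voicebot’s):

```python
# New Alexa skills launched per day, by year (figures from the Voicebot.ai article)
per_day = {2017: 51.3, 2018: 85.0, 2019: 38.2}

# Implied new skills per year (simple extrapolation: per-day rate x 365)
per_year = {year: round(rate * 365) for year, rate in per_day.items()}

# Nominal drop in developer output, 2019 vs. 2018
drop = 1 - per_year[2019] / per_year[2018]

print(per_year)       # {2017: 18724, 2018: 31025, 2019: 13943}
print(f"{drop:.0%}")  # 55%
```

In other words, even setting the 120%-to-25% growth-rate comparison aside, developers launched roughly half as many new skills in 2019 as in 2018.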
As this Voicebot.ai article notes, several of Amazon’s 14 new voice products appear to be faring well in the smart home category. Here’s an excerpt:
Information on Amazon.com also offers us some indication about what the hits of the 2019 product launch are so far. Amazon Smart Oven with Alexa (with a caveat) and the Echo Dot with Clock are out-of-stock in the U.S. until February 5th and 24th respectively. These items have both been on backorder for weeks now and selling through your inventory is typically an indicator of stronger than expected consumer demand. Interestingly, searches on Amazon.com for the Smart Oven brings up other products still in stock first while searches for the Echo Dot with Clock generally direct consumers to the standard Echo Dot product pages.
Here is the caveat. The Amazon Smart Oven may be outselling its forecast given the attractive price point relative to the Tovala, Breville, and June Ovens, but has a materially lower customer star-rating than its competitors. Tovala is slightly ahead with a 3.8-star average compared to 3.5 stars for the Amazon Smart Oven. Breville has four offerings in the category with ratings ranging from 4.3 – 4.5 stars. June Oven has a 4.9-star rating with 87% of consumers giving the product the coveted 5-star rating.
Amazon’s Rausch pointed out to Voicebot the success of the Amazon Basics Microwave can be clearly seen in 4-star rating with 60% or more reviewers giving a 5-star rating. For the Alexa Smart Plug, those figures are a 4.5-star rating average with more than 80% giving a 5-star rating. At 3.5 stars and only 40% of reviewers offering a 5-star rating, the long-term prospects for the Amazon Smart Oven may not be strong. There is demand for the product at the current price point, but it is not clear whether consumers will overlook its reported deficiencies.
This Voicebot.ai article describes how Mozilla has rolled out – in beta form – voice search for its Firefox browser called “Firefox Voice.” Here’s the article’s analysis of how it works right now:
Firefox Voice performs like a smart display voice assistant within the browser as an extension. Unlike a smart speaker, it doesn’t have a wake word, at least not yet, and is activated by clicking on the icon in the address bar. The tool is limited to the desktop version of the browser and only works in English. Once activated, the assistant will answer questions by using the default search engine or go to specific websites if it recognizes them. A brief test found that it will recognize big company names relatively quickly, but will do a search instead if it’s a more obscure website.
The more interesting aspect of the voice assistant is that it can manage the browser’s tabs and control media playback. For instance, it will start or stop a YouTube video and adjust the volume. Firefox Voice also understands requests for maps and translations and can even copy and paste text, although it can be slightly tricky to pick out which parts of a website to highlight for copying.
Combined with Google’s announcement that it will soon begin using Google Assistant instead of its existing voice tool in the Chrome browser, this means that the move towards multimodal (ie. using voice combined with screens) continues to grow…
In this FutureEar podcast, Dave Kemp talks with Valencell’s Ryan Kraudel about how PPG (photoplethysmography) sensors in wearables & hearables are ushering in a whole new way of keeping track of your health. Here’s an excerpt from Dave’s blog about this topic:
One of the biggest shifts that these type of biometric-laden wearables will usher in is the ability for people to start assembling their own, individualized longitudinal data sets for their health. Previously, metrics such as heart rate and blood pressure were captured during the few times of the year when one visits the doctor. AirPods, Hearing Aids, Apple Watches and so forth, might soon be able to collect these type of metrics on the minute, every hour that you’re wearing the device. So, rather than having two or three data points in your data set for the year, the user would have tens of thousands, painting a far more robust picture of one’s health and creating individual benchmarks that machine learning algorithms can work off of to detect abnormalities in one’s health.
For decades, we’ve largely treated our health in a reactionary manner; You go see your doctor when you’re sick. Now, we’re entering into a phase that offers much deeper biometric insights from massively proliferated consumer wearables, allowing for a more proactive approach. Each individual would have their own baseline of metrics that are established through the constant usage of consumer wearables outfitted with biometric sensors. The user would then be signaled whenever there’s a deviation from the baseline.
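The “signal a deviation from the baseline” idea can be sketched in a few lines. This is of course not Valencell’s (or any vendor’s) actual algorithm – just a minimal illustration of flagging readings against a personal rolling baseline, with an arbitrary window size and threshold:

```python
import statistics

def flag_deviations(readings, window=60, threshold=3.0):
    """Flag readings that deviate from a personal rolling baseline.

    readings:  a time-ordered list of one metric (e.g. resting heart rate).
    window:    how many prior samples form the baseline.
    threshold: how many standard deviations away counts as abnormal.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.fmean(baseline)
        sd = statistics.stdev(baseline)
        # A reading far outside the user's own recent range gets flagged
        if sd > 0 and abs(readings[i] - mean) > threshold * sd:
            alerts.append((i, readings[i]))
    return alerts

# A steady ~60 bpm baseline with one sudden spike at sample 90
data = [60.0 + (i % 3) * 0.5 for i in range(100)]
data[90] = 95.0
print(flag_deviations(data, window=30))  # [(90, 95.0)]
```

The point of the excerpt is that with tens of thousands of samples per year instead of two or three, a per-person baseline like this becomes meaningful at all – real products would use far more sophisticated models, but the shape of the problem is the same.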
This Voicebot.ai podcast provides ten short interviews from the CES conference. CES offered a “voice” track for the first time – and Voicebot’s Bret Kinsella noted that this time around, voice was expected to be integrated into products, a development from its earlier status as a mere novelty.
At the 11:50 mark, Bret talks to Audiobrain’s Audrey Arbeeney. Audrey’s company assists companies that are adding sounds as part of their branding – sort of the audio analogue to visual logos. It’s an art & science that goes beyond playing simple sounds to identify your brand. She notes she’s on a panel with someone from Whirlpool – and describes how Whirlpool uses different sounds in its washing machines that are emotive & experiential.
There are sonic branding guidelines to consider, which for some companies will be on a global basis – particularly because you want the brand to be consistent. Here are other examples of what Audiobrain has done for clients. Fascinating stuff!
The latest report from Edison Research/NPR shows that voice assistants continue to be the fastest-growing technology of all time. 60 million people own a smart speaker – and households that have them own an average of 2.6 devices. That suggests that once someone owns one, they typically find it valuable and obtain more to use in multiple rooms.
Of those who use voice assistants, 24% say they use the technology daily. And over half of the US population has used a voice command technology at least once. These are all staggering numbers…
In this recode article, Rani Molla describes what it’s like to have a “smart” apartment. Here’s an excerpt:
In choosing which of these devices to use I did a bunch of my own research, then consulted professionals’ opinions and Wirecutter reviews. I wasn’t interested in testing the relative quality of devices, but rather wanted to choose the best of what was out there (within a reasonable price range) to see if the smartest smart devices could make my life better. With all that in mind, here’s the list of smart devices I installed in my house:
– A smart lock
– Smart speakers (and their assistants)
– A dog camera/treat dispenser
– A robovac
– A smart smoke/carbon monoxide detector
– Smart lights
– Smart plugs
Rani goes on to describe the details for each of these devices – including the challenges in setting them up, the benefits of having them and more. If you’re looking to make your home smarter, this is a good article to start with…
I learned a lot about the state of hearables in this Voicebot.ai podcast hosted by Bret Kinsella with Dave Kemp & Andy Bellavia. Here are some of the things I learned:
1. There will likely be a tech disruption to the traditional high-end headphone market. Will Bose, etc. get bought? Or live on as just high-end niche players in the intelligent hearable space? It is unlikely that the high-end headphone companies will take much market share from the leaders in intelligent hearables (which already offer pretty good fidelity).
2. Apple will likely continue to dominate the iOS ecosystem, as Apple’s AirPods have a very high customer satisfaction rating (98%). However, the Android ecosystem is more wide open.
3. So far, AirPods pretty much get used the same way that smart speakers are used (eg. phone calls, texts, music, podcasts, audiobooks, setting alarms & timers). How will that vary going forward? At the 49-minute mark, there is a good discussion about how geolocation offers opportunities for hearables, such as in-store purchases (eg. in a retail store, they can tell you where to go for a specific product, and can help upsell or cross-sell) or catching a train. There will be more interactivity with apps using hearables.
4. At the 54-minute mark, Bret notes that only 20% of AirPods users have used the voice assistant feature in them. That will surely change soon enough as users become more comfortable with voice and then explore the new modalities that voice offers.