This voicebot.ai article talks about something awesome: leveraging Alexa as a social tool to connect with family and friends by sharing songs. Wild.
Here is an excerpt from the piece:
Alexa users listening to a song on an Echo smart speaker or their smartphone can ask the voice assistant to share the music with any contact who owns an Echo or has the Alexa mobile app. The recipient will get a notification, and Alexa will ask whether they want to hear the song and which device to play it on, and will let them send a reaction back to the sender.
The music will play on the recipient’s default music streaming service if the song is available there, or on another streaming service if not. Alexa can tap an enormous catalog across Amazon Music, Apple Music, iHeartRadio, TuneIn, and Radio.com, but if none of those has the song being shared, Alexa will offer to play a music station relevant to the song title or artist.
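Just to make that fallback order concrete, here’s a minimal Python sketch of the decision cascade the excerpt describes. It’s purely illustrative: the `pick_playback` helper, the hard-coded service list, and the catalog lookup are my own assumptions, not Amazon’s actual implementation.

```python
# Hypothetical sketch of the fallback order described above -- not Amazon's
# actual implementation. Service names and catalog lookups are assumptions.
DEFAULT_SERVICE = "Amazon Music"
OTHER_SERVICES = ["Apple Music", "iHeartRadio", "TuneIn", "Radio.com"]

def pick_playback(song: str, catalogs: dict[str, set[str]]) -> str:
    """Decide where a shared song should play for the recipient."""
    # 1. Prefer the recipient's default streaming service.
    if song in catalogs.get(DEFAULT_SERVICE, set()):
        return f"Play '{song}' on {DEFAULT_SERVICE}"
    # 2. Fall back to any other linked service that has the song.
    for service in OTHER_SERVICES:
        if song in catalogs.get(service, set()):
            return f"Play '{song}' on {service}"
    # 3. Otherwise, offer a station related to the song title or artist.
    return f"Offer a station related to '{song}'"

catalogs = {"Apple Music": {"Example Song"}}
print(pick_playback("Example Song", catalogs))
# -> Play 'Example Song' on Apple Music
```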
Here is a note from the “Rain” agency:
Voice is a natural channel for conveying our emotions and feelings. However, voice technology is still trying to crack sentiment analysis and how those data points can inform the creation of emotionally intelligent voice experiences and assistants. We have seen voice assistants like Amazon Alexa and Google Assistant expand their speaking styles to include different emotions in certain responses, reflecting more humanlike interactions.
However, monitoring emotions on the consumer side is still a nascent technology. Amazon has taken some steps toward realizing this with its health and wellness wearable Halo, which tracks users’ tone of voice to make them more aware of their communication styles. This week, we’ve seen a new update in emotion recognition with Spotify’s patent approval of technology that analyzes listeners’ moods. Even though the patent only points to a small number of features and a targeted use case, we are beginning to see how voice technology might leverage sentiment to provide more relevant recommendations and experiences for consumers in many contexts.
Here’s the intro from this voicebot.ai article:
Facebook is building a virtual assistant to digest, summarize, and read articles for users, according to a BuzzFeed report on a closed company meeting. The tool, dubbed TL;DR after the internet slang for “too long, didn’t read,” would use AI to condense articles into bullet points and read them aloud for users who would rather skip the full text. The social media giant also discussed other planned AI projects, including a neural sensor that would detect thoughts as they form and turn them into commands for AI assistants.
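The report doesn’t say how Facebook’s model would work, but the “condense into bullet points” step can be sketched with an off-the-shelf summarizer. Here’s a minimal illustration using the open-source Hugging Face transformers library; the model choice and the sentence-based bullet splitting are my assumptions, not the TL;DR tool itself.

```python
# Illustrative only -- not Facebook's TL;DR system. Uses an off-the-shelf
# summarization model; model choice and bullet splitting are assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "Alexa users listening to a song on an Echo smart speaker or their "
    "smartphone can ask the voice assistant to share the music with any "
    "contact who owns an Echo or has the Alexa mobile app. The recipient "
    "will get a notification, and Alexa will ask whether they want to hear "
    "the song and which device to play it on."
)

summary = summarizer(article, max_length=60, min_length=15, do_sample=False)

# Split the abstractive summary into rough bullet points, one per sentence.
for sentence in summary[0]["summary_text"].split(". "):
    print("-", sentence.strip().rstrip("."))
```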
Here’s the intro from this voicebot.ai article:
Voice assistants may soon need to pay Wikipedia to find answers to some of the questions users pose. The Wikimedia Foundation, the umbrella organization that encompasses Wikipedia and its sibling wiki-projects, is launching Wikimedia Enterprise to start packaging and selling Wikipedia’s content to Apple, Amazon, Facebook, and Google, including their respective voice assistants, as first reported by Wired.