The next time you shout angrily at Amazon’s Alexa voice assistant for misinterpreting “weather” as “whether,” it might respond with an apology. According to a report from the MIT Technology Review, the retail giant is working on “significant” language processing upgrades to Alexa that may allow the voice assistant to, among other things, accurately detect the emotion in your voice.
Specifically, engineers with Amazon’s Echo project are exploring “new natural-language processing techniques” that might significantly augment the voice assistant’s recognition capabilities, reports the MIT Technology Review. The project is in the early stages, but one leg of research concerns Alexa’s ability to interpret subtle differences in the intonation, tenor, and pitch of a person’s voice. “How the human affect is recognized and then reflected by [Alexa’s] voice will be a key area of [Amazon’s] R&D,” said an MIT Technology Review source.
Amazon is also experimenting with an advanced language model that would be able to comprehend “ambiguous requests” — the sort of vague questions that throw the current iteration of Alexa for a loop. New “probabilistic techniques” will factor geographic proximity into the assistant’s responses, among other variables — “What’s the score of the soccer game?” might return stats for the nearest city’s team, for example.
The ability to determine a person’s emotion from speech is nothing new, exactly — the MIT Technology Review points out that some telephone support software can already detect when a customer is becoming irritated — and voice recognition systems like Vocal IQ, which Apple acquired last year, use past conversations to improve responses to future requests. But by combining the two technologies, Amazon has the potential to create a far more precise assistant than any seen so far.
The company is already making inroads, reportedly — Amazon currently uses data about Alexa users’ interests to improve the accuracy of the assistant’s voice recognition. But the new systems go far beyond the current platform’s capabilities. “It is super-vital for the conversation to be magical,” said MIT Technology Review’s source.
Amazon, perhaps under pressure from rivals like Google and Apple, has focused an increasing amount of development resources on Alexa.
Amazon hasn’t been resting on its voice-recognition laurels, either. Company chief Jeff Bezos revealed in an interview at Recode’s recent Code Conference that the Alexa division within Lab126 — Amazon’s skunkworks research and development arm — had grown to more than 1,000 employees.
Alexa may be far and away the most successful voice assistant platform in terms of hardware — Amazon has sold more than 3 million Echo units so far — but it’ll face stiff competition in the coming months. Apple, according to VentureBeat, plans to open its Siri voice platform to third-party developers and, later this year, release an Apple TV set-top box with beefed-up Siri support in the form of built-in microphones and facial recognition features. Google, meanwhile, will launch its Google Home voice assistant device in the fall.