Consumers already expect in-car voice assistants to be responsive and accurate all the time. Whether that expectation is being met is a point of disagreement between consumers and car manufacturers: Capgemini reports that 81% of automakers believe their embedded voice assistants are meeting consumers' needs, while only 59% of consumers agree.
The growing consumer demand for in-car voice assistants represents a massive opportunity for automakers willing to keep iterating on and improving the voice user interfaces embedded in their vehicles. According to the report, consumers will expect even more from voice assistants in the future.
As the provider of Houndify, an advanced voice AI platform, SoundHound Inc. is already working with top auto manufacturers to develop voice assistants that meet three key criteria:
- Responsive and conversational
- Contextual
- Connected
Beyond ASR and NLU
According to Capgemini, half of the people who use voice assistants in cars feel that they are not accurately understood. Although the voice industry is moving toward greater natural language understanding, most voice assistants still rely on a two-step process: first converting speech into data, then turning that data back into a spoken response.
Automatic Speech Recognition (ASR) technology lets people talk to machines by converting a spoken query into text the machine can process. In a second step, Natural Language Understanding (NLU) makes sense of that text so the assistant can respond. Each handoff takes time, and each step leaves room for error.
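To make that two-step handoff concrete, here is a minimal, hypothetical sketch of such a pipeline in Python. The function names and return values are illustrative assumptions only, not SoundHound's or any other vendor's actual API.

```python
# Illustrative sketch of a conventional two-step ASR -> NLU pipeline.
# All names and return shapes here are assumptions for illustration.

def recognize_speech(audio: bytes) -> str:
    """Step 1 (ASR): turn raw audio into a text transcript."""
    # A real system would run acoustic and language models here.
    return "navigate to the nearest charging station"

def interpret_text(transcript: str) -> dict:
    """Step 2 (NLU): map the transcript to an intent and its details."""
    # A real system would run intent classification and slot filling here.
    return {"intent": "navigate", "destination": "nearest charging station"}

def handle_voice_query(audio: bytes) -> dict:
    # The steps run strictly in sequence, so the latency of each stage
    # adds up, and any error in the transcript propagates into the NLU result.
    transcript = recognize_speech(audio)
    return interpret_text(transcript)
```

Because the second step cannot begin until the first finishes, the sequential handoff is one reason users perceive these assistants as slow or error-prone.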
What people really expect from their voice assistants is a conversation that happens as naturally as talking to another person. This level of intelligent conversation is available through a breakthrough technology referred to as Speech-to-Meaning.