Occasionally, Alexa responds with “Sorry, I’m having trouble understanding you right now.” It’s not one of my favorite responses, and I can’t imagine having to hear it every single time.
For people with mobility disabilities, technology powered by their voice is a huge blessing. But if they rely on voice assistants that can’t understand them, the results can be dangerous.
Where data is available, an estimated 7.5 million people in the United States “have trouble using their voices” because of disorders like stuttering or speech-altering conditions caused by cerebral palsy.
“Telling Alexa to play a song or asking Siri for directions can be almost impossible whenever prolonged (“Aaaaaaaaa-lexa”) or chopped (“Hey … Si … ri!”) sounds cause the devices to misunderstand my commands or stop listening altogether” – NY Times (Link in comments)
How can we ensure that disabled communities are included in the voice revolution? That our voice-enabled world no longer leaves people behind? That voice technology accounts for diverse speech patterns?
Perhaps, something for all #ConversationalAI practitioners to consider. Myself included 🙂