There is now an app that lets the deaf community fully utilize Alexa
This elegant third-party solution enables sign language to be used to control voice assistants.
It’s safe to say that voice assistants have proven their mettle with the population at large. But what if you’re part of the deaf community and aren’t able to hear the responses? That was the question asked by Abhishek Singh, a computer scientist who first rose to prominence when he built Super Mario Bros in augmented reality.
He’s created a web application that translates sign language into spoken words for the voice assistant of your choice to hear, and then converts the assistant’s spoken response into text for deaf or hard-of-hearing users to read. Because it performs a full translation in both directions, it’s compatible with any voice assistant; the user just needs to sign the correct trigger phrase.
This app goes well beyond the partial solution the Amazon Echo Show provides, which lets members of the deaf community interact with Alexa but not carry on full conversations. Singh’s app fills that gap. As Singh tells Fast Company,
The project was a thought experiment inspired by observing a trend among companies of pushing voice-based assistants as a way to create instant, seamless interactions. If these devices are to become a central way we interact with our homes or perform tasks, then some thought needs to be given to those who cannot hear or speak. Seamless design needs to be inclusive in nature.
Singh leveraged TensorFlow, the machine learning platform, to train an A.I. system to recognize sign language. This was a tedious process: he had to sign each word into his webcam to “teach” the system the vocabulary. He then added Google’s text-to-speech API to turn the recognized signs into spoken words.
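To make the idea concrete, here is a minimal sketch of that kind of pipeline. It is not Singh’s code: his app classified live webcam video, while this toy version stands in for the webcam “teaching” sessions with synthetic feature vectors (imagine flattened hand-landmark coordinates), trains a small TensorFlow classifier on them, and then hands the recognized word to a speech step (stubbed out as a print, where Singh used Google’s TTS API). The vocabulary and feature size are made up for illustration.

```python
# Illustrative sketch only -- NOT Singh's actual app.
# Synthetic "sign" feature vectors stand in for webcam video frames.
import numpy as np
import tensorflow as tf

VOCAB = ["weather", "time", "music"]   # hypothetical signed words
N_FEATURES = 42                        # e.g. 21 hand landmarks x (x, y)

rng = np.random.default_rng(0)
# Stand-in for the per-word "teaching" sessions: each word gets a cluster
# of noisy feature vectors around a distinct prototype.
prototypes = rng.normal(size=(len(VOCAB), N_FEATURES))
X = np.concatenate(
    [p + 0.05 * rng.normal(size=(200, N_FEATURES)) for p in prototypes]
)
y = np.repeat(np.arange(len(VOCAB)), 200)

# A tiny dense classifier in place of a real video model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(VOCAB), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=30, verbose=0)

# "Recognize" a new sign and pass the word to a text-to-speech step
# (Singh used Google's TTS API; here we just print the utterance).
sample = prototypes[0] + 0.05 * rng.normal(size=N_FEATURES)
word = VOCAB[int(np.argmax(model.predict(sample[None, :], verbose=0)))]
print(f"Alexa, what's the {word}?")
```

The reverse direction of his app, transcribing Alexa’s spoken reply back into text, would sit on top of a speech-to-text service in the same spirit.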
While Singh’s third-party app is an elegant fix for the problem, he still hopes that Amazon will ultimately build sign language support directly into its devices, with no add-ons required.
That’s where I hope this heads. And if this project leads to a push in that direction in any small way, then mission accomplished. In an ideal world, I would have built this on the Show directly, but the devices aren’t that hackable yet, [I] wasn’t able to find a way to do it.
It’s awesome to see developers are stepping up for accessibility, but it would be great to see this built-in. What do you think?
For more tech news, check out:
- Xbox Insiders now have the chance to test out Dolby Vision and new accessibility features
- Some Uber drivers are being accused of committing ‘vomit fraud’
- Ecovacs crushed it on Prime Day and now they are the best-selling robo vacuum on Amazon