Amazon Echo to go beyond voice, will interpret sign language
A new app for smart speakers uses AI, built with the machine learning framework TensorFlow, to interpret sign language
Voice-assisted smart speakers such as the Amazon Echo are more than daily-use gadgets; they have become status symbols to show off publicly. From playing music to reporting the weather, the Amazon Echo can handle almost any electronically assisted task. To benefit from the Echo, however, you need to be able to speak and hear, which excludes many potential users. That may no longer be the case.
Pushing its capabilities a notch higher, Abhishek Singh, an IT expert, has developed an app that lets deaf people communicate with the smart speaker and have it carry out tasks. The app was built with AI, using the machine learning framework TensorFlow, and the developer has combined several techniques to make it work.
Making the Echo understand sign language isn't as quick as commanding it by voice. The app uses the webcam built into a computer: the deaf user signs in front of the camera, and the app recognises the gestures, decodes them, and converts them into text. The app then reads that text aloud on the computer so Alexa can hear it. Whatever Alexa says in reply is captured by the app and transcribed back into text for the deaf user to read.
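The relay described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, since the developer hasn't published his code: `recognize_sign()` stands in for the TensorFlow gesture classifier, and `ask_alexa()` stands in for the round trip of speaking the text aloud and transcribing Alexa's spoken reply.

```python
# Minimal sketch of the sign-to-voice relay loop. All function names
# are hypothetical; the real app's internals have not been released.

def recognize_sign(frame):
    # A trained classifier would map a webcam frame to a word label.
    # Here we pretend each frame is already labeled with its word.
    return frame

def ask_alexa(utterance):
    # Stand-in for: speak `utterance` aloud via text-to-speech,
    # then transcribe Alexa's spoken reply back into text.
    canned = {"what is the weather": "It's 72 degrees and sunny."}
    return canned.get(utterance, "Sorry, I don't know that.")

def relay(frames):
    # 1. Decode each signed word from the camera feed.
    words = [recognize_sign(f) for f in frames]
    # 2. Join the words into one spoken command.
    utterance = " ".join(words)
    # 3. Speak it to the Echo and return Alexa's reply as text.
    return utterance, ask_alexa(utterance)

if __name__ == "__main__":
    command, reply = relay(["what", "is", "the", "weather"])
    print(command)  # what is the weather
    print(reply)    # It's 72 degrees and sunny.
```

The point of the sketch is the shape of the loop, not the recognition itself: camera frames go in one end, readable text comes out the other, with Alexa's voice interface sandwiched in the middle.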
What sounds like a roundabout process could be a great help in bringing the benefits of this technology to deaf people who are currently shut out of it. The developer hasn't revealed the complete mechanism behind the system but has said he will release it publicly soon. The idea sounds revolutionary, but how it fares in real-world conditions remains to be seen until it is adopted on a wider scale.