Amazon’s Device That Can Respond to Sign Language

The human voice has become an important interface in computing. But what about people who cannot speak, cannot hear, or both? That was the question posed by technology designer Abhishek Singh, the developer of an application that allows Amazon Alexa to respond to sign language. Singh’s innovation uses a camera-based system that recognizes gestures and interprets them as speech and text. Home devices, he argues, should be designed so that deaf people can use them comfortably. Over the past few years, voice assistants from Apple, Google and Amazon have surged in popularity. A study carried out by Smart Audio suggested that the adoption of smart speakers such as those offered by Amazon and Google has outpaced that of tablets and smartphones in the United States.

For the deaf community, on the other hand, the spread of voice-controlled gadgets poses a real challenge, since many deaf users cannot speak to them. Speech recognition technology also struggles to pick up the speech rhythms of deaf users, and the inability to hear makes conversing with voice-based assistants harder still. To help, Singh is working on a project that enables Amazon’s Alexa to respond in text to American Sign Language. Singh noted that if these gadgets are going to become a central way to interact in our homes and to get tasks done, then serious thought needs to be given to users who cannot hear or speak.

An inclusive design is required to serve everyone. Singh trained an artificial-intelligence model using the machine learning platform TensorFlow. This required repeatedly gesturing in front of a webcam to teach the system a basic set of signs. Once the system could recognize the hand movements, the developer linked it to Google’s text-to-speech software to speak the matching words aloud. The Amazon Echo responded appropriately, and its spoken reply was automatically transcribed by the computer into text the user could read. The developer said the whole setup is a major workaround, since one has to work with a laptop serving as an interpreter between the user and, in this case, Alexa. In the past, there have been other attempts to use artificial intelligence and image recognition to translate sign language.
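
To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how such a gesture-to-Alexa loop could be wired together: a webcam frame is classified by a pre-trained TensorFlow model, the recognized word is spoken aloud so a nearby Echo can hear it, and Alexa’s spoken reply is transcribed back into text. The model file (sign_classifier.h5), the label list, the confidence threshold, and the use of the pyttsx3 and speech_recognition packages in place of Google’s text-to-speech and transcription services are all assumptions for illustration; this is not Singh’s actual implementation.

```python
# Hypothetical sketch of a sign-to-Alexa pipeline, similar in spirit to the
# demo described above. Model file, labels, and thresholds are assumptions.
import cv2                       # webcam capture
import numpy as np
import pyttsx3                   # text-to-speech, stand-in for Google's TTS
import speech_recognition as sr  # transcribes Alexa's spoken reply
from tensorflow.keras.models import load_model

LABELS = ["hello", "alexa", "weather", "music", "stop"]   # assumed sign vocabulary
model = load_model("sign_classifier.h5")                  # assumed pre-trained gesture model

def classify_frame(frame):
    """Resize a webcam frame and return the predicted sign label and confidence."""
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(np.expand_dims(img, axis=0), verbose=0)[0]
    return LABELS[int(np.argmax(probs))], float(np.max(probs))

def speak(text):
    """Speak the recognized sign aloud so a nearby Echo can hear it."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

def transcribe_reply(timeout=6):
    """Listen through the microphone and transcribe Alexa's spoken answer."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source, timeout=timeout)
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if ok:
        label, confidence = classify_frame(frame)
        if confidence > 0.8:      # only act on confident predictions
            speak(label)          # relay the sign as speech to the Echo
            print("Alexa said:", transcribe_reply())
```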
