Abstract
Communication plays a vital role in people's lives and is regarded as an essential life skill. A large number of people with speech and hearing impairments in India use Indian Sign Language (ISL) as their primary mode of communication.
Sign language is a non-verbal communication system in which meaning is conveyed entirely through visual sign patterns, and it serves as the primary mode of communication for individuals with speaking and/or hearing disabilities. However, because a substantial portion of the population has little proficiency in sign language, the Speech to Sign Language Translator offers a potential bridge between sign-language users and those unfamiliar with it. The translator employs machine learning techniques and a model trained on a curated dataset to convert English text and speech input into the expressive actions and gestures of standard Indian Sign Language, performed by an animated avatar on the webpage [1].
The audio-to-sign-language translator applies natural language processing techniques implemented in Python, uses machine learning algorithms for model training, and relies on full-stack web development technologies to build the web interface and embed the trained model. The tool offers convenient, real-time interpretation, enabling more efficient communication with individuals who lack sign language fluency. Future advancements could extend this technology to additional spoken languages worldwide, translating text or speech into their respective sign languages. Consequently, the Sign Language Translator functions both as a communication tool and as a comprehensive 'bilingual dictionary' webpage for individuals with speaking or hearing disabilities.
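The core text-to-sign step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the gloss dictionary, the `.sigml` animation filenames, and the stop-word list are all hypothetical stand-ins for a trained model and an avatar animation library.

```python
# Minimal sketch of a text-to-sign-gloss pipeline (illustrative only).
# A real system would replace the dictionary lookup with a trained model
# and feed the resulting glosses to an avatar renderer.

STOP_WORDS = {"a", "an", "the", "is", "are", "am", "to"}

# Hypothetical mapping from English words to ISL sign identifiers,
# e.g. filenames of avatar animation clips.
SIGN_GLOSSES = {
    "hello": "HELLO.sigml",
    "how": "HOW.sigml",
    "you": "YOU.sigml",
}

def text_to_signs(sentence: str) -> list:
    """Convert an English sentence into a sequence of sign glosses.

    Unknown words fall back to letter-by-letter fingerspelling,
    a common strategy in sign-language avatars.
    """
    signs = []
    for word in sentence.lower().split():
        word = word.strip(".,!?")
        if word in STOP_WORDS:
            continue  # ISL typically omits articles and copulas
        if word in SIGN_GLOSSES:
            signs.append(SIGN_GLOSSES[word])
        else:
            # Fingerspell unknown words one letter at a time.
            signs.extend(ch.upper() + ".sigml" for ch in word)
    return signs

print(text_to_signs("Hello, how are you?"))
# → ['HELLO.sigml', 'HOW.sigml', 'YOU.sigml']
```

In a full pipeline, a speech-recognition front end would supply the English sentence, and the gloss sequence would drive the avatar's animation queue on the webpage.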