Enhancing Sign Language Understanding through Machine Learning at the Sentence Level

Authors

C. Sekhar, J. A. Devi, M. S. Kumar, K. Swathi, P. P. Ratnam & M. S. Rao

DOI:

https://doi.org/10.52756/ijerr.2024.v41spl.002

Keywords:

Sign Language, Feature Extraction, Machine Learning, Random Forest

Abstract

Sign language is a visual language built on nonverbal communication, chiefly hand and body gestures, and it is the primary means of communication for deaf and hard-of-hearing people around the world. A system that translates sign language into sentences in real time can therefore serve both hearing and deaf users. This work develops a sentence-level sign language detection system using a custom dataset and a Random Forest model. Leveraging tools such as MediaPipe and TensorFlow, we detect gestures continuously and generate a list of corresponding labels, which are then used to construct sentences automatically. The system integrates with ChatGPT, which generates sentences directly from the detected gestures. Our custom dataset enables the model to interpret a wide range of sign language gestures accurately. By merging machine learning with large language models, our method achieves 80% accuracy and helps close the communication gap between people who use sign language and others.
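
The paper's code is not included on this page, but the pipeline the abstract describes can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the Random Forest is assumed to be a scikit-learn classifier trained on flattened MediaPipe hand-landmark coordinates (the file name rf_model.pkl is hypothetical), the ChatGPT integration is stood in for by the OpenAI chat API, and the label set, prompt wording, and any TensorFlow components mentioned in the abstract are omitted or simplified.

# Sketch of the sentence-level pipeline described in the abstract:
# MediaPipe extracts hand landmarks from webcam frames, a Random
# Forest classifies each gesture, detected labels accumulate into a
# word list, and ChatGPT turns that list into a fluent sentence.
# Assumptions (not from the paper): model path, label handling, prompt.
import pickle

import cv2
import mediapipe as mp
import numpy as np
from openai import OpenAI

# Hypothetical pre-trained scikit-learn RandomForestClassifier whose
# inputs are flattened (x, y, z) coordinates of 21 hand landmarks.
with open("rf_model.pkl", "rb") as f:
    rf = pickle.load(f)

hands = mp.solutions.hands.Hands(
    static_image_mode=False, max_num_hands=1, min_detection_confidence=0.5
)

def landmarks_to_features(hand_landmarks):
    """Flatten 21 MediaPipe landmarks into a 63-dim feature vector."""
    return np.array(
        [c for lm in hand_landmarks.landmark for c in (lm.x, lm.y, lm.z)]
    ).reshape(1, -1)

detected_words = []  # gesture labels accumulated across frames
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        features = landmarks_to_features(results.multi_hand_landmarks[0])
        label = rf.predict(features)[0]
        # Avoid repeating the same label on consecutive frames.
        if not detected_words or detected_words[-1] != label:
            detected_words.append(label)
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop capturing
        break
cap.release()
cv2.destroyAllWindows()

# Hand the accumulated gesture labels to ChatGPT to form a sentence.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Form a grammatical sentence from these sign-language "
                   "gesture labels, in order: " + " ".join(detected_words),
    }],
)
print(response.choices[0].message.content)

Deduplicating consecutive identical labels is one simple way to turn per-frame predictions into a word list; a real system would likely add temporal smoothing or a confidence threshold before accepting a label.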

Published

2024-07-30

How to Cite

Sekhar, C., Devi, J. A., Kumar, M. S., Swathi, K., Ratnam, P. P., & Rao, M. S. (2024). Enhancing Sign Language Understanding through Machine Learning at the Sentence Level. International Journal of Experimental Research and Review, 41(Spl Vol), 11–18. https://doi.org/10.52756/ijerr.2024.v41spl.002