Sweta, W. and Kartiki, J. and Prerana, K. and Aarya, M. and Rutuja, P. (2025) Sign Language to Text and Speech Conversion. International Journal of Innovative Science and Research Technology, 10 (6): 25jun285. pp. 141-147. ISSN 2456-2165
This report presents the design and development of a Sign Language to Text and Speech Conversion System. The main goal of this project is to improve communication for people who are deaf or hard of hearing by translating sign language gestures into text and spoken words in real time, bridging the communication gap between sign language users and people who do not know sign language. The system uses a gesture recognition model based on Convolutional Neural Networks (CNNs) to detect hand gestures that represent different signs. Major challenges in this process include varying lighting conditions, cluttered backgrounds, and differences in hand shape and gesture execution. To address these issues, the system applies vision-based techniques and hand landmark detection using the MediaPipe library, which improves recognition accuracy and robustness. After recognizing a gesture, the system converts it into text and uses Text-to-Speech (TTS) technology to generate clear spoken output, allowing people with hearing disabilities to communicate more smoothly with those unfamiliar with sign language. The report also discusses the positive impact this technology can have in settings such as schools, offices, and public service areas, and emphasizes the importance of continued advances in machine learning and computer vision to make such systems more reliable and user-friendly. Overall, this project highlights how modern technology can promote a more inclusive, accessible world for everyone.
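The pipeline described above (hand landmarks in, gesture label out, then text/speech) can be illustrated with a minimal sketch. The abstract does not give implementation details, so the following is an assumption-laden toy: it takes 21 hand landmarks per frame (the number MediaPipe Hands produces), normalizes them for position and scale, and matches against stored gesture templates with a nearest-template rule in place of the paper's CNN classifier. The template dictionary and the synthetic landmark data are illustrative, not from the paper.

```python
import numpy as np

def normalize_landmarks(pts):
    """Translate landmarks to a wrist-centered origin and scale by the
    largest distance from the wrist, so matching is invariant to where
    the hand appears in the frame and how large it is."""
    pts = np.asarray(pts, dtype=float)
    pts = pts - pts[0]                      # landmark 0 is the wrist in MediaPipe
    scale = np.linalg.norm(pts, axis=1).max()
    return pts / scale if scale > 0 else pts

def classify(pts, templates):
    """Nearest-template matching over normalized landmark vectors.
    A trained CNN (as in the paper) would replace this step."""
    x = normalize_landmarks(pts).ravel()
    best_label, best_dist = None, float("inf")
    for label, template in templates.items():
        d = np.linalg.norm(x - normalize_landmarks(template).ravel())
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Synthetic stand-ins for per-gesture landmark templates (21 points, x/y).
rng = np.random.default_rng(0)
templates = {"A": rng.random((21, 2)), "B": rng.random((21, 2))}

# A scaled and shifted copy of gesture "A" should still match "A",
# since normalization removes translation and scale.
probe = templates["A"] * 2.0 + 5.0
print(classify(probe, templates))  # prints "A"
```

The recognized label would then be appended to the output text buffer and handed to a TTS engine for spoken output; that last step is omitted here because it depends on the platform's speech stack.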