Real Time Sign Language Recognition and Translation to Text for Vocally and Hearing-Impaired People
by Jonah Joseph Yama, Mahammad Ayan Hebballi, Mohammad Ali Mulla, Mohammed Saqeeb
Published: February 11, 2026 • DOI: 10.51584/IJRIAS.2026.11010087
Abstract
The Real-Time Sign Language Recognition and Translation system presented in this study aims to improve communication between sign language users and people who do not know sign language. The system uses a webcam to capture hand movements, which are processed with OpenCV for real-time image handling and MediaPipe for hand landmark detection.
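The capture-and-landmark stage described above can be illustrated with a short sketch. This is a minimal example assuming the standard OpenCV and MediaPipe Python APIs; the webcam index, confidence thresholds, and the extract_hand_landmarks helper are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of the capture and landmark-extraction stage, assuming the
# standard OpenCV and MediaPipe Python APIs; resolution and confidence
# thresholds here are illustrative, not the authors' configuration.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_hand_landmarks(max_frames=100):
    """Capture webcam frames and return per-frame lists of 21 (x, y, z) landmarks."""
    cap = cv2.VideoCapture(0)  # default webcam
    frames_landmarks = []
    with mp_hands.Hands(max_num_hands=1,
                        min_detection_confidence=0.5,
                        min_tracking_confidence=0.5) as hands:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            result = hands.process(rgb)
            if result.multi_hand_landmarks:
                hand = result.multi_hand_landmarks[0]
                frames_landmarks.append(
                    [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                )
    cap.release()
    return frames_landmarks
```

In a pipeline like the one described, the landmark coordinates (or a cropped hand region located from them) would serve as input to the classifier discussed next.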
American Sign Language (ASL) gestures are then classified by a Convolutional Neural Network (CNN). The recognized gestures are converted into readable text, which a Text-to-Speech (TTS) engine renders as speech, enabling smoother and more natural communication.
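A similarly minimal sketch of the classification and speech stage follows, assuming a Keras CNN trained on fixed-size grayscale hand crops and the pyttsx3 TTS engine. The model file asl_cnn.h5, the 64x64 input size, and the A-Z label set are placeholder assumptions rather than the authors' actual artifacts.

```python
# Illustrative sketch of the CNN classification and text-to-speech stage.
# Assumes a Keras CNN trained on 64x64 grayscale hand crops and the pyttsx3
# TTS engine; the model file and label set are hypothetical placeholders.
import cv2
import numpy as np
import pyttsx3
from tensorflow import keras

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # hypothetical ASL letter labels

model = keras.models.load_model("asl_cnn.h5")  # hypothetical trained model file
tts = pyttsx3.init()

def classify_and_speak(hand_crop_bgr):
    """Classify a cropped hand image with the CNN and speak the predicted letter."""
    gray = cv2.cvtColor(hand_crop_bgr, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    # Add batch and channel dimensions expected by the CNN: (1, 64, 64, 1)
    probs = model.predict(img[np.newaxis, ..., np.newaxis], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    tts.say(letter)
    tts.runAndWait()
    return letter
```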
By integrating computer vision, deep learning, and speech synthesis, the project provides an accessible, efficient, and user-friendly tool for vocally and hearing-impaired individuals, improving communication and encouraging inclusivity in everyday settings such as social interaction, healthcare, and education.
The solution is designed to be cost-effective, easy to use, and scalable, making it well suited to schools, workplaces, hospitals, and public services. The ultimate goal of this project is an intelligent, real-time translation system that closes the communication gap and supports the independence of people with hearing and speech impairments.