AI-Based Framework for Real-Time Recognition of Arabic and English Sign Languages

Document Type: Original research article

Authors

1 Modern University for Technology & Information (MTI)

2 Biomedical Department, Faculty of Engineering, Modern University for Technology & Information (MTI)

DOI: 10.21608/svusrc.2025.388195.1287

Abstract

Over 300 sign languages are used worldwide, posing challenges for effective communication between deaf and hearing individuals. This study presents a bilingual sign language recognition (SLR) system that uses deep learning to enhance accessibility for deaf and mute communities. The system processes real-time video input, leveraging MediaPipe for hand and body landmark extraction. For static gesture classification (e.g., alphabet recognition), a Support Vector Machine (SVM) with a linear kernel is employed; for dynamic gesture sequences (e.g., word-level recognition), a Long Short-Term Memory (LSTM) network models the temporal patterns. The models were trained on large-scale Arabic and English sign language datasets combining Kaggle images with real-time video recordings, and were evaluated on independent real-time videos not seen during training, achieving recognition accuracies exceeding 99% for English letters and over 93% for selected Arabic words. The system supports sign-to-text translation as well as voice-to-sign and text-to-sign conversion through avatars or image sequences, promoting inclusive, real-time communication across linguistic boundaries.
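The abstract describes the static-gesture path as MediaPipe hand-landmark extraction feeding a linear-kernel SVM. The paper's own code is not reproduced here, so the following is only a minimal sketch of that pipeline; the function name `landmark_vector`, the 63-dimensional feature layout (21 landmarks × x, y, z), the placeholder training arrays, and the file `sign_frame.jpg` are illustrative assumptions, not the authors' implementation.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.svm import SVC

# One MediaPipe Hands instance, configured for independent still frames.
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_vector(bgr_frame):
    """Flatten the 21 detected hand landmarks into a (63,) feature vector."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None  # no hand detected in this frame
    points = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in points], dtype=np.float32).ravel()

# Linear-kernel SVM for static alphabet signs, as named in the abstract.
# X_train / y_train are placeholders standing in for real labeled landmark data.
X_train = np.random.rand(100, 63).astype(np.float32)
y_train = np.random.choice(list("ABC"), size=100)
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)

frame = cv2.imread("sign_frame.jpg")  # hypothetical test image
if frame is not None:
    features = landmark_vector(frame)
    if features is not None:
        print("Predicted letter:", clf.predict(features.reshape(1, -1))[0])
```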
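For the dynamic, word-level path, the abstract names an LSTM over temporal landmark sequences. A comparable Keras sketch is given below; the 30-frame window, layer sizes, ten-word vocabulary, and random placeholder data are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed shapes: 30-frame clips, 63 landmark features per frame,
# and a small vocabulary of word-level signs.
SEQ_LEN, N_FEATURES, N_WORDS = 30, 63, 10

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(32),                              # summarize the whole gesture clip
    Dense(N_WORDS, activation="softmax"),  # one probability per word sign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for real landmark sequences and labels.
X = np.random.rand(200, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_WORDS, size=200)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1]).argmax())  # index of the most likely word sign
```

Taking only the final LSTM state before the softmax is a common way to collapse a variable-length gesture clip into a single class decision.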
