S. Pothalaiah, M. Amru, Kranti Kumar Appari
This study employs computer vision and machine learning to analyze sign language gestures in real time. It captures video input from a webcam and uses OpenCV to detect and track hand movements. A pre-trained Convolutional Neural Network (CNN) then classifies these gestures against a dataset of American Sign Language (ASL) and British Sign Language (BSL) signs. The system converts the recognized gestures into text or speech, presented through an intuitive user interface. This technology seeks to enhance communication for deaf and hard-of-hearing individuals, fostering inclusivity in educational, professional, and social environments.
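As a rough illustration of the pipeline the abstract describes, the following is a minimal sketch assuming a Keras CNN saved as "asl_cnn.h5", a fixed hand region of interest, a 64x64 grayscale input, and 26 ASL letter classes; the model file, input size, and label list are illustrative assumptions, not details taken from the paper.

```python
# Sketch: webcam capture -> OpenCV hand region -> CNN classification -> on-screen text.
# The model path, input shape, and labels below are hypothetical placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed 26 ASL letter classes
model = load_model("asl_cnn.h5")  # hypothetical pre-trained gesture classifier

cap = cv2.VideoCapture(0)  # webcam input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]  # fixed hand region; a real system would track the hand
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64)) / 255.0  # normalize to the assumed CNN input
    probs = model.predict(resized.reshape(1, 64, 64, 1), verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]  # highest-probability class as recognized sign
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The recognized text could additionally be routed to a text-to-speech engine to produce the speech output the abstract mentions.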