This project aims to be a stepping stone for increasing awareness of BISINDO among the broader society. By utilizing real-time BISINDO recognition with dynamic sentence output, this innovation seeks to promote inclusivity and bridge the communication gap between the deaf community and the general public.
Use the package manager pip to install all related packages.
pip install opencv-python
pip install numpy
pip install mediapipe

For any missing packages, install them using pip install (package name).
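Note that OpenCV is installed from the `opencv-python` package but imported as `cv2`. A quick way to confirm the environment is ready is to check each runtime import name; this is a minimal sketch for convenience, not part of the project itself:

```python
from importlib.util import find_spec

# Runtime import names (the opencv-python package is imported as cv2).
required = ["cv2", "numpy", "mediapipe"]
missing = [name for name in required if find_spec(name) is None]

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are installed.")
```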
- First, run `keypoint_classification_EN.ipynb`. This notebook trains our hand landmark classification model. The dataset is already provided in the `model` folder.
- Then, run `app.py`. This file contains the main functions to run the sign language detection software. Note: make sure OpenCV is installed beforehand, as it is required to run this file.
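The classifier operates on hand landmark coordinates rather than raw pixels, so each detected hand is typically reduced to a normalized feature vector before classification. The sketch below illustrates the general idea under the common convention of wrist-relative, max-normalized coordinates; `preprocess_landmarks` is a hypothetical helper for illustration, not the repository's exact code:

```python
def preprocess_landmarks(landmarks):
    """Turn absolute (x, y) landmark pairs into a flat feature vector:
    coordinates relative to the wrist (landmark 0), scaled so the
    largest absolute value is 1. This makes the features invariant
    to hand position and roughly invariant to hand size."""
    base_x, base_y = landmarks[0]
    relative = [(x - base_x, y - base_y) for x, y in landmarks]
    flat = [value for point in relative for value in point]
    max_abs = max(map(abs, flat)) or 1.0  # avoid division by zero
    return [value / max_abs for value in flat]

# Toy example with three landmarks (MediaPipe Hands produces 21 per hand).
sample = [(0.5, 0.5), (0.6, 0.4), (0.7, 0.45)]
features = preprocess_landmarks(sample)
```

A vector like this is what the trained keypoint classifier consumes for each frame in the real-time loop.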
An example output, if successful, should look like this:
Fig. 1. Real-time image of class “V” detected as class “V”
Our model has successfully recognized BISINDO alphabet gestures using hand landmark classification. The model achieved a high accuracy of 97–98% and showed reliable real-time performance even in noisy environments.

