Volume 15, Issue 1
  • ISSN: 2666-2558
  • E-ISSN: 2666-2566

Abstract

Introduction: Sign language is often the primary means of communication for people with speech impairments, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a system that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol.

Methods: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D information, making the classification more accurate.

Results: The Kinect camera produces distinct images for the hand gestures '2' and 'V', and likewise for '1' and 'I', whereas a simple web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language (ISL); our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for numerals.

Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN working on depth images is more accurate than the other models. All of these results were obtained on the PYNQ-Z2 board.

Discussion: We performed dataset labeling, training, and classification on the PYNQ-Z2 FPGA board for static images using SVM, logistic regression, KNN, multilayer perceptron, and random forest algorithms. For this experiment, we used our own four datasets of ISL alphabets prepared in our lab, and we analyzed both RGB images and depth images.
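The classical-ML part of the pipeline described above (an 80/20 train/test split, then SVM, logistic regression, KNN, and random forest classifiers on flattened depth images) can be sketched roughly as below. This is a minimal illustration, not the authors' code: the image size (32x32), sample count, and random stand-in data are all assumptions, since the paper's 46,339-image ISL dataset is not publicly included here.

```python
# Hypothetical sketch of the classical-ML comparison: several classifiers
# trained on flattened depth images with an 80/20 train/test split.
# Synthetic random data stands in for the real ISL depth images.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_classes = 720, 36            # stand-in for the 36 ISL gestures
X = rng.random((n_samples, 32 * 32))      # flattened 32x32 depth images (assumed size)
y = rng.integers(0, n_classes, n_samples) # random stand-in labels

# 80% of the images for training, 20% for testing, as in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "svm": SVC(),
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Fit each model and report its test-set accuracy
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
print(scores)
```

With random labels the accuracies hover near chance (about 1/36); on the real depth images each `score` would reflect the comparison the paper reports. The multilayer perceptron and CNN variants would slot into the same `models` dictionary.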

DOI: 10.2174/2666255813999200909110140
Published: 2022-01-01

  • Article Type:
    Research Article
Keyword(s): Computer vision; depth images; hand gestures; kinect camera; PYNQ-Z2; sign language