Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

Indian Sign Language Recognition on PYNQ Board

Author(s): Sukhendra Singh*, G. N. Rathna and Vivek Singhal

Volume 15, Issue 1, 2022

Published on: 09 September, 2020

Page: [98 - 104] Pages: 7

DOI: 10.2174/2666255813999200909110140

Abstract

Introduction: Sign language is the primary means of communication for speech-impaired people, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a system that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol.

Methods: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image; the Kinect captures 3D (RGB plus depth) images, which makes the classification more accurate.
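The advantage of per-pixel depth can be illustrated with a minimal sketch: because a depth camera reports the distance of every pixel, the hand (the nearest object) can be isolated with a simple depth cut-off, which an RGB-only web camera cannot do. The thresholds and toy depth map below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: segmenting the hand from a depth image by thresholding.
# NEAR_MM/FAR_MM are assumed bounds for the hand region, in millimetres.

NEAR_MM = 400
FAR_MM = 900

def segment_hand(depth, near=NEAR_MM, far=FAR_MM):
    """Return a binary mask: 1 where the pixel depth falls in [near, far]."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

# Toy 3x4 depth map (mm): hand pixels sit around 500-800 mm,
# the background lies beyond 1500 mm.
depth = [
    [1600, 1600,  520,  540],
    [1600,  610,  630, 1600],
    [ 800, 1600, 1600, 1600],
]
mask = segment_hand(depth)
```

A real pipeline would apply such a mask to Kinect depth frames before feature extraction; here the point is only that depth makes foreground/background separation trivial.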

Results: The Kinect camera produces different depth images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a simple web camera cannot distinguish between such pairs. We used hand gestures from Indian Sign Language, and our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the total images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the numerals 0-9.
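The 80/20 train/test split described above can be sketched with the standard library alone; the filenames and random seed here are illustrative, not the authors':

```python
# Minimal sketch of an 80/20 train/test split over the 46,339-image dataset.
import random

def split_dataset(samples, train_frac=0.8, seed=42):
    """Shuffle and split a list of samples into train/test partitions."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical filenames standing in for the paper's depth images.
images = [f"depth_{i:05d}.png" for i in range(46339)]
train, test = split_dataset(images)
print(len(train), len(test))   # 37071 training images, 9268 test images
```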

Conclusion: Along with the real-time implementation, we have also compared the performance of various machine learning models and found that a CNN working on depth images achieves higher accuracy than the other models. All these results were obtained on the PYNQ-Z2 board.
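Comparing models on a common test set comes down to a single metric: the fraction of gestures classified correctly. A minimal sketch of that accuracy computation follows; the labels below are placeholders, not results from the paper.

```python
# Hedged sketch of the accuracy metric used to compare classifiers.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy ground truth and predictions over gesture symbols (placeholder data).
y_true = ["A", "B", "2", "V", "I", "1"]
y_pred = ["A", "B", "2", "V", "L", "1"]
print(f"accuracy = {accuracy(y_true, y_pred):.3f}")  # 5 of 6 correct
```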

Discussion: We performed labeling of the dataset, training, and classification on the PYNQ-Z2 FPGA board for static images using SVM, logistic regression, KNN, multilayer perceptron, and random forest algorithms. For this experiment, we used four different datasets of ISL alphabets prepared in our lab. We analyzed both RGB and depth images.
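Of the classical models listed, k-nearest neighbours is the simplest to illustrate. The from-scratch sketch below classifies toy 2-D feature vectors by majority vote among the k closest training points; it is not the authors' pipeline, whose real inputs would be features extracted from the RGB/depth images.

```python
# Illustrative k-nearest-neighbour classifier on toy 2-D feature vectors.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbours = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated gesture classes, '1' and 'V'.
train = [((0.0, 0.0), "1"), ((0.1, 0.2), "1"), ((0.2, 0.1), "1"),
         ((1.0, 1.0), "V"), ((0.9, 1.1), "V"), ((1.1, 0.9), "V")]
print(knn_predict(train, (0.05, 0.1)))  # -> "1"
print(knn_predict(train, (1.0, 0.95)))  # -> "V"
```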

Keywords: Computer vision, Kinect camera, PYNQ-Z2, sign language, depth images, hand gestures

