The International Arab Journal of Information Technology (IAJIT), Vol. 19, No. 3, May 2022


Hybrid FiST_CNN Approach for Feature Extraction for Vision-Based Indian Sign Language Recognition

Indian Sign Language (ISL) is the language commonly used by the deaf-mute community in the Indian subcontinent. Effective feature extraction is essential for the automatic recognition of gestures. This paper aims at developing an efficient feature extraction technique using Features from Accelerated Segment Test (FAST), Scale-Invariant Feature Transform (SIFT), and Convolutional Neural Networks (CNN). FAST and SIFT are used to detect and compute features, respectively, and a CNN classifies the hybridized FAST-SIFT features. The system is implemented and tested using the Python-based library Keras. The proposed technique has been evaluated on 34 ISL gestures (24 alphabets and 10 digits) and compared with CNN and SIFT_CNN baselines, and it is also tested on two publicly available datasets, the Jochen Triesch Dataset (JTD) and the NUS-II dataset. The proposed study outperformed some existing ISLR works with accuracies of 97.89%, 95.68%, 94.90% and 95.87% for ISL-alphabets, MNIST, JTD and NUS-II, respectively.
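To make the detection stage of the pipeline concrete, the sketch below implements the core FAST segment test the abstract relies on: a pixel is a corner if at least n contiguous pixels on a 16-pixel Bresenham circle of radius 3 are all brighter than the center plus a threshold, or all darker than the center minus it. This is a minimal illustrative version written in plain NumPy (the paper itself uses library implementations via Keras/OpenCV-style tooling); the function name, threshold, and arc length n=12 are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, r, c, t=20, n=12):
    """Return True if pixel (r, c) passes the FAST segment test:
    at least n contiguous circle pixels are all brighter than
    center + t or all darker than center - t."""
    center = int(img[r, c])
    vals = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    # Duplicate the ring so a contiguous run may wrap around the circle.
    bright = [v > center + t for v in vals] * 2
    dark = [v < center - t for v in vals] * 2
    for flags in (bright, dark):
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```

In the full hybrid approach, keypoints found this way would then be described with SIFT descriptors and the resulting features passed to a CNN classifier; a production system would use an optimized detector (e.g., OpenCV's) rather than this per-pixel sketch.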

[1] Adel E., Elmogy M., and Elbakry H., “Image Stitching System based on ORB Feature-Based Technique and Compensation Blending,” International Journal of Advanced Computer Science and Applications, vol. 6, no. 9, pp. 55-62, 2015.

[2] Agrawal S., Jalal A., and Bhatnagar C., “Recognition of Indian Sign Language Using Feature Fusion,” in Proceedings of the 4th International Conference on Intelligent Human Computer Interaction, Kharagpur, pp. 1-5, 2012.

[3] Azhar R., Tuwohingide D., Kamudi D., and Suciati N., “Batik Image Classification Using SIFT Feature Extraction, Bag of Features and Support Vector Machine,” Procedia Computer Science, vol. 72, pp. 24-30, 2015.

[4] Barros P., Maciel-Junior N., Fernandes B., Bezerra B., and Fernandes S., “A Dynamic Gesture Recognition and Prediction System Using the Convexity Approach,” Computer Vision and Image Understanding, vol. 155, pp. 139-149, 2017.

[5] Bheda V. and Radpour D., “Using Deep Convolutional Networks for Gesture Recognition in American Sign Language,” arXiv preprint arXiv:1710.06836, 2017.

[6] Bora R., Bisht A., Saini A., Gupta T., and Mittal A., “ISL Gesture Recognition Using Multiple Feature Fusion,” in Proceedings of the International Conference on Wireless Communications, Signal Processing and Networking, Chennai, pp. 196-199, 2017.

[7] Cheok M., Omar Z., and Jaward M., “A Review of Hand Gesture and Sign Language Recognition Techniques,” International Journal of Machine Learning and Cybernetics, vol. 10, no. 1, pp. 131-153, 2019.

[8] Dardas N., Chen Q., Georganas N., and Petriu E., “Hand Gesture Recognition Using Bag-of-Features and Multi-Class Support Vector Machine,” in Proceedings of the IEEE International Symposium on Haptic Audio Visual Environments and Games, Phoenix, pp. 1-5, 2010.

[9] Dudhal A., Mathkar H., Jain A., Kadam O., and Shirole M., “Hybrid SIFT Feature Extraction Approach for Indian Sign Language Recognition System Based on CNN,” in Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering, Palladam, pp. 727-738, 2018.

[10] El-Gayar M., Soliman H., and Meky N., “A Comparative Study of Image Low Level Feature Extraction Algorithms,” Egyptian Informatics Journal, vol. 14, no. 2, pp. 175-181, 2013.

[11] GitHub Project, Sign-Language-Recognition [Unpublished raw data], MIT License, 2017.

[12] He K., Zhang X., Ren S., and Sun J., “Deep Residual Learning for Image Recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 770-778, 2016.

[13] Hore S., Chatterjee S., Santhi V., Dey N., Ashour A., Balas V., and Fuqian S., Indian Sign Language Recognition Using Optimized Neural Networks, Springer International Publishing, 2017.

[14] Huijuan Z. and Qiong H., “Fast Image Matching Based on Improved SURF Algorithm,” in Proceedings of the International Conference on Electronics, Communications and Control, Ningbo, pp. 1460-1463, 2011.

[15] Ibrahim N., Selim M., and Zayed H., “An Automatic Arabic Sign Language Recognition System (ArSLRS),” Journal of King Saud University-Computer and Information Sciences, vol. 30, no. 4, pp. 470-477, 2018.

[16] Islam M., Mitu U., Bhuiyan R., and Shin J., “Hand Gesture Feature Extraction Using Deep Convolutional Neural Network for Recognizing American Sign Language,” in Proceedings of the 4th International Conference on Frontiers of Signal Processing, Poitiers, pp. 115-119, 2018.

[17] Islam M., Rahman M., Rahman M., Arifuzzaman M., Sassi R., and Aktaruzzaman M., “Recognition Bangla Sign Language Using Convolutional Neural Network,” in Proceedings of the International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sakhier, pp. 1-6, 2019.

[18] Joshi G., Singh S., and Vig R., “Taguchi-TOPSIS Based HOG Parameter Selection for Complex Background Sign Language Recognition,” Journal of Visual Communication and Image Representation, vol. 71, p. 102834, 2020.

[19] Juan L. and Oubong G., “A Comparison of SIFT, PCA-SIFT and SURF,” International Journal of Image Processing, vol. 3, no. 4, pp. 143-152, 2009.

[20] Kalam M., Mondal M., and Ahmed B., “Rotation Independent Digit Recognition in Sign Language,” in Proceedings of the International Conference on Electrical, Computer and Communication Engineering, Cox's Bazar, pp. 1-5, 2019.

[21] Karami E., Prasad S., and Shehata M., “Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images,” arXiv preprint arXiv:1710.02726, 2017.

[22] Karima K. and Nacera B., “A Dynamic Particle Swarm Optimisation and Fuzzy Clustering Means Algorithm for Segmentation of Multimodal Brain Magnetic Resonance Image Data,” The International Arab Journal of Information Technology, vol. 17, no. 6, pp. 976-983, 2020.

[23] Kaur B., Joshi G., and Vig R., “Indian Sign Language Recognition Using Krawtchouk Moment-Based Local Features,” The Imaging Science Journal, vol. 65, no. 3, pp. 171-179, 2017.

[24] Kishore P., Rao G., Kumar E., Kumar M., and Kumar D., “Selfie Sign Language Recognition with Convolutional Neural Networks,” International Journal of Intelligent Systems and Applications, vol. 10, no. 10, pp. 63-71, 2018.

[25] Loncomilla P., Ruiz-del-Solar J., and Martínez L., “Object Recognition Using Local Invariant Features for Robotic Applications: A Survey,” Pattern Recognition, vol. 60, pp. 499-514, 2016.

[26] Lowe D., “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

[27] Mavi A., “A New Dataset and Proposed Convolutional Neural Network Architecture for Classification of American Sign Language Digits,” arXiv preprint arXiv:2011.08927, 2020.

[28] Nagi J., Ducatelle F., Di Caro G., Cireşan D., Meier U., Giusti A., and Gambardella L., “Max-Pooling Convolutional Neural Networks for Vision-Based Hand Gesture Recognition,” in Proceedings of the IEEE International Conference on Signal and Image Processing Applications, Kuala Lumpur, pp. 342-347, 2011.

[29] Nandy A., Prasad J., Mondal S., Chakraborty P., and Nandi G., “Recognition of Isolated Indian Sign Language Gesture in Real-Time,” in Proceedings of the International Conference on Business Administration and Information Processing, Trivandrum, pp. 102-107, 2010.

[30] Patil S. and Sinha G., “Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture Using Scale Invariant Feature Transform (SIFT),” Journal of The Institution of Engineers (India): Series B, vol. 98, no. 1, pp. 19-26, 2017.

[31] Pisharady P., Vadakkepat P., and Loh A., “Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds,” International Journal of Computer Vision, vol. 101, no. 3, pp. 403-419, 2013.

[32] Rastgoo R., Kiani K., and Escalera S., “Hand Pose Aware Multimodal Isolated Sign Language Recognition,” Multimedia Tools and Applications, vol. 80, pp. 127-163, 2021.

[33] Rosten E., Porter R., and Drummond T., “Faster and Better: A Machine Learning Approach to Corner Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 105-119, 2010.

[34] Rosten E. and Drummond T., “Machine Learning for High-Speed Corner Detection,” in Proceedings of the European Conference on Computer Vision, Graz, pp. 430-443, 2006.

[35] Sarkar A., Talukdar A., and Sarma K., “CNN-Based Real-Time Indian Sign Language Recognition System,” in Proceedings of the International Conference on Advances in Computational Intelligence and Informatics, Hyderabad, 2020.

[36] Sharma A., Sharma N., Saxena Y., Singh A., and Sadhya D., “Benchmarking Deep Neural Network Approaches for Indian Sign Language Recognition,” Neural Computing and Applications, vol. 33, pp. 6685-6696, 2021.

[37] Sharafudeen M., David S., and Simon P., “Visual Words Based Static Indian Sign Language Alphabet Recognition Using KAZE Descriptors,” in Proceedings of Evolution in Signal Processing and Telecommunication Networks, Singapore, pp. 93-101, 2022.

[38] Tao W., Leu M., and Yin Z., “American Sign Language Alphabet Recognition Using Convolutional Neural Networks with Multiview Augmentation and Inference Fusion,” Engineering Applications of Artificial Intelligence, vol. 76, pp. 202-213, 2018.

[39] Tareen S. and Saleem Z., “A Comparative Analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK,” in Proceedings of the International Conference on Computing, Mathematics and Engineering Technologies, Sukkur, pp. 1-10, 2018.

[40] Triesch J. and Von Der Malsburg C., “Robust Classification of Hand Postures Against Complex Backgrounds,” in Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition, Killington, pp. 170-175, 1996.

[41] Tyagi A. and Bansal S., “Feature Extraction Technique for Vision-Based Indian Sign Language Recognition System: A Review,” in Proceedings of Computational Methods and Data Engineering, Singapore, pp. 39-53, 2021.

[42] Tyagi A. and Bansal D., “Hybrid FAST-SIFT-CNN (HFSC) Approach for Vision-Based Indian Sign Language Recognition,” International Journal of Computing and Digital Systems, 2022.

[43] Tyagi A., Bansal S., and Kashyap A., “Comparative Analysis of Feature Detection and Extraction Techniques for Vision-based ISLR System,” in Proceedings of the 6th International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, pp. 515-520, 2020.

[44] Villagomez E., King R., Ordinario M., Lazaro J., and Villaverde J., “Hand Gesture Recognition for Deaf-Mute Using Fuzzy-Neural Network,” in Proceedings of the IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Bangkok, pp. 30-33, 2019.

[45] Vishwakarma D., “Hand Gesture Recognition Using Shape and Texture Evidences in Complex Background,” in Proceedings of the International Conference on Inventive Computing and Informatics, Coimbatore, pp. 278-283, 2017.

[46] Wadhawan A. and Kumar P., “Deep Learning-Based Sign Language Recognition System for Static Signs,” Neural Computing and Applications, vol. 32, no. 12, pp. 7957-7968, 2020.

[47] Wadhawan A. and Kumar P., “Sign Language Recognition Systems: A Decade Systematic Literature Review,” Archives of Computational Methods in Engineering, vol. 28, no. 3, pp. 785-813, 2021.

[48] Wangchuk K., Riyamongkol P., and Waranusast R., “Real-Time Bhutanese Sign Language Digits Recognition System Using Convolutional Neural Network,” ICT Express, vol. 7, no. 2, pp. 215-220, 2021.

[49] Zhu Q., Li J., Yuan F., and Gan Q., “Multi-Scale Temporal Network for Continuous Sign Language Recognition,” arXiv preprint arXiv:2204.03864, 2022.