Computer Vision in Contactless Biometric Systems
Farukh Hashmi1, Kiran Ashish2, Satyarth Katiyar3, and Avinash Keskar4
1Department of Electronics and Communication Engineering, National Institute of Technology, India
2Computer Vision Engineer, Viume, India
3Department of Electronics and Communication Engineering, Harcourt Butler Technical University, India
4Department of Electronics and Communication Engineering, Visvesvaraya National Institute of Technology, India
Contactless biometric systems have gained importance ever since the outbreak of the COVID-19 pandemic. The two main contactless biometric modalities are facial recognition and gait pattern recognition. In previous work [12], the authors built the hybrid architecture AccessNet, which combines three systems: facial recognition, facial anti-spoofing, and gait recognition.
This work deploys the hybrid architecture as well as the two individual systems, facial recognition with facial anti-spoofing and gait recognition, and compares the individual real-time results with those of the AccessNet hybrid architecture. It also identifies the crucial features of each system that are responsible for predicting a subject, extracting a few key parameters from the gait recognition, facial recognition, and facial anti-spoofing architectures by visualizing their hidden layers. Each individual method is trained and tested in real time and deployed on both an edge device, the NVIDIA Jetson Nano, and a high-end GPU. After analysing the real-time test results, a conclusion is also drawn on the commercial and research usage of each individual method.
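The hidden-layer visualization mentioned above can be illustrated with a short sketch. The Python snippet below is an illustration only, not the authors' released code: it captures intermediate activations of a convolutional backbone using PyTorch forward hooks so that the feature maps can be inspected for discriminative cues. The torchvision ResNet-18 backbone, the chosen layer names, and the input size are assumptions made for the example; the paper's own gait, face, and anti-spoofing networks would be substituted in practice.

import torch
import torchvision.models as models

# Placeholder backbone for illustration; the actual recognition networks differ.
model = models.resnet18(weights=None)
model.eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on a few intermediate blocks whose feature maps are of interest.
for name, module in model.named_modules():
    if name in {"layer1", "layer2", "layer3"}:
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 3, 224, 224)  # dummy input frame
with torch.no_grad():
    model(x)

for name, feat in activations.items():
    # A channel-wise mean gives one 2D map per layer that can be saved as an image.
    print(name, tuple(feat.shape), tuple(feat.mean(dim=1).shape))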
[1] Ariyanto G. and Nixon M., “Model-Based 3D Gait Biometrics,” in Proceedings of IEEE International Joint Conference on Biometrics, Washington, pp. 1-7, 2011.
[2] Bodor R., Drenner A., Fehr D., Masoud O., and Papanikolopoulos N., “View-Independent Human Motion Classification Using Image-Based Reconstruction,” Image and Vision Computing, vol. 27, no. 8, pp. 1194-1206, 2009.
[3] Bouchrika I. and Nixon M., “Model-Based Feature Extraction for Gait Analysis and Recognition,” in Proceedings of International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications, Berlin, pp. 150-160, 2007.
[4] Castro F., Marín-Jiménez M., Guil N., and De La Blanca N., “Automatic Learning of Gait Signatures for People Identification,” in Proceedings of International Work-Conference on Artificial Neural Networks, Cham, pp. 257-270, 2017.
[5] Derawi M., Bours P., and Holien K., “Improved Cycle Detection for Accelerometer Based Gait Authentication,” in Proceedings of IEEE 6th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, pp. 312-317, 2010.
[6] Feng Y., Li Y., and Luo J., “Learning Effective Gait Features Using LSTM,” in Proceedings of IEEE 23rd International Conference on Pattern Recognition, Cancun, pp. 325-330, 2016.
[7] Fourati E., Elloumi W., and Chetouani A., “Anti-Spoofing in Face Recognition-Based Biometric Authentication Using Image Quality Assessment,” Multimedia Tools and Applications, vol. 79, no. 1, pp. 865-889, 2020.
[8] Ghouzali S. and Marie-Sainte S., “Face Identification Based Bio-Inspired Algorithms,” The International Arab Journal of Information Technology, vol. 17, no. 1, pp. 118-127, 2020.
[9] Hashmi M., Farukh B., Ashish K., and Keskar A., “GAIT Analysis: 3D Pose Estimation And Prediction in Defence Applications Using Pattern Recognition,” in Proceedings of the 12th International Conference on Machine Vision, Shenzhen, 2020.
[10] Hashmi M., Ashish B., Keskar A., Bokde N., and Geem Z., “FashionFit: Analysis of Mapping 3D Pose and Neural Body Fit for Custom Virtual Try-On,” IEEE Access, vol. 8, pp. 91603-91615, 2020.
[11] Hashmi M., Ashish B., Keskar A., Bokde N., Yoon J., and Geem Z., “An Exploratory Analysis On Visual Counterfeits Using Conv-LSTM Hybrid Architecture,” IEEE Access, vol. 8, pp. 101293-101308, 2020.
[12] Hashmi M., Ashish K., Katiyar S., and Keskar A., “AccessNet: A Three Layered Visual Based Access Authentication System for Restricted Zones,” in Proceedings of the IEEE 21st International Arab Conference on Information Technology, 6th of October City, pp. 1-7, 2020.
[13] Hashmi M., Ashish B., Sharma V., Keskar A., Bokde N., Yoon J., and Geem Z., “LARNet: Real-Time Detection of Facial Micro Expression Using Lossless Attention Residual Network,” Sensors, vol. 21, no. 4, pp. 1098, 2021.
[14] He K., Zhang X., Ren S., and Sun J., “Deep Residual Learning for Image Recognition,” in Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 770-778, 2016.
[15] Howard A., Zhu M., Chen B., Kalenichenko D., Wang W., Weyand T., and Adam H., “Mobilenets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” arXiv preprint arXiv:1704.04861, 2017.
[16] Hu M., Wang Y., Zhang Z., Zhang D., and Little J., “Incremental Learning for Video-Based Gait Recognition with LBP Flow,” IEEE Transactions on Cybernetics, vol. 43, no. 1, pp. 77-89, 2012.
[17] Jain A., Dass S., and Nandakumar K., “Can Soft Biometric Traits Assist User Recognition,” in Proceedings of The Biometric Technology for Human Identification, Dhanbad, pp. 561-572, 2004.
[18] Khan S., Javed M., Ahmed E., Shah S., and Ali S., “Facial Recognition Using Convolutional Neural Networks and Implementation on Smart Glasses,” in Proceedings of the IEEE International Conference on Information Science and Communication Technology, Karachi, pp. 1-6, 2019.
[19] Lengerich B., Xing E., and Caruana R., “On Dropout, Overfitting, and Interaction Effects in Deep Neural Networks,” arXiv preprint arXiv:2007.00823, 2020.
[20] Li H., Li W., Cao H., Wang S., Huang F., and Kot A., “Unsupervised Domain Adaptation For Face Anti-Spoofing,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 7, pp. 1794-1809, 2018.
[21] Liao R., Cao C., Garcia E., Yu S., and Huang Y., “Pose-Based Temporal-Spatial Network (PTSN) for Gait Recognition with Carrying and Clothing Variations,” in Proceedings of Chinese Conference on Biometric Recognition, Cham, pp. 474-483, 2017.
[22] Liu Y., Jourabloo A., and Liu X., “Learning Deep Models For Face Anti-Spoofing: Binary or Auxiliary Supervision,” in Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, pp. 389-398, 2018.
[23] Luo J., Tang J., Tjahjadi T., and Xiao X., “Robust Arbitrary View Gait Recognition Based on Parametric 3D Human Body Reconstruction and Virtual Posture Synthesis,” Pattern Recognition, vol. 60, pp. 361-377, 2016.
[24] Makihara Y., Sagawa R., Mukaigawa Y., Echigo T., and Yagi Y., “Gait Recognition Using A View Transformation Model in The Frequency Domain,” in Proceedings of European Conference on Computer Vision, Berlin, pp. 151-163, 2006.
[25] Ross A., Nandakumar K., and Jain A., Handbook of Multibiometrics, Springer Science and Business Media, 2006.
[26] Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., and Fei-Fei L., “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[27] Simonyan K. and Zisserman A., “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint arXiv: 1409.1556, 2014.
[28] Sun Y., Wang X., and Tang X., “Deep Learning Face Representation by Joint Identification-Verification,” in Proceedings of Advances in Neural Information Processing Systems, Montreal, pp. 1988-1996, 2014.
[29] Sun Y., Wang X., and Tang X., “Hybrid Deep Learning for Face Verification,” in Proceedings of The IEEE International Conference on Computer Vision, Sydney, pp. 1489-1496, 2013.
[30] Szegedy C., Vanhoucke V., Ioffe S., Shlens J., and Wojna Z., “Rethinking The Inception Architecture for Computer Vision,” in The Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, pp. 2818-2826, 2016.
[31] Taigman Y., Yang M., Ranzato M., and Wolf L., “Deepface: Closing the Gap to Human-Level Performance in Face Verification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, pp. 1701-1708, 2014.
[32] Tran D., Bourdev L., Fergus R., Torresani L., and Paluri M., “Learning Spatiotemporal Features with 3D Convolutional Networks,” in Proceedings of The IEEE International Conference on Computer Vision, Santiago, pp. 4489-4497, 2015.
[33] Wang L., Tan T., Ning H., and Hu W., “Silhouette Analysis-Based Gait Recognition for Human Identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1505-1518, 2003.
[34] Wang Q., “Kernel Principal Component Analysis and Its Applications in Face Recognition and Active Shape Models,” arXiv preprint arXiv: 1207.3538, 2012.
[35] Wen Y., Li Z., and Qiao Y., “Latent Factor Guided Convolutional Neural Networks for Age-Invariant Face Recognition,” in Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pp. 4893-4901, 2016.
[36] Wu Z., Huang Y., and Wang L., “Learning Representative Deep Features for Image Set Analysis,” IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 1960-1968, 2015.
[37] Xu B., Wang N., Chen T., and Li M., “Empirical Evaluation of Rectified Activations in Convolutional Network,” arXiv preprint arXiv: 1505.00853, 2015.
[38] Xu Z., Li S., and Deng W., “Learning Temporal Features Using LSTM-CNN Architecture for Face Anti-Spoofing,” in Proceedings of the IEEE 3rd IAPR Asian Conference on Pattern Recognition, Kuala Lumpur, pp. 141-145, 2015.
[39] Yam C. and Nixon M., “Model-Based Gait Recognition,” in Encyclopedia of Biometrics, Springer, pp. 1082-1088, 2009.
[40] Yu S., Chen H., Garcia Reyes E., and Poh N., “GaitGAN: Invariant Gait Feature Extraction Using Generative Adversarial Networks,” in Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, pp. 30-37, 2017.
[41] Zhang C., Liu W., Ma H., and Fu H., “Siamese Neural Network Based Gait Recognition for Human Identification,” in the Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Shanghai, pp. 2832-2836, 2016.
[42] Zhang Y., Huang Y., Wang L., and Yu S., “A Comprehensive Study on Gait Biometrics Using A Joint CNN-Based Method,” Pattern Recognition, vol. 93, pp. 228-236, 2019.
[43] Zhao G., Liu G., Li H., and Pietikainen M., “3D Gait Recognition Using Multiple Cameras,” in the Proceedings of IEEE 7th International Conference on Automatic Face and Gesture Recognition, Southampton, pp. 529-534, 2006.
[44] Zheng S., Zhang J., Huang K., He R., and Tan T., “Robust View Transformation Model for Gait Recognition,” in Proceedings of 18th IEEE International Conference on Image Processing, Brussels, pp. 2073-2076, 2011.

Farukh Hashmi received his B.E. in Electronics and Communication Engineering from R.G.P.V. Bhopal University in 2007. He obtained his M.E. in Digital Techniques and Instrumentation in 2010 from R.G.P.V. Bhopal University. He received his Ph.D. from VNIT Nagpur under the supervision of Dr. Avinash Keskar. He has a teaching experience of 11 years. He is currently an Assistant Professor at the Department of Electronics and Communication Engineering, National Institute of Technology, Warangal. He has published up to 65 papers in national/international conferences and journals. His current research interests are Image Processing, Internet of Things, Embedded Systems, Biomedical Signal Processing, Computer Vision, Circuit Design, and Digital IC Design. Mr. Mohammad F. Hashmi is a member of IEEE, LMIETE, and LMIAENG.

Kiran Ashish is currently working as a Computer Vision Engineer at Viume, an AI-based startup in Barcelona, Spain. He completed his Bachelor's degree in ECE from Anurag Group of Institutions, Hyderabad, in 2019. He has 2 years of industry experience in the computer vision domain and has worked in several verticals. He has published six papers in national/international conferences and journals. His current fields of interest include Deep Learning, Neural Networks and Computer Vision, Biomedical Imaging, Facial Recognition, etc.

Satyarth Katiyar has been studying for a Bachelor of Technology in Electronics Engineering at Harcourt Butler Technical University (HBTU), Kanpur (U.P.), India, since 2017. He will complete his B.Tech. degree in June 2021. He is working as a research intern under the supervision of Dr. Md. Farukh Hashmi. He has published three papers in national/international conferences and journals. His current fields of interest include Deep Learning, Neural Networks and Computer Vision, Biomedical Imaging, Facial Recognition, etc.

Avinash Keskar completed his B.E. from VNIT, Nagpur in 1979 and received a gold medal for the same. He completed his M.E. from IISc, Bangalore in 1983, receiving the gold medal again. He also completed his Ph.D. from VNIT Nagpur in 1997. The author is a life member of IAENG. He has 32 years of teaching experience and 7 years of industrial experience. He is currently a Professor and Head of the Department of Electronics Engineering, VNIT Nagpur. His current research interests include Image Processing, Computer Vision, Soft Computing, and Fuzzy Logic. Prof. Keskar is a senior member of IEEE, FIETE, LMISTE, and FIE.