Enhanced Median Flow Tracker for Videos with Illumination Variation Based on Photometric
Object tracking is a fundamental task in video surveillance, human-computer interaction, and activity analysis. Illumination variation is one of the common challenges in visual object tracking. Many tracking methods have been proposed in recent years, and the median flow tracker is one that can handle a variety of challenges. However, the median flow tracker locates an object using the Lucas-Kanade optical flow method, which is sensitive to illumination variation, and it therefore fails when sudden illumination changes occur between frames. In this paper, we propose an enhanced median flow tracker that is invariant to abruptly varying lighting conditions. In this approach, illumination variation is compensated by modifying the Discrete Cosine Transform (DCT) coefficients of an image in the logarithmic domain. Because illumination variations are mainly reflected in the low-frequency coefficients of an image, a fixed number of low-frequency DCT coefficients are discarded. Moreover, the DC (zero-frequency) coefficient is kept nearly constant throughout the video, based on the entropy difference between frames, to minimize the impact of sudden lighting variations. In addition, each video frame is enhanced by a pixel transformation technique that improves the contrast of dull images based on the probability distribution of pixel intensities. The proposed scheme effectively handles both gradual and abrupt changes in the illumination of the object. Experiments conducted on videos with fast-changing illumination show that the proposed method improves the median flow tracker and outperforms state-of-the-art trackers in accuracy.
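The DCT-based illumination compensation described above can be illustrated with a minimal sketch in Python using NumPy and SciPy. This is not the authors' exact implementation: the function name `compensate_illumination`, the parameter `n_low`, and the use of a square low-frequency block (rather than a zig-zag scan) are illustrative assumptions, and the frame-to-frame entropy-based DC adjustment is omitted for brevity.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compensate_illumination(gray, n_low=8):
    """Suppress slowly varying illumination in a grayscale image.

    Illustrative sketch: log domain -> 2-D DCT -> discard a fixed
    number of low-frequency coefficients -> inverse DCT -> exp.
    """
    # Work in the logarithmic domain so that multiplicative lighting
    # (image = reflectance * illumination) becomes additive.
    log_img = np.log1p(gray.astype(np.float64))

    # 2-D DCT of the log image.
    coeffs = dctn(log_img, norm='ortho')

    # Preserve the DC (zero-frequency) coefficient, which carries the
    # overall brightness level of the frame.
    dc = coeffs[0, 0]

    # Discard a fixed block of low-frequency coefficients, where slowly
    # varying illumination is concentrated (square block approximates a
    # zig-zag selection).
    coeffs[:n_low, :n_low] = 0.0
    coeffs[0, 0] = dc

    # Invert the transform and return to the intensity domain.
    out = np.expm1(idctn(coeffs, norm='ortho'))
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a full tracker, a step like this would run on each incoming frame before the Lucas-Kanade point tracking, so that sudden lighting changes between consecutive frames perturb the brightness-constancy assumption less.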
Asha Narayana received the B.E. and M.Tech. degrees in Electronics and Communication Engineering from NMAMIT Nitte in 2006 and 2012, respectively. She is currently pursuing a Ph.D. at the National Institute of Technology Karnataka (NITK), India. Her research interests include computer vision and medical image processing.

Narasimdhan Venkata received the B.E. degree in Electronics and Communication Engineering from Andhra University in 2005 and the M.Tech. degree in Signal Processing from the Indian Institute of Technology Guwahati, India, in 2007. He later received his Ph.D. degree from the Indian Institute of Science, India, in 2012. He is presently an Assistant Professor in the Department of Electronics and Communication Engineering at the National Institute of Technology Karnataka, India. His research interests include medical imaging, computer vision, and medical image processing.