PLDL: A Novel Method for Label Distribution Learning
The nature, volume, and orientation of data have changed considerably in recent years, prompting data scientists to modify traditional algorithms and devise new methods for processing high-volume, highly complex data. One such challenge is label ambiguity, where the distribution of the significance of the labels matters. In this paper, a new method named Probabilistic Label Distribution Learning (PLDL) is proposed for computing the degree of belongingness. It is based on a newly proposed Label Probability Density Function (LPDF) derived from the Parzen estimate. The LPDF is used in Algorithm Adaptation K-Nearest Neighbors (AA-KNN) for Label Distribution Learning (LDL); probability density estimators are used to estimate this ambiguity for each label. The overall degree of belongingness of an unseen instance has been evaluated on various real datasets. The prediction accuracy of the proposed PLDL has been compared with Algorithm Adaptation KNN, the Multilayer Perceptron, the Levenberg-Marquardt neural network, and the layer recurrent neural network for Label Distribution Learning. The increase in prediction accuracy for the proposed PLDL is highly statistically significant on most of the real datasets when compared with the standard algorithms for LDL.
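The paper itself gives the formal derivation of the LPDF; as a rough illustrative sketch only (not the authors' implementation), the Parzen-weighted AA-KNN prediction described above might look like the following, where the degree of belongingness of a test pattern is a kernel-weighted average of the label distributions of its k nearest training neighbors. The function names, the Gaussian choice of Parzen window, and the parameter defaults are all assumptions for illustration.

```python
import numpy as np

def parzen_weight(x, x_tr, sigma=1.0):
    """Gaussian Parzen-window weight between test pattern x and a
    training neighbor x_tr (sigma plays the role of the smoothing
    operator; the Gaussian kernel is an illustrative assumption)."""
    d = np.linalg.norm(x - x_tr)
    return np.exp(-d**2 / (2.0 * sigma**2))

def pldl_predict(X_train, D_train, x_test, k=5, sigma=1.0):
    """Predict a label distribution for x_test as the Parzen-weighted
    average of its k nearest neighbors' label distributions.

    X_train : (n, m) array of training patterns
    D_train : (n, c) array of per-instance label distributions
    Returns a (c,) array of degrees of belongingness summing to 1.
    """
    # Find the k nearest training patterns by Euclidean distance.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dists)[:k]
    # Weight each neighbor's label distribution by its Parzen-window value.
    w = np.array([parzen_weight(x_test, X_train[i], sigma) for i in idx])
    d = w @ D_train[idx] / w.sum()
    # Renormalize so the degrees of belongingness form a distribution.
    return d / d.sum()
```

A usage sketch: with training distributions concentrated on one label near the test point, the predicted distribution should favor that label while still assigning nonzero degrees to the others, which is the behavior LDL requires.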
Appendix: Symbol Table

Table A1. Symbol table.

SN.  Symbol  Denotation
1    m       number of attributes/features
2    Xtr     neighborhood pattern
3    Xj      test pattern
4    ω(y)    weighting function
5    λ       size of the Parzen window
6    σ       smoothing operator
7    N       number of k-nearest neighbors
8    d       degree of belongingness of existing pattern Xtr for category l
9            degree of belongingness for the kth nearest neighbor
10           degree of belongingness for the test pattern
11   diff    error accumulated between the actual and predicted distributions of degrees

Venkatanareshbabu Kuppili, PhD, is with the Machine Learning Group, Department of CSE, NIT Goa, India, where he is currently an Assistant Professor. He was previously a Senior Research Associate with Evalueserve Pvt. Ltd. He is actively involved in teaching and research development for the graduate program in the Computer Science and Engineering Department at NIT Goa, and has authored a number of research papers published in reputed international journals.

Mainak Biswas, M.Tech, completed his B.Tech degree in Information Technology from Govt. College of Engineering and Ceramic Technology, Kolkata, in 2007 and his M.Tech in Distributed and Mobile Computing from Jadavpur University, Kolkata, in 2009, and is currently pursuing his PhD at NIT Goa. Previously, he served as an Assistant Professor at OPJU, Raigarh (2009-13) and at MITS, Gwalior (2014-15). He has published and presented papers in various international journals and conferences.

Damodar Edla, PhD, received his B.Sc degree from Kakatiya University in 2004, his M.Sc degree from the University of Hyderabad in 2006, and his M.Tech and PhD degrees in computer science and engineering from ISM Dhanbad in 2009 and 2013, respectively. His research areas are data mining and wireless sensor networks.
He is currently an Assistant Professor in the CSE Department, NIT Goa, and has more than 20 research publications in reputed international journals and conferences.