The International Arab Journal of Information Technology (IAJIT)


Hybrid Algorithm with Variants for Feed Forward Neural Network

The Levenberg-Marquardt back-propagation algorithm, a training method for Feed-forward Neural Networks (FNNs), suffers from overfitting and local-optimum problems, and it is practical only for small networks. This research applies a hybrid evolutionary algorithm based on Particle Swarm Optimization (PSO) to FNN training. The algorithm comprises several components that give it an advantage in the experimental study. Variants such as swarm size, acceleration coefficients, the constriction factor, and swarm velocity are proposed to improve both convergence speed and accuracy. Integrating these components in different ways within the hybrid algorithm yields an effective optimization of the back-propagation algorithm, and the hybrid PSO-based evolutionary algorithm can also be applied to complex neural network structures.
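To make the abstract's ingredients concrete, the following is a minimal sketch (not the authors' exact method; the network size, parameter values, and XOR task are illustrative assumptions) of training a small feed-forward network by PSO with the Clerc-Kennedy constriction factor, exercising the listed variants: swarm size, acceleration coefficients, constriction factor, and velocity clamping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training task (assumed for illustration): XOR with a 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HID = 4
DIM = 2 * N_HID + N_HID + N_HID + 1  # all weights and biases, flattened

def forward(w, x):
    """Evaluate the 2-4-1 network for one flattened weight vector w."""
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:3 * N_HID + N_HID]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# PSO variants named in the abstract (values are common illustrative choices):
SWARM = 30            # swarm size
C1, C2 = 2.05, 2.05   # acceleration coefficients
PHI = C1 + C2
CHI = 2.0 / abs(2.0 - PHI - np.sqrt(PHI**2 - 4.0 * PHI))  # constriction factor
V_MAX = 1.0           # velocity clamp

pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

initial_loss = mse(gbest)
for _ in range(200):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    # Constricted velocity update, then clamp and move.
    vel = CHI * (vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos))
    vel = np.clip(vel, -V_MAX, V_MAX)
    pos += vel
    # Update personal and global bests.
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

final_loss = mse(gbest)
```

Because fitness is evaluated only through forward passes, no gradient is needed, which is what lets a PSO-based trainer sidestep the local-minimum behavior of gradient-based back-propagation.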

Thinakaran Kandasamy received the M.E. degree in Computer Science from Mahendra Engineering College (affiliated to Anna University), Coimbatore, Tamil Nadu, in 2009. He is currently working toward the Ph.D. degree at Anna University and is an Assistant Professor of Computer Science Engineering at Sri Venkateswara College of Engineering & Technology, Thiruvallur, India. His current research interests include neural networks and data mining.

Rajasekar Rajendran received his doctorate from the Department of Aeronautics, Imperial College, London, UK, and his master's degree in Aeronautics from IIT Madras. He is currently Professor and Head of the Aeronautical Engineering Department, Excel Engineering College, Erode, India (an affiliated college under Anna University, Chennai). His specialization and research interests are aerodynamics, neural networks, and their applications.