The International Arab Journal of Information Technology (IAJIT), Vol. 19, No. 3, May 2022


A Heuristic Tool for Measuring Software Quality Using Program Language Standards

Quality is a critical aspect of any software system. Indeed, it is a key factor in the competitiveness, longevity, and effectiveness of software products. Code review facilitates the discovery of programming errors and defects, and checking code against programming language standards is one such review technique. In this study, we developed a code review technique that uses programming language standards to maximize software quality. We propose the Java Code Quality Reviewer (JCQR) tool as a practical realization of this technique: an automated Java code reviewer built on SUN and other customized Java standards. JCQR produces new quality-measurement information that indicates the applied, satisfied, and violated rules in a piece of code, and it suggests whether code quality should be improved. Accordingly, it can help junior developers and students establish good programming habits. Because JCQR relies on customized SUN-based Java programming language standards, it does not cover certain features of Java.
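The abstract describes reporting applied, satisfied, and violated rules for a piece of code. The following is a minimal, hypothetical sketch of that idea, not the actual JCQR implementation: a single SUN-style rule (class names in UpperCamelCase) is applied line by line, and the checker tallies how often the rule applies, is satisfied, and is violated. The class and method names here are illustrative assumptions.

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of a standards-based rule check in the spirit of JCQR:
// a rule pairs a trigger (where it applies) with a compliance pattern.
public class RuleChecker {
    // Trigger: a class declaration anywhere on the line.
    static final Pattern CLASS_DECL = Pattern.compile("\\bclass\\s+(\\w+)");
    // Compliance: the declared name must be UpperCamelCase (SUN convention).
    static final Pattern UPPER_CAMEL = Pattern.compile("[A-Z][a-zA-Z0-9]*");

    /** Returns {applied, satisfied, violated} counts for the class-naming rule. */
    public static int[] checkClassNames(List<String> codeLines) {
        int applied = 0, satisfied = 0;
        for (String line : codeLines) {
            Matcher m = CLASS_DECL.matcher(line);
            if (m.find()) {
                applied++;                                   // the rule applies here
                if (UPPER_CAMEL.matcher(m.group(1)).matches())
                    satisfied++;                             // name follows the standard
            }
        }
        return new int[] { applied, satisfied, applied - satisfied };
    }
}
```

A full reviewer in this style would hold many such trigger/compliance pairs, one per standard rule, and aggregate the three counts into an overall quality indication.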

Mohammad Abdallah received the Ph.D. degree in Software Engineering from Durham University, UK, in 2012, the M.Sc. degree in Software Engineering from Bradford University, UK, in 2008, and the B.Sc. in Computer Science from Al-Zaytoonah University of Jordan in 2007. Currently, he is the Director of the Technology Transfer Office and an Assistant Professor in the Software Engineering Department at Al-Zaytoonah University. His research interests are in Quality Engineering.

Mustafa Alrifaee received the B.Sc. and M.Sc. degrees in Information Technology from the University of Sindh, Pakistan, in 2001, and the Ph.D. degree in Computer Science/Multimedia from De Montfort University, UK, in 2015. Currently, he is a member of the Computer Science Department at Al-Zaytoonah.