Usability Study of Enhanced Salat Learning Approach using Motion Recognition System
Salat learning is one of the most important processes for every Muslim. The current learning approach requires a teacher or expert to be present in the learning session, which is time- and resource-consuming. Previous researchers have used both wearable and non-wearable sensors to recognize salat movements, but they focused on specific salat movements rather than the complete salat movement cycle. This paper presents a motion recognition system to enhance the salat learning experience. The system helps users perform the complete salat movement cycle according to salat law, using a multisensor setup for better tracking capability. Three evaluations were conducted to validate the system's performance and its contribution. The first evaluation measures recognition accuracy and the success score and identifies user errors. The second evaluation compares the proposed system with the traditional learning methodology, and the third focuses on user experience and acceptance of the proposed system. The performance evaluation shows that the system recognizes salat movements with high accuracy, and there is a significant difference in error rate and success score between the two learning methodologies. Users provided positive feedback in the survey conducted after using the proposed system.
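To illustrate what recognizing the complete salat movement cycle entails, the sketch below checks a recognized posture sequence against a simplified prescribed order of postures. The posture labels, the simplified cycle, and the function follows_cycle are assumptions introduced for illustration only; they do not describe the system's internal recognition model.

# Illustrative sketch only: validate that a recognized posture sequence
# follows a simplified prescribed salat movement cycle. The labels and
# cycle below are assumptions for illustration, not the paper's model.
PRESCRIBED_CYCLE = ["standing", "bowing", "standing",
                    "prostration", "sitting", "prostration"]

def follows_cycle(recognized, prescribed=PRESCRIBED_CYCLE):
    # Collapse consecutive duplicate frames of the same posture,
    # since a posture is typically reported over many frames.
    collapsed = [p for i, p in enumerate(recognized)
                 if i == 0 or p != recognized[i - 1]]
    return collapsed == list(prescribed)

# Example: a correctly ordered sequence with repeated frames.
frames = ["standing", "standing", "bowing", "standing",
          "prostration", "sitting", "prostration"]
print(follows_cycle(frames))  # True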
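The comparison between the proposed system and the traditional methodology can likewise be illustrated with a minimal sketch of how per-participant success scores might be compared with a paired significance test. The participant scores below are synthetic placeholders, and the paired design, the complement-based error rate, and the 0.05 threshold are assumptions for illustration, not the study's actual data or procedure.

# Illustrative sketch only: compare success scores of two learning methods.
# All numbers are synthetic placeholders, NOT the study's data.
import numpy as np
from scipy import stats

traditional = np.array([0.70, 0.65, 0.72, 0.68, 0.75, 0.66, 0.71, 0.69])
proposed = np.array([0.85, 0.80, 0.88, 0.83, 0.90, 0.82, 0.86, 0.84])

# Simplifying assumption: error rate taken as the complement of success.
error_rate_traditional = 1.0 - traditional.mean()
error_rate_proposed = 1.0 - proposed.mean()

# Paired t-test, assuming the same participants tried both methods.
t_stat, p_value = stats.ttest_rel(proposed, traditional)
print(f"mean success: proposed {proposed.mean():.2f}, traditional {traditional.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference in success score is statistically significant.")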
Nor Azrini Jaafar received her bachelor's degree in computer science from Universiti Teknologi Malaysia (UTM) in 2012. Currently, she is pursuing a Ph.D. in Human-Computer Interaction (HCI). Her research interests include Human-Computer Interaction, motion recognition, and machine learning.

Nor Azman Ismail received his BSc from Universiti Teknologi Malaysia (UTM), his Master of Information Technology from Universiti Kebangsaan Malaysia (UKM), and his Ph.D. in the field of Human-Computer Interaction (HCI) from Loughborough University. He has been a lecturer at the Faculty of Computing, Universiti Teknologi Malaysia for more than twenty years. He has made various contributions to the field of Human-Computer Interaction, including research, practice, and education.

Yusman Azimi Yusoff received his bachelor's degree in computer science in 2013 and his Ph.D. in scientific visualization in 2020, both from Universiti Teknologi Malaysia. His research interests include machine learning, data visualization, and Internet-of-Things. He has been a Data Scientist at PPG Coating (M) Sdn Bhd since 2019.