The International Arab Journal of Information Technology (IAJIT)



An Efficient Deep Learning based Multi-Level Feature Extraction Network for Multi-modal Medical Image Fusion

Medical image fusion is the process of creating a single image from the information contained in several medical images of the same body region acquired with different imaging modalities, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Single-Photon Emission Computed Tomography (SPECT), and Positron Emission Tomography (PET). Many deep learning-based techniques for combining medical images have been proposed, but designing suitable fusion rules remains challenging, and single-scale networks suffer from inadequate feature extraction. This paper therefore proposes an efficient deep learning-based multi-level feature extraction network for Multi-modal Medical Image Fusion (MMIF). To fully extract and fuse the significant and distinctive features of the source images, the proposed method employs two enhanced Deep Learning (DL) models for low- and high-level feature extraction: an Improved GoogLeNet (IGoogLeNet) extracts the low-level features, while a Modified DenseNet-201 (MDenseNet-201) extracts the high-level features. Second, the proposed feature fusion module fuses and enhances these distinctive features without requiring hand-crafted fusion rules; a Soft Attention (SA) fusion mechanism based on Softmax combines the low-level and high-level features. Finally, a Modified Resblock module is developed for image reconstruction. Averaged over all image pairs, the proposed approach yields 0.7671, 32.84, 19.316, 10.063, 0.8232, 5.3384, and 8.9874 for the Edge-based Similarity Measure (QAB/F), Spatial Frequency (SF), Peak Signal-to-Noise Ratio (PSNR), Average Gradient (AG), Structural Similarity Index Measure (SSIM), Mutual Information (MI), and Gradient-based Metric (QG), respectively. Compared with recently published methods, the experimental results show that the proposed fusion approach efficiently improves image contrast, brightness, and quality while better retaining important information.
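The abstract does not spell out the exact form of the Softmax-based Soft Attention (SA) fusion, so the snippet below is only a minimal PyTorch sketch of the general idea: two equally shaped feature maps (e.g., the low-level and high-level branches) are weighted per pixel by softmax-normalized attention scores and summed. The scoring function (channel-averaged absolute activation), the tensor shapes, and the function name soft_attention_fuse are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def soft_attention_fuse(feat_low: torch.Tensor, feat_high: torch.Tensor) -> torch.Tensor:
    """Fuse two feature maps of shape (B, C, H, W) with softmax-based soft attention.

    NOTE: illustrative sketch only; the activity score below is an assumed
    stand-in for whatever saliency measure the SA module actually uses.
    """
    # Per-branch activity score: channel-wise mean of absolute activations -> (B, 1, H, W)
    score_low = feat_low.abs().mean(dim=1, keepdim=True)
    score_high = feat_high.abs().mean(dim=1, keepdim=True)

    # Softmax across the two branches gives per-pixel weights that sum to 1
    weights = F.softmax(torch.cat([score_low, score_high], dim=1), dim=1)  # (B, 2, H, W)
    w_low, w_high = weights[:, 0:1], weights[:, 1:2]

    # Weighted sum of the two branches is the fused feature map
    return w_low * feat_low + w_high * feat_high

if __name__ == "__main__":
    low_feat = torch.randn(1, 64, 128, 128)   # e.g., low-level features from the IGoogLeNet branch (assumed shape)
    high_feat = torch.randn(1, 64, 128, 128)  # e.g., high-level features from the MDenseNet-201 branch (assumed shape)
    fused = soft_attention_fuse(low_feat, high_feat)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])

Because the softmax forces the two branch weights to sum to one at every location, the more salient branch dominates the fused response automatically, which is why such a mechanism can replace hand-crafted fusion rules.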
