Advancements in Transfer Learning Strategies for PET and MRI Brain Image Fusion

Authors

  • Sonia Panesar, Amit Ganatra

Keywords:

Fusion, Transfer Learning, PET, MRI, VGG19, AlexNet, DenseNet, Image Fusion, Multimodal Medical Image.

Abstract

Multimodal medical image fusion is an active area of research. It is a procedure that integrates information from several medical imaging modalities into a single, more informative and complete image that can support subsequent operations such as image segmentation. Medical image fusion can be particularly advantageous for biomedical research and medical image analysis, and can reduce both scan duration and motion artifacts. Merging neuroimaging data may also lead to new insights into brain function and structure. In this article, multiple deep learning techniques, namely pre-trained VGG19, AlexNet, and DenseNet models, are applied using a transfer learning methodology to fuse MRI (Magnetic Resonance Imaging) and PET (Positron Emission Tomography) brain images. Because access to medical data is restricted, transfer learning is employed for feature extraction and to save training time. The features are blended using the pre-trained VGG19, AlexNet, and DenseNet models. The experimental findings for all three models include both quantitative and qualitative assessment metrics for the fused images; the proposed approach achieves better overall performance than unimodal and feature-level fusion approaches and outperforms state-of-the-art methods.


References

Li Y, Zhao J, Lv Z, Li J. Medical image fusion method by deep learning. International Journal of Cognitive Computing in Engineering. 2021 Jun 1; 2:21-9.

Zhang YD, Dong Z, Wang SH, Yu X, Yao X, Zhou Q, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez FJ. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation. Information Fusion. 2020 Dec 1; 64:149-87.

Zeng, Guixi, et al. "Medical image fusion using a deep convolutional neural network." Journal of Healthcare Engineering, Volume 2018, Article ID 6417203, 2018.

Jin, Y. et al. "Deep learning for medical image fusion: A multi-task network for improved diagnosis and operative planning." In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2018.

Ghafoorian, M. et al. "Transfer learning for domain adaptation in MRI: Application in brain lesion segmentation." In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2017.

Song X, Zhou F, Frangi AF, Cao J, Xiao X, Lei Y, Wang T, Lei B. Multicenter and Multichannel Pooling GCN for Early AD Diagnosis Based on Dual-Modality Fused Brain Network. IEEE Transactions on Medical Imaging. 2022 Jun 29;42(2):354-67.

Faul, D. D., Varasteh, Z., Anselm, K., Kuhlmann, M. T., Giesel, F. L., Kratochwil, C., & Haberkorn, U. (2020). PET/MRI and SPECT/MRI in oncology: a guide to hybrid imaging. Cancer Imaging, 20(1), 52. doi:10.1186/s40644-020-00332-4

Supekar, K., Uddin, L. Q., Prater, K., Amin, H., Greicius, M. D., & Menon, V. (2010). Development of functional and structural connectivity within the default mode network in young children. Neuroimage, 52(1), 290-301.

Dou, Q., Chen, H., Yu, L., Qin, J., & Heng, P. A. (2019). Multilevel contextual 3D CNNs for false positive reduction in pulmonary nodule detection. IEEE Transactions on Biomedical Engineering, 66(12), 3434-3445.

Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612. doi:10.1109/TIP.2003.819861

Zhang, Y., Wei, W., Shen, W., & Zhang, G. (2019). A Survey of Deep Learning-Based Multimodal Medical Image Fusion. Neurocomputing, 338, 321-333.

Chen, W., Wang, L., Shen, L., Wang, J., & Yang, X. (2018). Deep Learning-based Multi-modal Fusion for Diagnosis of Alzheimer's Disease. Information Fusion, 41, 268-278.

Moeskops, Pim, et al. "Deep learning for brain MRI segmentation: State of the art and future directions." NeuroImage, Volume 202, 2019, Article 116091.

Fu J, Li W, Du J, Xu L. DSAGAN: A generative adversarial network based on dual-stream attention mechanism for anatomical and functional image fusion. Information Sciences. 2021 Oct 1; 576:484-506.

Finn C, Xu K, Levine S. Probabilistic model-agnostic meta-learning. Advances in neural information processing systems. 2018;31.

Li P, Gu J, Kuen J, Morariu VI, Zhao H, Jain R, Manjunatha V, Liu H. SelfDoc: Self-supervised document representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2021 (pp. 5397-5407).

Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Computing and Applications. 2018 Oct; 30:2029-45.

Huang P, Tan X, Zhou X, Liu S, Mercaldo F, Santone A. FABNet: Fusion Attention Block and Transfer Learning for Laryngeal Cancer Tumor Grading in P63 IHC Histopathology Images. IEEE Journal of Biomedical and Health Informatics. 2022.


Published

24.03.2024

How to Cite

Panesar, S., & Ganatra, A. (2024). Advancements in Transfer Learning Strategies for PET and MRI Brain Image Fusion. International Journal of Intelligent Systems and Applications in Engineering, 12(3), 2037–2044. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/5670

Issue

Section

Research Article