Ensemble Deep Learning Algorithm for Multi-View Image Fusion
Keywords:
Differentiable Fusion with Mutual Information Network (DFMI-Net), Image fusion, Lion Swarm Optimization based Convolutional Neural Network (LSOCNN), MRI images, Tissue-Aware Conditional Generative Adversarial Network (TA-cGAN)
Abstract
Image fusion aims to combine multiple images of the same subject into a single synthetic image that carries more useful information than any individual source image could provide. The clarity of most source images is limited by the physical constraints of imaging sensors and by signal transmission bandwidth. This study presents a novel multi-modality clinical image fusion technique to improve image quality and support earlier brain tumor detection. An ensemble-based deep learning algorithm is proposed to improve Magnetic Resonance Imaging (MRI) brain image fusion by reducing noise, performing segmentation, extracting features, and fusing images in successive stages. An initial noise-reduction step improves image quality. A segmentation process then partitions each MRI scan into its constituent regions, producing binary (black-and-white) images. Next, a Lion Swarm Optimization based Convolutional Neural Network (LSOCNN) extracts the most informative image features. Finally, multi-modal image fusion generates lower-, intermediate-, and upper-level representations, which can be viewed from all angles and fused. To further increase fusion performance, an ensemble of CNN, DFMI-Net, and TA-cGAN algorithms is presented. The results show that the proposed ensemble DCNN+DFMI+cGAN outperforms previous methods in terms of accuracy, precision, recall, and mean square error (MSE).
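The pipeline outlined in the abstract (denoise, segment, fuse) can be sketched as follows. This is an illustrative minimal sketch only: the median filter, mean-threshold segmentation, and weighted-average fusion below are simple stand-ins, not the paper's LSOCNN feature extractor or the DFMI-Net/TA-cGAN fusion models, and all function names and kernel sizes are assumptions.

```python
import numpy as np

def denoise_median(img, k=3):
    # Noise-reduction step sketched as a plain median filter
    # (kernel size k=3 is an assumption, not from the paper).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def segment_binary(img):
    # Segmentation step producing a black-and-white mask;
    # a global mean threshold stands in for the paper's method.
    return (img > img.mean()).astype(np.uint8)

def fuse(images, weights=None):
    # Fusion step sketched as a weighted average of the inputs,
    # a placeholder for the ensemble CNN/DFMI-Net/TA-cGAN fusion.
    stack = np.stack(images).astype(float)
    if weights is None:
        weights = np.ones(len(images)) / len(images)
    return np.tensordot(np.asarray(weights), stack, axes=1)
```

In practice each stand-in would be replaced by the corresponding learned component, but the data flow (cleaned image, binary segmentation, fused multi-view output) is the same.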
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets readers share and adapt the material provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.