
Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

Multimodal Medical Image Fusion based on the VGG19 Model in the NSCT Domain

Author(s): ChunXiang Liu, Yuwei Wang, Tianqi Cheng, Xinping Guo and Lei Wang*

Volume 17, Issue 5, 2024

Published on: 16 October 2023

Article ID: e161023222244
Pages: 12

DOI: 10.2174/0126662558256721231009045901

Abstract

Aim: To address the drawbacks of traditional medical image fusion methods, such as poor preservation of details, loss of edge information, and image distortion, as well as the large amounts of training data that deep learning approaches require, a new multi-modal medical image fusion method based on the VGG19 model and the non-subsampled contourlet transform (NSCT) is proposed. Its overall objective is to exploit the advantages of the NSCT and the VGG19 model simultaneously.

Methodology: First, each source image is decomposed into high-pass and low-pass subbands by the NSCT. Then, a weighted-average fusion rule is applied to produce the fused low-pass subband coefficients, while a feature extractor based on the pre-trained VGG19 model is constructed to obtain the fused high-pass subband coefficients.
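
A minimal sketch of these two fusion rules follows, assuming registered grayscale source images. The NSCT has no widely standard Python implementation, so the decomposition is left to a hypothetical interface (see the next sketch); and since the abstract does not spell out the exact high-pass selection strategy, the sketch uses a common choose-max variant guided by VGG19 activity maps:

```python
import numpy as np
import torch
import torch.nn.functional as F
import torchvision.models as models

def fuse_low_pass(low_a, low_b, w=0.5):
    """Weighted-average rule for the fused low-pass subband coefficients."""
    return w * low_a + (1.0 - w) * low_b

# Pre-trained VGG19 used as a fixed feature extractor, so no extra training
# data is required; only the first convolutional block is kept here.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:4].eval()

def vgg_activity(band):
    """Per-pixel activity map: L1 norm of shallow VGG19 feature responses."""
    x = torch.from_numpy(band.astype(np.float32))[None, None]
    x = x.repeat(1, 3, 1, 1)                  # VGG19 expects 3 input channels
    with torch.no_grad():
        feat = vgg(x)
    act = feat.abs().sum(dim=1, keepdim=True)
    act = F.interpolate(act, size=band.shape, mode="bilinear", align_corners=False)
    return act[0, 0].numpy()

def fuse_high_pass(band_a, band_b):
    """Choose-max rule on VGG19 activity (one plausible reading of the abstract)."""
    mask = vgg_activity(band_a) >= vgg_activity(band_b)
    return np.where(mask, band_a, band_b)
```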

Results and Discussion: Finally, the fused image is reconstructed by applying the inverse NSCT to the fused coefficients. To demonstrate the method's effectiveness and accuracy, experiments are conducted on three types of medical image datasets.
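
Continuing the sketch above, the end-to-end pipeline might look as follows, with `nsct_decompose` and `nsct_reconstruct` as hypothetical stand-ins for an actual NSCT implementation (real NSCT subbands are nested per level and direction; they are flattened into one list here for brevity):

```python
def fuse_pair(img_a, img_b, levels=(2, 3, 3)):
    """End-to-end fusion of two registered source images (sketch)."""
    low_a, highs_a = nsct_decompose(img_a, levels)   # hypothetical NSCT interface
    low_b, highs_b = nsct_decompose(img_b, levels)
    fused_low = fuse_low_pass(low_a, low_b)
    fused_highs = [fuse_high_pass(ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]
    return nsct_reconstruct(fused_low, fused_highs)  # inverse NSCT
```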

Conclusion: In comparison with seven well-known fusion methods, both subjective and objective evaluations demonstrate that the proposed method effectively avoids the loss of detailed feature information, captures more medical information from the source images, and integrates it into the fused images.
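
The abstract does not enumerate the objective metrics used; as an illustration, two metrics widely reported in the image fusion literature, information entropy and mutual information, can be computed as follows:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit image; higher suggests more retained detail."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(src, fused, bins=256):
    """Mutual information between one source image and the fused result."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the source
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the fused image
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```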

