Abstract
Aim: To address the drawbacks of traditional medical image fusion methods, such as poor preservation of detail, loss of edge information, and image distortion, as well as the large amount of training data required by deep learning, a new multi-modal medical image fusion method based on the VGG19 model and the non-subsampled contourlet transform (NSCT) is proposed. Its overall objective is to exploit the advantages of both NSCT and the VGG19 model simultaneously.
Methodology: First, the source images are decomposed into high-pass and low-pass subbands by NSCT. Then, a weighted-average fusion rule is applied to produce the fused low-pass subband coefficients, while an extractor built on the pre-trained VGG19 model is constructed to obtain the fused high-pass subband coefficients.
Result and Discussion: Finally, the fused image is reconstructed by applying the inverse NSCT to the fused coefficients. To verify the effectiveness and accuracy of the proposed method, experiments are conducted on three types of medical datasets.
Conclusion: Compared with seven well-known fusion methods, both subjective and objective evaluations demonstrate that the proposed method effectively avoids the loss of detailed feature information, captures more medical information from the source images, and integrates it into the fused images.
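The two fusion rules described in the methodology can be sketched in a minimal form. This is an illustrative NumPy sketch, not the authors' implementation: the NSCT decomposition itself is assumed to have been performed already (its subbands are passed in as arrays), and the VGG19 feature-based activity measure for the high-pass rule is replaced here by a simple local l1-energy stand-in, since the paper's exact extractor is not specified in the abstract.

```python
import numpy as np

def fuse_low_pass(lp_a, lp_b, w=0.5):
    """Weighted-average rule for the low-pass subband coefficients."""
    return w * lp_a + (1.0 - w) * lp_b

def activity_map(hp, ksize=3):
    """Stand-in activity measure (local l1 energy) for a high-pass subband.

    In the paper this role is played by features from a pre-trained
    VGG19 model; the windowed absolute sum here is only a placeholder.
    """
    pad = ksize // 2
    padded = np.pad(np.abs(hp), pad, mode="reflect")
    out = np.zeros_like(hp, dtype=float)
    for i in range(hp.shape[0]):
        for j in range(hp.shape[1]):
            out[i, j] = padded[i:i + ksize, j:j + ksize].sum()
    return out

def fuse_high_pass(hp_a, hp_b):
    """Select, per pixel, the high-pass coefficient with higher activity."""
    mask = activity_map(hp_a) >= activity_map(hp_b)
    return np.where(mask, hp_a, hp_b)
```

After both rules are applied to every decomposition level, the fused subbands would be passed to the inverse NSCT to reconstruct the fused image.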
Graphical Abstract