Current Signal Transduction Therapy

ISSN (Print): 1574-3624
ISSN (Online): 2212-389X

Research Article

Multimodal Medical Image Fusion Based on Intuitionistic Fuzzy Sets and Weighted Activity Measure in NSST Domain

Author(s): Vanitha Kamarthi*, Donthi Satyanarayana and Giri Prasad Mahendra Ninjappa

Volume 17, Issue 2, 2022

Published on: 27 July, 2022

Article ID: e050422203130 Pages: 10

DOI: 10.2174/1574362417666220405151738

Abstract

Background: Anatomical and functional image fusion has become an effective tool for extracting information from multimodality images in clinical imaging applications.

Objective: A new approach to fusing anatomical and functional images is presented that combines a weighted activity measure with intuitionistic fuzzy sets in the non-subsampled shearlet transform (NSST) domain.

Methods: First, the low- and high-frequency sub-images of the source images are obtained by NSST decomposition, which represents them at multiple scales and in multiple directions. Next, intuitionistic fuzzy sets are applied to the high-frequency sub-images, and the fused coefficients are selected using an activity measure based on fuzzy entropy.
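The abstract does not spell out the exact fuzzification or entropy definitions, so the following NumPy sketch illustrates only one plausible reading of the high-frequency rule: it assumes a Sugeno-type membership generator and a common intuitionistic fuzzy entropy, and it assumes the NSST sub-images come from an external shearlet toolbox.

```python
import numpy as np

def sugeno_ifs(x, lam=1.0):
    # Normalize coefficient magnitudes to [0, 1] and build an intuitionistic
    # fuzzy set (membership, non-membership, hesitation). The Sugeno-type
    # complement is an assumption; the paper's generator may differ.
    mu = (x - x.min()) / (x.max() - x.min() + 1e-12)
    nu = (1.0 - mu) / (1.0 + lam * mu)
    pi = 1.0 - mu - nu
    return mu, nu, pi

def fuzzy_entropy(mu, nu, pi, eps=1e-12):
    # One common per-pixel intuitionistic fuzzy entropy, used here as the
    # activity measure for the high-frequency coefficients.
    return (np.minimum(mu, nu) + pi) / (np.maximum(mu, nu) + pi + eps)

def fuse_highpass(hA, hB):
    # Keep, pixel-wise, the coefficient from the sub-image with the larger
    # activity measure (the selection direction is also an assumption).
    eA = fuzzy_entropy(*sugeno_ifs(np.abs(hA)))
    eB = fuzzy_entropy(*sugeno_ifs(np.abs(hB)))
    return np.where(eA >= eB, hA, hB)
```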

Results: The product of the weighted local energy and the weighted sum-modified Laplacian is used as the activity measure to fuse the low-frequency sub-images. Finally, the fused image is reconstructed by applying the inverse NSST to the fused coefficients.
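A minimal sketch of the low-frequency rule under these definitions follows; the 3x3 weighting window and reflective border handling are assumptions, and the fused sub-images would then be passed to the inverse NSST of whichever shearlet toolbox produced the decomposition.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 normalized weighting window (an assumed, commonly used choice).
W = np.array([[1., 2., 1.],
              [2., 4., 2.],
              [1., 2., 1.]]) / 16.0

def weighted_local_energy(L):
    # WLE: weighted sum of squared low-frequency coefficients over the window.
    return convolve(L * L, W, mode='reflect')

def weighted_sum_modified_laplacian(L):
    # WSML: weighted local sum of the modified Laplacian
    # |2x - left - right| + |2x - up - down| (circular shifts used for brevity).
    ml = (np.abs(2 * L - np.roll(L, 1, axis=1) - np.roll(L, -1, axis=1)) +
          np.abs(2 * L - np.roll(L, 1, axis=0) - np.roll(L, -1, axis=0)))
    return convolve(ml, W, mode='reflect')

def fuse_lowpass(lA, lB):
    # The product WLE * WSML is the activity measure; keep the coefficient
    # from whichever source is more active at each pixel.
    actA = weighted_local_energy(lA) * weighted_sum_modified_laplacian(lA)
    actB = weighted_local_energy(lB) * weighted_sum_modified_laplacian(lB)
    return np.where(actA >= actB, lA, lB)
```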

Conclusion: The efficacy of the proposed fuzzy-based method is verified on five different pairs of anatomical and functional imaging modalities. Both subjective and objective evaluations show better results than existing methods.
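The abstract does not list the objective metrics used; a typical choice in fusion studies is the mutual information between each source image and the fused result, sketched below purely as an illustration.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    # MI between one source image and the fused image, from a joint histogram.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def fusion_mi(srcA, srcB, fused):
    # Overall score MI(A, F) + MI(B, F); higher means more source information retained.
    return mutual_information(srcA, fused) + mutual_information(srcB, fused)
```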

Keywords: Medical image fusion, non-subsampled shearlet transform, intuitionistic fuzzy sets, weighted local energy, weighted sum-modified-Laplacian, NSST domain.


Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy