
Recent Advances in Computer Science and Communications


ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

An Infrared and Visible Image Fusion Approach of Self-calibrated Residual Networks and Feature Embedding

Author(s): Jinpeng Dai, Zhongqiang Luo* and Chengjie Li

Volume 16, Issue 2, 2023

Published on: 29 August, 2022

Article ID: e180522204986 Pages: 12

DOI: 10.2174/2666255815666220518143643


Abstract

Background: The fusion of infrared and visible images is an active topic in the field of image fusion. During fusion, the choice of feature extraction and feature processing methods directly affects fusion performance.

Objectives: The low resolution (small spatial size) of high-level features leads to a loss of spatial information. On the other hand, low-level features are less discriminative because they do not sufficiently filter out background clutter and noise.

Methods: To address the insufficient use of features in existing methods, a new fusion approach (SC-Fuse) based on self-calibrated residual networks (SCNet) and feature embedding is proposed. The method improves fusion quality from two aspects: feature extraction and feature processing.
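
For readers unfamiliar with SCNet, the following is a minimal PyTorch sketch of a self-calibrated convolution block in the spirit of Liu et al.'s design, which SC-Fuse builds on for feature extraction. The channel split, kernel sizes, and pooling rate are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a self-calibrated convolution (cf. SCNet, Liu et al.,
# CVPR 2020). Hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCConv(nn.Module):
    def __init__(self, channels: int, pooling_rate: int = 4):
        super().__init__()
        assert channels % 2 == 0, "channels are split into two equal branches"
        half = channels // 2
        self.pooling_rate = pooling_rate
        self.k1 = nn.Conv2d(half, half, 3, padding=1)  # plain branch
        self.k2 = nn.Conv2d(half, half, 3, padding=1)  # low-resolution conv
        self.k3 = nn.Conv2d(half, half, 3, padding=1)  # original-resolution conv
        self.k4 = nn.Conv2d(half, half, 3, padding=1)  # post-calibration conv

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        # Calibration branch: attention is computed in a downsampled space,
        # which enlarges the effective receptive field of the block.
        low = F.avg_pool2d(x1, self.pooling_rate)
        up = F.interpolate(self.k2(low), size=x1.shape[-2:],
                           mode="bilinear", align_corners=False)
        gate = torch.sigmoid(x1 + up)       # spatial calibration weights
        y1 = self.k4(self.k3(x1) * gate)    # calibrated features
        y2 = self.k1(x2)                    # plain branch keeps local context
        return torch.cat([y1, y2], dim=1)
```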

Results: First, self-calibrated modules are applied to image fusion for the first time; they enlarge the receptive field so that the feature maps carry richer information. Second, ZCA (zero-phase component analysis) and the l1-norm are used to process the features, and a feature embedding operation is proposed to make feature information at different levels complementary.
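
As one way to picture the feature-processing step, the NumPy sketch below whitens the channel dimension of a source's feature maps with ZCA and collapses them into a per-pixel l1-norm activity map; the two sources' maps are then normalized into fusion weights. This follows the general ZCA/l1-norm recipe rather than the paper's exact implementation, and the epsilon values are assumptions.

```python
# Hedged sketch of ZCA whitening + l1-norm activity maps for fusion weights.
import numpy as np

def zca_whiten(feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """feat: (C, H, W) deep feature maps; returns ZCA-whitened maps."""
    c, h, w = feat.shape
    x = feat.reshape(c, -1)                    # flatten spatial dims: (C, N)
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]                 # (C, C) channel covariance
    eigval, eigvec = np.linalg.eigh(cov)
    # Zero-phase whitening: rotate, rescale, rotate back.
    w_zca = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return (w_zca @ x).reshape(c, h, w)

def l1_activity(feat: np.ndarray) -> np.ndarray:
    """Per-pixel l1-norm over channels -> (H, W) activity-level map."""
    return np.abs(feat).sum(axis=0)

def fusion_weights(feat_ir: np.ndarray, feat_vis: np.ndarray):
    """Normalize the two sources' activity maps into soft fusion weights."""
    a_ir = l1_activity(zca_whiten(feat_ir))
    a_vis = l1_activity(zca_whiten(feat_vis))
    total = a_ir + a_vis + 1e-12               # guard against division by zero
    return a_ir / total, a_vis / total
```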

Conclusion: Finally, a suitable strategy is given for reconstructing the fused image. Ablation experiments and comparisons with other representative algorithms demonstrate the effectiveness and superiority of SC-Fuse.
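
The abstract does not spell out the reconstruction rule, so the sketch below shows one common instance of such a strategy: resize each source's weight map to the image resolution and blend the sources pixel by pixel. It illustrates the general weighted-average scheme only; the bilinear resizing and renormalization are assumptions.

```python
# Illustrative weighted-average reconstruction (assumed scheme, not the
# paper's exact rule): upsample the weight maps, then blend the sources.
import numpy as np
from skimage.transform import resize

def reconstruct(img_ir: np.ndarray, img_vis: np.ndarray,
                w_ir: np.ndarray, w_vis: np.ndarray) -> np.ndarray:
    """img_*: (H, W) grayscale sources; w_*: (h, w) low-res weight maps."""
    w_ir_up = resize(w_ir, img_ir.shape, order=1)    # bilinear upsampling
    w_vis_up = resize(w_vis, img_vis.shape, order=1)
    total = w_ir_up + w_vis_up + 1e-12               # renormalize weights
    return (w_ir_up * img_ir + w_vis_up * img_vis) / total
```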

Keywords: Image fusion, self-calibrated convolutions, feature extraction, feature embedding, image reconstruction


