
Current Medical Imaging

ISSN (Print): 1573-4056
ISSN (Online): 1875-6603

Research Article

LFU-Net: A Lightweight U-Net with Full Skip Connections for Medical Image Segmentation

Author(s): Yunjiao Deng, Hui Wang, Yulei Hou*, Shunpan Liang and Daxing Zeng

Volume 19, Issue 4, 2023

Published on: 26 August, 2022

Article ID: e220622206296 Pages: 14

DOI: 10.2174/1573405618666220622154853

Abstract

Background: In the series of improved versions of U-Net, segmentation accuracy continues to improve, but the number of parameters remains essentially unchanged, which keeps the hardware required for training expensive and slows training convergence.

Objective: The objective of this study is to propose a lightweight U-Net that balances the number of parameters against segmentation accuracy.

Methods: A lightweight U-Net with full skip connections and deep supervision (LFU-Net) is proposed. The full skip connections combine skip connections from shallow encoders, deep decoders, and intermediate sub-networks, while the deep supervision learns hierarchical representations from the full-resolution feature maps output by the sub-networks. The key lightweight design choice is that the number of output channels is based on 8 rather than 64 or 32. A pruning scheme is also designed to further reduce the parameter count. The code is available at: https://github.com/dengdy22/U-Nets.
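For illustration only, the following is a minimal sketch (assuming a PyTorch-style implementation; the class and layer names are hypothetical and not taken from the linked repository) of the two ideas that can be shown compactly: a base channel width of 8 instead of 64 or 32, and an auxiliary deep-supervision head. LFU-Net's full skip-connection scheme is richer than this plain encoder-decoder.

```python
# Illustrative sketch, not the authors' code: a tiny encoder-decoder with
# a base channel width of 8 and one deep-supervision head (PyTorch assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=8):    # base=8 rather than 64 or 32
        super().__init__()
        self.enc1 = conv_block(in_ch, base)               # 8 channels
        self.enc2 = conv_block(base, base * 2)            # 16 channels
        self.bottleneck = conv_block(base * 2, base * 4)  # 32 channels
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # Deep supervision: auxiliary 1x1 head on an intermediate decoder output.
        self.aux2 = nn.Conv2d(base * 2, n_classes, 1)
        self.out = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        # Auxiliary prediction upsampled to full resolution for deep supervision.
        aux = F.interpolate(self.aux2(d2), scale_factor=2,
                            mode="bilinear", align_corners=False)
        return self.out(d1), aux

if __name__ == "__main__":
    main_out, aux_out = TinyUNet()(torch.randn(1, 1, 64, 64))
    print(main_out.shape, aux_out.shape)  # both torch.Size([1, 2, 64, 64])
```

During training, both outputs would typically be compared against the ground-truth mask and their losses summed, so that gradients reach the intermediate decoder directly.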

Results: On the ISBI LiTS 2017 Challenge validation dataset, the unpruned LFU-Net achieved a Dice value of 0.9699, equal or better performance than existing networks with only about 1% of their parameters. On the BraTS 2018 validation dataset, its Dice values were 0.8726 (average), 0.9363 (WT), 0.8699 (TC) and 0.8116 (ET), and its Hausdorff95 distances were 3.9514, 4.3960, 3.0607 and 4.3975, respectively, which is not inferior to existing networks and shows that it achieves balanced recognition of each region.
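The Dice values reported above measure volumetric overlap between prediction and ground truth. A minimal sketch of the metric for binary masks follows (NumPy assumed; this is not the challenge evaluation code).

```python
# Illustrative only: Dice coefficient for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|); 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Tiny example with partial overlap: Dice = 2*2 / (4 + 3) = 4/7 ≈ 0.5714
p = np.zeros((4, 4)); p[1:3, 1:3] = 1
t = np.zeros((4, 4)); t[1:3, 1:2] = 1; t[3, 3] = 1
print(round(dice_coefficient(p, t), 4))
```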

Conclusion: LFU-Net can be used as a lightweight and effective method for segmentation tasks on both binary and multi-class medical imaging datasets.

Keywords: Semantic segmentation, Medical image, Full skip connection, Deep supervision, Model pruning, Lightweight.

Rights & Permissions Print Cite
© 2024 Bentham Science Publishers | Privacy Policy