Abstract
Background: Successive improved versions of U-Net have continued to raise segmentation accuracy, but their parameter counts have not decreased, which makes the hardware required for training expensive and slows training convergence.
Objective: This study proposes a lightweight U-Net that balances parameter count against segmentation accuracy.
Methods: A lightweight U-Net with full skip connections and deep supervision (LFU-Net) is proposed. The full skip connections combine skip connections from shallow encoders, deep decoders, and sub-networks, while deep supervision learns hierarchical representations from the full-resolution feature maps output by the sub-networks. The key lightweight design choice is a base of 8 output channels rather than 64 or 32. A pruning scheme was also designed to further reduce the parameters. The code is available at: https://github.com/dengdy22/U-Nets.
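To make the channel design concrete, the sketch below is a minimal PyTorch illustration of an encoder whose base width is 8 and doubles per level, in contrast to the base of 64 in the original U-Net; the block composition, depth, and names are assumptions for illustration, not the authors' exact LFU-Net implementation.

```python
# Minimal sketch (PyTorch) of the lightweight channel design: base width 8,
# doubling per encoder level, versus 64 in the original U-Net. Block
# composition and depth here are illustrative assumptions.
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

base = 8                                  # LFU-Net base width (vs. 64 or 32)
widths = [base * 2**i for i in range(5)]  # [8, 16, 32, 64, 128]

encoder = nn.ModuleList(
    conv_block(in_ch, out_ch)
    for in_ch, out_ch in zip([1] + widths[:-1], widths)
)

# Convolution parameters scale roughly with the square of the base width,
# so base=8 needs on the order of (8/64)^2 ~ 2% of U-Net's conv parameters.
n_params = sum(p.numel() for p in encoder.parameters())
print(f"encoder widths {widths}, parameters: {n_params:,}")
```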
Results: On the ISBI LiTS 2017 Challenge validation dataset, LFU-Net without pruning achieved a Dice value of 0.9699, matching or exceeding existing networks with only about 1% of their parameters. On the BraTS 2018 validation dataset, its Dice values were 0.8726 (average), 0.9363 (whole tumor, WT), 0.8699 (tumor core, TC) and 0.8116 (enhancing tumor, ET), and its Hausdorff95 distances were 3.9514, 4.3960, 3.0607 and 4.3975, respectively, which is not inferior to existing networks and shows that LFU-Net achieves balanced recognition of each region.
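For reference, the two reported metrics have the standard definitions below, where P is the predicted segmentation, G the ground truth, ∂P and ∂G their boundaries, and d(x, S) the distance from a point x to a set S; this is the conventional formulation, not a definition taken from the paper itself.

```latex
% Dice similarity coefficient: volumetric overlap between P and G.
\[
\mathrm{Dice}(P,G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert}
\]
% Hausdorff95: the 95th percentile (P_{95}) of surface distances in both
% directions, which discounts outlier boundary points relative to the
% full Hausdorff distance.
\[
\mathrm{HD}_{95}(P,G) = \max\Bigl\{
  P_{95}\bigl(\{\, d(p,\partial G) : p \in \partial P \,\}\bigr),\;
  P_{95}\bigl(\{\, d(g,\partial P) : g \in \partial G \,\}\bigr)
\Bigr\}
\]
```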
Conclusion: LFU-Net is a lightweight and effective method for binary and multi-class medical image segmentation tasks.
Keywords: Semantic segmentation, Medical image, Full skip connection, Deep supervision, Model pruning, Lightweight.
Graphical Abstract