Abstract
Background: Gliomas are among the most common and aggressive primary brain tumors and pose a serious threat to human health. Tumor segmentation is a key step in assisting the diagnosis and treatment of cancer. However, precisely segmenting tumors remains challenging given the heterogeneous characteristics of brain tumors and the presence of device noise. Recently, with the breakthrough development of deep learning, brain tumor segmentation methods based on fully convolutional networks (FCNs) have demonstrated excellent performance and attracted increasing attention.
Methods: In this work, we propose a novel FCN-based network called SDResU-Net for brain tumor segmentation, which simultaneously embeds dilated and separable convolutions into a residual U-Net architecture. SDResU-Net introduces a dilated block into the residual U-Net architecture, which largely expands the receptive field and yields better local and global feature description capacity. Meanwhile, to fully utilize the channel and region information of MRI brain images, we decouple the intra-slice and inter-slice structures of the improved residual U-Net by employing separable convolution operators; a minimal sketch of such a building block is given below. The proposed SDResU-Net captures more pixel-level details and spatial information, providing a considerable alternative for the automatic and accurate segmentation of brain tumors.
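To make the block structure concrete, the following is a minimal sketch, assuming PyTorch, of a residual block that combines a dilated convolution with a depthwise-separable convolution in the spirit of the description above. The class names, channel counts, and dilation rate are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution,
    separating spatial (per-channel) and cross-channel processing."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SDResBlock(nn.Module):
    """Residual block whose main path uses a dilated separable convolution
    (to enlarge the receptive field) followed by a plain separable one."""
    def __init__(self, in_ch, out_ch, dilation=2):  # dilation rate is assumed
        super().__init__()
        self.conv1 = SeparableConv2d(in_ch, out_ch,
                                     padding=dilation, dilation=dilation)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = SeparableConv2d(out_ch, out_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection so the residual skip matches the output channels.
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

# A 4-channel input mimics the four MRI modalities of a BraTS slice.
x = torch.randn(1, 4, 128, 128)
print(SDResBlock(4, 32)(x).shape)  # torch.Size([1, 32, 128, 128])
```

Because the dilated convolution keeps the spatial resolution while widening the receptive field, and the separable convolution factorizes spatial and channel mixing, such a block gathers more context per parameter than a standard convolutional residual block.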
Results and Conclusion: The proposed SDResU-Net is extensively evaluated on two public MRI brain image datasets, i.e., BraTS 2017 and BraTS 2018. Compared with its counterparts and state-of-the-art methods, SDResU-Net achieves superior performance on both datasets, demonstrating its effectiveness. In addition, cross-validation results on the two datasets demonstrate its satisfactory generalization ability.
Keywords: Brain tumor, image segmentation, separable convolution, dilated convolution, residual U-Net, fully convolutional network.