Abstract
Background: Accurate detection of a brain tumor and its severity is a challenging task in the medical field. There is therefore a need to develop brain tumor detection algorithms, which are an emerging tool for diagnosis, treatment planning, and outcome evaluation.
Materials and Methods: A brain tumor segmentation method based on deep learning classification and multi-modal composition has been developed using deep convolutional neural networks (CNNs). The different MRI modalities (T1, FLAIR, T1C, and T2) are given as input to the proposed method, and each modality contributes in proportion to its information content. Weights for the modalities are calculated blockwise, with the standard deviation of a block taken as a proxy for its information content. Each of the T1, FLAIR, T1C, and T2 images is then convolved with its corresponding weight, and the weighted modalities are summed to obtain a new composite image, which is given as the input to the deep convolutional neural network. The network performs segmentation through its successive CNN layers, applying different filter operations in each layer to obtain enhanced classification and spatially consistent segmentation results. Analysis of the proposed method shows that the discriminatory information from the different modalities is effectively combined, increasing the overall segmentation accuracy.
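As a rough illustration of the multi-modal composition step described above, the following Python sketch is a plausible reading of the abstract, not the authors' code: the per-block "convolution" with a scalar weight is interpreted here as blockwise multiplication, and the function and parameter names (`composite_image`, `block`) are hypothetical.

```python
import numpy as np

def composite_image(modalities, block=8, eps=1e-8):
    """Combine co-registered MR modality images (e.g. T1, FLAIR, T1C, T2)
    into one composite image. For each block, every modality's weight is
    its standard deviation (a proxy for information content), normalised
    across modalities; the output block is the weighted sum of the
    corresponding modality blocks."""
    h, w = modalities[0].shape
    out = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Extract the same block from every modality.
            patches = [m[y:y + block, x:x + block].astype(np.float64)
                       for m in modalities]
            # Blockwise standard deviation as the information-content proxy.
            stds = np.array([p.std() for p in patches])
            weights = stds / (stds.sum() + eps)  # normalise; eps avoids 0/0
            # Weighted sum across modalities forms the composite block.
            out[y:y + block, x:x + block] = sum(
                wt * p for wt, p in zip(weights, patches))
    return out
```

Under this reading, a nearly uniform modality (low standard deviation) contributes little to a block, while a modality with strong local contrast dominates it, which matches the abstract's claim that each modality is used in proportion to its information content.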
Results: The proposed deep convolutional neural network for brain tumor segmentation was evaluated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013). The complete, core, and enhancing regions were validated with the Dice similarity coefficient and the Jaccard similarity index on the Challenge, Leaderboard, and Synthetic data sets. To evaluate the classification rates, metrics such as accuracy, precision, sensitivity, specificity, under-segmentation, incorrect segmentation, and over-segmentation were also computed and compared with existing methods. Experimental results exhibit a higher degree of precision in segmentation than the existing methods.
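The two overlap metrics named above can be computed directly from binary segmentation masks; a minimal sketch (the helper names `dice` and `jaccard` are illustrative, not from the paper):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, gt):
    """Jaccard index: |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```

In the BRATS evaluation protocol these scores are computed separately for the complete, core, and enhancing tumor regions, each treated as its own binary mask.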
Conclusion: In this work, a deep convolutional neural network operating on multiple MR image modalities is used to detect brain tumors. A new input image was created by convolving the images of the different modalities with their weights, where the weights are determined from the blockwise standard deviation. Segmentation accuracy is high, with efficient appearance and spatial consistency. The segmented images are fully assessed using well-established metrics. In future work, the proposed method will be evaluated on other databases, and its segmentation accuracy will be analysed in the presence of different kinds of noise.
Keywords: Deep convolutional neural networks, brain tumor, classification, magnetic resonance image, segmentation, modalities.
Graphical Abstract