Abstract
Objective: The aim of the study was to verify the ability of the deep learning model to identify five subtypes and normal images in non-contrast enhancement CT of intracranial hemorrhage.
Methods: A total of 351 patients (39 in the normal group, 312 in the intracranial hemorrhage group) who underwent non-contrast-enhanced head CT were selected, yielding 2768 images in total (514 normal, 398 epidural hemorrhage, 501 subdural hemorrhage, 497 intraventricular hemorrhage, 415 cerebral parenchymal hemorrhage, and 443 subarachnoid hemorrhage). Images were labeled according to the diagnostic reports of two radiologists, each with more than 10 years of experience. The ResNet-18 and DenseNet-121 deep learning models were trained with transfer learning: 80% of the data was used for training, 10% for validation to monitor overfitting, and the remaining 10% for the final evaluation of the models. Assessment indicators included accuracy, sensitivity, specificity, and AUC values.
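A minimal sketch of the transfer-learning setup described above, assuming PyTorch/torchvision (≥ 0.13 for the pretrained-weights API), an ImageFolder-style dataset of six classes, and a hypothetical directory path; the hyperparameters are illustrative, not the authors' exact settings.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import random_split, DataLoader

NUM_CLASSES = 6  # normal + 5 hemorrhage subtypes

# Preprocessing: resize CT slices to the input size expected by ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("ct_slices/", transform=preprocess)  # hypothetical path

# 80% / 10% / 10% split for training, validation, and final testing.
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(42))
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

def build_model(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head (transfer learning)."""
    if name == "resnet18":
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    elif name == "densenet121":
        model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return model

model = build_model("resnet18")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training epoch; validation-based early stopping and the
# test-set evaluation (accuracy, per-class sensitivity/specificity, AUC) are omitted.
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same script would be run with `build_model("densenet121")` to obtain the second model, so the two backbones are compared under an identical data split and training procedure.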
Results: The overall accuracy of the ResNet-18 and DenseNet-121 models was 89.64% and 82.5%, respectively. Sensitivity and specificity for identifying the five subtypes and normal images were above 0.80, except that the sensitivity of the DenseNet-121 model for intraventricular hemorrhage and cerebral parenchymal hemorrhage fell below 0.80 (0.73 and 0.76, respectively). The AUC values of both deep learning models were above 0.9.
Conclusion: The deep learning models accurately identified the five subtypes of intracranial hemorrhage and normal images and may serve as a new tool for clinical diagnosis in the future.
Keywords: Deep learning, transfer learning, intracranial hemorrhage, ResNet, DenseNet, diagnosis, CT scanning.