Abstract
Aims: COVID-19 is a widespread infectious disease that affects millions of people worldwide. Because of the alarming rate at which COVID-19 spreads, scientists are looking for new strategies for diagnosing the disease. Chest X-rays are far more affordable and widely available than CT screening, and because the PCR testing process is time-consuming and suffers from false negatives, these traditional medical imaging modalities play a vital role in controlling the pandemic. In this paper, we develop and examine different CNN models to identify the best method for diagnosing this disease.
Background and Objective: Efforts to provide testing kits have increased due to the transmission of COVID-19. These kits are complicated to prepare, scarce, and expensive; moreover, they are difficult to use. Results have shown that the testing kits take crucial time to diagnose the virus and have a loss rate of roughly 30%.
Methods: In this article, we study the use of ubiquitous X-ray imaging for the classification of COVID-19 chest images using existing convolutional neural networks (CNNs). Different CNN architectures, including VGG19, DenseNet-121, and Xception, are trained on chest X-rays of both infected and non-infected patients.
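As a rough illustration of this kind of transfer-learning setup (not the authors' exact pipeline), the following Python sketch fine-tunes an ImageNet-pretrained DenseNet-121 backbone for binary COVID-19 vs. non-COVID classification of chest X-rays; the data path, input resolution, and hyperparameters are illustrative assumptions.

```python
# Minimal transfer-learning sketch: DenseNet-121 backbone, binary chest X-ray classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution

# Assumed directory layout: data/train/{covid,normal}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
)

# ImageNet-pretrained feature extractor, frozen for the first training stage.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # normalize pixel values to [0, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```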
Results: The evaluated models achieved different accuracies but were more precise than state-of-the-art models. The DenseNet-121 network obtained 97% accuracy, 98% precision, and a 96% F1 score.
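For reference, the reported metrics could be computed from held-out test predictions as in the hedged sketch below; the label arrays are placeholders, not the study's data.

```python
# Illustrative computation of accuracy, precision, and F1 score on a test set.
from sklearn.metrics import accuracy_score, precision_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]  # ground-truth labels (1 = COVID-19), placeholder values
y_pred = [1, 0, 1, 0, 0, 1]  # thresholded model outputs, placeholder values

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
```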
Conclusion: In this article, we have examined the performance of different CNN models to identify the best method for the classification of this disease. The VGG19 model achieved 93% accuracy.
Graphical Abstract