Abstract
Background: Airway segmentation supports the quantitative diagnosis of pulmonary diseases, including chronic obstructive pulmonary disease (COPD) and bronchiectasis. Manual segmentation by radiologists is challenging because of the airway's complex tree-like structure and the variation in shape, size, and intensity across branches. Distal airways are especially difficult to segment: as the cross-sectional diameter decreases, their intensity approaches that of the surrounding lung parenchyma.
Objective: Many earlier works have proposed deep learning networks for airway segmentation, but none has achieved the desired performance; airway segmentation therefore remains a challenging task in this field.
Methods: This work proposes a convolutional neural network for airway segmentation based on a deep U-Net architecture augmented with attention blocks. The attention mechanism helps the network extract the intricate, multi-scale airways found in the lung region, thereby improving the effectiveness of the U-Net architecture.
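The abstract does not detail the internals of the attention block; the sketch below shows one common realization, an additive attention gate on a U-Net skip connection in the style of Attention U-Net. The names (AttentionGate, inter_channels) are illustrative assumptions rather than the authors' code, and the gating feature is assumed to be already upsampled to the skip connection's resolution.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate for a U-Net skip connection (illustrative sketch).

    g: decoder (gating) feature map, assumed already upsampled to x's spatial size.
    x: encoder skip feature map, re-weighted before concatenation in the decoder.
    """
    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Project both inputs to a shared channel dimension and combine.
        a = self.relu(self.w_g(g) + self.w_x(x))
        # Collapse to a single-channel spatial attention map in [0, 1].
        alpha = self.sigmoid(self.psi(a))
        # Highlight relevant skip features and suppress irrelevant ones.
        return x * alpha
```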
Results: The model was validated on the VESSEL12 and EXACT09 datasets, individually and combined, with and without trachea images. The best DSC scores on EXACT09 and VESSEL12 are 95.21% and 95.80%, respectively, and the two datasets combined yield a DSC of 94.1%, showing that the overall performance of the proposed methodology is quite satisfactory. The generalizability of the model is confirmed using k-fold cross-validation, and comparison with existing work on airway segmentation shows competitive results.
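For reference, the Dice similarity coefficient (DSC) reported above measures the overlap between a predicted mask P and the ground truth G as DSC = 2|P ∩ G| / (|P| + |G|). A minimal sketch, assuming binary voxel masks (the smoothing constant eps is an illustrative assumption that keeps the ratio defined when both masks are empty):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|P ∩ G| / (|P| + |G|), smoothed by eps to avoid division by zero.
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

Under this measure, a DSC of 95.21% on EXACT09 indicates near-complete overlap between the predicted and reference airway segmentations.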
Conclusion: The attention units in the proposed model highlight relevant information and suppress irrelevant features, which improves performance and saves time.
Keywords: Airway segmentation, deep learning, convolutional neural network, U-Net, attention mechanism, medical image processing.