
Current Medical Imaging


ISSN (Print): 1573-4056
ISSN (Online): 1875-6603

Research Article

A Dynamic Context Encoder Network for Liver Tumor Segmentation

Author(s): Jun Liu, Jing Fang, Tao Jiang, Chaochao Zhou, Liren Shao and Yusheng Song*

Volume 20, 2024

Published on: 13 June, 2024

Article ID: e15734056303257

Pages: 13

DOI: 10.2174/0115734056303257240529100041


Abstract

Background: Accurate segmentation of liver tumor regions in medical images is of great significance for clinical diagnosis and the planning of surgical treatment. Recent advances in machine learning have shown that convolutional neural networks can perform such image processing effectively while greatly reducing manual labor. However, the variable shapes, fuzzy boundaries, and discontinuous regions of liver tumors in medical images pose great challenges to accurate segmentation. The feature extraction capability of a neural network can be improved by enlarging its architecture, but doing so inevitably demands more computing resources for training and hyperparameter tuning.

Methods: This study presents a Dynamic Context Encoder Network (DCE-Net) that incorporates several new modules, including an Involution Layer, a Dynamic Residual Module, a Context Extraction Module, and Channel Attention Gates, for feature extraction and enhancement.
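The abstract does not detail how these modules are implemented. For orientation, involution is a published operation (Li et al., CVPR 2021) that generates a spatial kernel per pixel instead of sharing one kernel across the whole image; a minimal PyTorch sketch of a 2D involution layer, as one plausible basis for the paper's Involution Layer, is given below. The class name, reduction ratio, and grouping defaults are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    # Minimal sketch of 2D involution (Li et al., CVPR 2021);
    # the paper's Involution Layer presumably builds on something similar.
    # Assumes stride 1 and an odd kernel size.
    def __init__(self, channels, kernel_size=3, groups=1, reduction=4):
        super().__init__()
        self.k = kernel_size
        self.groups = groups
        # Kernel-generation branch: a k*k kernel per pixel and per group.
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction,
                              groups * kernel_size ** 2, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # Generate position-specific kernels from the input itself.
        weights = self.span(self.reduce(x))              # (b, g*k*k, h, w)
        weights = weights.view(b, self.groups, self.k ** 2, h, w)
        weights = weights.unsqueeze(2)                   # (b, g, 1, k*k, h, w)
        # Unfold the input into k*k neighbourhoods, grouped along channels.
        patches = self.unfold(x).view(
            b, self.groups, c // self.groups, self.k ** 2, h, w)
        # Weighted sum over each neighbourhood (multiply-accumulate).
        out = (weights * patches).sum(dim=3)             # (b, g, c/g, h, w)
        return out.view(b, c, h, w)

# Usage: the output has the same shape as the input.
y = Involution2d(64)(torch.randn(1, 64, 32, 32))         # (1, 64, 32, 32)
```

Because the kernel at each position is generated from the feature vector at that position, such a layer adapts to local content (relevant for fuzzy tumor boundaries) while using far fewer parameters than a convolution of the same kernel size.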

Results: In the experiments, we trained and tested the DCE-Net for liver tumor segmentation on the LiTS2017 liver tumor CT dataset. The method achieved precision, recall, Dice, and AUC scores of 0.8961, 0.9711, 0.9270, and 0.9875, respectively. Furthermore, our ablation study showed that the accuracy and training efficiency of the full network were markedly superior to those of variants without the involution or dynamic residual modules.
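For reference, the reported precision, recall, and Dice values follow from standard confusion-matrix counts on binary segmentation masks; a minimal sketch is shown below (the function name and epsilon smoothing are our own, and AUC would additionally require the network's probability map rather than thresholded masks, e.g. via sklearn.metrics.roc_auc_score).

```python
import torch

def segmentation_metrics(pred, target, eps=1e-7):
    # pred, target: binary masks of identical shape (already thresholded).
    pred, target = pred.float(), target.float()
    tp = (pred * target).sum()          # true positives
    fp = (pred * (1 - target)).sum()    # false positives
    fn = ((1 - pred) * target).sum()    # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return precision.item(), recall.item(), dice.item()
```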

Conclusion: The DCE-Net proposed in this study therefore has great potential for the automatic segmentation of liver tumors in clinical diagnostic settings.

