
Current Medical Imaging


ISSN (Print): 1573-4056
ISSN (Online): 1875-6603

Research Article

Comparative Study of Encoder-decoder-based Convolutional Neural Networks in Cartilage Delineation from Knee Magnetic Resonance Images

Author(s): Ching Wai Yong, Khin Wee Lai*, Belinda Pingguan Murphy and Yan Chai Hum

Volume 17, Issue 8, 2021

Published on: 14 December, 2020

Page: [981 - 987] Pages: 7

DOI: 10.2174/1573405616666201214122409

Abstract

Background: Osteoarthritis (OA) is a common degenerative joint disease that may lead to disability. Although OA is not lethal, it markedly affects patients' mobility and daily lives. Detecting OA at an early stage allows for early intervention and may slow down disease progression.

Introduction: Magnetic resonance imaging is a useful technique for visualizing soft tissues within the knee joint. Cartilage delineation in magnetic resonance (MR) images helps in understanding disease progression. Convolutional neural networks (CNNs) have shown promising results in computer vision tasks, and various encoder-decoder-based segmentation networks have been introduced in recent years. However, the performance of such networks in the context of cartilage delineation is unknown.
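
As a point of reference for readers unfamiliar with this family of models, the following is a minimal, hypothetical U-Net-style encoder-decoder sketch in PyTorch: a downsampling encoder, a bottleneck, and an upsampling decoder joined by skip connections. It illustrates the general architecture only; the class name, layer widths, and input size are assumptions and do not correspond to any of the ten networks compared in this study.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic building block used at
    # every level of most encoder-decoder segmentation networks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Hypothetical minimal U-Net-style network for a binary cartilage mask."""
    def __init__(self, in_ch=1, n_classes=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                        # encoder level 1
        e2 = self.enc2(self.pool(e1))                            # encoder level 2
        b = self.bottleneck(self.pool(e2))                       # bottleneck
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))      # decoder level 2 + skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))     # decoder level 1 + skip
        return self.head(d1)                                     # per-pixel logits

# Smoke test on a dummy single-channel slice.
model = TinyUNet()
logits = model(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```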

Methods: This study trained and compared 10 encoder-decoder-based CNNs on cartilage delineation from knee MR images. The knee MR images were obtained from the Osteoarthritis Initiative (OAI). The networks were benchmarked against one another on both physical specifications and segmentation performance.
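
The study's full benchmarking protocol is not reproduced on this page. As a rough, hypothetical illustration of the "physical specifications" side of such a comparison, the snippet below counts trainable parameters and estimates the on-disk weight size for an arbitrary PyTorch model; the placeholder network and the 4-byte float32 assumption are illustrative, not taken from the paper.

```python
import torch.nn as nn

def physical_specs(model: nn.Module):
    """Two physical specifications of a network:
    trainable parameter count and approximate weight size on disk,
    assuming 4-byte float32 parameters."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    size_mb = n_params * 4 / (1024 ** 2)
    return n_params, size_mb

# Tiny placeholder network (hypothetical), standing in for any of the
# benchmarked encoder-decoder CNNs.
toy = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1),
)
n, mb = physical_specs(toy)
print(f"trainable parameters: {n:,}  |  approx. model size: {mb:.2f} MB")
```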

Results: LadderNet has the fewest trainable parameters, with a model size of 5 MB. UNetVanilla achieved the best performance, with a Jaccard similarity coefficient (JSC) of 0.8369, a Dice similarity coefficient (DSC) of 0.9108, and a Matthews correlation coefficient (MCC) of 0.9097.
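
The three metrics reported above are standard overlap and correlation measures for binary segmentation masks. A minimal NumPy sketch of how they can be computed from a predicted mask and a ground-truth mask is given below; the toy masks are hypothetical and unrelated to the OAI data used in the study.

```python
import numpy as np

def segmentation_scores(pred, truth):
    """JSC, DSC and MCC for a pair of binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = float(np.sum(pred & truth))    # true positives
    tn = float(np.sum(~pred & ~truth))  # true negatives
    fp = float(np.sum(pred & ~truth))   # false positives
    fn = float(np.sum(~pred & truth))   # false negatives
    jsc = tp / (tp + fp + fn)                         # Jaccard similarity coefficient
    dsc = 2 * tp / (2 * tp + fp + fn)                 # Dice similarity coefficient
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews correlation coefficient
    return jsc, dsc, mcc

# Toy example: a predicted mask that overlaps the ground truth imperfectly.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1
print(segmentation_scores(pred, truth))
```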

Conclusion: UNetVanilla can serve as a benchmark for cartilage delineation in knee MR images, while LadderNet serves as an alternative when hardware resources are limited in production.

Keywords: Comparative study, convolutional neural network, encoder-decoder neural network, knee cartilage segmentation, magnetic resonance imaging, osteoarthritis.

