Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

PUPC-GANs: A Novel Image Conversion Model using Modified CycleGANs in Healthcare

Author(s): Shweta Taneja*, Bhawna Suri, Aman Kumar, Ashish Chowdhry, Harsh Kumar and Kautuk Dwivedi

Volume 16, Issue 7, 2023

Published on: 03 May, 2023

Article ID: e300323215192

Pages: 8

DOI: 10.2174/2666255816666230330100005

Abstract

Introduction: Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) each have their areas of specialty in medical imaging. MRI is considered the safer modality, as it exploits the magnetic properties of the hydrogen nucleus, whereas a CT scan uses multiple X-rays, whose ionizing radiation is known to contribute to carcinogenesis and can adversely affect a patient's health.

Methods: In scenarios such as radiation therapy, where both MRI and CT are required for treatment, a unique approach to obtaining both scans is to acquire the MRI and generate a CT scan from it. Current deep learning methods for MRI-to-CT synthesis use either paired data or unpaired data exclusively. Models trained on paired data suffer from the limited availability of well-aligned data.

Results: Training on unpaired data may generate visually realistic images, but it does not guarantee good accuracy. To overcome this, we propose a new model called PUPC-GANs (Paired Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks).
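To make the idea concrete, the sketch below (in PyTorch) pairs CycleGAN's unpaired cycle-consistency term with a hypothetical paired term of the kind described in the Conclusion. The generator names G_mri2ct / G_ct2mri, the weights lam_cyc and lam_pair, and the choice of an L1 penalty for the paired term are assumptions made for illustration, and the adversarial (discriminator) losses of the CycleGAN framework are omitted; this is not the authors' implementation.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def unpaired_cycle_loss(G_mri2ct, G_ct2mri, real_mri, real_ct, lam_cyc=10.0):
    # CycleGAN's unpaired constraint: translating MRI -> CT -> MRI
    # (and CT -> MRI -> CT) should reconstruct the original images.
    rec_mri = G_ct2mri(G_mri2ct(real_mri))
    rec_ct = G_mri2ct(G_ct2mri(real_ct))
    return lam_cyc * (l1(rec_mri, real_mri) + l1(rec_ct, real_ct))

def paired_loss(G_mri2ct, mri, aligned_ct, lam_pair=5.0):
    # Hypothetical paired term: when a well-aligned MRI/CT pair is
    # available, penalize the synthesized CT against the ground-truth CT.
    return lam_pair * l1(G_mri2ct(mri), aligned_ct)
```

In such a scheme, the cycle term would apply to every training batch, while the paired term would be added only for the subset of batches that contain well-aligned MRI/CT pairs.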

Conclusion: The proposed model is capable of learning transformations from both paired and unpaired data. To support this, a paired loss is introduced. When compared on the MAE, MSE, NRMSE, PSNR, and SSIM metrics, PUPC-GANs outperform CycleGANs.
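For reference, the comparison metrics listed above have standard definitions; the NumPy sketch below computes MAE, MSE, NRMSE, and PSNR for a synthesized CT against its reference. The function name, the Euclidean-norm convention used for NRMSE, and the data-range handling for PSNR are assumptions rather than details from the paper; SSIM, which relies on windowed local statistics, is typically taken from an existing implementation such as skimage.metrics.structural_similarity.

```python
import numpy as np

def ct_synthesis_metrics(ref, gen, data_range=None):
    # MAE, MSE, NRMSE, and PSNR between a reference CT image `ref`
    # and a synthesized CT image `gen` (arrays of the same shape).
    ref = np.asarray(ref, dtype=np.float64)
    gen = np.asarray(gen, dtype=np.float64)
    if data_range is None:
        data_range = ref.max() - ref.min()
    diff = gen - ref
    mae = np.mean(np.abs(diff))
    mse = np.mean(diff ** 2)
    nrmse = np.sqrt(mse) / np.sqrt(np.mean(ref ** 2))  # Euclidean-norm convention
    psnr = 10.0 * np.log10((data_range ** 2) / mse)
    return {"MAE": mae, "MSE": mse, "NRMSE": nrmse, "PSNR": psnr}
```

Lower MAE, MSE, and NRMSE and higher PSNR and SSIM indicate a closer match to the reference CT.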

