Abstract
Introduction: Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) each have their areas of specialty in medical imaging. MRI is considered the safer modality, as it exploits the magnetic properties of the hydrogen nucleus, whereas a CT scan uses multiple X-rays, whose ionizing radiation is known to contribute to carcinogenesis and can adversely affect the patient's health.
Methods: In scenarios such as radiation therapy, where both MRI and CT are required for treatment, a unique approach to obtaining both scans is to acquire the MRI and synthesize the CT scan from it. Current deep learning methods for MRI-to-CT synthesis use either paired data or unpaired data exclusively. Models trained with paired data suffer from the limited availability of well-aligned data.
Results: Training with unpaired data may generate visually realistic images, but it still does not guarantee good accuracy. To overcome this, we propose a new model called PUPC-GANs (Paired-Unpaired CycleGANs), based on CycleGANs (Cycle-Consistent Adversarial Networks).
Conclusion: This model is capable of learning transformations from both paired and unpaired data. To support this, a paired loss is introduced. Compared on the MAE, MSE, NRMSE, PSNR, and SSIM metrics, PUPC-GANs outperform CycleGANs.
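The combination of cycle-consistency training with an additional supervised term can be sketched as below. This is a minimal illustration only, not the paper's implementation: the generator functions, the weighting coefficients `lam_cyc` and `lam_paired`, and the use of an L1 distance for the paired term are all assumptions, and the adversarial terms of the full GAN objective are omitted for brevity.

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference between two images."""
    return float(np.mean(np.abs(a - b)))

def pupc_gan_loss(mri, ct, g_mri2ct, g_ct2mri, paired=False,
                  lam_cyc=10.0, lam_paired=5.0):
    """Sketch of a generator objective combining a CycleGAN-style
    cycle-consistency term with a paired term that is only active
    when an aligned MRI/CT pair is available."""
    fake_ct = g_mri2ct(mri)
    fake_mri = g_ct2mri(ct)
    # Cycle consistency: translating there and back should recover the input.
    cyc = l1(g_ct2mri(fake_ct), mri) + l1(g_mri2ct(fake_mri), ct)
    loss = lam_cyc * cyc
    if paired:
        # Paired loss: with aligned data, the synthetic CT can be compared
        # directly against the ground-truth CT (and vice versa).
        loss += lam_paired * (l1(fake_ct, ct) + l1(fake_mri, mri))
    return loss
```

With unpaired batches only the cycle term contributes; when an aligned pair is available, the supervised term adds direct pixel-level guidance.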
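The evaluation metrics named above can be computed as follows. This is a generic NumPy sketch under common definitions (NRMSE normalised by the ground truth's RMS value; PSNR over an assumed `data_range`), not the paper's evaluation code; SSIM is omitted here since it is usually taken from a library such as scikit-image's `structural_similarity`.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred):
    """Root-mean-square error normalised by the RMS of the ground truth."""
    return float(np.sqrt(mse(y_true, y_pred)) / np.sqrt(np.mean(y_true ** 2)))

def psnr(y_true, y_pred, data_range=1.0):
    """Peak signal-to-noise ratio in decibels, for a given intensity range."""
    return float(10.0 * np.log10(data_range ** 2 / mse(y_true, y_pred)))
```

Lower MAE, MSE, and NRMSE indicate better fidelity, while higher PSNR (and SSIM) indicate better quality.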
Graphical Abstract