Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Research Article

Improved SinGAN for Single-Sample Airport Runway Destruction Image Generation

Author(s): JinYu Wang, ChangGong Zhang and HaiTao Yang*

Volume 16, Issue 5, 2023

Published on: 23 September, 2022

Article ID: e260422204091 Pages: 9

DOI: 10.2174/2666255815666220426132637

Abstract

Aims: To address the difficulty of acquiring image data of airport runway destruction.

Objectives: This paper introduces SinGAN, a generative adversarial network algorithm that learns from a single sample, for this task.
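
For background, SinGAN learns a pyramid of small fully convolutional generators from one image and synthesizes coarse-to-fine: each scale upsamples the previous output, adds noise, and refines the result. The PyTorch sketch below illustrates that pass; all module and parameter names (ScaleGenerator, channels, etc.) are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of SinGAN-style coarse-to-fine generation (assumed names, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleGenerator(nn.Module):
    """One generator of the pyramid: refines (upsampled image + noise) with a small conv stack."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, prev_up, noise):
        # Residual connection: learn only the detail to add at this scale.
        return prev_up + self.body(prev_up + noise)

def generate(generators, sizes, device="cpu"):
    """Run the pyramid coarse-to-fine; `sizes` lists (H, W) per scale, coarsest first."""
    out = torch.zeros(1, 3, *sizes[0], device=device)
    for gen, size in zip(generators, sizes):
        out = F.interpolate(out, size=size, mode="bilinear", align_corners=False)
        noise = torch.randn(1, 3, *size, device=device)
        out = gen(out, noise)
    return out
```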

Methods: To address SinGAN's shortcomings in the realism and diversity of generated images, an improved algorithm is proposed that combines the Gaussian error linear unit (GELU) with the efficient channel attention mechanism (ECANet).
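
As a rough sketch of the stated modification, and not the authors' released code, a generator block could replace its activation with GELU and append an ECANet-style channel attention step, which re-weights channels using a 1-D convolution over globally pooled features. The kernel size, normalization, and block layout below are assumptions.

```python
# Illustrative conv block with GELU activation and ECANet-style channel attention (assumed layout).
import torch
import torch.nn as nn

class ECA(nn.Module):
    """ECANet-style attention: 1-D conv over globally average-pooled channel descriptors."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)                   # (B, 1, C)
        y = self.conv(y)                                    # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * y                                        # re-weight channels

class ConvGELUECA(nn.Module):
    """Conv block using GELU followed by ECA attention (illustrative sizes)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()
        self.eca = ECA()

    def forward(self, x):
        return self.eca(self.act(self.norm(self.conv(x))))
```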

Results: Experiments show that the generated images are subjectively better than those of SinGAN and its lightweight variant ConSinGAN, and that the model achieves an effective balance between the quality and diversity of image generation.

Conclusion: The algorithm is further verified with three objective evaluation metrics; the results show that the proposed method improves generation quality over SinGAN, reducing the SIFID metric by 46.67%.
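
For reference, SIFID (Single Image Fréchet Inception Distance) measures the Fréchet distance between the statistics of deep Inception features taken from one real and one generated image; lower values mean the generated image's internal patch statistics are closer to the real image's. A minimal sketch of the distance itself is shown below, with the feature extraction step (activations from an early InceptionV3 layer) assumed and omitted.

```python
# Sketch of the Fréchet distance used by SIFID; feature extraction is assumed, not shown.
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between two feature sets of shape (N, D).
    For SIFID, each row would be one spatial activation vector from an early
    InceptionV3 layer, computed on a single real or generated image."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean, _ = linalg.sqrtm(cov1 @ cov2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))
```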

Keywords: Deep learning, single-sample image generation, ruined airport runways, Gaussian error linear units, attention mechanisms, evaluation metrics.
