
Recent Advances in Computer Science and Communications

ISSN (Print): 2666-2558
ISSN (Online): 2666-2566

Letter Article

Improved Two Stage Generative Adversarial Networks for Adversarial Example Generation with Real Exposure

Author(s): Priyanka Goyal* and Deepesh Singh

Volume 16, Issue 7, 2023

Published on: 13 September, 2023

Article ID: e080623217784 Pages: 10

DOI: 10.2174/2666255816666230608104148

Abstract

Introduction: Deep neural networks, owing to their largely linear nature, are sensitive to adversarial examples: they can be misled by a small, carefully crafted disturbance to the input data. Existing methods for mounting such attacks include pixel-level perturbation and spatial transformation of images.

Method: These methods generate adversarial examples that, when fed to the network, produce wrong predictions. Their drawback is that they are slow and computationally expensive. This research work performed a black-box attack on the target model classifier by using generative adversarial networks (GAN) to generate adversarial examples that can fool a classifier model into assigning wrong classes to the images. The proposed method used a biased dataset, containing no data of the target label, to train the first-stage generator Gnorm. After this first training finished, the second-stage generator Gadv was trained: a new generator model whose input is not random noise but the output of the first generator Gnorm.
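The two-stage arrangement described above can be sketched as follows. This is a minimal, hypothetical illustration only: the paper's Gnorm and Gadv are deep networks trained adversarially, whereas here they are reduced to fixed linear maps so that the data flow (noise into Gnorm, Gnorm's output into Gadv) is visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two generators. Dimensions (8-dim noise,
# 16-dim "image" vectors) are illustrative, not from the paper.
W_norm = rng.normal(size=(8, 16))   # first-stage weights: noise -> sample
W_adv = rng.normal(size=(16, 16))   # second-stage weights: sample -> perturbation

def g_norm(z):
    # First-stage generator Gnorm: random noise -> benign-looking sample
    return np.tanh(z @ W_norm)

def g_adv(x_norm):
    # Second-stage generator Gadv: takes Gnorm's OUTPUT (not noise)
    # and produces an adversarial perturbation
    return np.tanh(x_norm @ W_adv)

z = rng.normal(size=(4, 8))   # batch of 4 noise vectors
x_norm = g_norm(z)            # stage-1 output
delta = g_adv(x_norm)         # stage-2 perturbation, bounded in (-1, 1) by tanh
```

The key design point is that Gadv is conditioned on realistic stage-1 samples rather than raw noise, so its perturbations are tied to plausible inputs.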

Result: The generated examples were superimposed on the Gnorm output, scaled by a small constant, and the superimposed data were fed to the target model classifier to calculate the loss. Additional losses were included to constrain the generator from generating target examples.
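The superimposition-and-loss step might look like the sketch below. Everything here is an assumption for illustration: the constant c, the dimensions, and the linear-softmax stand-in for the target classifier are not taken from the paper; only the pattern (clip the superimposed input to a valid range, then score it with the target model's loss) reflects the described procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: x_norm stands for Gnorm's output, delta for Gadv's output.
x_norm = rng.uniform(0.0, 1.0, size=(4, 16))
delta = rng.uniform(-1.0, 1.0, size=(4, 16))

c = 0.05  # small superimposition constant (value chosen for illustration)
x_adv = np.clip(x_norm + c * delta, 0.0, 1.0)  # keep pixels in a valid range

# Hypothetical target classifier: a fixed linear map followed by softmax.
W = rng.normal(size=(16, 10))
logits = x_adv @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Cross-entropy w.r.t. the true labels; an attack would try to INCREASE
# this loss (or minimise its negative) while keeping the perturbation small.
y_true = np.array([0, 1, 2, 3])
loss = -np.mean(np.log(probs[np.arange(4), y_true] + 1e-12))
```

Because delta is bounded and scaled by c, each adversarial input stays within c of the clean sample elementwise, which is what makes the attack hard to spot.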

Conclusion: The proposed model showed a better fidelity score, as evaluated using the Fréchet inception distance (FID), which reached 42.43 in the first stage and 105.65 in the second stage, with an attack success rate of up to 99.13%.
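For reference, FID measures the distance between two Gaussians fitted to real and generated feature statistics: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)). The sketch below implements only the simplified diagonal-covariance case (so the matrix square root reduces to elementwise square roots); full FID uses Inception-network features and full covariance matrices, neither of which is shown here.

```python
import numpy as np

def fid_gaussian_diag(mu1, var1, mu2, var2):
    """FID between two Gaussians with DIAGONAL covariances:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)).
    For diagonal C, the trace term reduces to elementwise operations."""
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return mean_term + cov_term

# Identical distributions give FID = 0; distance grows as the means diverge.
mu, var = np.array([0.0, 1.0]), np.array([1.0, 2.0])
same = fid_gaussian_diag(mu, var, mu, var)
shifted = fid_gaussian_diag(mu, var, mu + 1.0, var)
```

Lower FID indicates generated samples whose feature statistics are closer to the real data, which is why it is used as the fidelity measure above.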
