Exploring Magnitude Perturbation in Adversarial Attack & Defense

Authors

  • Bhasha Anjaria, Faculty of Engineering and Technology, Parul Institute of Technology, Parul University, Vadodara, Gujarat, India
  • Jaimeel Shah, Faculty of Engineering and Technology, Parul Institute of Engineering and Technology, Parul University, Vadodara, Gujarat, India

Keywords

Magnitude perturbation, Adversarial attacks, Defense mechanisms, Robustness, Security, Deep learning models, Success rate, Perturbation strengths, Attack effectiveness, Defense strategy evaluation, Limitations, Strengths, Security enhancement

Abstract

Adversarial attacks pose a significant threat to the robustness and security of machine learning models. In recent years, researchers have focused on developing defense mechanisms to mitigate their impact. One avenue of investigation examines how the magnitude of a perturbation affects the success rate and effectiveness of these attacks, and how well various defense strategies counter them. In this study, we present a comprehensive analysis of the impact of perturbation magnitude in adversarial attacks and of the effectiveness of defense mechanisms against them. We investigate the influence of perturbation magnitude on the success rate and transferability of attacks across different models and datasets. Furthermore, we evaluate the performance of state-of-the-art defense mechanisms under varying perturbation strengths. The experimental results reveal intriguing insights into the behavior of adversarial attacks and the efficacy of defenses. We observe that increasing the magnitude of perturbations can significantly amplify the success rate of attacks, rendering models more vulnerable. Additionally, we demonstrate that certain defense mechanisms exhibit varying levels of resilience against different perturbation magnitudes, shedding light on their limitations and strengths. These findings contribute to a deeper understanding of the role of perturbation magnitude in adversarial attacks and the effectiveness of defense mechanisms. This knowledge can aid in the development of robust defense strategies and provide valuable insights for enhancing the security of machine learning systems in the face of adversarial threats.
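
The article presents no code, but the experiment the abstract describes (sweeping the perturbation magnitude and recording the attack success rate) maps naturally onto a standard gradient-based attack such as FGSM, where a single parameter eps controls the magnitude. The sketch below is illustrative only and is not the authors' implementation; the PyTorch classifier, the data loader, and the eps grid are all assumed placeholders.

# Illustrative sketch only: an FGSM magnitude sweep, not the paper's code.
# `model` is assumed to be any PyTorch classifier over inputs in [0, 1];
# `loader` yields (images, labels) batches; the eps grid is hypothetical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Fast Gradient Sign Method: take a step of size eps along the sign
    of the input gradient, so eps directly sets the perturbation magnitude."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + eps * grad.sign()          # larger eps, larger perturbation
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range

def magnitude_sweep(model, loader, eps_grid=(0.01, 0.03, 0.1, 0.3)):
    """Report attack success rate (misclassification rate on adversarial
    inputs) as the perturbation magnitude grows."""
    model.eval()
    for eps in eps_grid:
        fooled, total = 0, 0
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, eps)
            with torch.no_grad():
                fooled += (model(x_adv).argmax(dim=1) != y).sum().item()
            total += y.numel()
        print(f"eps={eps:.2f}  attack success rate={fooled / total:.3f}")

The same sweep can be rerun with a defended classifier substituted for model (for example, an adversarially trained network or one preceded by a denoising front end), yielding the per-magnitude comparison of defense resilience that the abstract reports.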

Published

29.01.2024

How to Cite

Anjaria, B., & Shah, J. (2024). Exploring Magnitude Perturbation in Adversarial Attack & Defense. International Journal of Intelligent Systems and Applications in Engineering, 12(13s), 220 –. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4589

Issue

Vol. 12 No. 13s (2024)

Section

Research Article