Optimizing Adversarial Attacks on Graph Neural Networks via Honey Badger Energy Valley Optimization
Keywords:
Energy valley optimization, Honey Badger Optimization Algorithm, Graph Neural Network, Energy Honey Badger Optimization

Abstract
In recent years, Graph Neural Networks (GNNs) have gained considerable attention owing to the practical importance of graph-structured data in graph representation learning. They are most commonly utilized in fraud detection, privacy-inference attacks, knowledge graph completion, item recommendation, and so on. However, GNNs are highly vulnerable to adversarial attacks, which affect the reliability of the system, reduce prediction accuracy on test data, and increase the loss function on training data. The existing approaches for reducing the impact of adversarial attacks on GNNs focus only on highly linked training processes. Thus, a GNN_Attacker model is designed in this research for the generation of adversarial attacks on GNNs. A binary image is used for graph construction, and adversarial attacks are generated on the constructed graph using a GNN. Here, Energy Honey Badger Optimization (EHBO) is introduced for the generation of training samples, and the GNN is again utilized for testing the generated adversarial attacks. Moreover, the adversarial attack generation performance of GNN_Attacker is validated. The results demonstrate that GNN_Attacker attains superior performance, with maximum visual similarity, classification accuracy, and attack success rate of 90.77%, 94.68%, and 96.54%, respectively.
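The abstract mentions constructing a graph from a binary image before the GNN operates on it. The paper does not specify the construction rule, but a common and minimal choice is to treat each foreground pixel as a node and connect 4-neighbouring foreground pixels. The sketch below (names and the 4-neighbourhood rule are illustrative assumptions, not the authors' exact method) shows one way this step could look:

```python
import numpy as np

def binary_image_to_graph(img):
    """Build an adjacency matrix over the foreground pixels of a binary image.

    Nodes are pixels with value 1; an edge connects two foreground pixels
    that are 4-neighbours (sharing an edge). Returns the node coordinate
    list and the symmetric adjacency matrix.
    """
    coords = [tuple(c) for c in np.argwhere(img == 1)]
    index = {c: i for i, c in enumerate(coords)}
    n = len(coords)
    adj = np.zeros((n, n), dtype=int)
    for (r, c) in coords:
        # Checking only right and down neighbours avoids double-counting;
        # symmetry is restored when both entries are set.
        for dr, dc in ((0, 1), (1, 0)):
            nb = (r + dr, c + dc)
            if nb in index:
                i, j = index[(r, c)], index[nb]
                adj[i, j] = adj[j, i] = 1
    return coords, adj

# Small example: a binary image with five foreground pixels.
img = np.array([[1, 1, 0],
                [0, 1, 0],
                [0, 1, 1]])
nodes, adj = binary_image_to_graph(img)
```

For the example image this yields 5 nodes and 4 undirected edges; the adjacency matrix could then be fed to a GNN as the graph structure on which perturbations are applied.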
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
All papers should be submitted electronically. All submitted manuscripts must be original work that is not under submission at another journal or under consideration for publication in another form, such as a monograph or chapter of a book. Authors of submitted papers are obligated not to submit their paper for publication elsewhere until an editorial decision is rendered on their submission. Further, authors of accepted papers are prohibited from publishing the results in other publications that appear before the paper is published in the Journal unless they receive approval for doing so from the Editor-In-Chief.
IJISAE open access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets readers share and adapt the material, provided they give appropriate credit, provide a link to the license, and indicate if changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.