An Improved Explainable Artificial Intelligence for Intrusion Detection System

Authors

  • Saahira Banu Ahamed Maricar, Department of Computer Science, College of Computer Science and Information Technology, Mahalya Campus for Girls, Jazan University, Jazan 45142, Saudi Arabia
  • Anne Anoop, Lecturer, Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
  • Betty Elezebeth Samuel, Lecturer, Department of Computer Science and Information Technology, Jazan University, Kingdom of Saudi Arabia
  • Anjali Appukuttan, Lecturer, Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
  • Khalid Hasan Alsinjlawi, Lecturer, Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia

Keywords:

IDS, Ensemble techniques, CICIDS-2017 dataset, XAI, LIME, ML

Abstract

Cybersecurity professionals rely heavily on Intrusion Detection Systems (IDS) to identify and stop potential threats, and networks are better protected when an IDS is in place. A variety of Machine Learning (ML) approaches have been applied to building effective IDSs, and ensemble methods in particular have a strong record of successful learning. This research proposes an IDS built on ensemble ML techniques. The CICIDS-2017 dataset is preprocessed to enhance classification accuracy and suppress false positives. The proposed IDS is built from ML classifiers including Logistic Regression, XGBoost (XGB), and the Light Gradient Boosting Machine (LGBM). After these models were trained, an ensemble classifier was applied and its accuracy measured. The proposed model also incorporates the Explainable Artificial Intelligence (XAI) algorithm Local Interpretable Model-agnostic Explanations (LIME), which makes the decisions behind reliable intrusion detection easier to understand and explain. LIME is fast, responsive, and produces explanations that are straightforward to interpret.
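
To make the pipeline described above concrete, the sketch below shows one plausible way to combine the three named classifiers in a soft-voting ensemble and explain a single prediction with LIME. It is a minimal illustration, not the authors' implementation: synthetic data from make_classification stands in for the preprocessed CICIDS-2017 features, and the hyperparameters, feature names, and class names are illustrative assumptions.

```python
# Minimal sketch, assuming scikit-learn, xgboost, lightgbm, and lime are installed.
# Synthetic data is a stand-in for the preprocessed CICIDS-2017 flow features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from lime.lime_tabular import LimeTabularExplainer

# Stand-in for preprocessed CICIDS-2017 features (benign vs. attack).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features, as is common before Logistic Regression.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Soft-voting ensemble of the three base classifiers named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("lgbm", LGBMClassifier(n_estimators=200)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))

# LIME explanation for a single test sample: which features pushed the
# ensemble toward "attack" or "benign".
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["benign", "attack"],
                                 mode="classification")
explanation = explainer.explain_instance(X_test[0], ensemble.predict_proba,
                                         num_features=10)
print(explanation.as_list())
```

With real CICIDS-2017 traffic, the synthetic matrix would be replaced by the cleaned and encoded flow features, and the same explain_instance call would report which features drove each individual alert.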




Published

02.02.2024

How to Cite

Ahamed Maricar, S. B., Anoop, A., Samuel, B. E., Appukuttan, A., & Alsinjlawi, K. H. (2024). An Improved Explainable Artificial Intelligence for Intrusion Detection System. International Journal of Intelligent Systems and Applications in Engineering, 12(14s), 108–115. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/4642

Issue

12(14s)

Section

Research Article
