Sunday, March 12, 2023

XAI for cybersecurity - May - August 2021

Introduction

    Explainable AI (XAI) is a relatively new field that aims to develop artificial intelligence systems whose decisions humans can readily understand. In cybersecurity, XAI is becoming increasingly important as AI is used more and more to detect and prevent cyber attacks. A main challenge with AI in cybersecurity is that it is often difficult for humans to understand how a model reaches its decisions. This lack of transparency is a serious problem: it makes it hard to determine why a system missed a particular threat or falsely flagged legitimate activity as malicious. XAI addresses this by making AI systems transparent and explainable. By giving human analysts a clear explanation of how a system arrived at a particular decision, XAI can improve the accuracy of cyber threat detection and response while reducing the risk of false positives and false negatives. Overall, XAI has the potential to transform cybersecurity by letting analysts understand and interpret the decisions made by AI systems, ultimately improving our ability to defend against cyber threats.

1. Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-based Network Intrusion Detection

    The abstract discusses the benefits of Machine Learning (ML)-based network intrusion detection systems for enhancing cybersecurity in organizations. Although many ML-based systems have been developed and evaluated in the research community, there is a gap between research and practical deployments. The paper aims to address this gap by evaluating the generalizability of two common feature sets, NetFlow and CICFlowMeter, across different network environments and attack scenarios. The study found that the NetFlow feature set improved the accuracy of ML models in detecting various network attacks. Additionally, SHapley Additive exPlanations (SHAP), an explainable AI methodology, was used to interpret the classification decisions of the ML models and analyze the influence of each feature towards the final prediction.
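    To make the SHAP step concrete, the sketch below shows how per-feature attributions are typically computed for a tree-based flow classifier. The feature names and the synthetic data are illustrative stand-ins for NetFlow-style fields, not the paper's actual dataset or models.

```python
# Minimal SHAP sketch for a flow classifier (synthetic data, hypothetical
# NetFlow-style feature names; not the paper's dataset or model).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["IN_BYTES", "OUT_BYTES", "IN_PKTS", "OUT_PKTS", "FLOW_DURATION"]
X = pd.DataFrame(rng.random((500, len(features))), columns=features)
y = rng.integers(0, 2, 500)  # 0 = benign, 1 = attack (synthetic labels)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on the shap version, the result is a list with one array per
# class or a single (samples, features, classes) array.
attack_sv = shap_values[1] if isinstance(shap_values, list) else shap_values[:, :, 1]

# Beeswarm-style summary of each feature's overall influence on the attack class.
shap.summary_plot(attack_sv, X)
```

    The same per-instance values can also be plotted for a single flow to justify why that particular flow was flagged.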



2. Zero-shot learning approach to adaptive Cybersecurity using Explainable AI

    The abstract describes a novel approach to address the alarm flooding problem faced by cybersecurity systems such as Security Information and Event Management (SIEM) and Intrusion Detection Systems (IDS). The proposed approach utilizes zero-shot learning and explainable AI to identify and categorize new cyber-attacks without any prior knowledge of them. The method leverages explanations generated by machine learning models to identify the features that contribute to the classification of a cyber-attack and allocate credit to specific features based on their influence. The system auto-generates labels for attacks based on the features that contribute to the attack, which can be presented to SIEM analysts. The approach was applied to a network flow dataset and demonstrated promising results for specific attack types such as IP sweep, denial of service, and remote to local.
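    The label-generation idea can be illustrated with a short, hedged sketch: keep the strongest positive contributors from a per-instance attribution (e.g. SHAP values) and compose a candidate label from them. The attribution values and the feature-to-behaviour mapping below are hypothetical, not taken from the paper.

```python
# Illustrative auto-labelling from feature attributions; the values and
# the mapping are made up for the example.
import numpy as np

feature_names = ["num_dst_ips", "syn_rate", "failed_logins", "bytes_out"]
attributions = np.array([0.42, 0.31, 0.02, -0.05])  # e.g. SHAP values for one flagged flow

descriptions = {  # hypothetical mapping from feature to observed behaviour
    "num_dst_ips": "many distinct destinations (sweep-like)",
    "syn_rate": "high SYN rate (flood-like)",
    "failed_logins": "repeated failed logins (remote-to-local-like)",
    "bytes_out": "large outbound volume (exfiltration-like)",
}

def auto_label(attr, names, top_k=2):
    """Build a candidate label from the top-k positive contributors."""
    order = np.argsort(attr)[::-1][:top_k]
    parts = [descriptions[names[i]] for i in order if attr[i] > 0]
    return ("suspected attack: " + "; ".join(parts)) if parts else "unlabelled"

print(auto_label(attributions, feature_names))
# suspected attack: many distinct destinations (sweep-like); high SYN rate (flood-like)
```

    A label composed this way can be shown to a SIEM analyst alongside the raw alert, even when the attack type was never seen during training.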




3. On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report)

    The paper highlights the issue of data-driven artificial intelligence systems not being able to provide information about the rationale behind their decisions in critical domains such as cybersecurity. To address this problem, the paper proposes three contributions: (i) the proposal and discussion of desiderata for the explanation of outputs generated by AI-based cybersecurity systems, (ii) a comparative analysis of approaches in the literature on Explainable Artificial Intelligence (XAI) under the lens of the desiderata and further dimensions used for examining XAI approaches, and (iii) a general architecture that can guide research efforts towards the development of explainable AI-based cybersecurity systems. The proposed roadmap combines several research lines in a novel way to tackle the unique challenges that arise in this context.




4. Considerations for Deploying xAI Tools in the Wild

    The paper discusses the deployment of explainable AI (xAI) techniques in cybersecurity operations, which are crucial for maintaining cyber defenses. While xAI tools have the potential to increase trust, in the deployment studied they were not heavily utilized and did not improve analyst decision accuracy. The paper highlights critical lessons learned from deploying xAI tools, including the importance of considering end users, their workflows, their environments, and their propensity to trust xAI outputs in their respective roles. It emphasizes that explanations must be relevant and understandable to end users in order to assist in achieving user goals, reducing bias, and improving trust.

5. The Role of Cybersecurity and HPC in the Explainability of Autonomous Robots Behavior

    Autonomous robots are increasingly widespread in our society. These robots need to be safe, reliable, respectful of privacy, not manipulable by external agents, and capable of explaining their behavior in order to be accountable and acceptable in our societies. Companies offering robotic services will need mechanisms to address these issues, for instance using High Performance Computing (HPC) facilities where logs and off-line forensic analysis could be performed if required, but such solutions are still not available in software development frameworks for robots. The aim of the paper is to discuss the implications and interactions among cybersecurity, safety, and explainability with the goal of making autonomous robots more trustworthy.

6. STARdom: An Architecture for Trusted and Secure Human-Centered Manufacturing Systems

    There is no single architecture specification that addresses the needs of trusted and secure Artificial Intelligence systems with humans in the loop, such as the human-centered manufacturing systems at the core of the evolution towards Industry 5.0. To fill this gap, the authors propose an architecture that integrates forecasts and Explainable Artificial Intelligence, supports the collection of users’ feedback, and uses Active Learning and Simulated Reality to enhance forecasts and provide decision-making recommendations. Security is addressed at all levels of the architecture, which is aligned with the Big Data Value Association Reference Architecture Model, tailored to the domain of demand forecasting, and validated on a real-world case study.
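    As an illustration of the Active Learning ingredient, the sketch below queries human feedback on the forecasts an ensemble model is least certain about, using the spread of per-tree predictions as a cheap uncertainty proxy. Both the data and the uncertainty measure are assumptions made for the example, not the paper's implementation.

```python
# Uncertainty-based query selection for active learning (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_train = rng.random((200, 4))
y_train = X_train @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.normal(0, 0.1, 200)
X_pool = rng.random((50, 4))  # unlabelled demand scenarios

model = RandomForestRegressor(n_estimators=50, random_state=1).fit(X_train, y_train)

# Spread of the per-tree predictions approximates forecast uncertainty.
per_tree = np.stack([tree.predict(X_pool) for tree in model.estimators_])
uncertainty = per_tree.std(axis=0)

# Ask the human expert about the k most uncertain forecasts; their answers
# are then added to the training set, closing the feedback loop.
k = 5
query_idx = np.argsort(uncertainty)[-k:]
print("Request feedback on pool items:", sorted(query_idx.tolist()))
```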

7. Using Mathematically-Grounded Metaphors to Teach AI-Related Cybersecurity

    This position paper describes a research project to improve middle school students’ use of security “best practices” in their day-to-day online activities, while enhancing their fundamental understanding of the underlying security principles and math concepts that drive AI and cybersecurity technologies. The project involves the design and implementation of a time- and teacher-friendly learning module that can be readily integrated into existing middle school math curricula. The authors plan to deploy this module at a high-needs, rural-identifying middle school in South Carolina that serves underrepresented students.

8. Detecting anomalies and attacks in network traffic monitoring with classification methods and XAI-based explainability

    Ensuring network traffic safety is an important issue in a variety of today’s industries, so the development of anomaly and attack detection methods has been the goal of many analyses. The paper presents a binary classification-based approach to network traffic safety monitoring: well-known methods were applied to artificially modified network traffic data and their detection capabilities were tested. A more detailed interpretation of the nature of the detected anomalies is carried out with the help of the XAI approach. For the purpose of the experiments, a new benchmark network traffic data set was prepared, which is now publicly available.
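    A rough sketch of that workflow, under illustrative assumptions: fit a well-known binary classifier on labelled traffic, evaluate its detection performance, then inspect which features drive its decisions. Permutation importance stands in below for the paper's XAI step, and the data is synthetic.

```python
# Binary classification for anomaly detection plus a simple inspection
# step (synthetic data; permutation importance is an illustrative
# stand-in for the paper's XAI method).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.random((1000, 6))                        # flow statistics
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # 1 = anomalous traffic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
clf = GradientBoostingClassifier(random_state=2).fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))

# Which features matter most for the detector's decisions?
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=2)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature_{i}: {imp.importances_mean[i]:.3f}")
```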





9. Explainable artificial intelligence (XAI) interactively working with humans as a junior cyber analyst

    The paper discusses the importance of explainable AI and its potential benefits in various applications. To develop effective methods for explainable AI, it is necessary to understand the requirements of the human user and the information available from AI. The paper presents an example case where an operational planner for a cyber protection team could use a junior analyst virtual agent to analyze data on vulnerabilities and incidents. The interactions required to understand the outputs and integrate additional knowledge held by the human are also discussed. The paper highlights the importance of integrating XAI into real-world bidirectional workflows and provides an exemplar case for achieving this integration.




Bibliography

[1] Sarhan, Mohanad, et al. ‘Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-Based Network Intrusion Detection’. arXiv, 28 Aug. 2021. arXiv.org, https://doi.org/10.48550/arXiv.2104.07183.

[2] Rao, Dattaraj, and Shraddha Mane. ‘Zero-Shot Learning Approach to Adaptive Cybersecurity Using Explainable AI’. arXiv, 21 June 2021. arXiv.org, https://doi.org/10.48550/arXiv.2106.14647.

[3] Paredes, Jose N., et al. ‘On the Importance of Domain-Specific Explanations in AI-Based Cybersecurity Systems (Technical Report)’. arXiv, 2 Aug. 2021. arXiv.org, https://doi.org/10.48550/arXiv.2108.02006.

[4] Nyre-Yu, Megan, et al. ‘Considerations for Deploying XAI Tools in the Wild: Lessons Learned from XAI Deployment in a Cybersecurity Operations Setting’. SAND2021-6069C, Sandia National Lab. (SNL-NM), Albuquerque, NM (United States), 1 May 2021. www.osti.gov, https://doi.org/10.2172/1869535.

[5] Matellán, Vicente, et al. ‘The Role of Cybersecurity and HPC in the Explainability of Autonomous Robots Behavior’. 2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO), 2021, pp. 1–5. IEEE Xplore, https://doi.org/10.1109/ARSO51874.2021.9542829.

[6] Rožanec, Jože M., et al. ‘STARdom: An Architecture for Trusted and Secure Human-Centered Manufacturing Systems’. Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems, edited by Alexandre Dolgui et al., Springer International Publishing, 2021, pp. 199–207. Springer Link, https://doi.org/10.1007/978-3-030-85910-7_21.

[7] Knijnenburg, Bart P., et al. ‘Using Mathematically-Grounded Metaphors to Teach AI-Related Cybersecurity’. IJCAI-21 Workshop on Adverse Impacts and Collateral Effects of Artificial Intelligence Technologies (AIofAI), Montréal, Canada, Aug. 2021. par.nsf.gov, https://par.nsf.gov/biblio/10273277-using-mathematically-grounded-metaphors-teach-ai-related-cybersecurity.

[8] Wawrowski, Łukasz, et al. ‘Detecting Anomalies and Attacks in Network Traffic Monitoring with Classification Methods and XAI-Based Explainability’. Procedia Computer Science, vol. 192, Jan. 2021, pp. 2259–68. ScienceDirect, https://doi.org/10.1016/j.procs.2021.08.239.

[9] Holder, Eric, and Ning Wang. ‘Explainable Artificial Intelligence (XAI) Interactively Working with Humans as a Junior Cyber Analyst’. Human-Intelligent Systems Integration, vol. 3, no. 2, June 2021, pp. 139–53. Springer Link, https://doi.org/10.1007/s42454-020-00021-z.


