All Positions

Research
Computer sciences and mathematics

Towards Explainable Intrusion Detection with the Human in the Loop

DC-59
IMT Atlantique and University of Adelaide
Brest (FR) and Adelaide (AU)

Proposed projects

Option 1

Enhanced Anomaly/Intrusion Detection with the Human in the Loop

The systems used to detect anomalies or intrusions have evolved considerably in recent years, following advances in artificial intelligence (AI) techniques and in the sensors integrated into modern devices and widely deployed (including in Security Operation Centres). Although many cyber security tasks are automated, decision-making still needs a human who can critically assess AI agents' output, spot their mistakes and find the most effective ways for humans and AI to work together as a team. In particular, risks associated with humans (such as fatigue and difficulty in grasping, understanding or explaining the indicators provided by the detection system), combined with the lack of explainability of the detection system's results, can have detrimental consequences. Explainable AI (XAI), i.e., making AI models interpretable to human users, is critical in the cyber security domain: XAI may allow security operators, who are overwhelmed with tens of thousands of security alerts per day (most of which are false positives), to assess potential threats better and reduce alert fatigue (Charmet et al. 2022). Therefore, an unbiased and empirical understanding of collaboration in human-AI cyber defence teams is critical.

This research project aims to:
– Study and propose enhancements to an AI-based intrusion detection system that provides indicators of compromise to explain the results provided by the AI model(s);
– Study and propose different types of explanations (e.g., semantic or causal, including counterfactual examples) of the results (classification), rather than just features as in current solutions;
– Measure the adequacy of these indicators and explanations, taking into account the influence of human/analyst and environmental factors;
– Identify the biases, barriers and vulnerabilities in information sharing, risk assessment and effective decision-making introduced by human-AI teamwork.
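To make the second aim concrete, the sketch below shows one common form of counterfactual explanation: the smallest feature change that flips a detector's verdict. The rule-based detector, feature names and thresholds are illustrative assumptions for this sketch, not the project's actual model.

```python
# Illustrative sketch: counterfactual explanation for a toy intrusion detector.
# The detector, features and thresholds are invented for demonstration.

def detector(event):
    """Toy rule-based detector: flags an event as malicious when either
    indicator exceeds its threshold. Stands in for a trained AI model."""
    return event["failed_logins"] > 5 or event["bytes_out_mb"] > 100

def counterfactual(event, thresholds):
    """Greedy search for a single-feature change that flips the detector's
    verdict, expressed as a human-readable explanation."""
    if not detector(event):
        return "event already classified as benign"
    for feature, limit in thresholds.items():
        candidate = dict(event)
        candidate[feature] = limit  # lower this feature to its threshold
        if not detector(candidate):
            return (f"would be benign if {feature} were "
                    f"{limit} instead of {event[feature]}")
    return "no single-feature counterfactual found"

alert = {"failed_logins": 12, "bytes_out_mb": 40}
print(counterfactual(alert, {"failed_logins": 5, "bytes_out_mb": 100}))
# → would be benign if failed_logins were 5 instead of 12
```

Unlike a bare feature-importance score, such an explanation tells the analyst what would have to differ for the alert to disappear, which is the kind of actionable output the aim above targets.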

The research will be performed with the support of the Defence Science and Technology Group (DSTG) in South Australia.

Option 2

Adversarial data poisoning attacks

The systems used to detect anomalies or intrusions have evolved considerably in recent years, following advances in artificial intelligence (AI) techniques and in the sensors integrated into modern devices and widely deployed (including in Security Operation Centres). Despite significant advances, the detection process is error-prone due to the dynamic and evolving nature of the threat landscape. Cybercriminals and threat actors are racing to use AI to find innovative new attacks (e.g., encompassing techniques for image, audio and video generation), and adversarial-AI- and generative-AI-enabled attacks are already taking place. The attack techniques include poisoning a benign training dataset, evasion (i.e., using a malicious input to get an unexpected output), oracling (i.e., stealing information by probing the model) and adversarial reprogramming. Thus, effective log-driven analysis with a high level of explainability is critical, as many of these attacks can have an impact on society and human life in our digitalised world.
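The training-set poisoning technique mentioned above can be illustrated with a deliberately tiny example. The nearest-centroid detector, the two-feature data and the attacker's relabelling below are assumptions made purely for this sketch.

```python
# Illustrative sketch: label-flipping data poisoning against a toy
# nearest-centroid detector. Data and attack are invented for demonstration.

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*points)]

def train(samples):
    """Fit one centroid per class from (features, label) pairs (0=benign, 1=malicious)."""
    benign = [f for f, y in samples if y == 0]
    malicious = [f for f, y in samples if y == 1]
    return centroid(benign), centroid(malicious)

def classify(model, x):
    """Assign x to the class of the nearest centroid (squared Euclidean distance)."""
    c0, c1 = model
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

clean = [([1.0, 1.0], 0), ([1.2, 0.8], 0), ([9.0, 9.0], 1), ([8.8, 9.2], 1)]
print(classify(train(clean), [9.0, 8.9]))    # → 1 (malicious probe detected)

# Poisoning: the attacker relabels the malicious training points as benign
# and plants decoys far away, shifting both centroids so the probe slips through.
poisoned = [([1.0, 1.0], 0), ([1.2, 0.8], 0), ([9.0, 9.0], 0), ([8.8, 9.2], 0),
            ([20.0, 20.0], 1), ([19.5, 20.5], 1)]
print(classify(train(poisoned), [9.0, 8.9]))  # → 0 (same probe evades detection)
```

Even this minimal setting shows why poisoned training data is dangerous: the model's decision boundary is moved by mislabelled samples, not by any change to the attack traffic itself.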

The research project aims to:
– identify and develop efficient AI-based algorithms that parse heterogeneous log formats accurately to identify adversarial attacks, including evasion attacks designed to evade detection by manipulating log data, and that extract relevant features from log data to improve detection accuracy and reduce false positives;
– design real-time log analysis frameworks capable of promptly detecting intrusions as they occur and of supporting detection across diverse domains, such as cloud, IoT networks and industrial control systems;
– investigate and propose scalable architectures and optimization methods to handle large volumes of log data without sacrificing detection accuracy or performance;
– develop standardized benchmarks and evaluation metrics for assessing the performance of log-driven intrusion detection systems, facilitating fair comparisons between different approaches and enabling reproducible research.
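As a small sketch of the first aim, heterogeneous log parsing can be approached by trying one pattern per known format and falling back to an "unknown" bucket. The two formats, field names and sample lines below are illustrative assumptions, not a proposed parser design.

```python
# Illustrative sketch: per-format regex parsing of heterogeneous log lines.
# Formats, fields and sample lines are invented for demonstration.
import re

PATTERNS = {
    "sshd": re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)"),
    "apache": re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)'),
}

def parse(line):
    """Try each known format in turn; return (format, features) or ('unknown', {})."""
    for fmt, pattern in PATTERNS.items():
        m = pattern.search(line)
        if m:
            return fmt, m.groupdict()
    return "unknown", {}

fmt, feats = parse("Jan 10 22:14:15 host sshd[4711]: Failed password for root from 10.0.0.7")
print(fmt, feats["ip"])   # sshd 10.0.0.7
```

In practice the project would need far more robust techniques (e.g., learned log templates) than hand-written regexes, but the sketch shows the feature-extraction step that the detection models would consume.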

The research will be performed with the support of the Defence Science and Technology Group (DSTG) in South Australia.

Option 3

Combining Metagraphs and AI-Enhanced Anomaly Detection

The systems used to detect anomalies or intrusions have evolved considerably in recent years, following advances in artificial intelligence (AI) techniques and in the sensors integrated into modern devices and widely deployed (including in Security Operation Centres). Despite significant advances, the detection process is error-prone due to the dynamic and evolving nature of the threat landscape. The vast amount of log data presents a significant challenge, requiring efficient processing and analysis to detect and respond to threats effectively. Visualization is also critical, offering analysts intuitive insights into complex cybersecurity data for rapid threat detection and decision-making. This research aims to integrate metagraphs and AI-enhanced anomaly detection in the intrusion detection context and to develop a versatile framework applicable across diverse cyber domains.

The research project aims to:
– develop an innovative framework that integrates metagraphs with AI techniques for enhanced anomaly detection;
– utilize metagraphs to model complex systems, capturing hierarchical relationships and interdependencies to provide a holistic view of system dynamics, facilitating visual anomaly detection;
– employ AI algorithms, such as deep learning and reinforcement learning, to enhance anomaly detection capabilities by training AI models on metagraph-structured data to learn complex patterns and detect anomalies indicative of cyber security threats;
– evaluate the performance of the integrated system using real-world datasets, assessing its effectiveness in detecting and mitigating cyber security threats.
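A metagraph generalises a graph by letting each edge connect a set of vertices to another set. The sketch below shows how such a structure could serve as a model of expected interactions, with uncovered interactions flagged as anomalies; the class, host names and edges are illustrative assumptions only.

```python
# Illustrative sketch: a minimal metagraph (edges connect *sets* of vertices)
# used as a model of expected interactions. Hosts and edges are invented.

class Metagraph:
    def __init__(self):
        self.edges = []  # list of (frozenset(sources), frozenset(targets))

    def add_edge(self, sources, targets):
        self.edges.append((frozenset(sources), frozenset(targets)))

    def allows(self, src, dst):
        """An interaction src -> dst is expected if some edge's source set
        contains src and its target set contains dst."""
        return any(src in s and dst in t for s, t in self.edges)

mg = Metagraph()
mg.add_edge({"web01", "web02"}, {"db01"})             # web tier may query the DB
mg.add_edge({"admin"}, {"web01", "web02", "db01"})    # admin may reach everything

print(mg.allows("web01", "db01"))   # expected traffic → True
print(mg.allows("db01", "web01"))   # reverse flow not modelled → False (anomaly)
```

Because one edge captures a whole group-to-group relationship, the metagraph stays compact as systems grow, which is what makes it attractive both for the holistic modelling and for the visual anomaly detection mentioned in the aims.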

The research will be performed with the support of the Defence Science and Technology Group (DSTG) in South Australia.

Supervisors

Françoise Sailhan
Kaie Maennel
Olaf Maennel

Research Areas

Computer Science, cybersecurity, human aspects, artificial intelligence, machine learning, mathematics