Catalogue Search | MBRL
34 result(s) for "explanation AI (XAI)"
Classification and Explanation for Intrusion Detection System Based on Ensemble Trees and SHAP Method
2022
In recent years, many methods for intrusion detection systems (IDS) have been designed and developed in the research community, achieving near-perfect detection rates on IDS datasets. Deep neural networks (DNNs) are representative examples widely applied in IDS. However, DNN models are becoming increasingly complex in their architectures and demand high computing resources. In addition, it is difficult for humans to obtain explanations behind the decisions made by these DNN models on large IoT-based IDS datasets. Many proposed IDS methods have not been applied in practical deployments because they offer no explanation to cybersecurity experts to support them in optimizing their decisions according to the judgments of the IDS models. This paper aims to enhance the attack detection performance of IDS on big IoT-based IDS datasets as well as to provide explanations of machine learning (ML) model predictions. The proposed ML-based IDS method is based on the ensemble trees approach, including decision tree (DT) and random forest (RF) classifiers, which do not require high computing resources for training. Two big datasets, NF-BoT-IoT-v2 and NF-ToN-IoT-v2 (new versions of the original BoT-IoT and ToN-IoT datasets built on the NetFlow meter feature set), are used for the experimental evaluation of the proposed method, together with the IoTDS20 dataset. Furthermore, the SHapley Additive exPlanations (SHAP) eXplainable AI (XAI) methodology is applied to explain and interpret the classification decisions of the DT and RF models; this is not only effective in interpreting the final decision of the ensemble trees approach but also supports cybersecurity experts in quickly optimizing and evaluating the correctness of their judgments based on the explanations of the results.
Journal Article
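As a rough illustration of the approach described in the preceding abstract, the sketch below trains a random forest on synthetic tabular data and applies SHAP's TreeExplainer to its predictions. The data, feature count, and split are placeholders standing in for NetFlow-style IDS features; this is not the authors' NF-BoT-IoT-v2 / NF-ToN-IoT-v2 pipeline.

```python
# Minimal sketch: ensemble-tree IDS classifier explained with SHAP.
# Assumes the `shap` and `scikit-learn` packages; the data is synthetic,
# standing in for NetFlow-style IDS features (packet counts, durations, etc.).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder for a NetFlow-derived feature matrix (benign vs. attack labels).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ensemble-tree classifier: modest compute compared with a DNN-based IDS.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# TreeExplainer gives per-feature contributions for each prediction,
# which is the kind of evidence a security analyst can inspect.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test[:100])
print("SHAP value array shape:", np.shape(shap_values))
```

The per-sample SHAP values are what an analyst would inspect to judge why a given flow was flagged as an attack.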
The grammar of interactive explanatory model analysis
by
Parzych, Dariusz
,
Biecek, Przemyslaw
,
Baniecki, Hubert
in
Machine learning
,
Open source software
,
Prediction models
2024
The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of model behavior. In contrast, we frame explainability as an interactive and sequential analysis of a model. This paper shows how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe human-model interaction. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model may increase the accuracy and confidence of human decision making.
Journal Article
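The idea of juxtaposing complementary explanation views can be sketched with generic tooling; the snippet below pairs a global importance ranking with a local effect profile using scikit-learn utilities. It is not the authors' IEMA framework, just an illustration of why a single perspective is insufficient.

```python
# Sketch of juxtaposing two complementary explanation views of one model:
# a global importance ranking and a per-feature effect profile.
# Generic scikit-learn tooling is used here, not the authors' IEMA framework.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=6, noise=0.1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = GradientBoostingRegressor(random_state=1).fit(X_train, y_train)

# View 1 (global): which features matter overall?
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
ranking = sorted(enumerate(imp.importances_mean), key=lambda t: -t[1])
print("global importance ranking:", ranking[:3])

# View 2 (effect shape): how does the top feature influence predictions?
top_feature = ranking[0][0]
pd_result = partial_dependence(model, X_test, features=[top_feature])
print("partial dependence for feature", top_feature, ":",
      pd_result["average"][0][:5])
```

Reading the two views together, rather than in isolation, is the kind of sequential juxtaposition the abstract argues for.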
What is Interpretability?
by
Brunet, Tyler D. P
,
Fisher, Eyal
,
Erasmus, Adrian
in
Artificial intelligence
,
Confusion
,
Explanation
2021
We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: (1) Are networks explainable, and if so, what does it mean to explain the output of a network? And (2) what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In response to (1), we show how four familiar accounts of explanation apply to neural networks as they would to any scientific phenomenon. We diagnose the confusion about explaining neural networks within the machine learning literature as an equivocation on “explainability,” “understandability” and “interpretability.” To remedy this, we distinguish between these notions, and answer (2) by offering a theory and typology of interpretation in machine learning. Interpretation is something one does to an explanation with the aim of producing another, more understandable, explanation. As with explanation, there are various concepts and methods involved in interpretation: Total or Partial, Global or Local, and Approximative or Isomorphic. Our account of “interpretability” is consistent with uses in the machine learning literature, in keeping with the philosophy of explanation and understanding, and pays special attention to medical artificial intelligence systems.
Journal Article
Explaining Intrusion Detection-Based Convolutional Neural Networks Using Shapley Additive Explanations (SHAP)
by
Ahmad, Ashraf
,
Abu Al-Haija, Qasem
,
Younisse, Remah
in
Accuracy
,
Analysis
,
Artificial intelligence
2022
Artificial intelligence (AI) and machine learning (ML) models have become essential tools used in many critical systems to make significant decisions; the decisions taken by these models need to be trusted and explained on many occasions. On the other hand, the performance of different ML and AI models varies even when the same dataset is used. Sometimes, developers have tried multiple models before deciding which one to use, without understanding the reasons behind this variance in performance. Explainable artificial intelligence (XAI) models explain a model's performance by highlighting the features that the model considered important while making its decision. This work presents an analytical approach to studying the density functions of intrusion detection dataset features. The study explains how and why these features are essential during the XAI process. We aim, in this study, to explain XAI behavior so as to add an extra layer of explainability. The density function analysis presented in this paper adds a deeper understanding of the importance of features in different AI models. Specifically, we present a method to explain the results of SHAP (SHapley Additive exPlanations) for different machine learning models based on kernel density estimation (KDE) plots of the feature data. We also survey the specifications of dataset features that perform better for convolutional neural network (CNN) based models.
Journal Article
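A hedged sketch of the density-function analysis described above: estimate class-conditional KDEs for a single feature and quantify how well they separate, which is the kind of evidence the paper relates back to SHAP importance. The feature values and class labels are synthetic placeholders, not the intrusion-detection dataset used in the paper.

```python
# Sketch: class-conditional kernel density estimates (KDE) for one feature,
# the kind of plot the paper relates to SHAP feature importance.
# Synthetic data stands in for an intrusion-detection feature column.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Placeholder feature values for "benign" and "attack" traffic.
benign = rng.normal(loc=0.0, scale=1.0, size=1000)
attack = rng.normal(loc=2.0, scale=1.5, size=1000)

grid = np.linspace(-5, 8, 200)
kde_benign = gaussian_kde(benign)(grid)
kde_attack = gaussian_kde(attack)(grid)

# A simple separation score: total variation distance between the two KDEs.
# Features with larger separation tend to receive larger SHAP attributions.
dx = grid[1] - grid[0]
separation = 0.5 * np.sum(np.abs(kde_benign - kde_attack)) * dx
print(f"class-conditional KDE separation: {separation:.3f}")
```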
Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach
2022
Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and complete transaction. Such a view is insufficient, especially when data are used in a secondary, noncontextual, and unpredictable manner—which is the inescapable nature of advanced artificial intelligence systems. We submit that an alternative view of informed consent—as an assurance of trust for incomplete transactions—allows for an understanding of why the rationale of informed consent already entails a right to ex post explanation.
Journal Article
Reverse Analysis Method and Process for Improving Malware Detection Based on XAI Model
by
Ma, Ki-Pyoung
,
Lee, Sang-Joon
,
Ryu, Dong-Ju
in
Anti-virus software
,
Artificial intelligence
,
Deep learning
2024
With advancements in artificial intelligence (AI) technology, attackers are increasingly using sophisticated techniques, including ChatGPT. Endpoint Detection & Response (EDR) is a system that detects and responds to suspicious activities or security threats occurring on computers or endpoint devices within an organization. Unlike traditional antivirus software, EDR is more about responding to a threat after it has already occurred than blocking it. This study aims to overcome challenges in security control, such as increased log size, emerging security threats, and the technical demands faced by control staff. Previous studies have focused on AI detection models, emphasizing detection rates and model performance. However, the underlying reasons behind the detection results were often insufficiently understood, leading to varying outcomes depending on the learning model. Additionally, the presence of both structured and unstructured logs, the growth in new security threats, and increasing technical disparities among control staff pose further challenges for effective security control. This study proposes improvements to the existing EDR system to overcome the limitations of security control. Data were analyzed during the preprocessing stage to identify potential threat factors that influence the detection process and its outcomes. To ensure objectivity and versatility in the analysis, five widely recognized datasets were used, and eleven commonly used machine learning (ML) models for malware detection were tested, with the five best-performing models selected for further analysis; explainable AI (XAI) techniques were then employed to assess the impact of preprocessing on the learning outcomes. The results indicate that the eXtreme Gradient Boosting (XGBoost) model outperformed the others. Moreover, the study conducts an in-depth analysis of the preprocessing phase, tracing backward from the detection result to infer potential threats and to classify the primary variables influencing the model's prediction. This analysis applies SHapley Additive exPlanations (SHAP), an XAI technique, which provides insight into the influence of specific features on detection outcomes and suggests potential breaches by identifying common parameters in malware through file backtracking and assigning weights. The study also proposes a counter-detection analysis process to overcome the limitations of existing deep learning outcomes, understand the decision-making process of AI, and enhance reliability. These contributions are expected to significantly enhance EDR systems and address existing limitations in security control.
Journal Article
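To make the XGBoost-plus-SHAP step concrete, here is a minimal sketch on synthetic data; the features and labels are invented placeholders, and none of the study's EDR preprocessing or file-backtracking logic is reproduced.

```python
# Sketch: XGBoost malware classifier with SHAP attributions, the model/XAI
# pairing the study reports as best-performing. Data is synthetic; real EDR
# logs would supply features such as API-call counts or file-entropy scores.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# SHAP values per prediction: per-feature weights of the kind a reverse
# (counter-detection) analysis would start from.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
top = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:3]
print("most influential feature indices:", top.tolist())
```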
Exploring the potential of explainable AI in brain tumor detection and classification: a systematic review
by
Palanisamy, Gopinath
,
Abraham, Lincy Annet
,
Veerapu, Goutham
in
Accuracy
,
Adoption of innovations
,
Artificial Intelligence
2025
The analysis and treatment of brain tumors are among the most difficult medical problems. Brain tumors must be detected accurately and promptly to improve patient outcomes and plan effective treatments. Recently, advanced technologies such as artificial intelligence (AI) and machine learning (ML) have increased interest in applying AI to detect brain tumors. However, concerns have emerged regarding the reliability and transparency of AI models in medical settings, as their decision-making processes are often opaque and difficult to interpret. This research is unique in its focus on explainability in AI-based brain tumor detection, prioritizing confidence, safety, and clinical adoption over mere accuracy. It gives a thorough overview of XAI methodologies, problems, and uses, linking scientific advances to the needs of real-world healthcare. XAI is a sub-field of artificial intelligence that seeks to address this problem by offering understandable and straightforward explanations for the choices made by AI models. Applications such as healthcare, where the interpretability of AI models is essential for guaranteeing patient safety and fostering confidence between medical professionals and AI systems, have seen the introduction of XAI-based procedures. This paper reviews recent advancements in XAI-based brain tumor detection, focusing on methods that provide justifications for AI model predictions. The study highlights the advantages of XAI in improving patient outcomes and supporting medical decision-making. The findings reveal that ResNet 18 performed better, with 94% training accuracy, 96.86% testing accuracy, low loss (0.012), and a rapid time. ResNet 50 was a little slower but stable, with 92.86% test accuracy. DenseNet121 (AdamW) achieved the highest accuracy at 97.71%, but it was not consistent across all optimizers. ViT-GRU also reached 97% accuracy with very little loss (0.008), although it took a long time to compute (around 49 s). On the other hand, VGG models (around 94% test accuracy) and MobileNetV2 (loss up to 6.024) were less reliable, even though they trained faster. Additionally, the review explores various opportunities, challenges, and clinical applications. Based on these findings, this research offers a comprehensive analysis of XAI-based brain tumor detection and encourages further investigation in specific areas.
Journal Article
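As one concrete example of the kind of explanation such a review surveys, the sketch below computes a plain gradient saliency map for a ResNet-18 classification. It is a generic illustration with random weights and a random input tensor; the four-class head is an assumption, and none of the trained tumor-classification pipelines evaluated in the paper are reproduced.

```python
# Sketch: gradient saliency for a CNN prediction, a basic XAI technique of the
# kind surveyed for brain-tumor classifiers. Weights and input are random
# placeholders; a real pipeline would load a trained model and an MRI slice.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None, num_classes=4)  # 4 classes assumed for illustration
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "scan"
logits = model(image)
predicted = logits.argmax(dim=1)

# Backpropagate the top logit to the input pixels; the magnitude of the
# resulting gradient indicates which pixels most influenced the decision.
logits[0, predicted.item()].backward()
saliency = image.grad.abs().max(dim=1).values  # collapse colour channels
print("predicted class:", predicted.item(),
      "saliency map shape:", tuple(saliency.shape))
```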
Contrasting Explanations for Understanding and Regularizing Model Adaptations
by
Hinder, Fabian
,
Artelt, André
,
Feldhans, Robert
in
Adaptation
,
Artificial Intelligence
,
Complex Systems
2023
Many of today's decision-making systems deployed in the real world are not static: they change and adapt over time, a phenomenon known as model adaptation. Because of their wide-reaching influence and potentially serious consequences, the need for transparency and interpretability of AI-based decision-making systems is widely accepted and has therefore been worked on extensively; a very prominent class of explanations are contrasting explanations, which try to mimic human explanations. However, explanation methods usually assume a static system that has to be explained. Explaining non-static systems is still an open research question, which poses the challenge of how to explain model differences, adaptations, and changes. In this contribution, we propose and empirically evaluate a general framework for explaining model adaptations and differences by contrasting explanations. We also propose a method for automatically finding regions in data space that are affected by a given model adaptation, i.e., regions where the internal reasoning of the other (e.g., adapted) model changed and that should therefore be explained. Finally, we also propose a regularization for model adaptations to ensure that the internal reasoning of the adapted model does not change in an unwanted way.
Journal Article
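The core idea of locating data-space regions affected by a model adaptation can be sketched simply: train an original and an "adapted" model, then flag samples where their predictions diverge. This is a toy illustration of that idea, not the framework or regularization proposed by the authors.

```python
# Sketch: locate inputs where an adapted model's reasoning differs from the
# original model's, by comparing their predictions sample by sample.
# Both models and the drift applied to the data are toy placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=7)

original = LogisticRegression().fit(X, y)

# Simulate model adaptation: retrain after the data distribution has shifted.
X_shifted = X + np.array([1.5, 0.0, 0.0, 0.0, 0.0])
adapted = LogisticRegression().fit(X_shifted, y)

# Region of interest: samples on which the two models disagree. These are the
# points whose contrasting explanations would be compared in practice.
disagree = original.predict(X) != adapted.predict(X)
print(f"{disagree.mean():.1%} of samples fall in the affected region")
if disagree.any():
    print("example affected sample:", X[disagree][0].round(2))
```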
Leveraging explainable artificial intelligence and spatial analysis for communicable diseases in Asia (2000–2022) based on health, climate, and socioeconomic factors
by
Shiddik, Md. Abu Bokkor
,
Rahman, Md. Siddikur
in
Artificial intelligence
,
Artificial Intelligence - trends
,
Asia - epidemiology
2025
Background
Communicable diseases remain a significant public health challenge in Asia, driven by diverse climatic, socioeconomic, and healthcare-related factors. Despite reductions in diseases such as tuberculosis and malaria, persistent hotspots highlight the need for deeper investigation. This study applies machine learning and spatial analysis techniques to examine patterns and determinants of communicable diseases across 41 countries from 2000 to 2022.
Methods
Data were sourced from global repositories, including WHO, CRU TS, WDI, and UNICEF, covering disease cases (e.g., tuberculosis, dengue, malaria), climatic variables (e.g., precipitation, humidity), and healthcare metrics (e.g., hospital bed density). Missing values were imputed using random forest methods. Outlier detection was conducted using Mahalanobis distances, identifying and addressing significant deviations to ensure data consistency. Models like XGBoost and Random Forest were assessed using RMSE, MAE, and R². SHAP and XAI frameworks improved interpretability, while Gi* spatial statistics revealed disease hotspots and disparities.
Results
Tuberculosis cases declined from 8.01 million (2000) to 7.54 million (2022), with hotspots in India (Gi* = 3.07) and Nepal (Gi* = 4.67). Malaria cases dropped from 27.00 million (2000) to 7.96 million (2022), yet Bangladesh (Gi* = 4.13) and Pakistan (Gi* = 4.17) exhibited sustained risk. Dengue peaked at 2.71 million cases in 2019, with current hotspots in Malaysia (Gi* = 2.4) and Myanmar (Gi* = 0.79). Spatial disparities underscore the influence of precipitation, relative humidity, and healthcare gaps. XGBoost achieved remarkable accuracy (e.g., tuberculosis: RMSE = 0.94, R² = 0.91), and SHAP analysis revealed critical predictors such as climatic factors.
Conclusion
This study demonstrates the effectiveness of integrating machine learning, spatial analysis, and XAI to uncover disease determinants and guide targeted interventions. The findings offer actionable insights for improving disease surveillance, resource allocation, and public health strategies across Asia.
Journal Article
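A hedged sketch of the modelling-and-explanation stage of such a pipeline: an XGBoost regressor scored with RMSE, MAE, and R² and interpreted with SHAP. The synthetic predictors stand in for climatic and socioeconomic covariates; the Gi* hotspot statistics would additionally require a spatial-weights library (e.g., PySAL) and are not shown.

```python
# Sketch: XGBoost regression on synthetic "climate + socioeconomic" predictors,
# scored with RMSE/MAE/R^2 and explained with SHAP, mirroring the modelling
# stage described in the Methods. Real inputs would be country-year panels.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=900, n_features=8, noise=5.0, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE={rmse:.2f}  MAE={mean_absolute_error(y_test, pred):.2f}  "
      f"R2={r2_score(y_test, pred):.2f}")

# SHAP highlights which (placeholder) covariates drive the predictions.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(2))
```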
Interpreting deep learning models with marginal attribution by conditioning on quantiles
2022
A vast and growing literature on explaining deep learning models has emerged. This paper contributes to that literature by introducing a global gradient-based model-agnostic method, which we call Marginal Attribution by Conditioning on Quantiles (MACQ). Our approach is based on analyzing the marginal attribution of predictions (outputs) to individual features (inputs). Specifically, we consider variable importance by fixing (global) output levels, and explaining how features marginally contribute to these fixed global output levels. MACQ can be seen as a marginal attribution counterpart to approaches such as accumulated local effects, which study the sensitivities of outputs by perturbing inputs. Furthermore, MACQ allows us to separate marginal attribution of individual features from interaction effects and to visualize the 3-way relationship between marginal attribution, output level, and feature value.
Journal Article
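To make the idea of marginal attribution conditioned on output quantiles concrete, the sketch below groups a differentiable model's predictions by output quartile and averages input gradients within each group. It follows the general idea in the abstract under toy assumptions (a small random network and random inputs), not the authors' exact MACQ formulation.

```python
# Sketch of the MACQ idea: compute input gradients (marginal attributions) and
# summarise them at fixed output levels (quantiles) rather than per instance.
# The two-layer network and random data are placeholders.
import torch

torch.manual_seed(0)
X = torch.randn(1000, 4, requires_grad=True)
model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1))

out = model(X).squeeze(1)
# d(output)/d(input) for every sample: the marginal attribution of each feature.
grads = torch.autograd.grad(out.sum(), X)[0]

# Condition on output level: bucket samples into quartiles of the prediction
# and report the average attribution per feature within each bucket.
out_fixed = out.detach()
quantiles = torch.quantile(out_fixed, torch.tensor([0.25, 0.5, 0.75]))
buckets = torch.bucketize(out_fixed, quantiles)
for q in range(4):
    mask = buckets == q
    per_feature = grads[mask].mean(dim=0).tolist()
    print(f"output quartile {q}: mean attribution per feature",
          [f"{v:.3f}" for v in per_feature])
```

Grouping attributions by output level, rather than perturbing inputs, is what distinguishes this view from sensitivity-style methods such as accumulated local effects.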