Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
814 result(s) for "XAI"
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System
2022
The emerging field of eXplainable AI (XAI) in the medical domain is considered to be of utmost importance. Meanwhile, incorporating explanations in the medical domain with respect to legal and ethical AI is necessary to understand detailed decisions, results, and the current status of a patient’s condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, an overview of case studies with open-box architecture, medical open datasets, and future improvements. Potential differences between AI and XAI methods are outlined, with recent XAI methods grouped as (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. Details of XAI characteristics and future healthcare explainability are included prominently, while the prerequisites provide insights for brainstorming sessions before beginning a medical XAI project. A practical case study examines recent XAI progress leading to advanced developments within the medical field. Ultimately, this survey proposes critical ideas surrounding a user-in-the-loop approach, with an emphasis on human–machine collaboration, to better produce explainable solutions. Details of the XAI feedback system for human rating-based evaluation provide intelligible insights into a constructive method for producing human-enforced explanation feedback. Limitations of XAI ratings, scores, and grading have long been present; therefore, a novel XAI recommendation system and XAI scoring system are designed in this work. Additionally, this paper emphasizes the importance of implementing explainable solutions in the high-impact medical field.
Journal Article
Survey on ontology-based explainable AI in manufacturing
by Elmhadhbi, Linda; Naqvi, Muhammad Raza; Karray, Mohamed Hedi
in Advanced manufacturing technologies, Algorithms, Artificial intelligence
2024
Artificial intelligence (AI) has become an essential tool for manufacturers seeking to optimize their production processes, reduce costs, and improve product quality. However, the complexity of the underlying mechanisms of AI systems can make it difficult for humans to understand and trust AI-driven decisions. Explainable AI (XAI) is a rapidly evolving field that addresses this challenge by providing human-understandable explanations of AI decisions. Based on a systematic literature survey, we explore the latest techniques and approaches that are helping manufacturers gain transparency in the decision-making processes of their AI systems. In this survey, we focus on two of the most exciting areas of XAI: ontology-based and semantic-based XAI (O-XAI and S-XAI, respectively), which provide human-readable explanations of AI decisions by exploiting semantic information. These explanations are presented in natural language and are designed to be easily understood by non-experts. By translating the decision paths taken by AI algorithms into meaningful explanations through semantics, O-XAI and S-XAI enable humans to identify various cross-cutting concerns that influence the decisions made by the AI system. This information can be used to improve the performance of the AI system, identify potential biases in the system, and ensure that its decisions are aligned with the goals and values of the manufacturing organization. Additionally, we highlight the benefits and challenges of using O-XAI and S-XAI in manufacturing and discuss the potential for future research, aiming to provide valuable guidance for researchers and practitioners looking to leverage the power of ontologies and general semantics for XAI.
Journal Article
Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
by Hussain, Amir; Mahapatra, Atmesh; Mahmud, Mufti
in Artificial Intelligence, Computation by Abstract Devices, Computational Biology/Bioinformatics
2024
Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported that use machine learning (ML) and Deep Learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, causing them to be termed 'black box'. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty in interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is getting harder, which calls for transparency and easy predictability. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. Finding flaws in these black-box models in order to reduce false negative and false positive outcomes remains difficult and inefficient. In this paper, the development of XAI is reviewed meticulously through careful selection and analysis of the current state of the art of XAI research. It also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, serving as a starting point in XAI for applied and theoretical researchers. Towards the end, it highlights emerging and critical issues pertaining to XAI research to showcase major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.
Journal Article
Explainable AI: A Review of Machine Learning Interpretability Methods
by Kotsiantis, Sotiris; Papastefanopoulos, Vasilis; Linardatos, Pantelis
in Algorithms, Artificial intelligence, Decision making
2020
Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way that they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field that is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
Journal Article
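As a small, hedged illustration of the kind of model-agnostic interpretability method such reviews catalogue (not code from the paper above), the sketch below computes permutation importance with scikit-learn: each feature is shuffled in turn and the resulting drop in test accuracy is taken as its importance. The dataset and model are stand-ins chosen only to make the example runnable.

```python
# Minimal permutation-importance sketch (illustrative; dataset and model are placeholders).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, mean in top5:
    print(f"{name}: {mean:.4f}")
```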
A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management: Current Trends and Development with Future Research Trajectory
by Begum, Shahina; Rahman, Md Aquif; Poudel, Minesh
in Air traffic management (ATM), Air transportation industry, Aircraft
2022
Air Traffic Management (ATM) will become more complex in the coming decades due to the growth and increased complexity of aviation, and it has to be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved, and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decisions made in ATM. Despite this, Artificial Intelligence (AI), which is one of the most researched topics in computer science, has not quite reached end users in the ATM domain. In this paper, we analyse the state of the art with regard to the usefulness of AI within the aviation/ATM domain. This includes research work from the last decade of AI in ATM, the extraction of relevant trends and features, and the extraction of representative dimensions. We analyse how general and ATM-specific eXplainable Artificial Intelligence (XAI) works, examining where and why XAI is needed, how it is currently provided, and its limitations. We then synthesise the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, and provide an example of its application in a scenario in 2030. We conclude that AI systems within ATM need further research to gain acceptance by end users. The development of appropriate XAI methods, including validation by appropriate authorities and end users, is a key issue that needs to be addressed.
Journal Article
Explainable reinforcement learning for broad-XAI: a conceptual framework and survey
by Vamplew, Peter; Cruz, Francisco; Dazeley, Richard
in Algorithms, Artificial Intelligence, Communication
2023
Broad-XAI moves away from interpreting individual decisions based on a single datum and aims to integrate explanations from multiple machine learning algorithms into a coherent explanation of an agent’s behaviour that is aligned to the communication needs of the explainee. Reinforcement Learning (RL) methods, we propose, provide a potential backbone for the cognitive model required for the development of Broad-XAI. RL represents a suite of approaches that have had increasing success in solving a range of sequential decision-making problems. However, these algorithms operate as black-box problem solvers, obfuscating their decision-making policy behind a complex array of values and functions. EXplainable RL (XRL) aims to develop techniques to extract concepts from the agent’s perception of the environment; its intrinsic/extrinsic motivations and beliefs; and its Q-values, goals, and objectives. This paper introduces the Causal XRL Framework (CXF), which unifies current XRL research and uses RL as a backbone for the development of Broad-XAI. CXF is designed to incorporate many standard RL extensions and to integrate with external ontologies and communication facilities, so that the agent can answer questions that explain the outcomes of its decisions. This paper aims to: establish XRL as a distinct branch of XAI; introduce a conceptual framework for XRL; review existing approaches to explaining agent behaviour; and identify opportunities for future research. Finally, this paper discusses how additional information can be extracted and ultimately integrated into models of communication, facilitating the development of Broad-XAI.
Journal Article
A Comprehensive Review of Explainable Artificial Intelligence (XAI) in Computer Vision
by Cai, Lingfeng; Li, Yule; Cheng, Zhihan
in Algorithms, Artificial Intelligence, Computer vision
2025
Explainable Artificial Intelligence (XAI) is increasingly important in computer vision, aiming to connect complex model outputs with human understanding. This review provides a focused comparative analysis of representative XAI methods in four main categories: attribution-based, activation-based, perturbation-based, and transformer-based approaches, selected from a broader literature landscape. Attribution-based methods like Grad-CAM highlight key input regions using gradients and feature activations. Activation-based methods analyze the responses of internal neurons or feature maps to identify which parts of the input activate specific layers or units, helping to reveal hierarchical feature representations. Perturbation-based techniques, such as RISE, assess feature importance through input modifications without accessing internal model details. Transformer-based methods, which use self-attention, offer global interpretability by tracing information flow across layers. We evaluate these methods using metrics such as faithfulness, localization accuracy, efficiency, and overlap with medical annotations. We also propose a hierarchical taxonomy to classify these methods, reflecting the diversity of XAI techniques. Results show that RISE has the highest faithfulness but is computationally expensive, limiting its use in real-time scenarios. Transformer-based methods perform well in medical imaging, with high IoU scores, though interpreting attention maps requires care. These findings emphasize the need for context-aware evaluation and for hybrid XAI methods that balance interpretability and efficiency. The review ends by discussing ethical and practical challenges, stressing the need for standard benchmarks and domain-specific tuning.
Journal Article
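For readers unfamiliar with the attribution-based family this review describes, here is a minimal sketch of the Grad-CAM idea: weight a convolutional layer's feature maps by their spatially pooled gradients and keep the positive contributions. The network and input below are untrained placeholders, and the code is an editorial illustration, not taken from the reviewed works.

```python
# Minimal Grad-CAM-style sketch (illustrative only; model and input are placeholders).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # any CNN classifier would do
target_layer = model.layer4                    # last convolutional block

feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["a"] = output.detach()               # capture feature maps

def bwd_hook(module, grad_input, grad_output):
    grads["a"] = grad_output[0].detach()       # capture gradients w.r.t. them

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed image
score = model(x)[0].max()                      # logit of the top-scoring class
model.zero_grad()
score.backward()

# Weight each channel by its average gradient, sum, rectify, and upsample.
weights = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap over the input
```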
Explainable Deep Learning Models in Medical Image Analysis
by Sengupta, Sourya; Singh, Amitojdeep; Lakshminarayanan, Vasudevan
in Algorithms, Artificial intelligence, Classification
2020
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that most influence a model's decision. The majority of literature reviews in this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end users.
Journal Article
A novel XAI framework for explainable AI-ECG using generative counterfactual XAI (GCX)
2025
Generative Counterfactual Explainable Artificial Intelligence (XAI) offers a novel approach to understanding how AI models interpret electrocardiograms (ECGs). Traditional explanation methods focus on highlighting important ECG segments but often fail to clarify why these segments matter or how their alteration affects model predictions. In contrast, the proposed framework explores “what-if” scenarios, generating counterfactual ECGs that increase or decrease a model’s predictive values. This approach helps clarify how specific changes, such as increased T wave amplitude or PR interval prolongation, influence the model’s decisions. Through a series of validation experiments, the framework demonstrates its ability to produce counterfactual ECGs that closely align with established clinical knowledge, including characteristic alterations associated with potassium imbalances and atrial fibrillation. By clearly visualizing how incremental modifications in ECG morphology and rhythm affect artificial intelligence-applied ECG (AI-ECG) predictions, this generative counterfactual method moves beyond static attribution maps and has the potential to increase clinicians’ trust in AI-ECG systems. As a result, this approach offers a promising path toward enhancing the explainability and clinical reliability of AI-based tools for cardiovascular diagnostics.
Journal Article
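The "what-if" idea summarised above can be sketched generically as nudging an input signal until the model's output for a target class rises, while staying close to the original. This is only a hedged illustration of the counterfactual principle; the paper's GCX framework uses a generative model rather than this raw gradient procedure, and the model and tensor shapes below are assumptions.

```python
# Generic counterfactual sketch: gradient-based "what-if" perturbation of an input.
# Illustrative only; not the GCX framework described in the paper above.
import torch

def counterfactual(model, x, target_idx, steps=100, lr=0.05, reg=0.1):
    """Return a perturbed copy of x that raises the model's score for target_idx."""
    x_ref = x.detach()
    x_cf = x_ref.clone().requires_grad_(True)
    for _ in range(steps):
        score = model(x_cf)[0, target_idx]          # assumes output shape (batch, classes)
        # Raise the target score while penalising distance from the original signal.
        loss = -score + reg * torch.norm(x_cf - x_ref)
        loss.backward()
        with torch.no_grad():
            x_cf -= lr * x_cf.grad
            x_cf.grad.zero_()
    return x_cf.detach()

# Comparing x_cf with x afterwards shows which segments the model "wants" changed
# to move its prediction, which is the intuition behind counterfactual explanations.
```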
Machine Learning Interpretability: A Survey on Methods and Metrics
by Cardoso, Jaime S.; Carvalho, Diogo V.; Pereira, Eduardo M.
in Accountability, Algorithms, Artificial intelligence
2019
Machine learning systems are becoming increasingly ubiquitous. Their adoption has been expanding, accelerating the shift towards a more algorithmic society, meaning that algorithmically informed decisions have greater potential for significant social impact. However, most of these accurate decision support systems remain complex black boxes, meaning their internal logic and inner workings are hidden from the user, and even experts cannot fully understand the rationale behind their predictions. Moreover, new regulations and highly regulated domains have made the audit and verifiability of decisions mandatory, increasing the demand for the ability to question, understand, and trust machine learning systems, for which interpretability is indispensable. The research community has recognized this interpretability problem and focused on developing both interpretable models and explanation methods over the past few years. However, the emergence of these methods shows there is no consensus on how to assess explanation quality. Which are the most suitable metrics to assess the quality of an explanation? The aim of this article is to provide a review of the current state of the research field on machine learning interpretability, focusing on societal impact and on the developed methods and metrics. Furthermore, a complete literature review is presented in order to identify future directions of work in this field.
Journal Article
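One common way the literature answers the metrics question raised in the abstract above is deletion-style faithfulness: remove the features an explanation ranks as most important and watch the model's confidence fall. The sketch below is a hedged, generic illustration of that idea, not a metric proposed by this survey; the flat feature vector, `predict_fn`, and baseline value are assumptions.

```python
# Deletion-style faithfulness sketch (illustrative; not from the survey above).
import numpy as np

def deletion_faithfulness(predict_fn, x, importance, steps=10, baseline=0.0):
    """Average model confidence while deleting features in order of claimed importance.
    Lower values indicate a more faithful explanation (confidence drops quickly)."""
    order = np.argsort(importance)[::-1]          # most important features first
    x_work = x.astype(float).copy()
    scores = [float(predict_fn(x_work))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_work[order[i:i + chunk]] = baseline     # "delete" the next block of features
        scores.append(float(predict_fn(x_work)))
    return float(np.mean(scores))
```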