Catalogue Search | MBRL
Explore the vast range of titles available.
854 result(s) for "Explainable Diagnosis"
ANC: Attention Network for COVID-19 Explainable Diagnosis Based on Convolutional Block Attention Module
by
Zhang, Xin
,
Zhang, Yudong
,
Zhu, Weiguo
in
Attention Mechanism
,
Convolutional Block Attention Module
,
Coronaviruses
2021
Aim: To diagnose COVID-19 more efficiently and more correctly, this study proposed a novel attention network for COVID-19 (ANC). Methods: Two datasets were used in this study. An 18-way data augmentation was proposed to avoid overfitting. Then, a convolutional block attention module (CBAM) was integrated into our model, the structure of which was fine-tuned. Finally, Grad-CAM was used to provide an explainable diagnosis. Results: The accuracies of our ANC method on the two datasets are 96.32% ± 1.06% and 96.00% ± 1.03%, respectively. Conclusions: The proposed ANC method is superior to 9 state-of-the-art approaches.
Journal Article
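The CBAM step described in the abstract above can be sketched in a few lines. This is a minimal numpy illustration of channel-then-spatial attention, not the authors' trained model: the MLP and mixing weights are random placeholders, and the paper's 7×7 spatial convolution is simplified to a per-pixel mix of the pooled maps.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feature_map, reduction=2, rng=np.random.default_rng(0)):
    """Apply CBAM-style channel then spatial attention to a (C, H, W) map.

    The shared-MLP and mixing weights are random placeholders; in the
    paper they are learned end-to-end with the backbone network.
    """
    c, h, w = feature_map.shape
    # Channel attention: pool over space, pass both pooled vectors
    # through a shared two-layer MLP, add, and squash.
    avg = feature_map.mean(axis=(1, 2))            # (C,)
    mx = feature_map.max(axis=(1, 2))              # (C,)
    w1 = rng.standard_normal((c // reduction, c))
    w2 = rng.standard_normal((c, c // reduction))
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)     # ReLU hidden layer
    ch_att = sigmoid(mlp(avg) + mlp(mx))           # (C,)
    x = feature_map * ch_att[:, None, None]
    # Spatial attention: pool over channels, mix the two pooled maps
    # (a stand-in for the paper's 7x7 convolution), and squash.
    avg_s = x.mean(axis=0)                         # (H, W)
    max_s = x.max(axis=0)                          # (H, W)
    k = rng.standard_normal(2)
    sp_att = sigmoid(k[0] * avg_s + k[1] * max_s)  # (H, W)
    return x * sp_att[None, :, :]

refined = cbam(np.random.default_rng(1).standard_normal((4, 8, 8)))
print(refined.shape)  # (4, 8, 8): attention reweights, shape is preserved
```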
Integrated Explainable Diagnosis of Gear Wear Faults Based on Dynamic Modeling and Data-Driven Representation
2025
Gear wear degrades transmission performance, necessitating highly reliable fault diagnosis methods. To address the limitations of existing approaches—where dynamic models rely heavily on prior knowledge, while data-driven methods lack interpretability—this study proposes an integrated bidirectional verification framework combining dynamic modeling and deep learning for interpretable gear wear diagnosis. First, a dynamic gear wear model is established to quantitatively reveal wear-induced modulation effects on meshing stiffness and vibration responses. Then, a deep network incorporating Gradient-weighted Class Activation Mapping (Grad-CAM) enables visualized extraction of frequency-domain sensitive features. Bidirectional verification between the dynamic model and the deep network demonstrates enhanced meshing harmonics under wear faults, leading to a quantitative diagnostic index that achieves 0.9560 recognition accuracy for gear wear across four speed conditions, significantly outperforming comparative indicators. This research provides a novel approach for gear wear diagnosis that ensures both high accuracy and interpretability.
Journal Article
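The Grad-CAM step used here (and in the COVID-19 entry above) reduces, in its simplest form, to gradient-pooled weighting of feature maps followed by a ReLU. A minimal numpy sketch, with random arrays standing in for a real network's activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the target score with respect to those activations.

    activations, gradients: arrays of shape (K, H, W) for K feature maps.
    In practice both come from a trained network's forward and backward
    passes; this sketch shows only the weighting-and-ReLU step itself.
    """
    # Global-average-pool the gradients: one importance weight per map.
    weights = gradients.mean(axis=(1, 2))                            # (K,)
    # Weighted sum of feature maps, then ReLU keeps positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] for display as a heatmap over the input.
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((8, 6, 6)), rng.random((8, 6, 6)))
print(heatmap.shape)  # (6, 6): one relevance value per spatial location
```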
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
by
Krieghoff-Henning, Eva
,
Crnaric, Iva
,
Peternel, Sandra
2024
Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists’ decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists’ diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists’ confidence in their diagnoses, and their trust in the support system, significantly increase with XAI compared to conventional AI. This study highlights dermatologists’ willingness to adopt such XAI systems, promoting future use in the clinic.
Artificial intelligence has become popular as a cancer classification tool, but there is distrust of such systems due to their lack of transparency. Here, the authors develop an explainable AI system which produces text- and region-based explanations alongside its classifications which was assessed using clinicians’ diagnostic accuracy, diagnostic confidence, and their trust in the system.
Journal Article
Deep learning in medicine: advancing healthcare with intelligent solutions and the future of holography imaging in early diagnosis
by
Nazir, Asifa
,
Hussain, Ahsan
,
Singh, Mandeep
in
Algorithms
,
Artificial intelligence
,
Computer Communication Networks
2025
Deep Learning (DL) is currently transforming health services by significantly improving early cancer diagnosis, drug discovery, protein–protein interaction analysis, and gene editing. The main purpose of this review study is to explore how the integration of the analytical capabilities of DL with medical datasets contributes to advancements in healthcare services. The scope of this study revolves around emphasizing the impact of DL strategies in contributing to healthcare services. It underscores how DL algorithms significantly improve accuracy in medical data analysis, helping diagnosis and treatment planning. It also highlights how integrating Artificial Intelligence (AI) with medical datasets can profoundly impact robotic surgery. The primary findings of the study involve exploring emerging ideas within this integrative field, particularly focusing on the roles of holography microscopic medical imaging and attention models in early disease identification. Also, the study examines Federated Learning (FL) concepts, with the primary focus on addressing the ethical implications of medical-related datasets. The authors further examine how Explainable AI (XAI) techniques, such as Gradient-weighted Class Activation Mapping (Grad-CAM), assist medical professionals in understanding the decision-making processes of AI algorithms, promoting transparency and informed decision-making. After conducting an extensive review of DL in medicine, the authors have identified the challenges associated with this integrative journey and suggested emerging future research directions for researchers interested in this field.
Journal Article
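Among the topics this review covers, Federated Learning has a particularly compact core. A minimal sketch of the standard FedAvg aggregation step follows; the hospital sizes and two-parameter model below are illustrative, not taken from the review:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client model parameters into a
    global model, weighting each client by its local dataset size.

    client_weights: list of equally shaped parameter vectors.
    client_sizes: number of local training examples per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                  # n_k / n for each client
    stacked = np.stack(client_weights)            # (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three hypothetical hospitals with 100, 300, and 600 records; raw patient
# data stays local, only the parameter vectors are shared and averaged.
global_w = fed_avg([np.array([1.0, 0.0]),
                    np.array([0.0, 1.0]),
                    np.array([1.0, 1.0])],
                   [100, 300, 600])
print(global_w)  # [0.7 0.9]
```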
Explainable artificial intelligence in information systems: A review of the status quo and future research directions
by
Sigler, Irina
,
Förster, Maximilian
,
Klier, Mathias
in
Accountability
,
Artificial intelligence
,
Breast cancer
2023
The quest to open black box artificial intelligence (AI) systems evolved into an emerging phenomenon of global interest for academia, business, and society and brought about the rise of the research field of explainable artificial intelligence (XAI). With its pluralistic view, information systems (IS) research is predestined to contribute to this emerging field; thus, it is not surprising that the number of publications on XAI has been rising significantly in IS research. This paper aims to provide a comprehensive overview of XAI research in IS in general and electronic markets in particular using a structured literature review. Based on a literature search resulting in 180 research papers, this work provides an overview of the most receptive outlets, the development of the academic discussion, and the most relevant underlying concepts and methodologies. Furthermore, eight research areas with varying maturity in electronic markets are carved out. Finally, directions for a research agenda of XAI in IS are presented.
Journal Article
Revolutionizing the Early Detection of Alzheimer’s Disease through Non-Invasive Biomarkers: The Role of Artificial Intelligence and Deep Learning
by
Skolariki, Konstantina
,
Krokidis, Marios G.
,
Exarchos, Themis P.
in
Activities of daily living
,
Advertising executives
,
Alzheimer Disease - diagnosis
2023
Alzheimer’s disease (AD) is now classified as a silent pandemic due to concerning current statistics and future predictions. Despite this, no effective treatment or accurate diagnosis currently exists. The negative impacts of invasive techniques and the failure of clinical trials have prompted a shift in research towards non-invasive treatments. In light of this, there is a growing need for early detection of AD through non-invasive approaches. The abundance of data generated by non-invasive techniques such as blood component monitoring, imaging, wearable sensors, and bio-sensors not only offers a platform for more accurate and reliable bio-marker developments but also significantly reduces patient pain, psychological impact, risk of complications, and cost. Nevertheless, there are challenges concerning the computational analysis of the large quantities of data generated, which can provide crucial information for the early diagnosis of AD. Hence, the integration of artificial intelligence and deep learning is critical to addressing these challenges. This work attempts to examine some of the facts and the current situation of these approaches to AD diagnosis by leveraging the potential of these tools and utilizing the vast amount of non-invasive data in order to revolutionize the early detection of AD according to the principles of a new non-invasive medicine era.
Journal Article
Explainable Deep Learning Models in Medical Image Analysis
by
Sengupta, Sourya
,
Singh, Amitojdeep
,
Lakshminarayanan, Vasudevan
in
Algorithms
,
Artificial intelligence
,
Classification
2020
Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of those. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that influence the decision of a model the most. The majority of literature reviews of this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and the areas requiring further research are discussed here from a practical standpoint of a deep learning researcher designing a system for the clinical end-users.
Journal Article
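A representative perturbation-based explanation method from this literature is occlusion sensitivity: hide each image region in turn and measure how the model's score drops. A minimal sketch, using a toy scoring function in place of a trained classifier:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Occlusion sensitivity: slide a masking patch over the image and
    record how much the model's score drops when each region is hidden.

    score_fn stands in for any trained classifier's confidence output.
    """
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask this region
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat  # high values = regions the model relied on

# Toy "model": score is the mean intensity of the top-left quadrant,
# so only occluding that quadrant should change the score.
img = np.ones((4, 4))
heat = occlusion_map(img, lambda x: x[:2, :2].mean())
print(heat)  # only the top-left cell is nonzero
```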
A new concordant partial AUC and partial c statistic for imbalanced data in the evaluation of machine learning algorithms
by
Holzinger, Andreas
,
Carrington, André M.
,
Fieguth, Paul W.
in
Algorithms
,
Area Under Curve
,
Area under the ROC curve
2020
Background
In classification and diagnostic testing, the receiver-operator characteristic (ROC) plot and the area under the ROC curve (AUC) describe how an adjustable threshold causes changes in two types of error: false positives and false negatives. Only part of the ROC curve and AUC are informative however when they are used with imbalanced data. Hence, alternatives to the AUC have been proposed, such as the partial AUC and the area under the precision-recall curve. However, these alternatives cannot be as fully interpreted as the AUC, in part because they ignore some information about actual negatives.
Methods
We derive and propose a new concordant partial AUC and a new partial c statistic for ROC data—as foundational measures and methods to help understand and explain parts of the ROC plot and AUC. Our partial measures are continuous and discrete versions of the same measure, are derived from the AUC and c statistic respectively, are validated as equal to each other, and validated as equal in summation to whole measures where expected. Our partial measures are tested for validity on a classic ROC example from Fawcett, a variation thereof, and two real-life benchmark data sets in breast cancer: the Wisconsin and Ljubljana data sets. Interpretation of an example is then provided.
Results
Results show the expected equalities between our new partial measures and the existing whole measures. The example interpretation illustrates the need for our newly derived partial measures.
Conclusions
The concordant partial area under the ROC curve was proposed and unlike previous partial measure alternatives, it maintains the characteristics of the AUC. The first partial c statistic for ROC plots was also proposed as an unbiased interpretation for part of an ROC curve. The expected equalities among and between our newly derived partial measures and their existing full measure counterparts are confirmed. These measures may be used with any data set but this paper focuses on imbalanced data with low prevalence.
Future work
Future work with our proposed measures may demonstrate their value for imbalanced data with high prevalence, compare them to other measures not based on areas, and combine them with other ROC measures and techniques.
Journal Article
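The plain partial AUC that the authors' concordant variant builds on can be computed directly from ROC points. A minimal numpy sketch follows; the concordant partial AUC and the partial c statistic themselves are not reproduced here:

```python
import numpy as np

def roc_points(scores, labels):
    """ROC (FPR, TPR) points swept over all score thresholds,
    starting from the (0, 0) corner."""
    order = np.argsort(-np.asarray(scores))       # highest score first
    labels = np.asarray(labels)[order]
    tpr = np.concatenate(([0.0], np.cumsum(labels) / labels.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - labels) / (1 - labels).sum()))
    return fpr, tpr

def partial_auc(scores, labels, fpr_max=0.2):
    """Trapezoidal area under the ROC curve restricted to FPR <= fpr_max.

    This is the plain partial AUC the abstract contrasts with; the
    concordant version adds a vertical (TPR-restricted) component.
    """
    fpr, tpr = roc_points(scores, labels)
    keep = fpr <= fpr_max
    f, t = fpr[keep], tpr[keep]
    return float(np.sum(np.diff(f) * (t[1:] + t[:-1]) / 2.0))

# Perfectly separated scores: over the full FPR range the partial AUC
# coincides with the ordinary AUC of 1.0.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(partial_auc(scores, labels, fpr_max=1.0))  # ≈ 1.0 for a perfect ranking
```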
Explainable AI in Manufacturing and Industrial Cyber–Physical Systems: A Survey
by
Moosavi, Sajad
,
Saif, Mehrdad
,
Palade, Vasile
in
Artificial intelligence
,
Critical infrastructure
,
Cyber-physical systems
2024
This survey explores applications of explainable artificial intelligence in manufacturing and industrial cyber–physical systems. As technological advancements continue to integrate artificial intelligence into critical infrastructure and industrial processes, the necessity for clear and understandable intelligent models becomes crucial. Explainable artificial intelligence techniques play a pivotal role in enhancing the trustworthiness and reliability of intelligent systems applied to industrial systems, ensuring human operators can comprehend and validate the decisions made by these intelligent systems. This review paper begins by highlighting the imperative need for explainable artificial intelligence, and, subsequently, classifies explainable artificial intelligence techniques systematically. The paper then investigates diverse explainable artificial-intelligence-related works within a wide range of industrial applications, such as predictive maintenance, cyber-security, fault detection and diagnosis, process control, product development, inventory management, and product quality. The study contributes to a comprehensive understanding of the diverse strategies and methodologies employed in integrating explainable artificial intelligence within industrial contexts.
Journal Article
A lightweight xAI approach to cervical cancer classification
by
Domínguez-Morales, Manuel
,
Civit-Masot, Javier
,
Civit, Anton
in
Algorithms
,
Artificial neural networks
,
Biomedical and Life Sciences
2024
Cervical cancer is caused in the vast majority of cases by the human papillomavirus (HPV) through sexual contact and requires a specific molecular-based analysis to be detected. Although an HPV vaccine is available, the incidence of cervical cancer is up to ten times higher in areas without adequate healthcare resources. In recent years, liquid cytology has been used to overcome these shortcomings and perform mass screening. In addition, classifiers based on convolutional neural networks can be developed to help pathologists diagnose the disease. However, these systems always require the final verification of a pathologist to make a final diagnosis. For this reason, explainable AI techniques are required to highlight the most significant data to the healthcare professional, as they can be used to determine the confidence in the results and the areas of the image used for classification (allowing the professional to point out the areas he/she thinks are most important and cross-check them against those detected by the system in order to create incremental learning systems). In this work, a 4-phase optimization process is used to obtain a custom deep-learning classifier for distinguishing between 4 severity classes of cervical cancer with liquid-cytology images. The final classifier obtains an accuracy of over 97% for 4 classes and 100% for 2 classes, with execution times under 1 s (including the final report generation). Compared to previous works, the proposed classifier obtains better accuracy results with a lower computational cost.
Journal Article