Catalogue Search | MBRL
Explore the vast range of titles available.
7,053 result(s) for "explainable artificial intelligence"
Advanced Non-linear Modeling and Explainable Artificial Intelligence Techniques for Predicting 30-Day Complications in Bariatric Surgery: A Single-Center Study
by Mastronardi, Manuela; Zucchini, Nicolas; Giuffrè, Mauro
in Artificial intelligence; Gastrointestinal surgery; Medicine
2024
Purpose
Metabolic bariatric surgery (MBS) has become integral to managing severe obesity. Understanding the surgical risks associated with MBS is crucial. Different scores, such as the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) risk score, aid in patient selection and outcome prediction. This study aims to evaluate the performance of machine learning (ML) models in predicting 30-day post-operative complications and to compare them with the MBSAQIP risk score.
Materials and Methods
We retrospectively evaluated 424 consecutive patients (2006–2020) who underwent MBS, analyzing 30-day surgical complications according to the Clavien-Dindo Classification. ML models, including logistic regression, support vector machine, random forest, k-nearest neighbors, multi-layer perceptron, and extreme gradient boosting, were analyzed and compared to the MBSAQIP risk score. Performance was measured by area under the receiver operating characteristic curve (AUROC) analysis.
Results
Random forest showed the highest AUROC in both the training set (AUROC = 0.94) and the validation set (AUROC = 0.88). ML algorithms, particularly random forest, outperformed the MBSAQIP score in predicting negative 30-day outcomes in both the training and validation sets (MBSAQIP AUROC = 0.64; DeLong's test p < 0.001). The five features most relevant to the random forest model's predictions were serum alkaline phosphatase, platelet count, triglycerides, glycated hemoglobin, and albumin.
Conclusion
We developed several ML models that identify patients at risk for 30-day complications after MBS. Among these, random forest performed best, outperforming the already established MBSAQIP score. This model could improve the identification of high-risk patients before MBS.
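To make the comparison workflow concrete, here is a minimal sketch (not the study's code) of the pattern the abstract describes: fit candidate classifiers on tabular pre-operative data and compare validation AUROC. The synthetic dataset, feature count, and hyperparameters are illustrative assumptions.

```python
# Hypothetical stand-in for the study's 424-patient tabular dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=424, n_features=20, weights=[0.8], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auroc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: validation AUROC = {auroc:.2f}")
```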
Journal Article
Explainable artificial intelligence (XAI) in finance: a systematic literature review
by Černevičienė, Jurgita; Kabašinskas, Audrius
in Artificial Intelligence; Artificial neural networks; Computer Science
2024
As the range of decisions made by Artificial Intelligence (AI) expands, the need for Explainable AI (XAI) becomes increasingly critical. The reasoning behind the specific outcomes of complex and opaque financial models requires a thorough justification to improve risk assessment, minimise the loss of trust, and promote a more resilient and trustworthy financial ecosystem. This Systematic Literature Review (SLR) identifies 138 relevant articles from 2005 to 2022 and highlights empirical examples demonstrating XAI's potential benefits in the financial industry. We classified the articles according to the financial tasks addressed by AI using XAI, the variation in XAI methods between applications and tasks, and the development and application of new XAI methods. The most popular financial tasks addressed by AI using XAI were credit management, stock price prediction, and fraud detection. The three most commonly employed black-box AI techniques in finance whose explainability was evaluated were Artificial Neural Networks (ANN), Extreme Gradient Boosting (XGBoost), and Random Forest. Most of the examined publications utilise feature importance, Shapley additive explanations (SHAP), and rule-based methods. In addition, they employ explainability frameworks that integrate multiple XAI techniques. We also concisely define the existing challenges, requirements, and unresolved issues in applying XAI in the financial sector.
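As an illustration of the pattern the review reports most often (SHAP applied to a tree-based black box such as XGBoost), a hedged sketch follows; the data and model below are invented placeholders, not drawn from any reviewed study.

```python
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # stand-in financial features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in default label

model = xgb.XGBClassifier(n_estimators=100).fit(X, y)
explainer = shap.TreeExplainer(model)            # exact, fast SHAP for tree models
shap_values = explainer.shap_values(X)           # one attribution per feature per row
print(shap_values[0])                            # contributions for the first row
```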
Journal Article
A Survey of Explainable Artificial Intelligence for Smart Cities
by Maddikunta, Praveen Kumar Reddy; Pandya, Sharnil; Javed, Abdul Rehman
in Algorithms; Artificial intelligence; City planning
2023
The emergence of Explainable Artificial Intelligence (XAI) has enhanced the lives of humans and envisioned the concept of smart cities using informed actions, enhanced user interpretations and explanations, and firm decision-making processes. XAI systems can unbox the potential of black-box AI models and describe them explicitly. The study comprehensively surveys the current and future developments in XAI technologies for smart cities. It also highlights the societal, industrial, and technological trends that drive the adoption of XAI for smart cities. It presents the key enabling XAI technologies for smart cities in detail. The paper also discusses the concept of XAI for smart cities, various XAI technology use cases, challenges, applications, possible alternative solutions, and current and future research enhancements. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research possibilities. The presented study on XAI for smart cities is a first-of-its-kind, rigorous, and detailed study to assist future researchers in implementing XAI-driven systems, architectures, and applications for smart cities.
Journal Article
Survey on ontology-based explainable AI in manufacturing
by Elmhadhbi, Linda; Naqvi, Muhammad Raza; Karray, Mohamed Hedi
in Advanced manufacturing technologies; Algorithms; Artificial intelligence
2024
Artificial intelligence (AI) has become an essential tool for manufacturers seeking to optimize their production processes, reduce costs, and improve product quality. However, the complexity of the underlying mechanisms of AI systems can make it difficult for humans to understand and trust AI-driven decisions. Explainable AI (XAI) is a rapidly evolving field that addresses this challenge by providing human-understandable explanations of AI decisions. Based on a systematic literature survey, we explore the latest techniques and approaches that are helping manufacturers gain transparency in the decision-making processes of their AI systems. In this survey, we focus on two of the most exciting areas of XAI: ontology-based and semantic-based XAI (O-XAI and S-XAI, respectively), which provide human-readable explanations of AI decisions by exploiting semantic information. These types of explanations are presented in natural language and are designed to be easily understood by non-experts. By translating the decision paths taken by AI algorithms into meaningful explanations through semantics, O-XAI and S-XAI enable humans to identify various cross-cutting concerns that influence the decisions made by the AI system. This information can be used to improve the performance of the AI system, identify potential biases in the system, and ensure that the decisions are aligned with the goals and values of the manufacturing organization. Additionally, we highlight the benefits and challenges of using O-XAI and S-XAI in manufacturing and discuss the potential for future research, aiming to provide valuable guidance for researchers and practitioners looking to leverage the power of ontologies and general semantics for XAI.
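The survey itself ships no code, but the core O-XAI idea it describes, mapping a model's decision path onto domain concepts to yield a natural-language explanation, can be sketched in a few lines. The ontology, feature names, and rule below are entirely invented for illustration.

```python
# Toy "ontology": feature name -> (domain concept, unit). Purely hypothetical.
ONTOLOGY = {
    "spindle_temp": ("spindle overheating", "°C"),
    "vibration_rms": ("excessive tool vibration", "mm/s"),
}

def explain(decision_path: list[tuple[str, str, float]], label: str) -> str:
    """Translate (feature, operator, threshold) splits into domain terms."""
    reasons = [
        f"{ONTOLOGY[feat][0]} ({feat} {op} {thr} {ONTOLOGY[feat][1]})"
        for feat, op, thr in decision_path
    ]
    return f"Predicted '{label}' because of " + " and ".join(reasons) + "."

print(explain([("spindle_temp", ">", 80.0), ("vibration_rms", ">", 4.5)], "defect risk"))
```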
Journal Article
Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: a review
by Wahab, Ainuddin Wahid Abdul; Abdullah, Erma Rahayu Mohd Faizal; Sabri, Aznul Qalid Md
in Agnosticism; Artificial Intelligence; Computer Science
2024
In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased through sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attacks such as remote-to-local (R2L) and user-to-root (U2R) attacks. This paper addresses these challenges by providing a foundational discussion of APTs and the limitations of existing detection methods. It then pivots to explore the novel integration of deep learning techniques and Explainable Artificial Intelligence (XAI) to improve APT detection. This paper aims to fill the gaps in the current research by providing a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box models more transparent and interpretable. The objective is to demonstrate the necessity of explainability in APT detection and propose solutions that enhance the trustworthiness and effectiveness of these models. It offers a critical analysis of existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. This paper also suggests future research directions to combat evolving threats, paving the way for more effective and reliable cybersecurity solutions. Overall, this paper emphasizes the importance of explainability in enhancing the performance and trustworthiness of cybersecurity systems.
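For readers unfamiliar with LIME, one of the two XAI methods the review analyzes, here is a minimal sketch of its usual tabular workflow; the classifier, feature names, and labels are placeholders, not the paper's IDS pipeline.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # stand-in network-flow statistics
y = (X[:, 0] * X[:, 2] > 0).astype(int)          # stand-in attack label
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["duration", "src_bytes", "dst_bytes", "count", "srv_count"],
    class_names=["benign", "attack"],
    mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())                             # top local feature contributions
```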
Journal Article
Toward interpretable credit scoring: integrating explainable artificial intelligence with deep learning for credit card default prediction
by Badawy, Mahmoud; Elhosseini, Mostafa; Aljadani, Abdussalam
in Accuracy; Artificial Intelligence; Computational Biology/Bioinformatics
2024
In recent years, the increasing prevalence of credit card usage has raised concerns about accurately predicting and managing credit card defaults. While machine learning and deep learning methods have shown promising results in default prediction, the black-box nature of these models often limits their interpretability and practical adoption. This study presents a new method for predicting credit card default using a combination of deep learning and explainable artificial intelligence (XAI) techniques. Integrating these methods aims to improve the interpretability of the decision-making process involved in credit card default prediction. The proposed approach is evaluated using a real-world dataset and compared to existing state-of-the-art models. Results show that the proposed approach achieves competitive prediction accuracy while providing meaningful insights into the factors driving credit card default risk. The present investigation adds to the increasing body of literature on XAI in the realm of finance. Moreover, it provides a pragmatic approach to assessing credit risk, balancing precision and comprehensibility. In conclusion, the model demonstrates strong potential as a credit risk assessment tool, with an accuracy of 0.8350, sensitivity of 0.8823, and specificity of 0.9879. Among the most important features identified by the model are payment delays and outstanding bill amounts. This study is a step toward more interpretable and transparent credit scoring models.
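The reported accuracy, sensitivity, and specificity relate to a binary confusion matrix in a standard way; the short sketch below shows that arithmetic on invented counts, not the paper's data.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]    # 1 = defaulted (invented labels)
y_pred = [1, 0, 1, 0, 0, 0, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)    # share of true defaulters caught
specificity = tn / (tn + fp)    # share of non-defaulters correctly cleared
print(accuracy, sensitivity, specificity)
```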
Journal Article
Explainable deep learning model for automatic mulberry leaf disease classification
by Chowdhury, Muhammad E. H.; Ayari, Mohamed Arselene; Ahmed, Faruque
in Accuracy; Agricultural production; Agriculture
2023
Mulberry leaves feed Bombyx mori silkworms to generate silk thread. Diseases that affect mulberry leaves have reduced crop and silk yields in sericulture, which produces 90% of the world's raw silk. Manual leaf disease identification is tedious and error-prone. Computer vision can categorize leaf diseases early and overcome the challenges of manual identification. No mulberry leaf deep learning (DL) models have been reported. Therefore, in this study, two types of leaf diseases, leaf rust and leaf spot, along with disease-free leaves, were collected from two regions of Bangladesh. Sericulture experts annotated the leaf images. The images were pre-processed, and 6,000 synthetic images were generated using typical image augmentation methods from the original 764 training images. An additional 218 and 109 images were employed for testing and validation, respectively. In addition, a unique lightweight parallel depth-wise separable CNN model, PDS-CNN, was developed by applying depth-wise separable convolutional layers to reduce parameters, layers, and size while boosting classification performance. Finally, the explainable capability of PDS-CNN was obtained through the use of SHapley Additive exPlanations (SHAP) and evaluated by a sericulture specialist. The proposed PDS-CNN outperforms well-known deep transfer learning models, achieving an accuracy of 95.05 ± 2.86% for three-class classification and 96.06 ± 3.01% for binary classification with only 0.53 million parameters, 8 layers, and a size of 6.3 megabytes. Furthermore, when compared with other well-known transfer models, the proposed model identified mulberry leaf diseases with higher accuracy, fewer parameters, fewer layers, and lower overall size. The visually expressive SHAP explanation images validate the model's findings, aligning with the predictions made by the sericulture specialist. Based on these findings, it is possible to conclude that the explainable AI (XAI)-based PDS-CNN can provide sericulture specialists with an effective tool for accurately categorizing mulberry leaves.
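A hedged sketch of the central building block, depth-wise separable convolutions, follows; the layer sizes and input shape are guesses for illustration, not the published PDS-CNN architecture.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    # SeparableConv2D = depth-wise conv + 1x1 point-wise conv: far fewer
    # parameters than a standard Conv2D with the same number of filters.
    tf.keras.layers.SeparableConv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.SeparableConv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # rust / spot / healthy
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # note the small parameter count relative to a plain CNN
```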
Journal Article
eXplainable artificial intelligence for automatic defect detection in additively manufactured parts using CT scan analysis
by Cersullo, Nicola; Philipp, Jens; Hühne, Christian
in Additive manufacturing; Additive Manufacturing (AM); Algorithms
2025
Additive Manufacturing (AM) has gained significant attention due to its capability to produce complex geometries using various materials, resulting in cost and mass reductions per part. However, metal AM parts often contain internal defects inherent to the manufacturing process. Non-Destructive Testing (NDT), particularly Computed Tomography (CT), is commonly employed for defect analysis. The standard inspection techniques adopted today are costly and time-consuming; therefore, an automatic approach is needed. This paper presents a novel eXplainable Artificial Intelligence (XAI) methodology for defect detection and characterization. To classify pixel data from CT images as pores or inclusions, the proposed method utilizes a Support Vector Machine (SVM), a supervised machine learning algorithm, trained to an Area Under the Curve (AUC) of 0.94. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is subsequently applied to cluster the identified pixels into separate defects, and finally, a convex hull is employed to characterize the identified clusters by their size and shape. The effectiveness of the methodology is evaluated on Ti6Al4V specimens, comparing the results of manual inspection with those of the ML-based approach under the guidance of a domain expert. This work establishes a foundation for automated defect detection, highlighting the crucial role of XAI in ensuring trust in NDT and thereby offering new possibilities for the evaluation of AM components.
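A hedged outline of the three-stage pipeline the abstract describes (classify pixels with an SVM, cluster them with DBSCAN, characterize each cluster with a convex hull); all data here is synthetic and the thresholds are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: SVM classifies per-pixel CT features as defect (1) or background (0).
feats = rng.normal(size=(400, 3))                    # stand-in grey-value features
labels = (feats[:, 0] > 0.5).astype(int)
svm = SVC().fit(feats, labels)

# Pretend these are the (x, y) positions of the pixels the SVM flagged as defects.
coords = rng.uniform(0, 100, size=(400, 2))
defect_xy = coords[svm.predict(feats) == 1]

# Stage 2: DBSCAN groups neighbouring defect pixels into individual defects.
cluster_ids = DBSCAN(eps=5.0, min_samples=3).fit_predict(defect_xy)

# Stage 3: a convex hull characterizes each defect's size and shape.
for cid in set(cluster_ids) - {-1}:                  # -1 = DBSCAN noise
    pts = defect_xy[cluster_ids == cid]
    if len(pts) >= 3:
        hull = ConvexHull(pts)
        # In 2D, ConvexHull.volume is the area and .area is the perimeter.
        print(f"defect {cid}: area={hull.volume:.1f}, perimeter={hull.area:.1f}")
```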
Journal Article
Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
2023
The opacity of deep learning makes its application challenging in the medical field. There is therefore a need for explainable artificial intelligence (XAI) in medicine, so that models and their results can be explained in a manner that humans can understand. This study transfers a high-accuracy computer vision model to medical text tasks via transfer learning and uses the explanatory visualization method known as gradient-weighted class activation mapping (Grad-CAM) to generate heat maps, so that the basis for the model's decision-making can be shown intuitively. The system comprises four modules: pre-processing, word embedding, classifier, and visualization. We used Word2Vec and BERT to compare word embeddings and ResNet and 1-dimensional convolutional neural networks (1D-CNN) to compare classifiers. Finally, a Bi-LSTM was used to perform text classification for direct comparison. With 25 epochs, the model that used pre-trained ResNet on the formalized text gave the best performance (recall of 90.9%, precision of 91.1%, and a weighted F1 score of 90.2%). This study uses ResNet to process medical texts through Grad-CAM-based explainable artificial intelligence and obtains a high-accuracy classification effect; at the same time, Grad-CAM visualization intuitively shows the words to which the model attends when making predictions.
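Grad-CAM itself is compact enough to sketch. The function below assumes a Keras CNN whose final convolutional layer is named "last_conv" (an assumed name); it is a generic illustration of the technique, not the paper's ResNet text pipeline.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="last_conv"):
    """Weight the conv feature maps by the gradient of the top class score."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])   # add batch dimension
        top_score = tf.reduce_max(preds, axis=1)         # predicted-class score
    grads = tape.gradient(top_score, conv_out)           # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pool grads
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)   # weighted sum of maps
    return tf.nn.relu(cam)[0].numpy()                    # keep positive evidence only
```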
Journal Article
XRL-SHAP-Cache: an explainable reinforcement learning approach for intelligent edge service caching in content delivery networks
2024
Content delivery networks (CDNs) play a pivotal role in modern internet infrastructure by enabling efficient content delivery across diverse geographical regions. As an essential component of CDNs, the edge caching scheme directly influences the user experience by determining the caching and eviction of content on edge servers. With the emergence of 5G technology, traditional caching schemes have faced challenges in adapting to increasingly complex and dynamic network environments. Consequently, deep reinforcement learning (DRL) offers a promising solution for intelligent zero-touch network governance. However, the black-box nature of DRL models poses challenges for understanding and trusting their decisions. In this paper, we propose an explainable reinforcement learning (XRL)-based intelligent edge service caching approach, namely XRL-SHAP-Cache, which combines DRL with an explainable artificial intelligence (XAI) technique for cache management in CDNs. Instead of focusing solely on achieving performance gains, this study introduces a novel paradigm for providing interpretable caching strategies, thereby establishing a foundation for future transparent and trustworthy edge caching solutions. Specifically, a multi-level cache scheduling framework for CDNs was formulated theoretically, with the D3QN-based caching scheme serving as the targeted interpretable model. Subsequently, by integrating Deep-SHAP into our framework, the contribution of each state input feature to the agent's Q-value output was calculated, thereby providing valuable insights into the decision-making process. The proposed XRL-SHAP-Cache approach was evaluated through extensive experiments that demonstrate the behavior of the scheduling agent in the face of different environmental inputs. The results demonstrate its strong explainability under various real-life scenarios while maintaining superior performance compared to traditional caching schemes in terms of cache hit ratio, quality of service (QoS), and space utilization.
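The Deep-SHAP step the abstract mentions can be sketched independently of the full D3QN agent; the tiny Q-network and state layout below are invented stand-ins, not XRL-SHAP-Cache itself.

```python
import numpy as np
import shap
import tensorflow as tf

state_dim, n_actions = 8, 4                      # e.g. cache statistics -> caching actions
q_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(state_dim,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(n_actions),            # one Q-value per caching action
])

background = np.random.default_rng(0).normal(size=(100, state_dim)).astype("float32")
explainer = shap.DeepExplainer(q_net, background)
shap_values = explainer.shap_values(background[:5])   # per-action feature attributions
print(np.asarray(shap_values).shape)
```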
Journal Article