Catalogue Search | MBRL
Explore the vast range of titles available.
751 result(s) for "explainable artificial intelligence (XAI)"
A federated incremental blockchain framework with privacy preserving XAI optimization for securing healthcare data
2025
Federated learning (FL) has become popular in machine learning for protecting data privacy, and its distinctive distributed data-processing characteristics have garnered widespread attention. However, implementing FL faces many challenges: it is difficult to decide on a compromise between model security, data privacy, and system efficiency, often requiring efficiency to be given up for privacy, traceability, interpretability, and security. This paper proposes privacy-preserving federated incremental learning with blockchain-optimized explainable artificial intelligence (PPFILB-OXAI), leveraging the benefits of blockchain, federated incremental learning (FIL), and explainable artificial intelligence (XAI) with optimization. A Chaotic Bobcat Optimization Algorithm (CBOA) is introduced to XAI for selecting the most important features from the dataset. The CBOA mimics the instinctive behaviors of wild bobcats, incorporating a chaotic operator to randomly generate the population during the selection phase. It is inspired by the bobcat's hunting tactics, particularly the approach and pursuit of prey. Throughout the algorithm's iterations, the most optimal feature solution is gradually identified. The FIL algorithm can adapt to increasing resources in real time without retraining, while extracting meaningful patterns from the collective client-side data. Meanwhile, blockchain technology makes it possible to handle medical data securely and transparently, and XAI improves the clarity and understanding of model decisions. To coordinate client privacy protection, PPFILB-OXAI integrates the blockchain process, FIL, and the privacy approach; it then uses an aggregation step to filter out aberrant models. Lastly, an Entropy Deep Belief Network (EDBN) classifies and identifies attacks. PPFILB-OXAI delivers the best performance on the Breast Cancer Wisconsin and heart disease datasets in terms of precision, recall, F-measure, accuracy, loss, latency, and throughput. For heart disease, the precision, recall, F-measure, and accuracy of the proposed system are 94.87%, 96.73%, 95.79%, and 95.71%, respectively. For Breast Cancer Wisconsin, the precision, recall, F-measure, and accuracy are 97.13%, 97.70%, 97.41%, and 96.84%, respectively.
Journal Article
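The chaotic population initialization that the CBOA entry above relies on can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a logistic map and binary feature masks; it is not the authors' published code.

```python
import numpy as np

def logistic_map(x, r=4.0):
    # Classic chaotic map; at r = 4 it is fully chaotic on (0, 1).
    return r * x * (1.0 - x)

def chaotic_init(pop_size, n_features, seed=0.7):
    # Build a population whose initial positions follow a chaotic
    # sequence rather than uniform random draws -- the role the chaotic
    # operator plays during the selection phase described above.
    pop = np.empty((pop_size, n_features))
    x = seed
    for i in range(pop_size):
        for j in range(n_features):
            x = logistic_map(x)
            pop[i, j] = x
    # Threshold continuous positions into binary feature-selection masks.
    return (pop > 0.5).astype(int)

masks = chaotic_init(pop_size=20, n_features=30)
print(masks.shape, masks.sum(axis=1)[:5])  # selected-feature counts
```

Each mask would then be scored by a fitness function (e.g., classifier accuracy on the selected features), with the approach-and-pursuit updates refining the population over iterations.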
A Malware Detection and Extraction Method for the Related Information Using the ViT Attention Mechanism on Android Operating System
2023
Artificial intelligence (AI) is increasingly being utilized in cybersecurity, particularly for detecting malicious applications. However, the black-box nature of AI models presents a significant challenge: this lack of transparency makes it difficult to understand and trust the results. To address this, it is necessary to incorporate explainability into the detection model, yet there is insufficient research that provides reasons why applications are detected as malicious or explains their behavior. In this paper, we propose a Vision Transformer (ViT)-based malware detection model together with malicious-behavior extraction using an attention map, achieving both high detection accuracy and high interpretability. Malware detection uses a ViT-based model that takes as input an image converted from an application. ViT offers a significant advantage for image detection tasks by leveraging attention mechanisms, enabling robust interpretation and understanding of the intricate patterns within the images. An attention map is generated from the attention values produced during the detection process and is used to identify the factors the model deems important; class and method names are then extracted and provided based on those factors. The performance of the detection was validated using real-world datasets. The malware detection accuracy was 80.27%, which is high compared to other models used for image-based malware detection. The interpretability was measured in the same way as the F1-score, resulting in an interpretability score of 0.70. This score is superior to existing interpretable machine learning (ML)-based methods such as Drebin, LIME, and XMal. By analyzing malicious applications, we also confirmed that the extracted classes and methods are related to malicious behavior. With the proposed method, security experts can understand the reason behind the model's detection and the behavior of malicious applications. Given the growing importance of explainable artificial intelligence in cybersecurity, this method is expected to make a significant contribution to the field.
Journal Article
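The entry above builds its explanations from ViT attention values. One widely used way to aggregate per-layer attention into a single map is attention rollout; the sketch below shows that general technique on toy data and is an assumption on our part, not the paper's exact procedure.

```python
import numpy as np

def attention_rollout(attn_layers):
    """Aggregate per-layer ViT attention matrices into one map.

    attn_layers: list of (num_heads, tokens, tokens) arrays, one per layer.
    Returns a (tokens, tokens) rollout matrix; row 0 (the CLS token)
    indicates which image patches drove the prediction.
    """
    tokens = attn_layers[0].shape[-1]
    rollout = np.eye(tokens)
    for attn in attn_layers:
        a = attn.mean(axis=0)                  # average over heads
        a = a + np.eye(tokens)                 # account for residual connections
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = a @ rollout
    return rollout

# Toy example: 3 layers, 4 heads, 1 CLS token + 9 patch tokens.
rng = np.random.default_rng(0)
layers = [rng.random((4, 10, 10)) for _ in range(3)]
cls_map = attention_rollout(layers)[0, 1:]   # CLS attention over patches
print(cls_map.reshape(3, 3))
```

The high-attention patches would then be mapped back to the class and method names encoded at those image locations.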
Development of a Prediction Method of Cell Density in Autotrophic/Heterotrophic Microorganism Mixtures by Machine Learning Using Absorbance Spectrum Data
by Akihito Nakanishi, Hiroaki Fukunishi, Fumihito Eguchi
in Absorbance, Algorithms, Artificial intelligence
2022
Microflora are actively used to produce value-added materials in industry, and each cell density should be controlled for stable microflora use. In this study, a simple system for evaluating cell density was constructed with artificial intelligence (AI) using the absorbance spectrum data of microflora. To set up the system, a machine-learning-based prediction system for cell density was constructed using the spectral data as features, from a mixture of Saccharomyces cerevisiae and Chlamydomonas reinhardtii. When predicting cell density with extremely randomized trees, the coefficient of determination (R²) was 0.8495 when the cell densities of S. cerevisiae and C. reinhardtii were shifted and fixed, respectively; conversely, when they were fixed and shifted, the R² was 0.9232. To explain the prediction system, an extremely randomized trees regressor (a decision-tree-based ensemble learning method) was used as the machine learning algorithm, and Shapley additive explanations (SHAP) were used as the explainable AI (XAI) to interpret the features contributing to the prediction results. The SHAP analyses showed that not only the optical density but also the absorbance of the Soret and Q bands, derived from the chloroplasts of C. reinhardtii, contributed to the prediction as features. This simple cell-density evaluation system could have an industrial impact.
Journal Article
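A minimal sketch of the pipeline this entry describes, using scikit-learn's ExtraTreesRegressor and the shap package. The data here is a synthetic stand-in; the study itself used measured absorbance spectra.

```python
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(42)
X = rng.random((200, 100))          # 200 samples x 100 wavelength bins
y = X[:, 40] * 2.0 + X[:, 70] + rng.normal(0, 0.05, 200)  # synthetic density

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
print("R^2:", model.score(X, y))

# TreeExplainer attributes each prediction to individual wavelength bins,
# analogous to identifying the Soret and Q bands as contributing features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])
print("Most influential bin:", np.abs(shap_values).mean(axis=0).argmax())
```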
Explainable artificial intelligence (XAI) in finance: a systematic literature review
by Černevičienė, Jurgita; Kabašinskas, Audrius
in Artificial Intelligence, Artificial neural networks, Computer Science
2024
As the range of decisions made by Artificial Intelligence (AI) expands, the need for Explainable AI (XAI) becomes increasingly critical. The reasoning behind the specific outcomes of complex and opaque financial models requires a thorough justification to improve risk assessment, minimise the loss of trust, and promote a more resilient and trustworthy financial ecosystem. This Systematic Literature Review (SLR) identifies 138 relevant articles from 2005 to 2022 and highlights empirical examples demonstrating XAI's potential benefits in the financial industry. We classified the articles according to the financial tasks addressed by AI using XAI, the variation in XAI methods between applications and tasks, and the development and application of new XAI methods. The most popular financial tasks addressed by AI using XAI were credit management, stock price predictions, and fraud detection. The three most commonly employed AI black-box techniques in finance whose explainability was evaluated were Artificial Neural Networks (ANN), Extreme Gradient Boosting (XGBoost), and Random Forest. Most of the examined publications utilise feature importance, Shapley additive explanations (SHAP), and rule-based methods. In addition, they employ explainability frameworks that integrate multiple XAI techniques. We also concisely define the existing challenges, requirements, and unresolved issues in applying XAI in the financial sector.
Journal Article
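As a generic illustration of the pairing this review reports most often, XGBoost explained with SHAP, here is a sketch on synthetic credit-style data. The feature semantics are hypothetical and not drawn from any surveyed paper.

```python
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.random((500, 4))             # e.g. income, debt, age, utilization
y = (X[:, 1] * 0.8 - X[:, 0] * 0.5 + rng.normal(0, 0.1, 500) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Global importance: mean |SHAP| per feature -- the "feature importance"
# style of explanation most of the surveyed publications use.
print(np.abs(shap_values).mean(axis=0))
```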
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System
2022
The emerging field of eXplainable AI (XAI) in the medical domain is considered to be of utmost importance. Incorporating explanations in the medical domain, with respect to legal and ethical AI, is necessary to understand detailed decisions, results, and the current status of a patient's condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, a significant overview of case studies with open-box architecture, medical open datasets, and future improvements. Differences between AI and XAI methods are outlined, with recent XAI methods categorized as (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. Details of XAI characteristics and future healthcare explainability are included prominently, and the prerequisites provide insights for brainstorming sessions before beginning a medical XAI project. A practical case study illustrates recent XAI progress leading to advanced developments within the medical field. Ultimately, this survey proposes critical ideas surrounding a user-in-the-loop approach, with an emphasis on human-machine collaboration, to better produce explainable solutions. Details of an XAI feedback system for human rating-based evaluation provide intelligible insights into a constructive method for producing human-enforced explanation feedback. Limitations of XAI ratings, scores, and grading have long been present; therefore, a novel XAI recommendation system and XAI scoring system are designed in this work. Additionally, this paper underscores the importance of implementing explainable solutions in the high-impact medical field.
Journal Article
Survey on ontology-based explainable AI in manufacturing
by Elmhadhbi, Linda; Naqvi, Muhammad Raza; Karray, Mohamed Hedi
in Advanced manufacturing technologies, Algorithms, Artificial intelligence
2024
Artificial intelligence (AI) has become an essential tool for manufacturers seeking to optimize their production processes, reduce costs, and improve product quality. However, the complexity of the underlying mechanisms of AI systems can make it difficult for humans to understand and trust AI-driven decisions. Explainable AI (XAI) is a rapidly evolving field that addresses this challenge by providing human-understandable explanations of AI decisions. Based on a systematic literature survey, we explore the latest techniques and approaches that are helping manufacturers gain transparency into the decision-making processes of their AI systems. In this survey, we focus on two of the most exciting areas of XAI: ontology-based and semantic-based XAI (O-XAI and S-XAI, respectively), which provide human-readable explanations of AI decisions by exploiting semantic information. These explanations are presented in natural language and are designed to be easily understood by non-experts. By translating the decision paths taken by AI algorithms into meaningful explanations through semantics, O-XAI and S-XAI enable humans to identify the various cross-cutting concerns that influence the decisions made by the AI system. This information can be used to improve the performance of the AI system, identify potential biases in the system, and ensure that decisions are aligned with the goals and values of the manufacturing organization. Additionally, we highlight the benefits and challenges of using O-XAI and S-XAI in manufacturing and discuss the potential for future research, aiming to provide valuable guidance for researchers and practitioners looking to leverage the power of ontologies and general semantics for XAI.
Journal Article
Artificial Intelligence for Predictive Maintenance Applications: Key Components, Trustworthiness, and Future Trends
2024
Predictive maintenance (PdM) is a policy that applies data and analytics to predict when a component in a real system is failing or anomalies are appearing, so that maintenance can be performed before a breakdown takes place. Using cutting-edge technologies such as data analytics and artificial intelligence (AI) enhances the performance and accuracy of predictive maintenance systems and increases their autonomy and adaptability in complex and dynamic working environments. This paper reviews recent developments in AI-based PdM, focusing on key components, trustworthiness, and future trends. The state-of-the-art (SOTA) techniques, challenges, and opportunities associated with AI-based PdM are first analyzed. The integration of AI technologies into PdM in real-world applications, human-robot interaction, the ethical issues emerging from the use of AI, and the testing and validation of the developed policies are then discussed. Through a comprehensive survey of current SOTA techniques, opportunities, and challenges associated with AI-based PdM, this study identifies promising areas for future research, such as digital twins, the metaverse, generative AI, collaborative robots (cobots), blockchain technology, trustworthy AI, and the Industrial Internet of Things (IIoT).
Journal Article
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery
by Lund, Jonathan; Zhang, Yiming; Weng, Ying
in Algorithms, Artificial intelligence, Decision making
2022
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications difficult in clinical use. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitations of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decisions and explanations of the model. In this review, we survey recent trends in medical diagnosis and surgical applications using XAI. We searched articles published between 2019 and 2021 in PubMed, IEEE Xplore, the Association for Computing Machinery, and Google Scholar, included the articles that met the selection criteria, and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis and illustrate how XAI can be applied in medical applications. Finally, we summarize the XAI methods utilized in medical applications and the challenges that researchers have met, and discuss future research directions. The survey indicates that medical XAI is a promising research direction, and this study aims to serve as a reference for medical experts and AI scientists when designing medical XAI applications.
Journal Article
A Survey of Explainable Artificial Intelligence for Smart Cities
by Maddikunta, Praveen Kumar Reddy; Pandya, Sharnil; Javed, Abdul Rehman
in Algorithms, Artificial intelligence, City planning
2023
The emergence of Explainable Artificial Intelligence (XAI) has enhanced people's lives and envisioned the concept of smart cities through informed actions, enhanced user interpretations and explanations, and firm decision-making processes. XAI systems can unbox the potential of black-box AI models and describe them explicitly. This study comprehensively surveys current and future developments in XAI technologies for smart cities. It also highlights the societal, industrial, and technological trends that drive the move towards XAI for smart cities, and presents the key enabling XAI technologies for smart cities in detail. The paper also discusses the concept of XAI for smart cities, various XAI technology use cases, challenges, applications, possible alternative solutions, and current and future research enhancements. Research projects and activities, including standardization efforts toward developing XAI for smart cities, are outlined in detail. The lessons learned from state-of-the-art research are summarized, and various technical challenges are discussed to shed new light on future research possibilities. This study on XAI for smart cities is a first-of-its-kind, rigorous, and detailed study to assist future researchers in implementing XAI-driven systems, architectures, and applications for smart cities.
Journal Article
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
2021
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction of any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction: it generates simulated data around the instance by random perturbation and obtains feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods cause shifts in the data and instability in the generated explanations, so that different explanations can be generated for the same prediction. These are critical issues that can prevent the deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we use Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbours (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
Journal Article
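The DLIME procedure summarized above maps naturally onto standard scikit-learn components. The following sketch assumes illustrative choices (synthetic data, eight clusters, a ridge surrogate, a random-forest black box) and is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 5))
y = X[:, 0] * 3 + X[:, 3] + rng.normal(0, 0.1, 300)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

# 1) Deterministically partition the training data with AHC.
labels = AgglomerativeClustering(n_clusters=8).fit_predict(X)

# 2) Use KNN to assign the instance being explained to a cluster.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
instance = X[0:1]
members = X[labels == knn.predict(instance)[0]]

# 3) Fit a simple interpretable model on that cluster, against the
#    black box's predictions, and read its coefficients as the explanation.
surrogate = Ridge().fit(members, black_box.predict(members))
print("Local feature weights:", surrogate.coef_)
```

Because AHC and KNN involve no random sampling, re-running the procedure on the same instance yields the same explanation, which is the stability property the paper emphasizes.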