Search Results

71 results for "model agnostic explanations"
Can local explanation techniques explain linear additive models?
Local model-agnostic additive explanation techniques decompose the predicted output of a black-box model into additive feature importance scores. Questions have been raised about the accuracy of the produced local additive explanations. We investigate this by studying whether some of the most popular explanation techniques can accurately explain the decisions of linear additive models. We show that even though the explanations generated by these techniques are themselves linear and additive, they can fail to provide accurate explanations when explaining linear additive models. In the experiments, we measure the accuracy of additive explanations, as produced by, e.g., LIME and SHAP, along with the non-additive explanations of Local Permutation Importance (LPI), when explaining linear regression, logistic regression, and Gaussian naive Bayes models over 40 tabular datasets. We also investigate the degree to which different factors, such as the number of numerical, categorical, or correlated features, the predictive performance of the black-box model, the explanation sample size, the similarity metric, and the pre-processing technique used on the dataset, can directly affect the accuracy of local explanations.
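A minimal, hedged sketch of the kind of check this abstract describes, not the authors' experimental setup: for a linear model, the exact additive contribution of feature i at instance x relative to the training mean is w_i * (x_i - mean_i), so LIME and SHAP attributions can be compared against it. The synthetic data and all settings below are illustrative assumptions.

```python
# Hedged sketch: compare LIME/SHAP attributions to a linear model's exact contributions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
import shap                                          # pip install shap
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
model = LinearRegression().fit(X, y)
x = X[0]

# Exact additive contributions of the linear model (also its SHAP values
# under feature independence).
exact = model.coef_ * (x - X.mean(axis=0))

# Model-agnostic SHAP values for a generic prediction function.
shap_values = shap.Explainer(model.predict, X)(x.reshape(1, -1)).values[0]

# LIME's local surrogate weights (discretized features by default, so the
# weights are on a different scale than the exact contributions).
lime_exp = LimeTabularExplainer(X, mode="regression").explain_instance(
    x, model.predict, num_features=5)

print("exact:", np.round(exact, 3))
print("shap :", np.round(shap_values, 3))
print("lime :", lime_exp.as_list())
```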
A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
eXplainable artificial intelligence (XAI) methods have emerged to convert the black box of machine learning (ML) models into a more digestible form. These methods help to communicate how the model works, with the aim of making ML models more transparent and increasing end-users' trust in their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) are two widely used XAI methods, particularly with tabular data. In this perspective piece, the way the explainability metrics of these two methods are generated is discussed, and a framework for the interpretation of their outputs is proposed, highlighting their weaknesses and strengths. Specifically, their outcomes in terms of model dependency and in the presence of collinearity among the features are discussed, relying on a case study from the biomedical domain (classification of individuals with or without myocardial infarction). The results indicate that SHAP and LIME are highly affected by the adopted ML model and by feature collinearity, which can result in unrealistic explanations, raising a note of caution on their usage and interpretation. This perspective discusses these two issues through two case studies and provides possible solutions to mitigate their impact.
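A hedged, synthetic illustration of the two issues the perspective raises (not its myocardial-infarction case study): x2 is almost a copy of x1, only x1 truly drives the target, yet different fitted models can split SHAP credit between the correlated features very differently.

```python
# Hedged sketch: collinearity plus model choice shifts model-agnostic SHAP attributions.
import numpy as np
import shap  # pip install shap
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1 + rng.normal(scale=0.05, size=1000)    # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3 * x1 + rng.normal(scale=0.1, size=1000)  # only x1 matters causally

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X, y)
    sv = shap.Explainer(model.predict, X[:100])(X[:100]).values
    # Mean |SHAP| per feature: how each fitted model distributes credit
    # between the two correlated features.
    print(type(model).__name__, np.abs(sv).mean(axis=0).round(2))
```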
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction, generating simulated data around the instance through random perturbation, and obtaining feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods result in shifts in data and instability in the generated explanations, where different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data together and K-Nearest Neighbour (KNN) to select the relevant cluster for the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), where we quantitatively measure the stability and faithfulness of DLIME compared to LIME.
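A short sketch of the pipeline as the abstract describes it (AHC to partition the training data, KNN to pick the relevant cluster, then a simple surrogate). The function name and the choice of Ridge as the surrogate are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the DLIME idea from the abstract, not the authors' implementation:
# replace LIME's random perturbations with a deterministic neighbourhood chosen by
# Agglomerative Hierarchical Clustering + KNN, then fit an interpretable surrogate.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_style_explanation(black_box_predict, X_train, x, n_clusters=8):
    # 1) Deterministically partition the training data with AHC.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # 2) Pick the cluster the instance to be explained belongs to, via KNN.
    cluster = KNeighborsClassifier(n_neighbors=5).fit(X_train, labels).predict([x])[0]
    neighbourhood = X_train[labels == cluster]
    # 3) Train a simple model on that cluster, using the black box's outputs as targets.
    surrogate = Ridge().fit(neighbourhood, black_box_predict(neighbourhood))
    return surrogate.coef_  # per-feature importance for this locality; identical on every run
```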
Stable and actionable explanations of black-box models through factual and counterfactual rules
Recent years have witnessed the rise of accurate but obscure classification models that hide the logic of their internal decision processes. Explaining the decision taken by a black-box classifier on a specific input instance is therefore of striking interest. We propose a local rule-based model-agnostic explanation method providing stable and actionable explanations. An explanation consists of a factual logic rule, stating the reasons for the black-box decision, and a set of actionable counterfactual logic rules, proactively suggesting the changes in the instance that lead to a different outcome. Explanations are computed from a decision tree that mimics the behavior of the black-box locally to the instance to explain. The decision tree is obtained through a bagging-like approach that favors stability and fidelity: first, an ensemble of decision trees is learned from neighborhoods of the instance under investigation; then, the ensemble is merged into a single decision tree. Neighbor instances are synthetically generated through a genetic algorithm whose fitness function is driven by the black-box behavior. Experiments show that the proposed method advances the state-of-the-art towards a comprehensive approach that successfully covers stability and actionability of factual and counterfactual explanations.
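A hedged sketch of just the factual-rule part, using a single scikit-learn decision tree as the local surrogate. The actual method builds the neighbourhood with a genetic algorithm, merges an ensemble of trees, and also derives counterfactual rules from leaves with a different outcome; the function below is an illustrative simplification.

```python
# Hedged sketch (not the paper's full method): read a factual rule for instance x off
# a decision tree fitted locally on black-box labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def factual_rule(black_box_predict, neighbourhood, x, feature_names):
    surrogate = DecisionTreeClassifier(max_depth=4).fit(
        neighbourhood, black_box_predict(neighbourhood))
    node_ids = surrogate.decision_path(x.reshape(1, -1)).indices
    premises = []
    for node in node_ids[:-1]:  # all internal nodes on the root-to-leaf path
        f = surrogate.tree_.feature[node]
        thr = surrogate.tree_.threshold[node]
        op = "<=" if x[f] <= thr else ">"
        premises.append(f"{feature_names[f]} {op} {thr:.2f}")
    outcome = surrogate.predict(x.reshape(1, -1))[0]
    return " AND ".join(premises) + f"  =>  {outcome}"
```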
An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients
In recent years, artificial intelligence-based computer-aided diagnosis (CAD) systems for hepatitis have made great progress. In particular, complex models such as deep learning achieve better performance than simple ones due to the nonlinear hypotheses of real-world clinical data. However, a complex model acts as a black box that does not reveal why it makes a certain decision, which causes clinicians to distrust the model. To address these issues, an explainable artificial intelligence (XAI) framework is proposed in this paper to give global and local interpretations of the auxiliary diagnosis of hepatitis while retaining good prediction performance. First, a public hepatitis classification benchmark from UCI is used to test the feasibility of the framework. Then, both transparent and black-box machine learning models are employed to forecast hepatitis deterioration. The transparent models logistic regression (LR), decision tree (DT), and k-nearest neighbor (KNN) are picked, while the black-box models eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), and random forest (RF) are selected. Finally, SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Partial Dependence Plots (PDP) are utilized to improve model interpretation for liver disease. The experimental results show that the complex models outperform the simple ones; the developed RF achieves the highest accuracy (91.9%) among all models. The proposed framework, combining global and local interpretable methods, improves the transparency of complex models and gives insight into their judgments, thereby guiding the treatment strategy and improving the prognosis of hepatitis patients. In addition, the proposed framework could also assist clinical data scientists in designing a more appropriate structure for CAD systems.
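A hedged sketch of this kind of global-plus-local pipeline. The scikit-learn breast-cancer benchmark stands in for the UCI hepatitis data, and the model settings and the permutation-based SHAP explainer are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of a global + local interpretation pipeline (SHAP, PDP, LIME).
import numpy as np
import shap                                          # pip install shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from lime.lime_tabular import LimeTabularExplainer   # pip install lime

X, y = load_breast_cancer(return_X_y=True)           # stand-in for the hepatitis data
rf = RandomForestClassifier(random_state=0).fit(X, y)

def predict_pos(data):
    return rf.predict_proba(data)[:, 1]

# Global view 1: mean |SHAP| per feature for the positive-class probability.
sv = shap.Explainer(predict_pos, X[:100])(X[:100]).values
print("mean |SHAP|:", np.abs(sv).mean(axis=0).round(3))

# Global view 2: partial dependence of the prediction on feature 0.
pdp = partial_dependence(rf, X[:200], features=[0])

# Local view: LIME explanation for a single instance.
lime_exp = LimeTabularExplainer(X, mode="classification").explain_instance(
    X[0], rf.predict_proba, num_features=5)
print(lime_exp.as_list())
```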
eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing
Artificial Intelligence (AI) has achieved immense progress in recent years across a wide array of application domains, with biomedical imaging and sensing emerging as particularly impactful areas. However, the integration of AI in safety-critical fields, particularly biomedical domains, continues to face a major challenge of explainability arising from the opacity of complex prediction models. Overcoming this obstacle falls within the realm of eXplainable Artificial Intelligence (XAI), which is widely acknowledged as an essential aspect for successfully implementing and accepting AI techniques in practical applications to ensure transparency, fairness, and accountability in the decision-making processes and mitigate potential biases. This article provides a systematic cross-domain review of XAI techniques applied to quantitative prediction tasks, with a focus on their methodological relevance and potential adaptation to biomedical imaging and sensing. To achieve this, following PRISMA guidelines, we conducted an analysis of 44 Q1 journal articles that utilised XAI techniques for prediction applications across different fields where quantitative databases were used, and their contributions to explaining the predictions were studied. As a result, 13 XAI techniques were identified for prediction tasks. Shapley Additive eXPlanations (SHAP) was identified in 35 out of 44 articles, reflecting its frequent computational use for feature-importance ranking and model interpretation. Local Interpretable Model-Agnostic Explanations (LIME), Partial Dependence Plots (PDPs), and Permutation Feature Index (PFI) ranked second, third, and fourth in popularity, respectively. The study also recognises theoretical limitations of SHAP and related model-agnostic methods, such as their additive and causal assumptions, which are particularly critical in heterogeneous biomedical data. Furthermore, a synthesis of the reviewed studies reveals that while many provide computational evaluation of explanations, none include structured human–subject usability validation, underscoring an important research gap for clinical translation. Overall, this study offers an integrated understanding of quantitative XAI techniques, identifies methodological and usability gaps for biomedical adaptation, and provides guidance for future research aimed at safe and interpretable AI deployment in biomedical imaging and sensing.
Explainable AI: Machine Learning Interpretation in Blackcurrant Powders
Recently, explainability in machine and deep learning has become an important area of research and interest, owing both to the increasing use of artificial intelligence (AI) methods and to the need to understand the decisions made by models. Explainable artificial intelligence (XAI) is driven by growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions made by models more transparent as well as more effective. In this study, models from the ‘glass box’ group (including Decision Tree) and the ‘black box’ group (including Random Forest) were proposed to identify selected types of currant powders. The learning process of these models was evaluated with indicators such as accuracy, precision, recall, and F1-score, and visualized using Local Interpretable Model-agnostic Explanations (LIME) to predict the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for currant powder interpretability. The accuracy, precision, recall, and F1-score for Bagging_100 all reached values of approximately 0.979, while DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963, respectively. These models achieved classifier performance measures greater than 96%. In the future, XAI using model-agnostic methods can be an additional important tool to help analyze data, including food products, even online.
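A hedged sketch of the reported evaluation, using synthetic data in place of the texture-descriptor dataset; the feature names come from the abstract, but the class names, model settings, and any numbers produced here are illustrative assumptions.

```python
# Hedged sketch with synthetic stand-in data (not the blackcurrant texture dataset):
# compute accuracy/precision/recall/F1 for a bagging ensemble and a LIME explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

features = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]
X, y = make_classification(n_samples=600, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)  # "Bagging_100"
pred = model.predict(X_te)
prec, rec, f1, _ = precision_recall_fscore_support(y_te, pred, average="weighted")
print(f"accuracy={accuracy_score(y_te, pred):.3f} precision={prec:.3f} "
      f"recall={rec:.3f} f1={f1:.3f}")

# LIME view of a single powder-sample prediction.
explainer = LimeTabularExplainer(X_tr, feature_names=features, mode="classification")
print(explainer.explain_instance(X_te[0], model.predict_proba, num_features=5).as_list())
```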
Comprehensible Machine-Learning-Based Models for the Pre-Emptive Diagnosis of Multiple Sclerosis Using Clinical Data: A Retrospective Study in the Eastern Province of Saudi Arabia
Multiple Sclerosis (MS) is characterized by chronic deterioration of the nervous system, mainly the brain and the spinal cord. An individual with MS develops the condition when the immune system begins attacking nerve fibers and the myelin sheathing that covers them, affecting the communication between the brain and the rest of the body and eventually causing permanent damage to the nerve. Patients with MS (pwMS) might experience different symptoms depending on which nerve was damaged and how much damage it has sustained. Currently, there is no cure for MS; however, there are clinical guidelines that help control the disease and its accompanying symptoms. Additionally, no specific laboratory biomarker can precisely identify the presence of MS, leaving specialists with a differential diagnosis that relies on ruling out other possible diseases with similar symptoms. Since the emergence of Machine Learning (ML) in the healthcare industry, it has become an effective tool for uncovering hidden patterns that aid in diagnosing several ailments. Several studies have been conducted to diagnose MS using ML and Deep Learning (DL) models trained using MRI images, achieving promising results. However, complex and expensive diagnostic tools are needed to collect and examine imaging data. Thus, the intention of this study is to implement a cost-effective, clinical data-driven model that is capable of diagnosing pwMS. The dataset was obtained from King Fahad Specialty Hospital (KFSH) in Dammam, Saudi Arabia. Several ML algorithms were compared, namely Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extra Trees (ET). The results indicated that the ET model outpaced the rest with an accuracy of 94.74%, recall of 97.26%, and precision of 94.67%.
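A hedged sketch of the kind of model comparison the abstract reports. Synthetic data stands in for the KFSH clinical records (which are not public), and XGBoost is omitted to keep the sketch within scikit-learn; the scores it prints are illustrative, not the study's results.

```python
# Hedged sketch: cross-validated comparison of the classifiers named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              ExtraTreesClassifier)

X, y = make_classification(n_samples=300, n_features=20, weights=[0.8, 0.2],
                           random_state=0)  # stand-in for the clinical dataset
models = {
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "ET": ExtraTreesClassifier(random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    rec = cross_val_score(clf, X, y, cv=5, scoring="recall")
    print(f"{name}: accuracy={acc.mean():.3f}, recall={rec.mean():.3f}")
```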
Ensemble-based genetic algorithm explainer with automized image segmentation: A case study on melanoma detection dataset
Explainable Artificial Intelligence (XAI) makes AI understandable to the human user, particularly when the model is complex and opaque. Local Interpretable Model-agnostic Explanations (LIME) has an image explainer package that is used to explain deep learning models. The LIME image explainer needs some parameters to be manually tuned by an expert in advance, including the number of top features to be shown and the number of superpixels in the segmented input image. This parameter tuning is a time-consuming task. Hence, with the aim of developing an image explainer that automates image segmentation, this paper proposes the Ensemble-based Genetic Algorithm Explainer (EGAE) for melanoma cancer detection, which automatically detects and presents the informative sections of the image to the user. EGAE has three phases. First, the sparsity of chromosomes in the GAs is determined heuristically. Then, multiple GAs are executed consecutively; these GAs differ in the number of superpixels in the input image, which results in different chromosome lengths. Finally, the results of the GAs are ensembled using consensus and majority voting. This paper also introduces how Euclidean distance can be used to calculate the distance between the actual explanation (delineated by experts) and the calculated explanation (computed by the explainer) for accuracy measurement. Experimental results on a melanoma dataset show that EGAE automatically detects informative lesions and improves the accuracy of explanation in comparison with LIME. The Python code for EGAE, the ground truths delineated by clinicians, and the melanoma detection dataset are available at https://github.com/KhaosResearch/EGAE.
  • EGAE decreases user intervention in comparison with the LIME image explainer.
  • The surrogate model in LIME is substituted by a genetic algorithm in EGAE.
  • EGAE illustrates the explanation through consensus and majority voting strategies.
  • The validity of the results has been evaluated on a melanoma detection dataset.
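A hedged sketch of the core idea only: a binary chromosome over superpixels whose fitness is driven by the black box's melanoma score. `predict_melanoma_prob` and `superpixel_masks` are assumed, hypothetical inputs, and the real EGAE additionally ensembles several GAs with different superpixel counts (see the authors' repository above).

```python
# Hedged sketch of the core idea only, not the authors' EGAE code: evolve a sparse
# binary selection of superpixels whose retention keeps the black-box melanoma score high.
import numpy as np

def ga_explain(predict_melanoma_prob, image, superpixel_masks,
               pop_size=40, generations=50, sparsity=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = len(superpixel_masks)
    pop = (rng.random((pop_size, n)) < sparsity).astype(int)  # sparse binary chromosomes

    def fitness(chrom):
        kept = sum((m for m, keep in zip(superpixel_masks, chrom) if keep),
                   start=np.zeros_like(image))
        # Reward preserving the black-box score while keeping few superpixels.
        return predict_melanoma_prob(image * kept) - 0.01 * chrom.sum()

    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # select the top half
        cuts = rng.integers(1, n, size=pop_size // 2)
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cuts)])        # one-point crossover
        flips = rng.random(children.shape) < 0.02                 # bit-flip mutation
        pop = np.vstack([parents, np.where(flips, 1 - children, children)])

    return pop[np.argmax([fitness(c) for c in pop])]              # best superpixel subset
```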
Recent Applications of Explainable AI (XAI): A Systematic Literature Review
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.