Search Results

6,385 result(s) for "Explanation."
Understanding how science explains the world
\"All people desire to know. We want to not only know what has happened, but also why it happened, how it happened, whether it will happen again, whether it can be made to happen or not happen, and so on. In short, what we want are explanations. Asking and answering explanatory questions lies at the very heart of scientific practice. The primary aim of this book is to help readers understand how science explains the world. This book explores the nature and contours of scientific explanation, how such explanations are evaluated, as well as how they lead to knowledge and understanding. As well as providing an introduction to scientific explanation, it also tackles misconceptions and misunderstandings, while remaining accessible to a general audience with little or no prior philosophical understanding\"-- Provided by publisher.
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black box Machine Learning (ML) algorithms. LIME typically creates an explanation for a single prediction by any ML model by learning a simpler interpretable model (e.g., a linear classifier) around the prediction: it generates simulated data around the instance by random perturbation and obtains feature importance through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation methods cause shifts in the data and instability in the generated explanations, so that different explanations can be generated for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we use Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbours (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME), whose stability and faithfulness we quantitatively compare against LIME.
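For readers who want to see the pipeline described in this abstract concretely, here is a minimal sketch of the DLIME idea written against scikit-learn. The function name, the choice of Ridge as the interpretable surrogate, and the cluster and neighbour counts are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of the DLIME procedure (AHC + KNN + simple surrogate).
# Hyperparameters and the Ridge surrogate are illustrative assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

def dlime_explain(X_train, black_box_predict, x, n_clusters=10, n_neighbors=5):
    # 1. Group the training data with Agglomerative Hierarchical Clustering.
    cluster_labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)

    # 2. Use KNN to decide which cluster the instance being explained belongs to.
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X_train, cluster_labels)
    cluster_id = knn.predict(x.reshape(1, -1))[0]
    X_local = X_train[cluster_labels == cluster_id]

    # 3. Fit a simple interpretable model on the selected cluster, using the
    #    black box's predictions as targets; its coefficients act as the explanation.
    y_local = black_box_predict(X_local)
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local)
    return surrogate.coef_  # deterministic: no random perturbation is involved
```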
Explanation by status as empty-base explanation
This paper explores the practice of explanation by status, in which a truth with a certain status (i.e. necessary status, essential status, or status as a law) is supposed to be explained by its having that status. It first investigates whether such explanations are possible. Having found existing accounts of the practice wanting, it then argues for a novel account of explanation by status as empty-base explanation. The latter notion captures a certain limiting case of ordinary explanation so that according to the empty-base account, explanation by status can be fruitfully understood as a corresponding limiting case of ordinary explanation. One way in which the empty-base account is argued to be superior to other treatments of explanation by status is that it allows for a principled assessment of the possibility of particular kinds of explanation by status. Thus, one result of the present discussion is that explanation by essential status and status as a law are possible, while explanation by merely necessary status is not.
This explains everything : deep, beautiful, and elegant theories of how the world works
\"Drawn from the cutting-edge frontiers of science, This Explains Everything presents 150 of the most deep, surprising, and brilliant explanations of how the world works, with contributions by Jared Diamond, Richard Dawkins, Nassim Taleb, Brian Eno, Steven Pinker, and more\"-- Provided by publisher.
Explainable Image Classification: The Journey So Far and the Road Ahead
Explainable Artificial Intelligence (XAI) has emerged as a crucial research area to address the interpretability challenges posed by complex machine learning models. In this survey paper, we provide a comprehensive analysis of existing approaches in the field of XAI, focusing on the tradeoff between model accuracy and interpretability. Motivated by the need to address this tradeoff, we conduct an extensive review of the literature, presenting a multi-view taxonomy that offers a new perspective on XAI methodologies. We analyze various sub-categories of XAI methods, considering their strengths, weaknesses, and practical challenges. Moreover, we explore causal relationships in model explanations and discuss approaches dedicated to explaining cross-domain classifiers. The latter is particularly important in scenarios where training and test data are sampled from different distributions. Drawing insights from our analysis, we propose future research directions, including exploring explainable allied learning paradigms, developing evaluation metrics for both traditionally trained and allied learning-based classifiers, and applying neural architectural search techniques to minimize the accuracy–interpretability tradeoff. This survey paper provides a comprehensive overview of the state-of-the-art in XAI, serving as a valuable resource for researchers and practitioners interested in understanding and advancing the field.
A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME
eXplainable artificial intelligence (XAI) methods have emerged to convert the black box of machine learning (ML) models into a more digestible form. These methods help to communicate how the model works, with the aim of making ML models more transparent and increasing the trust of end-users in their output. SHapley Additive exPlanations (SHAP) and Local Interpretable Model Agnostic Explanation (LIME) are two widely used XAI methods, particularly with tabular data. In this perspective piece, the way the explainability metrics of these two methods are generated is discussed, and a framework for the interpretation of their outputs, highlighting their weaknesses and strengths, is proposed. Specifically, their outcomes in terms of model-dependency and in the presence of collinearity among the features are discussed, relying on a case study from the biomedical domain (classification of individuals with or without myocardial infarction). The results indicate that SHAP and LIME are highly affected by the adopted ML model and by feature collinearity, raising a note of caution on their usage and interpretation. SHapley Additive exPlanations and Local Interpretable Model Agnostic Explanation are two widely used eXplainable artificial intelligence methods. However, they have limitations related to model-dependency and the presence of collinearity among the features, which result in unrealistic explanations. This perspective discusses these two issues through two case studies and provides possible solutions to overcome and eliminate their impacts.
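As a concrete illustration of how these two methods are typically invoked on tabular data, the following is a minimal sketch using the public shap and lime Python packages against an already-fitted scikit-learn classifier. The variable names, background sample size, and class labels (echoing the myocardial-infarction case study) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: generating SHAP and LIME explanations for a fitted tabular
# classifier `model` on a pandas DataFrame `X`. Names are illustrative.
import shap
from lime.lime_tabular import LimeTabularExplainer

# SHAP: additive feature attributions, computed against a background sample.
shap_explainer = shap.Explainer(model.predict_proba, X)
shap_values = shap_explainer(X.iloc[:100])

# LIME: a local surrogate fitted around a single instance via perturbation.
lime_explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["no MI", "MI"],  # assumed labels for the case study
    discretize_continuous=True,
)
lime_exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())
```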
Principles for Conducting Critical Realist Case Study Research in Information Systems
Critical realism is emerging as a viable philosophical paradigm for conducting social science research, and has been proposed as an alternative to the more prevalent paradigms of positivism and interpretivism. Few papers, however, have offered clear guidance for applying this philosophy to actual research methodologies. Under critical realism, a causal explanation for a given phenomenon is inferred by explicitly identifying the means by which structural entities and contextual conditions interact to generate a given set of events. Consistent with this view of causality, we propose a set of methodological principles for conducting and evaluating critical realism-based explanatory case study research within the information systems field. The principles are derived directly from the ontological and epistemological assumptions of critical realism. We demonstrate the utility of each of the principles through examples drawn from existing critical realist case studies. The article concludes by discussing the implications of critical realism-based research for IS research and practice.
Structural explanations: impossibilities vs failures
The bridges of Königsberg case has been widely cited in recent philosophical discussions on scientific explanation as a potential example of a structural explanation of a physical phenomenon. However, when discussing this case, different authors have focused on two different versions, depending on what they take the explanandum to be. In one version, the explanandum is the failure of a given individual to perform an Eulerian walk over the bridge system. In the other version, the explanandum is the impossibility of performing an Eulerian walk over the bridges. The goal of this paper is to show that only the latter version amounts to a real case of a structural explanation. I will also suggest how to fix the first version and show how my remarks apply to other purported cases of structural explanations.
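Since the impossibility at issue here is the classical graph-theoretic one, the following minimal sketch (in Python, with an illustrative labelling of the four land masses) shows the degree-parity fact that does the structural explaining: a connected multigraph admits an Eulerian walk only if at most two of its vertices have odd degree, and in the Königsberg layout all four land masses have odd degree.

```python
# Degree-parity check for the Königsberg bridge multigraph; the land-mass
# labels A-D and the bridge list follow the standard seven-bridge layout.
from collections import Counter

bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_vertices = [node for node, d in degree.items() if d % 2 == 1]
# An Eulerian walk exists in a connected multigraph only if it has 0 or 2
# odd-degree vertices; here all four land masses are odd (degrees 5, 3, 3, 3).
print(odd_vertices)            # ['A', 'B', 'C', 'D']
print(len(odd_vertices) <= 2)  # False -> no Eulerian walk is possible
```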