Asset Details
Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability
by Khan, Naimul; Zafar, Muhammad Rehman
in Ablation / Algorithms / Artificial intelligence / Cluster analysis / Clustering / Datasets / Decision making / Decision trees / deterministic explanations / explainable artificial intelligence (XAI) / Feature selection / interpretable machine learning / Lime / local explanations / Machine learning / model agnostic explanations / Perturbation methods / Regression analysis / stable explanations
2021
Journal Article
Overview
Local Interpretable Model-Agnostic Explanations (LIME) is a popular technique used to increase the interpretability and explainability of black-box Machine Learning (ML) algorithms. LIME typically explains a single prediction of any ML model by learning a simpler interpretable model (e.g., a linear classifier) around that prediction: it generates simulated data around the instance by random perturbation and obtains feature importances through some form of feature selection. While LIME and similar local algorithms have gained popularity due to their simplicity, the random perturbation introduces data shifts and instability in the generated explanations, so that different explanations can be produced for the same prediction. These are critical issues that can prevent deployment of LIME in sensitive domains. We propose a deterministic version of LIME. Instead of random perturbation, we utilize Agglomerative Hierarchical Clustering (AHC) to group the training data and K-Nearest Neighbours (KNN) to select the cluster relevant to the new instance being explained. After finding the relevant cluster, a simple model (i.e., a linear model or decision tree) is trained over the selected cluster to generate the explanations. Experimental results on six public (three binary and three multi-class) and six synthetic datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME) over LIME, where we quantitatively determine the stability and faithfulness of DLIME compared to LIME.
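The three-step procedure described in the abstract (AHC grouping, KNN cluster selection, interpretable surrogate) can be sketched as follows. This is a minimal, hypothetical rendering using scikit-learn, not the authors' reference implementation; the function name `dlime_explain` and parameters such as `n_clusters` and `n_neighbors` are illustrative assumptions.

```python
# Hypothetical sketch of the DLIME procedure described in the abstract.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import Ridge

def dlime_explain(X_train, black_box_predict, x_instance,
                  n_clusters=8, n_neighbors=1):
    """Explain black_box_predict(x_instance) with a local linear surrogate."""
    # Step 1: deterministically group the training data with AHC.
    ahc = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward")
    labels = ahc.fit_predict(X_train)

    # Step 2: use KNN to pick the cluster of the instance being explained.
    knn = NearestNeighbors(n_neighbors=n_neighbors).fit(X_train)
    _, idx = knn.kneighbors(x_instance.reshape(1, -1))
    cluster_id = np.bincount(labels[idx[0]]).argmax()
    X_local = X_train[labels == cluster_id]

    # Step 3: train a simple interpretable model on the selected cluster,
    # using the black-box outputs as targets; its coefficients act as
    # deterministic feature importances.
    y_local = black_box_predict(X_local)
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local)
    return surrogate.coef_
```

For a binary classifier, `black_box_predict` could be something like `lambda X: model.predict_proba(X)[:, 1]`. Because every step (clustering, neighbour lookup, surrogate fit) is deterministic given the training data, repeated calls for the same instance yield the same explanation, which is the stability property the paper targets.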
Publisher
MDPI AG