Asset Details
Mapping Knowledge Representations to Concepts: A Review and New Perspectives
by Linde, Per; Holmberg, Lars; Davidsson, Paul
in Knowledge representation / Neural networks / Taxonomy
Paper, 2022
Overview
The success of neural networks builds to a large extent on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. Approaches to extracting and presenting these representations, in order to explain a neural network's decisions, form an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted review focusing on research that aims to associate internal representations with human-understandable concepts. In doing so, we added a perspective on the existing research by primarily using deductive-nomological explanations as a proposed taxonomy. We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations. The analysis additionally uncovers an ambiguity in the reviewed literature regarding the goal of model explainability: is it understanding the ML model, or is it actionable explanations that are useful in the deployment domain?
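
The review concerns techniques that associate a network's internal representations with human-understandable concepts. One widely used family of such techniques is linear "concept probes" (in the spirit of concept activation vectors). The sketch below is illustrative only and is not taken from the paper: the activations are synthetic, and the concept, shapes, and variable names are assumptions chosen for the example.

```python
# A minimal sketch (not from the reviewed paper) of a linear concept probe:
# train a linear classifier on a layer's activations to test whether a
# human-understandable concept is linearly decodable from them.
# All data here is synthetic; the concept ("striped") is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations (dim 64) for 200 inputs:
# 100 inputs exhibiting the concept and 100 that do not.
concept_direction = rng.normal(size=64)
pos = rng.normal(size=(100, 64)) + 0.8 * concept_direction  # concept present
neg = rng.normal(size=(100, 64))                            # concept absent

X = np.vstack([pos, neg])
y = np.array([1] * 100 + [0] * 100)

# The probe: if a simple linear classifier separates the two groups well,
# the concept is (by this operational definition) encoded in the layer.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(f"probe accuracy: {probe.score(X, y):.2f}")

# The normalized weight vector is one candidate "concept activation vector";
# its alignment with new activations can be read as evidence for the concept.
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```

As the abstract's closing question suggests, a probe of this kind explains something about the model's internal encoding; whether that also yields actionable explanations in the deployment domain is a separate matter.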
Publisher
Cornell University Library, arXiv.org