Asset Details
Interpretability Illusions in the Generalization of Simplified Models
Paper
by Friedman, Dan; Ghandeharioun, Asma; Dixon, Lucas; Chen, Danqi; Lampinen, Andrew
in Clustering / Datasets / Illusions / Representations / Singular value decomposition / Training
2024
Overview
A common method to study deep learning systems is to use simplified model representations--for example, using singular value decomposition to visualize the model's hidden states in a lower dimensional space. This approach assumes that the results of these simplifications are faithful to the original model. Here, we illustrate an important caveat to this assumption: even if the simplified representations can accurately approximate the full model on the training set, they may fail to accurately capture the model's behavior out of distribution. We illustrate this by training Transformer models on controlled datasets with systematic generalization splits, including the Dyck balanced-parenthesis languages and a code completion task. We simplify these models using tools like dimensionality reduction and clustering, and then explicitly test how these simplified proxies match the behavior of the original model. We find consistent generalization gaps: cases in which the simplified proxies are more faithful to the original model on the in-distribution evaluations and less faithful on various tests of systematic generalization. This includes cases where the original model generalizes systematically but the simplified proxies fail, and cases where the simplified proxies generalize better. Together, our results raise questions about the extent to which mechanistic interpretations derived using tools like SVD can reliably predict what a model will do in novel situations.
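The simplify-then-compare methodology described above can be sketched minimally. This is a hedged illustration, not the paper's actual setup: the weight matrix `W`, the rank `k`, and the random inputs are all hypothetical stand-ins, and the "model" here is a single linear map rather than a trained Transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained layer's weight matrix.
W = rng.normal(size=(64, 64))

def low_rank(W, k):
    """Rank-k approximation of W via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * S[:k]) @ Vt[:k, :]

# The "simplified proxy": keep only the top-8 singular directions.
W_k = low_rank(W, k=8)

# Faithfulness check: how often do the full map and its low-rank
# proxy agree on the argmax output over a batch of random inputs?
X = rng.normal(size=(1000, 64))
agreement = np.mean(
    np.argmax(X @ W.T, axis=1) == np.argmax(X @ W_k.T, axis=1)
)
print(f"proxy/model agreement: {agreement:.2f}")
```

The paper's point is that an agreement score like this, measured on in-distribution inputs, need not carry over to out-of-distribution inputs; testing the proxy on a systematic generalization split is what exposes the gap.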
Publisher
Cornell University Library, arXiv.org