Asset Details
Right for the Right Latent Factors: Debiasing Generative Models via Disentanglement
by Kersting, Kristian; Shao, Xiaoting; Stelzner, Karl
Paper, 2022
Overview
A key assumption of most statistical machine learning methods is that they have access to independent samples from the distribution of data they encounter at test time. As such, these methods often perform poorly in the face of biased data, which breaks this assumption. In particular, machine learning models have been shown to exhibit Clever-Hans-like behaviour, meaning that spurious correlations in the training set are inadvertently learnt. A number of methods have been proposed for revising deep classifiers so that they learn the right correlations. However, generative models have been overlooked so far. We observe that generative models are also prone to Clever-Hans-like behaviour. To counteract this issue, we propose to debias generative models by disentangling their internal representations, which is achieved via human feedback. Our experiments show that this is effective at removing bias even when human feedback covers only a small fraction of the desired distribution. In addition, we achieve strong disentanglement results in a quantitative comparison with recent methods.
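As a rough sketch of the general idea described in the abstract (not the authors' implementation), the PyTorch example below combines a beta-VAE-style objective, which pressures the latent dimensions towards disentanglement, with a supervised feedback term that ties a few latent dimensions to human-provided factor labels on the small fraction of samples that carry them. All names and hyperparameters here (Encoder, Decoder, feedback_mask, factor_labels, beta, lam) are hypothetical.

```python
# Illustrative sketch only -- NOT the paper's implementation. It shows one
# common way to encourage disentangled latents (a beta-weighted KL term)
# plus a supervised "feedback" term applied to the few samples for which
# a human has labelled the ground-truth factors. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                  nn.Linear(256, x_dim))

    def forward(self, z):
        return self.body(z)

def debiasing_loss(x, enc, dec, factor_labels, feedback_mask,
                   beta=4.0, lam=10.0):
    """beta-VAE ELBO plus a feedback term tying the first k latent means
    to human-labelled factors, active only where feedback_mask is True."""
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
    recon = F.mse_loss(dec(z), x, reduction="none").sum(dim=1)
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
    # Supervised feedback on the first k latents, masked to labelled rows;
    # unlabelled rows contribute nothing because their mask entry is 0.
    k = factor_labels.shape[1]
    sup = F.mse_loss(mu[:, :k], factor_labels, reduction="none").sum(dim=1)
    return (recon + beta * kl + lam * feedback_mask.float() * sup).mean()

# Tiny smoke test on random data: only 10% of samples carry feedback.
if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    x = torch.rand(64, 784)
    labels = torch.rand(64, 3)                 # 3 labelled latent factors
    mask = torch.rand(64) < 0.1                # sparse human feedback
    loss = debiasing_loss(x, enc, dec, labels, mask)
    loss.backward()
    print(float(loss))
```

Because the feedback term is masked, unlabelled samples train exactly as in a standard beta-VAE; only the sparsely labelled rows pull the designated latents towards the ground-truth factors, mirroring the abstract's claim that feedback covering a small fraction of the desired distribution can suffice.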
Publisher
Cornell University Library, arXiv.org
Subject
Feedback / Machine learning / Statistical methods