Asset Details
Paper
Deep MMD Gradient Flow without adversarial training
by de Bortoli, Valentin; Galashov, Alexandre; Gretton, Arthur
in Generative adversarial networks / Gradient flow / Image processing / Lower bounds / Probabilistic models
2024
Overview
We propose a gradient flow procedure for generative modeling by transporting particles from an initial source distribution to a target distribution, where the gradient field on the particles is given by a noise-adaptive Wasserstein gradient of the Maximum Mean Discrepancy (MMD). The noise-adaptive MMD is trained on data distributions corrupted by increasing levels of noise, obtained via a forward diffusion process, as commonly used in denoising diffusion probabilistic models. The result is a generalization of MMD Gradient Flow, which we call Diffusion-MMD-Gradient Flow or DMMD. The divergence training procedure is related to discriminator training in Generative Adversarial Networks (GANs), but does not require adversarial training. We obtain competitive empirical performance in unconditional image generation on CIFAR10, MNIST, CELEB-A (64 x 64) and LSUN Church (64 x 64). Furthermore, we demonstrate the validity of the approach when MMD is replaced by a lower bound on the KL divergence.
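The abstract describes moving particles along the Wasserstein gradient of the MMD between the evolving particle distribution and the target, which amounts to descending the gradient of the MMD witness function at each particle. Below is a minimal, hypothetical PyTorch sketch of the plain MMD gradient flow that DMMD generalizes, using a fixed Gaussian kernel; the paper's learned noise-adaptive discriminator and forward-diffusion noise schedule are not reproduced here, and all function names and parameters are illustrative assumptions rather than the authors' implementation.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2)).
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def witness_gradient(particles, target, sigma=1.0):
    # Gradient of the MMD witness f(z) = mean_i k(x_i, z) - mean_j k(y_j, z),
    # evaluated at each particle; this is the velocity field of the
    # MMD gradient flow the abstract refers to (with a fixed kernel).
    z = particles.detach().requires_grad_(True)
    f = (gaussian_kernel(particles.detach(), z, sigma).mean(0)
         - gaussian_kernel(target, z, sigma).mean(0))
    return torch.autograd.grad(f.sum(), z)[0]

def mmd_gradient_flow(source, target, steps=500, step_size=0.5, sigma=1.0):
    # Transport source particles toward the target by descending the MMD.
    x = source.clone()
    for _ in range(steps):
        x = x - step_size * witness_gradient(x, target, sigma)
    return x

# Toy usage: flow a standard 2-D Gaussian onto a shifted one.
source = torch.randn(256, 2)
target = torch.randn(256, 2) + torch.tensor([4.0, 0.0])
samples = mmd_gradient_flow(source, target)
```

In DMMD, per the abstract, the fixed kernel above would be replaced by a noise-adaptive MMD whose discriminator-like features are trained on data corrupted at increasing noise levels from a forward diffusion process, without adversarial training.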
Publisher
Cornell University Library, arXiv.org