Asset Details
Exploiting Diffusion Prior for Real-World Image Super-Resolution
by Yue, Zongsheng; Zhou, Shangchen; Loy, Chen Change; Wang, Jianyi; Chan, Kelvin C. K.
in Computer vision / Controllability / Image resolution / Methods
2024
Journal Article
Overview
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution. Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
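The controllable feature wrapping idea described above can be illustrated with a short sketch: decoder features from the frozen diffusion prior are blended with features extracted from the degraded input, and a single scalar chosen at inference time trades generative quality against fidelity. The module, layer choices, and argument names below are hypothetical illustrations, not the StableSR API.

```python
import torch
import torch.nn as nn

class ControllableFeatureFusion(nn.Module):
    """Hypothetical scalar-controlled fusion of prior and input features."""

    def __init__(self, channels: int):
        super().__init__()
        # Small learned transform predicting a residual from the
        # concatenated encoder/decoder feature maps.
        self.transform = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, dec_feat: torch.Tensor, enc_feat: torch.Tensor, w: float) -> torch.Tensor:
        # w = 0 keeps the pure generative (decoder) features;
        # larger w injects more information from the degraded input,
        # favouring fidelity over synthesis quality.
        residual = self.transform(torch.cat([dec_feat, enc_feat], dim=1))
        return dec_feat + w * residual


if __name__ == "__main__":
    fuse = ControllableFeatureFusion(channels=64)
    dec = torch.randn(1, 64, 32, 32)   # feature map from the frozen diffusion decoder
    enc = torch.randn(1, 64, 32, 32)   # feature map from the time-aware encoder
    out = fuse(dec, enc, w=0.5)        # mid-point between quality and fidelity
    print(out.shape)                   # torch.Size([1, 64, 32, 32])
```

As the abstract notes, the balance is exposed as a single scalar adjusted at inference time, so no retraining is needed to move along the quality-fidelity trade-off.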
Publisher
Springer Nature B.V.
Subject
Computer vision / Controllability / Image resolution / Methods