Asset Details
Do Vision-Language Foundational models show Robust Visual Perception?
by Tandon, Pranav; Chandhok, Shivam
in Blurring / Image classification / Random noise / Robustness / Vision / Visual perception / Visual tasks
2024
Paper
Overview
Recent advances in vision-language foundational models have enabled the development of systems that can perform visual understanding and reasoning tasks. However, it is unclear whether these models are robust to distribution shifts, and how their performance and generalization capabilities vary under changes in data distribution. In this project we strive to answer the question "Are vision-language foundational models robust to distribution shifts like human perception?" Specifically, we consider a diverse range of vision-language models and compare how the performance of these systems is affected by corruption-based distribution shifts (such as motion blur, fog, snow, and Gaussian noise) commonly found in practical real-world scenarios. We analyse the generalization capabilities qualitatively and quantitatively on the zero-shot image classification task under the aforementioned distribution shifts. Our code will be available at https://github.com/shivam-chandhok/CPSC-540-Project
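To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of evaluation the abstract describes: corrupt an image with Gaussian noise, then run zero-shot classification with a vision-language model. The CLIP checkpoint, the candidate labels, the input file, and the noise level sigma are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' code): zero-shot image
# classification with a CLIP-style model under a Gaussian-noise corruption.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def add_gaussian_noise(image: Image.Image, sigma: float = 25.0) -> Image.Image:
    # Additive Gaussian pixel noise, one of the corruption-based shifts studied.
    arr = np.asarray(image).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

# Hypothetical class prompts and input image, for illustration only.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
image = add_gaussian_noise(Image.open("example.jpg").convert("RGB"))

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity
probs = logits.softmax(dim=-1).squeeze()
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

Comparing the predicted probabilities on clean versus corrupted copies of the same images is the kind of robustness measurement the project performs, repeated across corruption types and models.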
Publisher
Cornell University Library, arXiv.org