Asset Details
ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints
by Ruan, Shouwei; Wei, Xingxing; Kang, Caixin; Dong, Yinpeng; Zhu, Jun; Su, Hang
in Classifiers / Object recognition / Robustness
2022
Paper
Overview
Recent studies have demonstrated that visual recognition models lack robustness to distribution shift. However, current work mainly considers model robustness to 2D image transformations, leaving viewpoint changes in the 3D world less explored. In general, viewpoint changes are prevalent in various real-world applications (e.g., autonomous driving), making it imperative to evaluate viewpoint robustness. In this paper, we propose a novel method called ViewFool to find adversarial viewpoints that mislead visual recognition models. By encoding real-world objects as neural radiance fields (NeRF), ViewFool characterizes a distribution of diverse adversarial viewpoints under an entropic regularizer, which helps to handle fluctuations of the real camera pose and mitigate the reality gap between real objects and their neural representations. Experiments validate that common image classifiers are extremely vulnerable to the generated adversarial viewpoints, which also exhibit high cross-model transferability. Based on ViewFool, we introduce ImageNet-V, a new out-of-distribution dataset for benchmarking the viewpoint robustness of image classifiers. Evaluation results on 40 classifiers with diverse architectures, objective functions, and data augmentations reveal a significant drop in model performance when tested on ImageNet-V, suggesting that ViewFool can also serve as an effective data augmentation strategy for improving viewpoint robustness.
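As a rough mathematical sketch of the approach described in the overview (the notation below is an illustrative assumption, not the paper's exact formulation): ViewFool can be read as searching for a distribution p_\theta(v) over camera viewpoints v that maximizes the expected classification loss of a classifier f on NeRF renderings R(v) of the object, with an entropy term that keeps the adversarial viewpoints diverse:

\max_{\theta} \; \mathbb{E}_{v \sim p_\theta} \left[ \mathcal{L}\big( f(R(v)),\, y \big) \right] + \lambda\, \mathcal{H}(p_\theta)

Here y is the ground-truth label, \mathcal{H} denotes entropy, and \lambda trades off attack strength against viewpoint diversity; the entropy term is what the overview credits with absorbing real-world camera-pose fluctuations and the reality gap between the object and its neural representation.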
Publisher
Cornell University Library, arXiv.org