Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning
by Kim, Byeong Cheon; Ro, Yong Man; Lee, Hong Joo; Kim, Jung Uk; Yu, Youngjoon
in Data integration / Deep learning / Machine learning / Performance prediction / Robustness / Semantic segmentation
2020
Paper
Overview
The success of multimodal data fusion in deep learning appears to be attributed to the use of complementary information between multiple input data. Compared to their predictive performance, relatively less attention has been devoted to the robustness of multimodal fusion models. In this paper, we investigated whether the current multimodal fusion model utilizes the complementary intelligence to defend against adversarial attacks. We applied gradient-based white-box attacks such as FGSM and PGD to MFNet, which is a major multispectral (RGB, Thermal) fusion deep learning model for semantic segmentation. We verified that the multimodal fusion model optimized for better prediction is still vulnerable to adversarial attack, even if only one of the sensors is attacked. Thus, it is hard to say that existing multimodal data fusion models fully utilize complementary relationships between multiple modalities in terms of adversarial robustness. We believe that our observations open a new horizon for adversarial attack research on multimodal data fusion.
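The single-modality attack described above can be illustrated with a minimal sketch. This is not MFNet or the authors' code: it uses a toy linear two-branch fusion model with hypothetical names (`W_rgb`, `W_th`, `fgsm_single_modality`), and applies the FGSM step x_adv = x + ε·sign(∇x L) to the thermal input only, leaving the RGB input clean.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_single_modality(x_rgb, x_th, W_rgb, W_th, y, eps):
    """FGSM perturbing only the thermal modality of a toy linear fusion model.

    The fused logits are W_rgb @ x_rgb + W_th @ x_th, and the loss is
    cross-entropy against label y. Only x_th is moved along the sign of
    its gradient; x_rgb stays untouched, mimicking a one-sensor attack.
    """
    logits = W_rgb @ x_rgb + W_th @ x_th
    p = softmax(logits)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_th = W_th.T @ (p - onehot)        # d(cross-entropy)/d(x_th) for linear logits
    return x_th + eps * np.sign(grad_th)   # FGSM step on the thermal input only

# Toy setup: 3 classes, 4-dimensional features per modality.
rng = np.random.default_rng(0)
W_rgb, W_th = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
x_rgb, x_th = rng.normal(size=4), rng.normal(size=4)
y = int(np.argmax(W_rgb @ x_rgb + W_th @ x_th))  # attack the model's own prediction

x_th_adv = fgsm_single_modality(x_rgb, x_th, W_rgb, W_th, y, eps=0.5)
loss_before = -np.log(softmax(W_rgb @ x_rgb + W_th @ x_th)[y])
loss_after = -np.log(softmax(W_rgb @ x_rgb + W_th @ x_th_adv)[y])
```

Because the logits are linear in `x_th`, the cross-entropy loss is convex along the sign direction, so `loss_after` exceeds `loss_before` for any positive `eps`: corrupting one sensor alone is enough to degrade the fused prediction, which is the vulnerability the paper probes.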
Publisher
Cornell University Library, arXiv.org