Asset Details
Adversarial image detection in deep neural networks
by
Amato, Giuseppe; Becarelli, Rudy; Falchi, Fabrizio; Caldelli, Roberto; Carrara, Fabio
in
Artificial neural networks / Computer vision / Image classification / Image detection / Machine learning / Neural networks
2019
Journal Article
Overview
Deep neural networks increasingly pervade computer vision applications, image classification in particular. Nevertheless, recent works have demonstrated that it is quite easy to create adversarial examples, i.e., images malevolently modified to cause deep neural networks to fail. Such images contain changes unnoticeable to the human eye but sufficient to mislead the network, which represents a serious threat to machine learning methods. In this paper, we investigate the robustness of the representations learned by the fooled neural network by analyzing the activations of its hidden layers. Specifically, we tested scoring approaches used for kNN classification in order to distinguish between correctly classified authentic images and adversarial examples. These scores are obtained by searching only among the very same images used to train the network. The results show that hidden-layer activations can be used to reveal incorrect classifications caused by adversarial attacks.
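The detection idea described in the abstract, i.e. scoring a test sample by how strongly its nearest training-set neighbours in hidden-layer activation space agree with the network's prediction, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the way activations are extracted, the particular kNN agreement score, and the decision threshold are all hypothetical choices.

```python
import numpy as np

def knn_score(activation, train_activations, train_labels, predicted_label, k=5):
    """Fraction of the k nearest training activations whose label matches
    the network's prediction; low agreement hints at an adversarial input."""
    # Euclidean distance from this sample's activation to every training activation
    dists = np.linalg.norm(train_activations - activation, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(train_labels[nearest] == predicted_label))

def is_adversarial(activation, train_activations, train_labels,
                   predicted_label, k=5, threshold=0.5):
    # Flag the input when its activation-space neighbourhood
    # disagrees with the label the network assigned.
    return knn_score(activation, train_activations, train_labels,
                     predicted_label, k) < threshold
```

In use, `train_activations` would hold hidden-layer activations of the images used to train the network, and `activation` the same layer's response to the test image; an authentic, correctly classified image should land among training images of its predicted class, while an adversarial example typically does not.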
Publisher
Springer Nature B.V.