Asset Details
SAfER: Layer-Level Sensitivity Assessment for Efficient and Robust Neural Network Inference
by Bailly, Kevin; Fischer, Xavier; Dapogny, Arnaud; Yvinec, Edouard
in Artificial neural networks / Computer vision / Medical imaging / Neural networks / Neurons / Perturbation / Sensitivity analysis
2023
Paper
Overview
Deep neural networks (DNNs) demonstrate outstanding performance across most computer vision tasks. Some critical applications, such as autonomous driving or medical imaging, also require investigation into their behavior and the reasons behind the decisions they make. In this vein, DNN attribution consists of studying the relationship between a DNN's predictions and its inputs. Attribution methods have been adapted to highlight the most relevant weights or neurons in a DNN, allowing a more efficient selection of which weights or neurons can be pruned. However, a limitation of these approaches is that weights are typically compared only within each layer, while some layers may be more critical than others. In this work, we propose to investigate DNN layer importance, i.e. to estimate the sensitivity of the accuracy with respect to perturbations applied at the layer level. To do so, we propose a novel dataset to evaluate our method as well as future work. We benchmark a number of criteria and draw conclusions regarding how to assess DNN layer importance and, consequently, how to budget layers for increased DNN efficiency (with applications to DNN pruning and quantization), as well as robustness to hardware failures (e.g. bit swaps).
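As a concrete illustration of the idea described above, the sketch below estimates per-layer sensitivity by perturbing one layer's weights at a time and measuring the resulting accuracy drop. This is a minimal sketch, not the authors' SAfER implementation: the Gaussian weight noise, the noise_std parameter, and the function names (accuracy, layer_sensitivity) are illustrative assumptions, and the paper benchmarks several criteria rather than this single one.

```python
# Minimal sketch (assumed, not the paper's code): score each layer by the
# accuracy drop caused by Gaussian noise added to that layer's weights.
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        preds = model(x.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def layer_sensitivity(model, loader, noise_std=0.05, device="cpu"):
    """Return {layer_name: accuracy drop} with one layer perturbed at a time."""
    baseline = accuracy(model, loader, device)
    scores = {}
    for name, module in model.named_modules():
        weight = getattr(module, "weight", None)
        if not isinstance(weight, torch.nn.Parameter):
            continue  # skip modules without learnable weights
        original = weight.detach().clone()
        # Noise scaled to the layer's own weight magnitude (an assumption).
        weight.add_(torch.randn_like(weight) * noise_std * weight.std())
        scores[name] = baseline - accuracy(model, loader, device)
        weight.copy_(original)  # restore before probing the next layer
    return scores
```

Under this reading, layers with the largest accuracy drops would be treated as the most sensitive and given the most conservative pruning or quantization budgets, while low-scoring layers could be compressed more aggressively.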
Publisher
Cornell University Library, arXiv.org