Asset Details
Designing strong baselines for ternary neural network quantization through support and mass equalization
by Bailly, Kevin; Dapogny, Arnaud; Yvinec, Edouard
Subjects: Artificial neural networks / Computer vision / Floating point arithmetic / Kurtosis / Neural networks / Performance enhancement / Training / Weight
2023
Paper
Overview
Deep neural networks (DNNs) offer the highest performance in a wide range of computer vision applications. These results rely on over-parameterized backbones, which are expensive to run. This computational burden can be dramatically reduced by quantizing floating-point values to ternary values (2 bits, with each weight taking a value in {-1, 0, 1}), whether in a data-free (DFQ), post-training (PTQ), or quantization-aware training (QAT) scenario. In this context, we observe that rounding to nearest minimizes the expected error only under a uniform distribution, and thus fails to account for the skewness and kurtosis of the weight distribution, which strongly affect ternary quantization performance. This raises the following question: should one minimize the highest or the average quantization error? To answer this, we design two operators, TQuant and MQuant, that correspond to these respective minimization tasks. We show experimentally that our approach significantly improves the performance of ternary quantization across a variety of DFQ, PTQ, and QAT scenarios, and provides strong insights that pave the way for future research in deep neural network quantization.
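The abstract does not spell out the TQuant and MQuant operators, but the underlying idea of ternary quantization can be sketched generically: each weight is mapped to a scaled value in {-1, 0, +1}, with a threshold delta deciding which weights collapse to zero. The sketch below is an illustrative quantizer under assumed conventions (the threshold parameterization and the mean-magnitude scale are common choices, not the paper's specific operators); it also hints at why a fixed rounding-to-nearest threshold can be suboptimal for heavy-tailed, high-kurtosis weight distributions.

```python
import numpy as np

def ternary_quantize(w, delta):
    """Map float weights to alpha * {-1, 0, +1}.

    Weights with |w| <= delta are rounded to 0; the rest keep their sign.
    The scale alpha is the mean magnitude of the surviving weights, which
    minimizes the L2 reconstruction error once the support is fixed.
    (Illustrative convention, not the paper's TQuant/MQuant operators.)
    """
    mask = np.abs(w) > delta
    t = np.where(mask, np.sign(w), 0.0)
    kept = np.abs(w)[mask]
    alpha = kept.mean() if kept.size else 0.0
    return alpha * t

# Heavy-tailed (high-kurtosis) synthetic weights: the best threshold
# differs from the one implied by uniform-distribution rounding.
rng = np.random.default_rng(0)
w = rng.standard_t(df=3, size=10_000)
for scale in (0.3, 0.7, 1.1):
    q = ternary_quantize(w, delta=scale * np.abs(w).mean())
    print(scale, np.mean((w - q) ** 2))
```

Sweeping the threshold as above makes the trade-off in the abstract concrete: different thresholds trade the worst-case quantization error against the average error, which is exactly the choice the two operators formalize.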
Publisher
Cornell University Library, arXiv.org