Asset Details
Model Compression for Deep Neural Networks: A Survey
by Meng, Lin; Li, Hengyi; Li, Zhuo
2023

Subjects: Accuracy / Algorithms / Artificial intelligence / Artificial neural networks / Complexity / Computer vision / Data compression / Decomposition / Deep neural networks / Distillation / Field programmable gate arrays / Knowledge distillation / Learning / Low-rank decomposition / Machine learning / Machine vision / Methods / Model compression / Model pruning / Neural networks / Parameter quantization / Sparsity
Journal Article
Overview
With the rapid development of deep learning, deep neural networks (DNNs) have been widely applied to various computer vision tasks. However, in the pursuit of performance, advanced DNN models have become increasingly complex, leading to large memory footprints and high computational demands; as a result, such models are difficult to run in real time. To address these issues, model compression has become a focus of research, and compression techniques play an important role in deploying models on edge devices. This survey analyzes a range of model compression methods to help researchers reduce storage requirements, speed up inference, lower model complexity and training cost, and ease deployment. It summarizes the state-of-the-art techniques for model compression, including model pruning, parameter quantization, low-rank decomposition, knowledge distillation, and lightweight model design, and discusses open research challenges and directions for future work.
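To make one of the surveyed techniques concrete, here is a minimal sketch of unstructured magnitude-based pruning, the simplest form of model pruning the abstract mentions: weights whose absolute value falls below a threshold are zeroed, producing the sparsity that later stages (sparse storage, sparse kernels) can exploit. This is an illustrative NumPy sketch, not code from the paper; the function name and the 50% sparsity target are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    This is unstructured (element-wise) pruning: the tensor shape is kept,
    but the smallest-magnitude entries are set to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold     # keep only entries above threshold
    return weights * mask

# Toy 2x3 weight matrix: pruning at 50% sparsity zeroes the three
# smallest-magnitude entries (-0.05, -0.01, -0.2).
W = np.array([[0.9, -0.05, 0.3],
              [-0.01, 0.7, -0.2]])
W_pruned = magnitude_prune(W, sparsity=0.5)
```

In practice, frameworks apply such masks per layer and usually fine-tune the network afterwards to recover the accuracy lost to pruning.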
Publisher
MDPI AG