Asset Details
Big2Small: Learning from masked image modelling with heterogeneous self-supervised knowledge distillation
Journal Article, 2024
by Han, Shumin; Wang, Xiaodi; Hao, Jing; Wang, Ziming; Zhang, Baochang; Cao, Xianbin
Subjects: artificial intelligence / Artificial neural networks / Datasets / deep neural network / Image classification / Image segmentation / Knowledge / Learning / machine intelligence / machine learning / Methods / Modelling / Object recognition / Probability distribution / Representations / Semantic segmentation / Success / Teachers / vision / Visual tasks
Overview
Small convolutional neural network (CNN)-based models usually require transferring knowledge from a large model before they are deployed on computationally resource-limited edge devices. Masked image modelling (MIM) methods have achieved great success in various visual tasks but remain largely unexplored in knowledge distillation for heterogeneous deep models, mainly because of the significant discrepancy between transformer-based large models and CNN-based small networks. In this paper, the authors develop the first heterogeneous self-supervised knowledge distillation (HSKD) method based on MIM, which can efficiently transfer knowledge from large transformer models to small CNN-based models in a self-supervised fashion. The method builds a bridge between transformer-based models and CNNs by training a UNet-style student with sparse convolution, which effectively mimics the visual representation inferred by the teacher over masked modelling. It is a simple yet effective learning paradigm for learning the visual representation and data distribution from heterogeneous teacher models, which can be pre-trained using advanced self-supervised methods. Extensive experiments show that HSKD adapts well to various models and sizes, consistently achieving state-of-the-art performance in image classification, object detection, and semantic segmentation tasks. For example, on the ImageNet-1K dataset, HSKD improves the accuracy of ResNet-50 (sparse) from 76.98% to 80.01%.
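The abstract describes distilling masked-image-modelling knowledge from a frozen transformer teacher into a small CNN student that only sees the unmasked patches. Below is a minimal sketch of that training loop, as an illustration of the general idea only, not the authors' implementation: the module names, the plain dense-convolution student (standing in for the paper's sparse-convolution UNet-style network, with masked pixels simply zeroed out), the toy transformer teacher, and the simple L2 feature-matching loss are all assumptions made for this sketch.

```python
# Sketch only: masked-image-modelling knowledge distillation from a
# transformer teacher to a small CNN student (hypothetical modules).
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 16          # patch size used to define the random mask
MASK_RATIO = 0.6    # fraction of patches hidden from the student


def random_patch_mask(b, h, w, ratio, device):
    """Return a (B, 1, H, W) binary mask; 1 = visible patch, 0 = masked."""
    gh, gw = h // PATCH, w // PATCH
    keep = (torch.rand(b, 1, gh, gw, device=device) > ratio).float()
    return F.interpolate(keep, scale_factor=PATCH, mode="nearest")


class TinyCNNStudent(nn.Module):
    """Stand-in for the CNN student; the paper uses a UNet-style network
    with sparse convolution, here masked pixels are simply zeroed."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)            # (B, out_dim, H/8, W/8)


class TinyTransformerTeacher(nn.Module):
    """Stand-in for a frozen, self-supervised-pretrained transformer teacher."""
    def __init__(self, dim=256):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, PATCH, stride=PATCH)   # patchify
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    @torch.no_grad()
    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)      # (B, N, dim)
        return self.encoder(tokens)                            # (B, N, dim)


def hskd_step(student, teacher, images, optimizer):
    """One distillation step: the student sees only unmasked pixels and is
    trained to match the teacher's representation of the full image."""
    b, _, h, w = images.shape
    mask = random_patch_mask(b, h, w, MASK_RATIO, images.device)

    target = teacher(images)                                   # full view
    feat = student(images * mask)                              # masked view

    # Pool student features to one vector per patch so shapes line up with
    # the teacher tokens (an alignment choice assumed for this sketch).
    gh, gw = h // PATCH, w // PATCH
    feat = F.adaptive_avg_pool2d(feat, (gh, gw)).flatten(2).transpose(1, 2)

    loss = F.mse_loss(feat, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student, teacher = TinyCNNStudent(), TinyTransformerTeacher()
    teacher.eval()
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
    dummy = torch.randn(2, 3, 224, 224)
    print("distillation loss:", hskd_step(student, teacher, dummy, opt))
```

In the paper's setting the student would be a real backbone such as ResNet-50 run with sparse convolution (so masked regions are skipped rather than zeroed) and the teacher a large self-supervised-pretrained vision transformer; the sketch only shows how the student's masked view is aligned with the teacher's full-image representation.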