Asset Details
Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
by Tanno, Ryutaro; Pathak, Shreya; Schwarz, Jonathan; Henaff, Olivier J; Merzic, Hamza; Evans, Talfan
in Active learning / Computation / Datasets / Policies / Training
Paper, 2024
Overview
Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow. Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples. Despite their appeal, these methods have yet to be widely adopted since no single algorithm has been shown to (a) generalize across models and tasks, (b) scale to large datasets, and (c) yield overall FLOP savings when accounting for the overhead of data selection. In this work we propose a method that satisfies these three properties, leveraging small, cheap proxy models to estimate "learnability" scores for datapoints, which are used to prioritize data for the training of much larger models. As a result, our models require 46% and 51% fewer training updates and up to 25% less total computation to reach the same performance as uniformly trained visual classifiers on JFT and multimodal models on ALIGN. Finally, we find our data-prioritization scheme to be complementary with recent data-curation and learning objectives, yielding a new state-of-the-art in several multimodal transfer tasks.
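The overview describes the prioritization idea only at a high level. As a rough illustrative sketch (not the authors' implementation), one common way to define a "learnability" score is the gap between the learner's per-example loss and that of a cheap, pretrained reference or proxy model, keeping the top-scoring candidates from each super-batch; the function names and toy values below are assumptions for illustration only.

```python
import numpy as np

def learnability_scores(learner_losses, reference_losses):
    """Score each candidate by how much the learner still has to gain on it
    relative to a cheap reference/proxy model. High scores mark examples the
    learner finds hard but the reference finds easy (learnable, not just noisy)."""
    return np.asarray(learner_losses) - np.asarray(reference_losses)

def select_batch(learner_losses, reference_losses, batch_size):
    """Return indices of the top-`batch_size` candidates by learnability score."""
    scores = learnability_scores(learner_losses, reference_losses)
    return np.argsort(scores)[::-1][:batch_size]

# Toy usage: score a super-batch of 8 candidates and keep the best 4.
rng = np.random.default_rng(0)
learner = rng.uniform(0.5, 3.0, size=8)    # per-example losses from the current learner
reference = rng.uniform(0.1, 1.5, size=8)  # per-example losses from a pretrained proxy
print(select_batch(learner, reference, batch_size=4))
```

In this framing, the expensive model is only ever trained on the selected subset, while the scoring pass runs on the small proxy, which is where the overall FLOP savings would come from.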
Publisher
Cornell University Library, arXiv.org