Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning
by Liang, Yingyu; Wei, Junyi; Shi, Zhenmei; Xu, Zhuoyan; Mu, Fangzhou; Li, Yin
in Adaptation / Algorithms / Empirical analysis / Error reduction / Labels
2024
Overview
Foundation models have emerged as a powerful tool for many AI problems. Despite the tremendous success of foundation models, effective adaptation to new tasks, particularly those with limited labels, remains an open question and lacks theoretical understanding. An emerging solution with recent success in vision and NLP involves finetuning a foundation model on a selection of relevant tasks before adapting it to a target task with limited labeled samples. In this paper, we study the theoretical justification of this multitask finetuning approach. Our theoretical analysis reveals that, with a diverse set of related tasks, multitask finetuning leads to reduced error on the target task compared with directly adapting the same pretrained model. We quantify the relationship between finetuning tasks and target tasks via diversity and consistency metrics, and further propose a practical task selection algorithm. We substantiate our theoretical claims with extensive empirical evidence. Further, we present results affirming that our task selection algorithm adeptly chooses related finetuning tasks, improving model performance on target tasks. We believe our study sheds new light on the effective adaptation of foundation models to new tasks that lack abundant labels. Our code is available at https://github.com/OliverXUZY/Foudation-Model_Multitask.
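The pipeline the abstract describes (finetune a pretrained model on several related tasks, then adapt it to a few-shot target task) can be sketched with a toy example. The sketch below is an illustrative stand-in, not the paper's model, metrics, or algorithm: the "foundation model" is a single linear weight, the tasks are 1-D regressions sharing a common slope, and `finetune`, `adapt`, and `mse` are hypothetical helpers chosen here for brevity.

```python
# Toy sketch: multitask finetuning before few-shot adaptation.
# All names and the task setup are illustrative assumptions.
import random

random.seed(0)

def make_task(weight, n, noise=0.1):
    """A toy regression task y = weight * x + noise."""
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [weight * x + random.gauss(0, noise) for x in xs]
    return xs, ys

def finetune(w, tasks, lr=0.1, steps=200):
    """SGD on the pooled finetuning tasks (shared linear head)."""
    for _ in range(steps):
        for xs, ys in tasks:
            for x, y in zip(xs, ys):
                w -= lr * 2 * (w * x - y) * x
    return w

def adapt(w, xs, ys, lr=0.05, steps=3):
    """Few-shot adaptation: only a handful of gradient steps."""
    for _ in range(steps):
        for x, y in zip(xs, ys):
            w -= lr * 2 * (w * x - y) * x
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Finetuning tasks are "related" to the target: slopes cluster near 2.0.
finetune_tasks = [make_task(2.0 + random.gauss(0, 0.1), n=50) for _ in range(5)]
target_xs, target_ys = make_task(2.0, n=4)           # few labeled samples
test_xs, test_ys = make_task(2.0, n=200, noise=0.0)  # clean evaluation set

w0 = 0.0                                    # "pretrained" initialization
direct = adapt(w0, target_xs, target_ys)    # adapt the pretrained model directly
staged = adapt(finetune(w0, finetune_tasks), target_xs, target_ys)

print("direct adaptation error:", mse(direct, test_xs, test_ys))
print("multitask finetuning  :", mse(staged, test_xs, test_ys))
```

In this toy setup the staged model starts adaptation near the shared structure of the related tasks, so its few-shot error on the target task is lower than direct adaptation's, mirroring (in a trivialized form) the error-reduction result the abstract states.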
Publisher
Cornell University Library, arXiv.org