Asset Details
Learning from models beyond fine-tuning
by Tang, Anke; Tao, Dacheng; Hu, Han; Luo, Yong; Zheng, Hongling; Shen, Li; Wen, Yonggang; Du, Bo
Journal Article, 2025
Subjects: 4014/4009 / 639/705/1042 / 639/705/117 / Algorithms / Application programming interface / Appropriate technology / Cost control / Editing / Engineering / Knowledge / Language / Large language models / Natural language / Neural networks / Optimization / Paradigms / Review Article
Overview
Foundation models have demonstrated remarkable performance across various tasks, primarily due to their abilities to comprehend instructions and access extensive, high-quality data. These capabilities showcase the effectiveness of current foundation models and suggest a promising trajectory. Owing to multiple constraints, such as the extreme scarcity or inaccessibility of raw data used to train foundation models and the high cost of training large-scale foundation models from scratch, the use of pre-existing foundation models or application programming interfaces for downstream tasks has become a new research trend, which we call Learn from Model (LFM). LFM involves extracting and leveraging prior knowledge from foundation models through fine-tuning, editing and fusion methods and applying it to downstream tasks. We emphasize that maximizing the use of parametric knowledge in data-scarce scenarios is critical to LFM. Analysing the LFM paradigm can guide the selection of the most appropriate technology in a given scenario to minimize parameter storage and computational costs while improving the performance of foundation models on new tasks. This Review provides a comprehensive overview of current methods based on foundation models from the perspective of LFM.
Large general-purpose models are becoming more prevalent and useful, but they are also harder to train, and suitable training data is harder to find. Zheng et al. discuss how existing models can be used to train other models.
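The Review groups LFM techniques into fine-tuning, editing and fusion of pre-existing foundation models. As a rough, hypothetical illustration (not code from the article), the sketch below shows one of the simplest fusion strategies: averaging the weights of several models fine-tuned from the same foundation checkpoint. The toy torch.nn model, the fuse_state_dicts helper and the uniform averaging weights are all assumptions made for this example.

    # Minimal sketch (assumption, not from the Review): fusing several models that
    # share an architecture by averaging their parameters.
    import torch
    import torch.nn as nn

    def fuse_state_dicts(state_dicts, weights=None):
        # Weighted average of compatible state_dicts -- a simple model-fusion baseline.
        if weights is None:
            weights = [1.0 / len(state_dicts)] * len(state_dicts)
        fused = {}
        for key in state_dicts[0]:
            fused[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        return fused

    def make_model():
        # Toy stand-in for copies of one foundation model fine-tuned on different tasks.
        return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    models = [make_model() for _ in range(3)]
    fused_model = make_model()
    fused_model.load_state_dict(fuse_state_dicts([m.state_dict() for m in models]))

    x = torch.randn(2, 16)
    print(fused_model(x).shape)  # torch.Size([2, 4])

More elaborate fusion methods weight or align parameters per task; the uniform average here is only the simplest baseline for reusing parametric knowledge without retraining from scratch.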
Publisher
Nature Publishing Group UK; Nature Publishing Group