Asset Details
Catch Your Breath: Adaptive Computation for Self-Paced Sequence Production
by Nagarajan, Vaishnavh; Jones, Matt; Mozer, Michael C; Galashov, Alexandre; Ke, Rosemary; Cao, Yuan
in Accuracy, 2025
Paper
Overview
We explore a class of supervised training objectives that allow a language model to dynamically and autonomously scale the number of compute steps used for each input token. For any token, the model can request additional compute steps by emitting a special output that asks for a delay. If the delay is granted, a specialized pause token is inserted at the next input step, giving the model additional compute resources to generate an output, and the model can request multiple pauses in succession. To train the model to request pauses judiciously and to calibrate its uncertainty, we frame the selection of each output as a sequential-decision problem with a time cost. We refer to this class of methods as Catch Your Breath (CYB) losses and study three methods in the class: CYB-AP frames the model's task as anytime prediction, where an output may be required at any step and accuracy is discounted over time; CYB-VA is a variational approach that aims to maximize prediction accuracy subject to a specified distribution over stopping times; and CYB-DP imposes a penalty based on a computational budget. Through fine-tuning experiments, we identify the best-performing loss variant. The CYB model needs only one-third as much training data as the baseline (no-pause) model to achieve the same performance, and half as much data as a model trained with pauses and a cross-entropy loss. We find that the CYB model requests additional steps when doing so improves accuracy, and that it adapts its processing time to token-level complexity and context. For example, it often pauses after plural nouns like "patients" and "challenges" but never pauses after the first token of contracted words like "wasn" and "didn", and it shows high variability for ambiguous tokens like "won", which could function either as a verb or as part of a contraction.
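The pause mechanism in the overview can be pictured with a short sketch. The Python snippet below is a hypothetical, minimal illustration of the self-paced decoding loop as the abstract describes it: at each step the model either commits to an output token or emits a request for more compute, in which case a pause token is appended to the input and the model gets another forward pass. All names here (predict_next, request_more_id, pause_id, max_pauses) are illustrative assumptions, not the paper's actual interface.

# A minimal, hypothetical sketch of self-paced decoding as described above.
# The names below are illustrative assumptions, not the paper's interface.
from typing import Callable, List

def self_paced_decode(
    predict_next: Callable[[List[int]], int],  # maps a context to one predicted token id
    context: List[int],
    request_more_id: int,  # special output: "grant me another compute step"
    pause_id: int,         # special input token inserted when a delay is granted
    max_pauses: int = 4,   # cap on the extra steps one prediction may consume
) -> int:
    """Produce one output token, letting the model ask for extra compute steps."""
    ctx = list(context)
    for _ in range(max_pauses):
        out = predict_next(ctx)
        if out != request_more_id:
            return out          # model commits to an answer now
        ctx.append(pause_id)    # grant the delay: insert a pause token
    return predict_next(ctx)    # budget exhausted: force an answer

This sketch covers only inference-time behavior; the CYB losses that train the model to make such requests judiciously are not shown.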
Publisher
Cornell University Library, arXiv.org
Subject
Accuracy