Asset Details
Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting
by Gong, Wuxuan; Xu, Weiran; Lele, Yang; Han, Yufei; Ma, Zhanyu; Liang, Kongming; Diao, Muxi; Zhang, Yutong; Yan, Zhonghao
in Entropy, 2026
Paper
Overview
Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT uses token-level entropy as a gating mechanism to distinguish epistemic uncertainty from knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on the Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis: EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.
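To make the gating mechanism concrete, the following is a minimal PyTorch sketch of an entropy-gated token loss in the spirit of the abstract. The function name eaft_loss, the hard binary mask, and the entropy_threshold and prob_threshold values are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def eaft_loss(logits: torch.Tensor,
              targets: torch.Tensor,
              entropy_threshold: float = 0.5,   # assumed cutoff, tune per model
              prob_threshold: float = 0.3) -> torch.Tensor:
    # logits:  (batch, seq_len, vocab_size) raw model outputs
    # targets: (batch, seq_len) ground-truth token ids from the SFT data
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Token-level predictive entropy: H = -sum_v p(v) log p(v)
    entropy = -(probs * log_probs).sum(dim=-1)            # (batch, seq_len)

    # Log-probability the model assigns to the supervised target token
    target_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    target_prob = target_logp.exp()

    # "Confident Conflict": the target is unlikely under the model (low
    # probability) while the model is certain about something else (low
    # entropy). High-entropy tokens are treated as honest uncertainty and
    # trained on normally.
    conflict = (target_prob < prob_threshold) & (entropy < entropy_threshold)

    # Gate: zero out the loss (and hence the gradient) on conflicting tokens.
    gate = (~conflict).float()
    token_nll = -target_logp
    return (gate * token_nll).sum() / gate.sum().clamp(min=1.0)

A softer variant could down-weight conflicting tokens continuously, for example by scaling the per-token loss with a function of the entropy, rather than masking them outright; the hard gate above is simply the most direct reading of the abstract's description.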
Publisher
Cornell University Library, arXiv.org