Asset Details
UniGist: Towards General and Hardware-aligned Sequence-level Long Context Compression
by Fang, Tianqing; Zhang, Hongming; Zhang, Zhisong; Dou, Zhicheng; Mao, Kelong; Deng, Chenlong; Mi, Haitao; Yu, Dong; Li, Shuaiyi
in Compressive strength / Context / Large language models / Real time
2025
Paper
Overview
Large language models are increasingly capable of handling long-context inputs, but the memory overhead of the key-value (KV) cache remains a major bottleneck for general-purpose deployment. While various compression strategies have been explored, sequence-level compression, which drops the full KV caches for certain tokens, is particularly challenging as it can lead to the loss of important contextual information. To address this, we introduce UniGist, a sequence-level long-context compression framework that efficiently preserves context information by replacing raw tokens with special compression tokens (gists) in a fine-grained manner. We adopt a chunk-free training strategy and design an efficient kernel with a gist shift trick, enabling optimized GPU training. Our scheme also supports flexible inference by allowing the actual removal of compressed tokens, resulting in real-time memory savings. Experiments across multiple long-context tasks demonstrate that UniGist significantly improves compression quality, with especially strong performance in detail-recalling tasks and long-range dependency modeling.
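The overview describes the mechanism only at a high level. The sketch below is a minimal, hypothetical Python illustration of the cache bookkeeping it implies: gist tokens are interleaved with raw tokens at a fixed ratio, and once the raw tokens preceding a gist have been summarized into it, their KV entries can be dropped. The names here (GIST, insert_gists, compress_kv) and the fixed gist ratio are assumptions made for illustration, not the paper's interface, and the sketch omits the chunk-free training strategy and the gist shift kernel entirely.

```python
# Illustrative sketch only; not the UniGist implementation.

GIST = "<gist>"  # hypothetical marker for a special compression token

def insert_gists(tokens, every=4):
    """Interleave one gist token after every `every` raw tokens."""
    out = []
    for i, tok in enumerate(tokens, start=1):
        out.append(tok)
        if i % every == 0:
            out.append(GIST)
    return out

def compress_kv(sequence, window=4):
    """Keep KV entries only for gists and the uncompressed raw tail.

    Raw tokens older than the trailing window are assumed to have been
    summarized into the gist that follows them, so their KV entries can
    be dropped; that removal is the source of the memory saving.
    """
    kept = []
    raw_tail = []
    for tok in sequence:
        if tok == GIST:
            kept.append(tok)
            raw_tail = []  # preceding raw tokens are now compressed away
        else:
            raw_tail.append(tok)
    return kept + raw_tail[-window:]

if __name__ == "__main__":
    seq = insert_gists([f"t{i}" for i in range(10)], every=4)
    print(seq)               # raw tokens with gists interleaved
    print(compress_kv(seq))  # only gists plus the uncompressed tail remain
```

Running this on ten dummy tokens leaves only the two gists plus the uncompressed tail in the cache, which is the kind of sequence-level, real-time memory saving the overview refers to.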
Publisher
Cornell University Library, arXiv.org
Subject
Compressive strength / Context / Large language models / Real time