Asset Details
Spark Transformer: Reactivating Sparsity in FFN and Attention
Paper
by Willcock, Jeremiah J; Wassenberg, Jan; Pathak, Shreya; Wu, Kan; Chen, Lin; Subramanian, Suvinay; Bhojanapalli, Srinadh; Yu, Felix; Evci, Utku; Andreev, Alek; Culler, David E; Levy, Henry M; Netrapalli, Praneeth; Kumar, Sanjiv; Jain, Prateek; Jia, Zhipeng; You, Chong; Chern, Felix; Guo, Jiaxian
in Decoding / Neurons / Parameter identification / Sparsity
2025
Overview
The discovery of the lazy neuron phenomenon in trained Transformers, where the vast majority of neurons in their feed-forward networks (FFN) are inactive for each token, has spurred tremendous interest in activation sparsity as a means of improving large-model efficiency. While notable progress has been made in translating this sparsity into wall-time benefits, modern Transformers have moved away from the ReLU activation function that is crucial to the phenomenon. Existing efforts to re-introduce activation sparsity often degrade model quality, increase parameter count, or complicate or slow down training. Sparse attention, the application of sparse activation to the attention mechanism, often faces similar challenges. This paper introduces the Spark Transformer, a novel architecture that achieves a high level of activation sparsity in both the FFN and the attention mechanism while maintaining model quality, parameter count, and standard training procedures. Our method realizes sparsity via top-k masking for explicit control over the sparsity level. Crucially, we introduce statistical top-k, a hardware-accelerator-friendly, linear-time approximate algorithm that avoids costly sorting and mitigates the significant training slowdown of standard top-k operators. Furthermore, the Spark Transformer reallocates existing FFN parameters and attention key embeddings to form a low-cost predictor that identifies activated entries. This design not only mitigates the quality loss from enforced sparsity, but also enhances the wall-time benefit. Pretrained with the Gemma-2 recipe, the Spark Transformer demonstrates competitive performance on standard benchmarks while exhibiting significant sparsity: only 8% of FFN neurons are activated, and each token attends to at most 256 tokens. This sparsity translates to a 2.5x reduction in FLOPs, leading to decoding wall-time speedups of up to 1.79x on CPU and 1.40x on GPU.
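The abstract does not spell out how statistical top-k works; one common way to obtain a linear-time, sort-free approximation of top-k masking is to estimate the k-th largest activation from per-token statistics and threshold against that estimate. The sketch below is a hypothetical illustration of that idea only, not the paper's implementation; the function name statistical_topk_mask and the Gaussian-threshold assumption are ours.

import numpy as np
from statistics import NormalDist

def statistical_topk_mask(x, k):
    # Hypothetical sketch: approximate top-k masking without a sort.
    # Assumes the activations in each row of x are roughly Gaussian, so
    # the k-th largest value can be estimated from the row mean and
    # standard deviation via a normal quantile, giving a linear-time
    # threshold instead of an O(n log n) sort.
    n = x.shape[-1]
    q = 1.0 - k / n                      # fraction of entries to zero out
    z = NormalDist().inv_cdf(q)          # standard-normal quantile for q
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    threshold = mu + z * sigma
    # Keep entries at or above the estimated threshold; zero the rest.
    return np.where(x >= threshold, x, 0.0)

# Example: FFN pre-activations for two tokens, keeping roughly 8% of
# 1024 neurons per token, matching the sparsity level quoted above.
acts = np.random.randn(2, 1024)
sparse_acts = statistical_topk_mask(acts, k=int(0.08 * 1024))

Because the threshold is estimated rather than computed exactly, the number of surviving entries per token is only approximately k; that approximation is what removes the sort from the critical path.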
Publisher
Cornell University Library, arXiv.org