Asset Details
The Impact of Depth on Compositional Generalization in Transformer Language Models
by Sha, Fei; Garrette, Dan; Petty, Jackson; Dasgupta, Ishita; Linzen, Tal; van Steenkiste, Sjoerd
in Mathematical models / Modelling / Parameters / Transformers
Paper, 2024
Overview
To process novel sentences, language models (LMs) must generalize compositionally -- combine familiar elements in new ways. What aspects of a model's structure promote compositional generalization? Focusing on transformers, we test the hypothesis, motivated by theoretical and empirical work, that deeper transformers generalize more compositionally. Simply adding layers increases the total number of parameters; to address this confound between depth and size, we construct three classes of models which trade off depth for width such that the total number of parameters is kept constant (41M, 134M and 374M parameters). We pretrain all models as LMs and fine-tune them on tasks that test for compositional generalization. We report three main conclusions: (1) after fine-tuning, deeper models generalize more compositionally than shallower models do, but the benefit of additional layers diminishes rapidly; (2) within each family, deeper models show better language modeling performance, but returns are similarly diminishing; (3) the benefits of depth for compositional generalization cannot be attributed solely to better performance on language modeling. Because model latency is approximately linear in the number of layers, these results lead us to the recommendation that, with a given total parameter budget, transformers can be made shallower than is typical without sacrificing performance.
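The depth-for-width trade-off described in the abstract can be illustrated with a back-of-the-envelope calculation. The sketch below is not the authors' construction; it assumes the common approximation that a standard transformer block contributes roughly 12 · d_model² non-embedding parameters (attention plus feed-forward, ignoring embeddings and normalization), and uses a hypothetical helper, width_for_depth, to pick a width for each depth so that the total stays near one of the paper's stated budgets (41M, 134M, 374M).

```python
# Back-of-the-envelope sketch (assumed, not the paper's exact construction):
# choose a width d_model for a given depth so that the approximate parameter
# count 12 * n_layers * d_model**2 stays close to a fixed budget.

def width_for_depth(budget_params: int, n_layers: int, head_dim: int = 64) -> int:
    """d_model (rounded to a multiple of head_dim) that roughly meets the budget."""
    d_model = (budget_params / (12 * n_layers)) ** 0.5
    return max(head_dim, int(round(d_model / head_dim)) * head_dim)

def approx_params(n_layers: int, d_model: int) -> int:
    """Approximate non-embedding parameter count of a standard transformer."""
    return 12 * n_layers * d_model ** 2

if __name__ == "__main__":
    for budget in (41_000_000, 134_000_000, 374_000_000):
        for n_layers in (2, 6, 12, 24, 48):
            d = width_for_depth(budget, n_layers)
            print(f"budget={budget:>11,d}  layers={n_layers:>2}  "
                  f"d_model={d:>5}  ~params={approx_params(n_layers, d):>11,d}")
```

Under this approximation, parameters scale as n_layers · d_model², so halving the depth requires only about a √2 increase in width to hold the total constant.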
Publisher
Cornell University Library, arXiv.org
Subject
Mathematical models / Modelling / Parameters / Transformers