Asset Details
ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models
by Xue, Linting; Raffel, Colin; Constant, Noah; Al-Rfou, Rami; Kale, Mihir; Narang, Sharan; Roberts, Adam; Barua, Aditya
in Amortization / Computational linguistics / Experiments / Inference / Language / Language modeling / Linguistics / Multilingualism / Noise / Pipelines / Pronunciation / Robustness / Sequences / Spelling / Subword units
Journal Article
2022
Overview
Most widely used pre-trained language models operate on sequences of tokens corresponding to word or subword units. By comparison, models that operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Because byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
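As a companion to the abstract, the minimal sketch below illustrates the byte-level input representation it describes: text is mapped directly to its UTF-8 bytes, so no learned subword vocabulary or preprocessing pipeline is needed and any language is handled out of the box. The helper names are illustrative, and the +3 id offset (reserving the lowest ids for padding/EOS/unknown markers) follows the convention described for ByT5; treat both as assumptions rather than the released implementation.

```python
# Sketch of a byte-level "tokenizer" in the style of ByT5: token ids are just
# UTF-8 byte values shifted by a small offset that reserves ids for special tokens.

def text_to_byte_ids(text: str, offset: int = 3) -> list[int]:
    """Map a string to byte-level token ids (UTF-8 bytes shifted by `offset`)."""
    return [b + offset for b in text.encode("utf-8")]

def byte_ids_to_text(ids: list[int], offset: int = 3) -> str:
    """Invert the mapping, skipping ids below the offset (special tokens)."""
    return bytes(i - offset for i in ids if i >= offset).decode("utf-8", errors="ignore")

ids = text_to_byte_ids("Héllo")   # accented characters simply become multi-byte sequences
print(ids)                        # [75, 198, 172, 111, 111, 114]
print(byte_ids_to_text(ids))      # Héllo
```

The cost of this simplicity is the one the abstract notes: byte sequences are several times longer than subword token sequences, which is what the paper's parameter-count, FLOPs, and inference-speed trade-off analysis addresses.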
Publisher
MIT Press