Asset Details
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
by Clark, Jonathan H; Garrette, Dan; Turc, Iulia; Wieting, John
in Coders / Training
2022
Paper
Overview
Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.
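To make the overview concrete, the sketch below illustrates the general idea of a character-level encoder that pairs downsampling with a deep transformer stack. It is an assumption-laden illustration, not the authors' implementation: the layer sizes, the single hash-bucket character embedding, and the strided-convolution downsampler are all placeholders chosen for brevity.

```python
# Illustrative sketch only: a character-level encoder in the spirit of the
# abstract above (downsample long character sequences, then apply a deep
# transformer stack). Hyperparameters and the downsampling module are
# assumptions for illustration, not CANINE's actual architecture.
import torch
import torch.nn as nn


class CharEncoderSketch(nn.Module):
    def __init__(self, d_model=256, downsample_rate=4, num_layers=6,
                 num_hash_buckets=16384):
        super().__init__()
        # Characters are embedded by Unicode code point modulo a bucket count,
        # so no fixed vocabulary or tokenizer is required (assumption: one
        # hash table stands in for a fuller character-hashing scheme).
        self.num_hash_buckets = num_hash_buckets
        self.char_embed = nn.Embedding(num_hash_buckets, d_model)
        # Strided convolution shortens the sequence before the deep
        # transformer stack, keeping self-attention affordable on long
        # character inputs.
        self.downsample = nn.Conv1d(d_model, d_model,
                                    kernel_size=downsample_rate,
                                    stride=downsample_rate)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.deep_stack = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, texts):
        # Map raw strings to code-point IDs; pad to the longest sequence.
        ids = [torch.tensor([ord(c) % self.num_hash_buckets for c in t])
               for t in texts]
        ids = nn.utils.rnn.pad_sequence(ids, batch_first=True)
        x = self.char_embed(ids)                                 # (batch, chars, d_model)
        x = self.downsample(x.transpose(1, 2)).transpose(1, 2)   # shorter sequence
        return self.deep_stack(x)                                # contextual encodings


if __name__ == "__main__":
    model = CharEncoderSketch()
    out = model(["tokenization-free", "no vocabulary needed"])
    print(out.shape)  # (2, reduced_length, 256)
```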