Asset Details
Training Tips for the Transformer Model
by Bojar, Ondřej; Popel, Martin
in Experiments / Linguistics / Machine learning / Machine translation / Training / Transformers / Translation
2018
Journal Article
Overview
This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra “more data and larger models”, we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging. We hope that our observations will allow others to get better results given their particular hardware and data constraints.
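As a quick illustration only, not taken from the article or this catalogue record: the learning-rate and warmup-step tips mentioned in the abstract relate to the warmup-based inverse-square-root schedule introduced with the Transformer (Vaswani et al., 2017). The sketch below shows that schedule in plain Python; the d_model and warmup_steps values are assumed defaults chosen just to make the example concrete.

# Illustrative sketch, not code from the article: linear warmup followed by
# inverse-square-root decay, the learning-rate schedule used by the Transformer
# (Vaswani et al., 2017). d_model=512 and warmup_steps=16000 are assumptions.
def transformer_lr(step, d_model=512, warmup_steps=16000):
    """Learning rate at a given training step: rises linearly for
    `warmup_steps` steps, then decays with the inverse square root of the step."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

if __name__ == "__main__":
    # Print the schedule at a few points to show the warmup and decay phases.
    for s in (100, 1000, 16000, 100000):
        print(f"step {s:>6}: lr = {transformer_lr(s):.6f}")

Checkpoint averaging, also mentioned in the abstract, simply averages the weights of the last few saved checkpoints before evaluation.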
Publisher
De Gruyter Open, Institute of Formal and Applied Linguistics
Related Items