Asset Details
A Survey on Evaluation Metrics for Machine Translation
by Seonmin Koo, Heuiseok Lim, Hyeonseok Moon, Jaehyung Seo, Jungseob Lee, Sugyeong Eo, Chanjun Park, Seungjun Lee
in Algorithms, 2023
Subjects: automatic evaluation metric / Automation / Bilingualism / Deep learning / Experimentation / Food science / Hypotheses / Language / Literature reviews / Machine learning / Machine translation / Mathematics / Neural networks / Performance enhancement / Performance evaluation / QA1-939 / Statistical methods / Taxonomy / Technology application / Transformer / Translating and interpreting
Journal Article
Overview
The success of the Transformer architecture has spurred increased interest in machine translation (MT). The translation quality of neural network-based MT now surpasses that of statistical methods. This growth in MT research has driven the development of accurate automatic evaluation metrics for tracking MT performance. However, automatically evaluating and comparing MT systems remains challenging. Several studies have shown that traditional metrics (e.g., BLEU, TER) perform poorly at capturing semantic similarity between MT outputs and human reference translations. To improve on this, various evaluation metrics based on the Transformer architecture have been proposed, yet a systematic and comprehensive literature review of these metrics is still missing. It is therefore necessary to survey the existing automatic evaluation metrics for MT so that both established and new researchers can quickly grasp the trends in MT evaluation over the past few years. In this survey, we present these trends and provide a taxonomy of the automatic evaluation metrics to clarify developments in the field. We then explain the key contributions and shortcomings of the metrics. In addition, we select representative metrics from the taxonomy and conduct experiments to analyze related problems. Finally, we discuss the limitations of current automatic metric studies in light of our experiments, and offer suggestions for further research to improve automatic evaluation metrics.
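The surface-overlap weakness that the abstract attributes to BLEU- and TER-style metrics can be illustrated with a minimal sketch of modified n-gram precision, the core quantity behind BLEU. The sentences and the helper function below are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int) -> float:
    """Modified n-gram precision: clipped n-gram overlap with the
    reference, divided by the number of candidate n-grams."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

reference = "the cat sat on the mat"
paraphrase = "a feline rested upon the rug"  # same meaning, different words

# Only "the" overlaps at the unigram level: 1 of 6 unigrams match.
print(ngram_precision(paraphrase, reference, 1))  # → 0.1666...
# No bigrams match at all, despite the preserved meaning.
print(ngram_precision(paraphrase, reference, 2))  # → 0.0
```

A semantically faithful paraphrase scores near zero under pure surface overlap, which is precisely the gap that the Transformer-based metrics surveyed here aim to close by comparing learned representations rather than token strings.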