Asset Details
Improving neural machine translation for low resource languages through non-parallel corpora: a case study of Egyptian dialect to modern standard Arabic translation
by Bayomi, Hanaa; Abdou, Sherif Mahdy; Wassif, Khaled Tawfik; Faheem, Mohamed Atta
Subjects: 639/705; 639/705/1046; 639/705/117; 639/705/258; Arabic language; Case studies; Data; Datasets; Dialects; Humanities and Social Sciences; Languages; Learning; Literary translation; Machine learning; Machine translation; Monolingualism; multidisciplinary; Parallel corpora; Science; Science (multidisciplinary); Standard dialects; Training; Translation; Translations; Vocabulary
2024
Journal Article
Overview
Machine translation for low-resource languages poses significant challenges, primarily due to the limited availability of data. In recent years, unsupervised learning has emerged as a promising approach to overcome this issue by aiming to learn translations between languages without depending on parallel data. A wide range of methods have been proposed in the literature to address this complex problem. This paper presents an in-depth investigation of semi-supervised neural machine translation, focusing specifically on translating Arabic dialects, particularly Egyptian, to Modern Standard Arabic. The study employs two distinct datasets: one parallel dataset containing aligned sentences in both dialects, and a monolingual dataset where the source dialect is not directly connected to the target language in the training data. Three different translation systems are explored in this study. The first is an attention-based sequence-to-sequence model that benefits from the shared vocabulary between the Egyptian dialect and Modern Standard Arabic to learn word embeddings. The second is an unsupervised transformer model that depends solely on monolingual data, without any parallel data. The third system starts with the parallel dataset for an initial supervised learning phase and then incorporates the monolingual data during the training process.
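The first system described above exploits the lexical overlap between the Egyptian dialect and Modern Standard Arabic by learning embeddings over a shared vocabulary. A minimal sketch of that idea, merging token counts from two corpora into one joint vocabulary so that words common to both varieties share a single embedding index (the token examples and helper name are hypothetical, for illustration only):

```python
from collections import Counter

def build_shared_vocab(corpus_a, corpus_b, min_count=1):
    """Merge token counts from both corpora into one vocabulary,
    so embeddings for words shared across varieties are learned
    from examples on both sides."""
    counts = Counter()
    for corpus in (corpus_a, corpus_b):
        for sentence in corpus:
            counts.update(sentence.split())
    # Reserve the first indices for special tokens used by seq2seq models.
    vocab = {"<pad>": 0, "<s>": 1, "</s>": 2}
    for token, n in counts.most_common():
        if n >= min_count:
            vocab[token] = len(vocab)
    return vocab

# Toy transliterated sentences (hypothetical data, not from the paper).
egyptian = ["ana 3ayez aktb gwab", "el ketab da helw"]
msa = ["ana oreed an aktob resala", "hatha alketab jameel"]

vocab = build_shared_vocab(egyptian, msa)
print(len(vocab))        # size of the merged vocabulary
print("ana" in vocab)    # True: "ana" appears in both corpora
```

An embedding matrix indexed by this merged vocabulary would then serve both the encoder and decoder, which is one common way such sharing is realized in dialect-to-standard translation.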