407 results for "Translation enhancement"
FMRP Enhances the Translation of 4EBP2 mRNA during Neuronal Differentiation
FMRP is a multifunctional protein encoded by the Fragile X Messenger Ribonucleoprotein 1 gene (FMR1). The inactivation of the FMR1 gene results in fragile X syndrome (FXS), a serious neurodevelopmental disorder. FMRP deficiency causes abnormal neurite outgrowth, which is likely to lead to abnormal learning and memory capabilities. However, the mechanism of FMRP in modulating neuronal development remains unknown. We found that FMRP enhances the translation of 4EBP2, a neuron-specific form of 4EBPs that inactivates eIF4E by inhibiting the interaction between eIF4E and eIF4G. Depletion of 4EBP2 results in abnormal neurite outgrowth. Moreover, the impairment of neurite outgrowth upon FMRP depletion was overcome by the ectopic expression of 4EBP2. These results suggest that FMRP controls neuronal development by enhancing 4EBP2 expression at the translational level. In addition, treatment with 4EGI-1, a chemical that blocks eIF4E activity, restored neurite length in FMRP-depleted and 4EBP2-depleted cells. In conclusion, we discovered that 4EBP2 functions as a key downstream regulator of FMRP activity in neuronal development and that FMRP represses eIF4E activity by enhancing 4EBP2 translation.
Research on cultural translation enhancement of Chinese art English textbooks based on improved Marian NMT and cultural adversarial networks
This study focuses on the translation and knowledge presentation of Chinese culture in art English textbooks. Because of the complex cultural context and highly specialized terminology in art English textbooks, traditional translation models struggle to accurately convey the deep semantic meaning and artistic value of Chinese culture. This paper proposes a translation enhancement method that integrates an improved Marian neural machine translation (Marian NMT) model with cultural adversarial reasoning networks. The method employs transfer learning to incorporate Chinese cultural corpora for pre-training, combined with a small amount of bilingual annotated data from art textbooks for fine-tuning. The model incorporates an adversarial mechanism between a cultural discriminator and a generator to improve the identification of culturally loaded words, art terminology, and context, thereby improving the cultural accuracy and educational suitability of the translation. Experiments were conducted on the “Chinese-English Parallel Corpus of Art English Textbooks,” covering themes such as painting, calligraphy, opera, and architecture. The results show that, compared to the original Marian NMT, Transformer, and back-translation models, this method achieves significant improvements in BLEU, ROUGE, METEOR, and cultural knowledge integration accuracy (KIA), validating its effectiveness for translating Chinese cultural art English textbooks. The study concludes that this method can enhance the translation quality and teaching presentation of Chinese cultural elements in textbooks, providing technical support for the international dissemination of Chinese culture and for textbook development.
Enhancement of Translation Initiation by A/T-Rich Sequences Downstream of the Initiation Codon in Escherichia coli
The region located downstream of the initiation codon constitutes part of the translation initiation signal, significantly affecting the level of protein expression in E. coli. In order to determine its influence on translation initiation, we inserted random 12-base sequences downstream of the initiation codon of the lacZ gene. A total of 119 random clones showing higher β-galactosidase activities than the control lacZ gene were isolated and subsequently sequenced. Analysis of these clones revealed that their insertion sequences are strikingly rich in A and T, but poor in G, with no consensus sequences among them. Toeprinting experiments and polysome profile analysis confirmed that the A/T-rich sequences enhance translation at the level of initiation. Collectively, the present data demonstrate that A/T richness of the region following the initiation codon plays a significant role in E. coli gene expression. Copyright © 2003 S. Karger AG, Basel
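The screen above selects clones whose downstream inserts are rich in A and T. As a minimal sketch, the quantity being enriched is simply the A/T fraction of the 12-base insert; the sequence below is a hypothetical illustration, not one of the paper's 119 clones:

```python
def at_fraction(seq: str) -> float:
    """Return the fraction of A/T bases in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    return sum(base in "AT" for base in seq) / len(seq)

# Hypothetical 12-base insert downstream of the initiation codon;
# a high A/T fraction is what the enriched clones showed.
insert = "AATTATAAATCA"
fraction = at_fraction(insert)  # 11 of 12 bases are A or T
```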
Joint Translation Method for English–Chinese Place Names Based on Prompt Learning and Knowledge Graph Enhancement
In producing English-Chinese bilingual maps, it is usually necessary to translate English place names into Chinese. However, pipeline-based methods split the place name translation task into multiple sub-tasks, which carries the risk of error propagation and results in lower efficiency and poorer accuracy. Meanwhile, there is relatively little research on joint place name translation. This study therefore proposes an English-Chinese place name joint translation method based on prompt learning and knowledge graph enhancement, aiming to improve the accuracy of English-Chinese place name translation. The proposed method has two parts. The first is the construction of prompt templates for place name translation. The study first analyzes the characteristics of transliterating specific names and semantically translating generic names, and constructs a prompt template for the joint translation of ordinary place names. Building on this template, it then accounts for the translation characteristics of the derived parts of derived place names and constructs a prompt template for their joint translation. Finally, leveraging the powerful in-context learning ability of large language models (LLMs), it achieves joint translation of English and Chinese place names. The second part is the construction of an ontology for a place name translation knowledge graph. To retrieve relevant knowledge about the input place names, the study designs an ontology tailored to the English-Chinese place name translation task, combining the needs of English-Chinese place name translation with the semantic relationships between place names. This enhances the contextual information of the input place names and improves the performance of large language models on the English-Chinese place name translation task. Experiments show that, compared to traditional pipeline-based place name translation methods, the proposed method improves performance by 21.26% on ordinary place names and by an average of 27.70% on derived place names. In bilingual map production, the method effectively improves the efficiency and accuracy of toponymic translation, and it also provides a reference for place name translation tasks in other languages.
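A rough sketch of the prompt-template idea described above; the function name, wording, and the plain-string "retrieved facts" interface are assumptions for illustration, not the paper's actual templates or knowledge graph API:

```python
def build_placename_prompt(place_name: str, graph_facts: list) -> str:
    """Assemble an LLM prompt for joint English-Chinese place name translation:
    transliterate the specific name, translate the generic name by meaning,
    and prepend facts retrieved from a place name knowledge graph."""
    context = "\n".join("- " + fact for fact in graph_facts)
    return (
        "Translate the English place name into Chinese.\n"
        "Transliterate the specific name; translate the generic name by its meaning.\n"
        "Retrieved facts about the place:\n" + context + "\n"
        "Place name: " + place_name + "\n"
        "Chinese translation:"
    )
```

In the paper's setup the retrieved facts would come from the knowledge graph ontology; here they are passed in as plain strings.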
Transforming Language Translation: A Deep Learning Approach to Urdu–English Translation
Machine translation has revolutionized the field of language translation over the last decade. Initially dominated by statistical models, the rise of deep learning techniques has led to neural networks, particularly Transformer models, taking the lead. These models have demonstrated exceptional performance on natural language processing tasks, surpassing traditional sequence-to-sequence models such as RNN, GRU, and LSTM. With advantages such as better handling of long-range dependencies and shorter training time, the NLP community has shifted towards Transformers for sequence-to-sequence tasks. In this work, we leverage the sequence-to-sequence Transformer model to translate Urdu (a low-resource language) to English. Our model is based on a Transformer variant with changes such as activation dropout, attention dropout, and final-layer normalization. We used four datasets (UMC005, Tanzil, The Wire, and PIB) from two categories (religious and news) to train our model. The results show that the model's performance and translation quality varied depending on the dataset used for fine-tuning. Our model outperformed the baseline models with scores of 23.9 BLEU, 0.46 chrF, 0.44 METEOR, and 60.75 TER. The enhanced performance is attributable to meticulous parameter tuning, encompassing modifications to the architecture and optimization techniques. Comprehensive parametric details of model configurations and optimizations are provided to elucidate the distinctiveness of our approach and how it surpasses prior work. We provide source code via GitHub for future studies.
Improved Unsupervised Neural Machine Translation with Semantically Weighted Back Translation for Morphologically Rich and Low Resource Languages
Back-translation is an effective method of utilizing monolingual data to enhance the performance of neural machine translation models, and conducting it iteratively can further improve the translation model. In back-translation, pseudo sentence pairs are generated to train the translation systems with a reconstruction loss, but not all of the pseudo sentence pairs are of good quality, which can severely impact the performance of neural machine translation systems. This paper proposes an approach to unsupervised learning for neural machine translation with weighted back-translation as part of the training process, giving more weight to good pseudo-parallel sentence pairs. The weight is calculated as the round-trip semantic similarity score for each pseudo-parallel sentence pair. We overcome the limitation of earlier lexical metric-based approaches, especially for morphologically rich languages. Experimental results show an improvement of up to around 0.7% BLEU score over the baseline paper for morphologically rich languages (English–Hindi, English–Tamil, and English–Telugu) and 0.3% BLEU score for the low-resource Hindi–Kangri pair.
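A minimal sketch of the weighting idea: score each pseudo pair by the similarity between a sentence and its round-trip reconstruction, then weight the per-pair losses. Here Jaccard token overlap stands in for the paper's semantic similarity model, and all names are illustrative:

```python
def round_trip_weight(original: str, reconstructed: str) -> float:
    """Score a pseudo-parallel pair by round-trip similarity between the
    original sentence and its back-translated reconstruction.
    Jaccard token overlap is a stand-in for a semantic similarity model."""
    a = set(original.lower().split())
    b = set(reconstructed.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def weighted_reconstruction_loss(losses, weights):
    """Weighted average of per-pair reconstruction losses, so that
    low-quality pseudo pairs contribute less to training."""
    return sum(w * l for w, l in zip(weights, losses)) / sum(weights)
```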
Back-translation effects on static and contextual word embeddings for topic classification tasks
This study investigates the impact of back-translation on topic classification, comparing its effects on static word vector representations (FastText) and contextual word embeddings (RoBERTa). Our objective was to determine whether back-translation improves classification performance across both types of embeddings. In experiments involving Logistic Regression, Support Vector Machine (SVM), Random Forest, and RNN-LSTM classifiers, we evaluated original datasets against those augmented with back-translated data in six languages. The results demonstrated that back-translation consistently enhanced the performance of classifiers using static word embeddings, with the F1-score increasing by up to 1.36% for Logistic Regression and 1.58% for SVM. Random Forest saw improvements of up to 2.80%, and RNN-LSTM by up to 1.46%; however, these gains were smaller in most languages and did not reach statistical significance. In contrast, the effect of back-translation on contextual embeddings from the RoBERTa model was negligible: no language showed a statistically significant F1-score improvement. Despite this, RoBERTa still delivered the highest absolute performance, suggesting that advanced contextual models are less reliant on external data augmentation techniques. These findings indicate that back-translation is especially beneficial for classification tasks in low-resource languages when using static word embeddings, but its utility is limited for modern context-aware models.
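The augmentation step being evaluated can be sketched as follows; `translate` is a caller-supplied stand-in for whatever MT system performs the round trip (a hypothetical interface, not a real library call):

```python
def back_translate(text, translate, pivot="de"):
    """Round-trip a sentence through a pivot language to obtain a paraphrase."""
    pivoted = translate(text, src="en", tgt=pivot)
    return translate(pivoted, src=pivot, tgt="en")

def augment_with_back_translation(dataset, translate):
    """Append one back-translated paraphrase (same label) per labeled example."""
    augmented = list(dataset)
    for text, label in dataset:
        augmented.append((back_translate(text, translate), label))
    return augmented
```

The study's finding suggests this doubling of the training set mainly helps classifiers built on static embeddings, not contextual ones.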
Knowledge enhanced zero-resource machine translation using image-pivoting
Zero-resource machine translation usually means that no parallel corpora are available for training machine translation models; this can be addressed with the help of extra information such as images. However, ambiguity in the text, together with irrelevant information in images, may cause translation errors for some key words. To alleviate the image-text alignment deviation caused by word ambiguity, we introduce knowledge entities as an extra modality for the source language, enhancing the representations of the source text to clarify its semantics. Specifically, we use additional multi-modal information, including images and knowledge entities, as an auxiliary hint for the source text in a Transformer-based zero-resource translation framework. We also address the structural difference between the training and inference stages, handling cases where no visual information is available at inference time. The proposed method achieves state-of-the-art BLEU scores in zero-resource machine translation with the image as the pivot.
A Survey on Evaluation Metrics for Machine Translation
The success of the Transformer architecture has driven increased interest in machine translation (MT). The translation quality of neural network-based MT surpasses that of translations derived using statistical methods. This growth in MT research has spurred the development of accurate automatic evaluation metrics that allow us to track the performance of MT systems. However, automatically evaluating and comparing MT systems is a challenging task. Several studies have shown that traditional metrics (e.g., BLEU, TER) perform poorly at capturing semantic similarity between MT outputs and human reference translations. To improve on this, various evaluation metrics based on the Transformer architecture have been proposed. However, a systematic and comprehensive literature review of these metrics is still missing. It is therefore necessary to survey the existing automatic evaluation metrics for MT so that both established and new researchers can quickly understand the trends in MT evaluation over the past few years. In this survey, we present the trend of automatic evaluation metrics and provide a taxonomy of them to better understand developments in the field. We then explain the key contributions and shortcomings of the metrics. In addition, we select representative metrics from the taxonomy and conduct experiments to analyze related problems. Finally, we discuss the limitations of current automatic metric studies in light of our experiments and offer suggestions for further research to improve automatic evaluation metrics.
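As a reference point for the traditional metrics discussed above, here is a bare-bones sentence-level BLEU (single reference, no smoothing). It is a sketch of the formula only; real evaluations should use a tested implementation such as sacreBLEU:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngram_counts(hyp, n)
        ref_ngrams = ngram_counts(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing: an empty n-gram level zeroes the score
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

The survey's point is precisely that scores like this compare surface n-grams, so a fluent paraphrase of the reference can be penalized as heavily as a genuine error.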
Stimuli‐Responsive Nanoparticles for Controlled Drug Delivery in Synergistic Cancer Immunotherapy
Cancer immunotherapy has achieved promising clinical progress over the recent years for its potential to treat metastatic tumors and inhibit their recurrences effectively. However, low patient response rates and dose‐limiting toxicity remain as major dilemmas for immunotherapy. Stimuli‐responsive nanoparticles (srNPs) combined with immunotherapy offer the possibility to amplify anti‐tumor immune responses, where the weak acidity, high concentration of glutathione, overexpressions of enzymes, and reactive oxygen species, and external stimuli in tumors act as triggers for controlled drug release. This review highlights the design of srNPs based on tumor microenvironment and/or external stimuli to combine with different anti‐tumor drugs, especially the immunoregulatory agents, which eventually realize synergistic immunotherapy of malignant primary or metastatic tumors and acquire a long‐term immune memory to prevent tumor recurrence. The authors hope that this review can provide theoretical guidance for the construction and clinical transformation of smart srNPs for controlled drug delivery in synergistic cancer immunotherapy. Stimuli‐responsive nanoparticles activated by endo/exo‐microenvironments, including acidity, high‐level glutathione, overexpressed enzymes, and reactive oxygen species, various external stimuli, and so forth, are developed to deliver immunoregulatory agents controllably to eventually amplify the synergistic immunotherapy efficacy of primary or metastatic malignant tumors and acquire a long‐term immune memory to prevent tumor recurrence.