Search Results

1 result for "Chamali, Irene"
It’s All Greek to Them: Challenges in Translating Greek Slang and Idioms via LLMs and NMT
This thesis investigates the relatively understudied task of translating Greek (a medium-resource language) slang and idiomatic expressions. While Machine Translation (MT) has made significant advances over the past decade, first through Neural Machine Translation (NMT) and later with Large Language Models (LLMs), its effectiveness remains underexplored for informal and culturally specific linguistic constructions, especially in under-resourced settings. This work addresses these challenges by creating two novel parallel Greek-English datasets, one for slang and one for idioms, and testing three LLMs (Gemma, Llama, Mistral) and one NMT model (Helsinki). To probe the models' knowledge, both datasets are also manually annotated with "informativeness" scores by human participants, indicating how easily the meaning of a target expression can be inferred in a given context. An error analysis is also conducted to identify specific model weaknesses. Results show consistently poor translation performance across all models on both datasets, with no statistically significant differences among them. However, localized differences in handling informativeness and substantial variation in error types are observed. These findings align with existing research highlighting the challenges of translating under-resourced languages and culturally embedded expressions.
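
As a rough illustration of the kind of evaluation the abstract describes, the sketch below translates a couple of Greek idiomatic expressions with a Helsinki-NLP OPUS-MT checkpoint via Hugging Face transformers and scores the output against reference translations using sacrebleu. The model identifier and the two example pairs are assumptions for illustration only; they are not taken from the thesis's datasets, and the checkpoint name may need to be swapped for whichever Greek-to-English OPUS-MT model is available.

from transformers import pipeline
import sacrebleu

# Hypothetical Greek-English pairs standing in for the slang/idiom datasets.
pairs = [
    ("Μου έφυγε το καφάσι.", "I was blown away."),
    ("Σιγά τα λάχανα!", "Big deal!"),
]

sources = [el for el, _ in pairs]
references = [en for _, en in pairs]

# Greek-to-English OPUS-MT ("Helsinki") model; the exact checkpoint name is an
# assumption, so substitute whichever Greek-to-English OPUS-MT model is available.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-grk-en")
hypotheses = [out["translation_text"] for out in translator(sources)]

# Corpus-level BLEU; sacrebleu expects a list of reference streams,
# hence the extra nesting around `references`.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}")

In the thesis's setting, the same loop would run over the full slang and idiom datasets and for each of the tested models, with BLEU (or a comparable metric) and the error analysis used to compare them.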