InterTrans: Leveraging Transitive Intermediate Translations to Enhance LLM-based Code Translation
by Adams, Bram; Tian, Yuan; Nie, Pengyu; Cogo, Filipe R.; Macedo, Marcos
in Algorithms / Automation / Benchmarks / Large language models / Modernization / Multilingualism / Programming languages / Semantics / Sequences / Software engineering / Source code / Translations
2024
Overview
Code translation aims to convert a program from one programming language (PL) to another. This long-standing software engineering task is crucial for modernizing legacy systems, ensuring cross-platform compatibility, enhancing performance, and more. However, automating this process remains challenging due to the many syntactic and semantic differences between PLs. Recent studies show that even advanced techniques such as large language models (LLMs), especially open-source LLMs, still struggle with the task. Currently, code LLMs are trained with source code from multiple programming languages, and thus exhibit multilingual capabilities. In this paper, we investigate whether such multilingual capabilities can be harnessed to enhance code translation. To achieve this goal, we introduce InterTrans, an LLM-based automated code translation approach that, in contrast to existing approaches, leverages intermediate translations across PLs to bridge the syntactic and semantic gaps between source and target PLs. InterTrans contains two stages. It first utilizes a novel Tree of Code Translation (ToCT) algorithm to plan transitive intermediate translation sequences between a given source and target PL, then validates them in a specific order. We evaluate InterTrans with three open LLMs on three benchmarks (i.e., CodeNet, HumanEval-X, and TransCoder) involving six PLs. Results show an absolute improvement between 18.3% and 43.3% in Computation Accuracy (CA) for InterTrans over Direct Translation with 10 attempts. The best-performing variant of InterTrans (with the Magicoder LLM) achieved an average CA of 87.3%-95.4% on the three benchmarks.
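The planning stage described above can be illustrated with a small sketch. The abstract states only that ToCT plans transitive intermediate translation sequences between a source and target PL and that these are validated in order; the sketch below is a hypothetical reading of that idea (enumerating candidate paths through intermediate languages, direct translation first), not the paper's actual implementation. All names (`plan_translation_paths`, `max_intermediates`) are illustrative.

```python
from itertools import permutations

def plan_translation_paths(source, target, languages, max_intermediates=1):
    """Enumerate candidate translation sequences from source to target.

    The direct source->target translation comes first; sequences routed
    through up to max_intermediates other PLs follow, to be attempted
    (and validated) in order.
    """
    others = [pl for pl in languages if pl not in (source, target)]
    paths = [(source, target)]  # direct translation is tried first
    for k in range(1, max_intermediates + 1):
        for mids in permutations(others, k):
            paths.append((source, *mids, target))
    return paths

# Example: translating Java to Python with C++ and Go as possible pivots.
paths = plan_translation_paths("Java", "Python", ["Java", "Python", "C++", "Go"])
# -> [('Java', 'Python'), ('Java', 'C++', 'Python'), ('Java', 'Go', 'Python')]
```

Each candidate path would then be executed hop by hop with the LLM, stopping at the first sequence whose final output passes validation (e.g., the benchmark's test cases).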
Publisher
Cornell University Library, arXiv.org