Asset Details
How do languages influence each other? Studying cross-lingual data sharing during LM fine-tuning
by Choenni, Rochelle; Garrette, Dan; Shutova, Ekaterina
in Data acquisition / Knowledge acquisition / Languages / Large language models / Multilingualism / Training
2024
Format
Paper
Overview
Multilingual large language models (MLLMs) are jointly trained on data from many different languages such that the representation of individual languages can benefit from other languages' data. Impressive performance on zero-shot cross-lingual transfer shows that these models are capable of exploiting data from other languages. Yet, it remains unclear to what extent, and under which conditions, languages rely on each other's data. In this study, we use TracIn (Pruthi et al., 2020), a training data attribution (TDA) method, to retrieve the most influential training samples seen during multilingual fine-tuning for a particular test language. This allows us to analyse cross-lingual sharing mechanisms of MLLMs from a new perspective. While previous work studied cross-lingual sharing at the level of model parameters, we present the first approach to study cross-lingual sharing at the data level. We find that MLLMs rely on data from multiple languages from the early stages of fine-tuning and that this reliance gradually increases as fine-tuning progresses. We further study how different fine-tuning languages influence model performance on a given test language and find that they can both reinforce and complement the knowledge acquired from data of the test language itself.
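The TracIn method cited above scores the influence of a training example on a test example by summing, over saved training checkpoints, the learning-rate-weighted dot product of the per-example loss gradients at the two examples. A minimal PyTorch sketch of that published formula; the `checkpoints`, `lrs`, and `loss_fn` arguments are hypothetical placeholders, not the authors' released interface:

```python
import torch

def tracin_influence(model, checkpoints, lrs, loss_fn, train_example, test_example):
    """TracIn (Pruthi et al., 2020): approximate the influence of one training
    example on one test example as the sum over checkpoints of the learning
    rate times the dot product of the two per-example loss gradients."""
    params = [p for p in model.parameters() if p.requires_grad]
    score = 0.0
    for state_dict, lr in zip(checkpoints, lrs):
        model.load_state_dict(state_dict)
        # Gradient of the loss on the training example at this checkpoint.
        g_train = torch.autograd.grad(loss_fn(model, train_example), params)
        # Gradient of the loss on the test example at the same checkpoint.
        g_test = torch.autograd.grad(loss_fn(model, test_example), params)
        # Accumulate the learning-rate-weighted gradient dot product.
        score += lr * sum((gt * gs).sum().item() for gt, gs in zip(g_train, g_test))
    return score
```

Ranking all fine-tuning samples by this score for a given test input is what lets the paper ask which languages' data a test language actually drew on.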
Publisher
Cornell University Library, arXiv.org
Subject