Asset Details
Exploring Italian sentence embeddings properties through multi-tasking
by Samo, Giuseppe; Nastase, Vivi; Merlo, Paola; Jiang, Chunyang
in Error analysis / Linguistics / Multitasking / Representations / Semantics / Sentences / Synthetic data
2024
Paper
Overview
We investigate to what degree existing LLMs encode abstract linguistic information in Italian in a multi-task setting. We exploit curated synthetic data on a large scale -- several Blackbird Language Matrices (BLMs) problems in Italian -- and use them to study how sentence representations built using pre-trained language models encode specific syntactic and semantic information. We use a two-level architecture to model separately a compression of the sentence embeddings into a representation that contains relevant information for a task, and a BLM task. We then investigate whether we can obtain compressed sentence representations that encode syntactic and semantic information relevant to several BLM tasks. While we expected that the sentence structure -- in terms of sequence of phrases/chunks -- and chunk properties could be shared across tasks, performance and error analysis show that the clues for the different tasks are encoded in different manners in the sentence embeddings, suggesting that abstract linguistic notions such as constituents or thematic roles do not seem to be present in the pretrained sentence embeddings.
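The two-level setup described in the abstract -- a compression of pretrained sentence embeddings into a smaller task-relevant representation, followed by a BLM task model that scores candidate answers against a context sequence -- can be sketched roughly as below. The linear-plus-tanh bottleneck, the mean-pooled context, the dot-product scoring rule, and all dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def compress(sentence_emb, W_c):
    # Level 1 (hypothetical): project the pretrained sentence embedding
    # into a smaller representation meant to keep task-relevant information.
    return np.tanh(sentence_emb @ W_c)

def blm_score(compressed_seq, answer_comp, W_t):
    # Level 2 (hypothetical): score one candidate answer for a BLM problem
    # by comparing task projections of the context sequence and candidate.
    context = compressed_seq.mean(axis=0) @ W_t
    candidate = answer_comp @ W_t
    return float(context @ candidate)

rng = np.random.default_rng(0)
d_model, d_comp, d_task = 768, 64, 32          # assumed dimensions
W_c = rng.normal(size=(d_model, d_comp)) * 0.02  # stand-in for learned weights
W_t = rng.normal(size=(d_comp, d_task)) * 0.1

seq = rng.normal(size=(7, d_model))            # 7 context sentences of a BLM item
compressed = compress(seq, W_c)                # (7, 64)
answers = compress(rng.normal(size=(4, d_model)), W_c)  # 4 candidate answers
scores = [blm_score(compressed, a, W_t) for a in answers]
best = int(np.argmax(scores))                  # index of predicted answer
```

In a trained version, `W_c` would be shared (or probed for sharing) across BLM tasks while `W_t` is task-specific, which is the sharing question the abstract raises.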
Publisher
Cornell University Library, arXiv.org
Related Items