Asset Details
Deep neural networks and humans both benefit from compositional language structure
by Galke, Lukas; Raviv, Limor; Ram, Yoav
Subjects: 631/114/1305 / 631/114/2397 / 639/705/1042 / 639/705/117 / Advantages / Artificial languages / Artificial neural networks / Deep Learning / Generalization / Humanities and Social Sciences / Humans / Information processing / Language / Language acquisition / Language modeling / Languages / Large language models / Learnability / Learning / Learning - physiology / Linguistics / Machine learning / Memorization / Memory / multidisciplinary / Natural Language Processing / Networks / Neural networks / Neural Networks, Computer / Recurrent / Recurrent neural networks / Science / Science (multidisciplinary)
2024
Journal Article
Overview
Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, allowing humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures. However, this learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning. Here, we directly test how neural networks compare to humans in learning and generalizing different languages that vary in their degree of compositional structure. We evaluate the memorization and generalization capabilities of a large language model and recurrent neural networks, and show that both deep neural networks exhibit a learnability advantage for more structured linguistic input: neural networks exposed to more compositional languages show more systematic generalization, greater agreement between different agents, and greater similarity to human learners.
This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in closer alignment with human learning. This finding has important implications for both understanding human language acquisition and designing artificial language systems.
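The "degree of compositional structure" the abstract refers to is commonly operationalized in the artificial-language-learning literature as topographic similarity: the correlation between pairwise distances in meaning space and pairwise distances between the corresponding word forms. The sketch below illustrates this measure on two invented toy languages; the languages and function names are illustrative assumptions, not the paper's actual materials or code.

```python
# Illustrative sketch (not the paper's code): topographic similarity as a
# proxy for compositional structure. A compositional language reuses one
# morpheme per meaning feature; a holistic language pairs meanings with
# arbitrary, unanalyzable forms.
from itertools import combinations

def edit_distance(a, b):
    """Levenshtein distance between two strings (single-row DP)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def topographic_similarity(language):
    """Correlate meaning distances (Hamming over feature tuples) with
    form distances (edit distance) across all pairs of items."""
    meanings = list(language)
    meaning_d, form_d = [], []
    for m1, m2 in combinations(meanings, 2):
        meaning_d.append(sum(a != b for a, b in zip(m1, m2)))
        form_d.append(edit_distance(language[m1], language[m2]))
    return pearson(meaning_d, form_d)

# Fully compositional toy language: "ki"=circle, "zu"=square, "wa"=red, "mo"=blue.
compositional = {("circle", "red"): "kiwa", ("circle", "blue"): "kimo",
                 ("square", "red"): "zuwa", ("square", "blue"): "zumo"}
# Holistic toy language: arbitrary form-meaning pairings, no shared structure.
holistic = {("circle", "red"): "pofa", ("circle", "blue"): "kiwa",
            ("square", "red"): "wemo", ("square", "blue"): "tizu"}

print(topographic_similarity(compositional))  # ≈ 1.0: perfectly systematic
print(topographic_similarity(holistic))       # substantially lower
```

Under this measure, the abstract's claim can be restated as: the higher a training language's topographic similarity, the more systematically both humans and neural networks generalize it to unseen meanings.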