Search Results

555 results for "631/378/2649/1594"
Semantic reconstruction of continuous language from non-invasive brain recordings
A brain–computer interface that decodes continuous language from non-invasive recordings would have many scientific and practical applications. Currently, however, non-invasive language decoders can only identify stimuli from among a small set of words or phrases. Here we introduce a non-invasive decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech and even silent videos, demonstrating that a single decoder can be applied to a range of tasks. We tested the decoder across cortex and found that continuous language can be separately decoded from multiple regions. As brain–computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder. Our findings demonstrate the viability of non-invasive language brain–computer interfaces. Tang et al. show that continuous language can be decoded from functional MRI recordings to recover the meaning of perceived and imagined speech stimuli and silent videos and that this language decoding requires subject cooperation.
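The decoding strategy this abstract describes pairs a language model, which proposes candidate word sequences, with an encoding model, which predicts the brain response each candidate would evoke; candidates survive according to how well their predicted responses match the recording. A minimal sketch of that idea, where lm_propose and encode_to_bold are hypothetical stand-ins for the authors' actual components:
```python
# Sketch of encoding-model-guided beam search. lm_propose and encode_to_bold
# are hypothetical placeholders: the first suggests likely next words, the
# second predicts the fMRI response a candidate word sequence would evoke.
import numpy as np

def decode_step(beams, observed_bold, lm_propose, encode_to_bold, width=10):
    """Extend each candidate word sequence and keep the `width` candidates
    whose predicted brain responses best match the observed recording."""
    candidates = []
    for seq, _ in beams:
        for word in lm_propose(seq):
            new_seq = seq + [word]
            predicted = encode_to_bold(new_seq)
            # Score by similarity of predicted vs. observed response.
            score = -float(np.linalg.norm(predicted - observed_bold))
            candidates.append((new_seq, score))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:width]
```
Scoring language-model proposals through a forward encoding model avoids having to invert the brain-to-words mapping directly, which is broadly the shape of the approach the abstract summarizes.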
Shared computational principles for language processing in humans and deep language models
Departing from traditional linguistic models, advances in deep learning have resulted in a new class of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language. Deep language models have revolutionized natural language processing. The paper identifies three computational principles shared between deep language models and the human brain, which can transform our understanding of the neural basis of language.
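Principle (2), matching pre-onset predictions against the incoming word to compute post-onset surprise, is straightforward to illustrate with any autoregressive model. A minimal sketch using GPT-2 via the Hugging Face transformers library (a stand-in choice; the study's own model and pipeline differ):
```python
# Per-word surprisal from an autoregressive LM. GPT-2 via Hugging Face
# transformers is a stand-in here, not the study's own model or pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "the quick brown fox jumps over the lazy dog"
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits                      # (1, seq_len, vocab_size)
log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of token t is -log p(token_t | tokens_<t), so shift by one.
surprisal = -log_probs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1)
for tok, s in zip(ids[0, 1:], surprisal):
    print(f"{tokenizer.decode(int(tok)):>8s}  {s.item():5.2f} nats")
```
High-surprisal words are those the model's pre-onset prediction missed, the same quantity the paper relates to post-onset neural responses.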
Speech rhythms and their neural foundations
The recognition of spoken language has typically been studied by focusing on either words or their constituent elements (for example, low-level features or phonemes). More recently, the ‘temporal mesoscale’ of speech has been explored, specifically regularities in the envelope of the acoustic signal that correlate with syllabic information and that play a central role in production and perception processes. The temporal structure of speech at this scale is remarkably stable across languages, with a preferred range of rhythmicity of 2–8 Hz. Importantly, this rhythmicity is required by the processes underlying the construction of intelligible speech. Much current work focuses on audio-motor interactions in speech, highlighting behavioural and neural evidence that demonstrates how properties of perceptual and motor systems, and their relation, can underlie the mesoscale speech rhythms. The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscillator, a conjecture that aligns well with current proposals highlighting the fundamental role of neural oscillations in perception and cognition. The findings also cast motor theories of speech in a different light, placing new mechanistic constraints on accounts of the action–perception interface. Syllables play a central role in speech production and perception. In this Review, Poeppel and Assaneo outline how a simple biophysical model of the speech production system as an oscillator explains the remarkably stable rhythmic structure of spoken language.
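The 2–8 Hz envelope rhythmicity the Review centres on can be checked directly: extract the amplitude envelope of a recording and locate the peak of its spectrum. A minimal sketch, assuming a WAV file of natural speech (the file name is a placeholder):
```python
# Sketch: extract a speech recording's amplitude envelope and find where its
# spectrum peaks; for natural speech this typically lands in the 2-8 Hz band
# discussed in the Review. "speech_sample.wav" is a placeholder file name.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

fs, audio = wavfile.read("speech_sample.wav")
audio = audio.astype(float)
if audio.ndim > 1:                     # collapse stereo to mono if needed
    audio = audio.mean(axis=1)

envelope = np.abs(hilbert(audio))      # amplitude envelope via analytic signal
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)

band = (freqs >= 1) & (freqs <= 16)    # generous window around 2-8 Hz
peak = freqs[band][np.argmax(spectrum[band])]
print(f"Envelope spectrum peaks near {peak:.1f} Hz")
```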
The language network as a natural kind within the broader landscape of the human brain
Language behaviour is complex, but neuroscientific evidence disentangles it into distinct components supported by dedicated brain areas or networks. In this Review, we describe the ‘core’ language network, which includes left-hemisphere frontal and temporal areas, and show that it is strongly interconnected, independent of input and output modalities, causally important for language and language-selective. We discuss evidence that this language network plausibly stores language knowledge and supports core linguistic computations related to accessing words and constructions from memory and combining them to interpret (decode) or generate (encode) linguistic messages. We emphasize that the language network works closely with, but is distinct from, both lower-level — perceptual and motor — mechanisms and higher-level systems of knowledge and reasoning. The perceptual and motor mechanisms process linguistic signals, but, in contrast to the language network, are sensitive only to these signals’ surface properties, not their meanings; the systems of knowledge and reasoning (such as the system that supports social reasoning) are sometimes engaged during language use but are not language-selective. This Review lays a foundation both for in-depth investigations of these different components of the language processing pipeline and for probing inter-component interactions. Many brain areas support complex language processing behaviours. In this Review, Fedorenko et al. disentangle the ‘core’ language system as functionally distinct from the perceptual and motor brain areas and knowledge and reasoning systems it closely interacts with during language comprehension and production.
An investigation across 45 languages and 12 language families reveals a universal language network
To understand the architecture of human language, it is critical to examine diverse languages; however, most cognitive neuroscience research has focused on only a handful of primarily Indo-European languages. Here we report an investigation of the fronto-temporo-parietal language network across 45 languages and establish the robustness to cross-linguistic variation of its topography and key functional properties, including left-lateralization, strong functional integration among its brain regions and functional selectivity for language processing. fMRI reveals similar topography, selectivity and inter-connectedness of language brain areas across 45 languages. These properties may allow the language system to handle the shared features of languages, shaped by biological and cultural evolution.
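One standard way to quantify the left-lateralization reported here is the index LI = (L - R) / (L + R) over mirrored left- and right-hemisphere ROI responses; the summary does not state which metric the study itself uses, so treat this sketch as purely illustrative:
```python
# One common lateralization index: LI = (L - R) / (L + R), computed from
# summed responses in mirrored left/right ROIs. Values near +1 indicate
# strong left-lateralization. The example arrays are illustrative only.
import numpy as np

def lateralization_index(left_roi, right_roi):
    """Return LI in [-1, 1]; positive means left-lateralized."""
    l, r = float(np.sum(left_roi)), float(np.sum(right_roi))
    return (l - r) / (l + r)

left = np.array([3.2, 2.8, 3.5])    # e.g., mean betas in a left frontal ROI
right = np.array([1.1, 0.9, 1.3])   # mirrored right-hemisphere ROI
print(f"LI = {lateralization_index(left, right):.2f}")  # ~0.48, left-lateralized
```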
Natural speech reveals the semantic maps that tile human cerebral cortex
The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain. It has been proposed that language meaning is represented throughout the cerebral cortex in a distributed ‘semantic system’, but little is known about the details of this network; here, voxel-wise modelling of functional MRI data collected while subjects listened to natural stories is used to create a detailed atlas that maps representations of word meaning in the human brain. It is thought that the meanings of words and language are represented in a semantic system distributed across much of the cerebral cortex. However, little is known about the detailed functional and anatomical organization of this network. Alex Huth, Jack Gallant and colleagues set out to map the functional representations of semantic meaning in the human brain using voxel-based modelling of functional magnetic resonance imaging (fMRI) recordings made while subjects listened to natural narrative speech. They find that each semantic concept is represented in multiple semantic areas, and each semantic area represents multiple semantic concepts. The recovered semantic maps are largely consistent across subjects, however, providing the basis for a semantic atlas that can be used for future studies of language processing. An interactive version of the atlas can be explored at http://gallantlab.org/huth2016.
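Voxel-wise modelling of the kind described here amounts to fitting a regularized linear regression from stimulus features to each voxel's time course and scoring held-out predictions. A minimal sketch on synthetic data (the shapes, regularization strength and train/test split are illustrative, not the paper's values; the real pipeline derives features from word embeddings convolved with a haemodynamic response):
```python
# Sketch of voxel-wise encoding-model fitting: regularized regression from
# stimulus features to each voxel's time course, scored by held-out
# prediction accuracy. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 300, 50, 1000
features = rng.standard_normal((n_trs, n_features))   # per-TR semantic features
bold = rng.standard_normal((n_trs, n_voxels))         # per-TR voxel responses

# A single multi-output ridge fit models every voxel at once.
model = Ridge(alpha=10.0).fit(features[:250], bold[:250])
pred = model.predict(features[250:])

# Held-out correlation per voxel is a standard selectivity score.
r = np.array([np.corrcoef(pred[:, v], bold[250:, v])[0, 1]
              for v in range(n_voxels)])
print(f"median held-out correlation: {np.median(r):.3f}")
```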
The Human Connectome Project's neuroimaging approach
This paper describes an integrated approach for neuroimaging data acquisition, analysis and sharing. Building on methodological advances from the Human Connectome Project (HCP) and elsewhere, the HCP-style paradigm applies to new and existing data sets that meet core requirements and may accelerate progress in understanding the brain in health and disease. Noninvasive human neuroimaging has yielded many discoveries about the brain. Numerous methodological advances have also occurred, though inertia has slowed their adoption. This paper presents an integrated approach to data acquisition, analysis and sharing that builds upon recent advances, particularly from the Human Connectome Project (HCP). The 'HCP-style' paradigm has seven core tenets: (i) collect multimodal imaging data from many subjects; (ii) acquire data at high spatial and temporal resolution; (iii) preprocess data to minimize distortions, blurring and temporal artifacts; (iv) represent data using the natural geometry of cortical and subcortical structures; (v) accurately align corresponding brain areas across subjects and studies; (vi) analyze data using neurobiologically accurate brain parcellations; and (vii) share published data via user-friendly databases. We illustrate the HCP-style paradigm using existing HCP data sets and provide guidance for future research. Widespread adoption of this paradigm should accelerate progress in understanding the brain in health and disease.
Language is primarily a tool for communication rather than thought
Language is a defining characteristic of our species, but the function, or functions, that it serves has been debated for centuries. Here we bring recent evidence from neuroscience and allied disciplines to argue that in modern humans, language is a tool for communication, contrary to a prominent view that we use language for thinking. We begin by introducing the brain network that supports linguistic ability in humans. We then review evidence for a double dissociation between language and thought, and discuss several properties of language that suggest that it is optimized for communication. We conclude that although the emergence of language has unquestionably transformed human culture, language does not appear to be a prerequisite for complex thought, including symbolic thought. Instead, language is a powerful tool for the transmission of cultural knowledge; it plausibly co-evolved with our thinking and reasoning capacities, and only reflects, rather than gives rise to, the signature sophistication of human cognition. Evidence from neuroscience and related fields suggests that language and thought processes operate in distinct networks in the human brain and that language is optimized for communication and not for complex thought.
Cortical tracking of hierarchical linguistic structures in connected speech
Language consists of a hierarchy of linguistic units: words, phrases and sentences. The authors explore whether and how these abstract linguistic units are represented in the brain during speech comprehension. They find that cortical rhythms track the timescales of these linguistic units, revealing a hierarchy of neural processing timescales underlying internal construction of hierarchical linguistic structure. The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries that are clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. We found that, while participants listened to connected speech, cortical activity at different timescales concurrently tracked the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases and sentences. Notably, the neural tracking of hierarchical linguistic structures was dissociated from the encoding of acoustic cues and from the predictability of incoming words. Our results indicate that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.
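The frequency-tagging logic behind this result: if words arrive at a fixed rate and combine into phrases and sentences at integer fractions of that rate, tracking of each level appears as a spectral peak at the corresponding frequency. A minimal sketch on a simulated response (the 4/2/1 Hz rates match the kind of design described, but the signal itself is synthetic):
```python
# Sketch of the frequency-tagging analysis: if words arrive at 4 Hz, phrases
# at 2 Hz and sentences at 1 Hz, neural tracking of each level shows up as a
# spectral peak at that rate. The "recording" below is simulated.
import numpy as np

fs, dur = 100, 60                                  # 100 Hz sampling, 60 s
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
response = (1.0 * np.sin(2 * np.pi * 4 * t)        # word-rate component
            + 0.6 * np.sin(2 * np.pi * 2 * t)      # phrase-rate component
            + 0.4 * np.sin(2 * np.pi * 1 * t)      # sentence-rate component
            + rng.standard_normal(t.size))         # broadband noise

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
for f in (1.0, 2.0, 4.0):
    print(f"power at {f:.0f} Hz: {spectrum[np.argmin(np.abs(freqs - f))]:.3f}")
```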
Machine translation of cortical activity to text with an encoder–decoder framework
A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, we train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence. For each participant, data consist of several spoken repeats of a set of 30–50 sentences, along with the contemporaneous signals from ~250 electrodes distributed over peri-Sylvian cortices. Average word error rates across a held-out repeat set are as low as 3%. Finally, we show how decoding with limited data can be improved with transfer learning, by training certain layers of the network on multiple participants’ data. Makin and colleagues decode speech from neural signals recorded during a preoperative procedure, using an algorithm inspired by machine translation. For one participant reading from a closed set of 50 sentences, decoding accuracy is nearly perfect.
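The recipe is the standard sequence-to-sequence pattern: encode the whole neural sequence into a compact representation, then decode it word by word. A minimal PyTorch sketch of that architecture (layer sizes, vocabulary size and the single-layer GRUs are illustrative choices, not the authors' exact network):
```python
# Minimal encoder-decoder in PyTorch: a GRU encodes a sequence of neural
# feature vectors into one hidden state; a second GRU decodes it word by
# word. All dimensions and the single-layer GRUs are illustrative choices.
import torch
import torch.nn as nn

class NeuralToText(nn.Module):
    def __init__(self, n_channels=256, hidden=128, vocab=2000):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, neural, prev_tokens):
        _, h = self.encoder(neural)                 # compress the neural sequence
        dec_out, _ = self.decoder(self.embed(prev_tokens), h)
        return self.out(dec_out)                    # logits over the vocabulary

model = NeuralToText()
neural = torch.randn(4, 120, 256)              # batch of 4: 120 steps x 256 electrodes
prev_tokens = torch.randint(0, 2000, (4, 12))  # teacher-forced previous words
logits = model(neural, prev_tokens)            # (4, 12, 2000) next-word scores
```
Training such a model with cross-entropy on the next-word logits, as in ordinary machine translation, is the transfer the paper's title refers to; the closed sentence set keeps the vocabulary small enough for the limited data.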