Catalogue Search | MBRL
Explore the vast range of titles available.
96 result(s) for "Gallant, Jack"
Feature-space selection with banded ridge regression
by Gallant, Jack L.; Eickenberg, Michael; Nunez-Elizalde, Anwar O.
in Complementarity; Computational neuroscience; Decomposition
2022
• Using multiple feature spaces in a joint encoding model improves prediction accuracy.
• The variance explained by the joint model can be decomposed over feature spaces.
• Banded ridge regression optimizes the regularization for each feature space.
• Banded ridge regression contains an implicit feature-space selection mechanism.
• Banded ridge regression can be solved with random search or gradient descent.
Encoding models provide a powerful framework to identify the information represented in brain recordings. In this framework, a stimulus representation is expressed within a feature space and is used in a regularized linear regression to predict brain activity. To account for a potential complementarity of different feature spaces, a joint model is fit on multiple feature spaces simultaneously. To adapt regularization strength to each feature space, ridge regression is extended to banded ridge regression, which optimizes a different regularization hyperparameter per feature space. The present paper proposes a method to decompose over feature spaces the variance explained by a banded ridge regression model. It also describes how banded ridge regression performs a feature-space selection, effectively ignoring non-predictive and redundant feature spaces. This feature-space selection leads to better prediction accuracy and to better interpretability. Banded ridge regression is then mathematically linked to a number of other regression methods with similar feature-space selection mechanisms. Finally, several methods are proposed to address the computational challenge of fitting banded ridge regressions on large numbers of voxels and feature spaces. All implementations are released in an open-source Python package called Himalaya.
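The core idea of the abstract above — one regularization strength per feature space, applied jointly in a single linear model — can be sketched in closed form. This is an illustrative NumPy sketch only, not the Himalaya implementation; the function name, the toy dimensions, and the lambda values are invented for the example.

```python
import numpy as np

def banded_ridge(X_list, y, lambdas):
    """Closed-form banded ridge: one regularization strength per feature space.

    X_list  : list of (n_samples, n_features_i) design matrices, one per space
    y       : (n_samples,) response vector (e.g. one voxel's activity)
    lambdas : per-space regularization strengths
    """
    X = np.hstack(X_list)
    # Block-diagonal penalty: lambda_i is applied to every feature of space i
    penalties = np.concatenate(
        [np.full(Xi.shape[1], lam) for Xi, lam in zip(X_list, lambdas)]
    )
    w = np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)
    # Split the joint weight vector back into per-space weight vectors
    sizes = np.cumsum([Xi.shape[1] for Xi in X_list])[:-1]
    return np.split(w, sizes)

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((100, 5)), rng.standard_normal((100, 8))
# Simulated responses depend only on the first feature space
y = X1 @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
# A large lambda on the non-predictive second space shrinks its weights
# toward zero -- the implicit feature-space selection the paper describes
w1, w2 = banded_ridge([X1, X2], y, lambdas=[1.0, 1e6])
```

In practice the per-space lambdas are hyperparameters tuned by cross-validation (via random search or gradient descent, per the highlights), rather than set by hand as here.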
Journal Article
Attention during natural vision warps semantic representation across the human brain
by Çukur, Tolga; Gallant, Jack L.; Huth, Alexander G.
in 631/378/116/2395; 631/378/2613/2616; 631/378/2649/1310
2013
The authors use functional magnetic resonance imaging to measure how the semantic representation changes when searching for different object categories in natural movies. They find tuning shifts that expand the representation of the attended category and of semantically related, but unattended, categories, and compress the representation of categories semantically dissimilar to the target.
Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.
Journal Article
Visual and linguistic semantic representations are aligned at the border of human visual cortex
by Bilenko, Natalia Y.; Deniz, Fatma; Nunez-Elizalde, Anwar O.
in 59/36; 631/378/116/2395; 631/378/2613
2021
Semantic information in the human brain is organized into multiple networks, but the fine-grain relationships between them are poorly understood. In this study, we compared semantic maps obtained from two functional magnetic resonance imaging experiments in the same participants: one that used silent movies as stimuli and another that used narrative stories. Movies evoked activity from a network of modality-specific, semantically selective areas in visual cortex. Stories evoked activity from another network of semantically selective areas immediately anterior to visual cortex. Remarkably, the pattern of semantic selectivity in these two distinct networks corresponded along the boundary of visual cortex: for visual categories represented posterior to the boundary, the same categories were represented linguistically on the anterior side. These results suggest that these two networks are smoothly joined to form one contiguous map.
This study shows that visual areas pass information to the amodal semantic system through semantically selective channels aligned at the border of visual cortex. This architecture might support the integration of visual perception and semantic memory.
Journal Article
Encoding and decoding in fMRI
by Naselaris, Thomas; Gallant, Jack L.; Nishimoto, Shinji
in Brain - physiology; Brain Mapping - methods; Brain research
2011
Over the past decade fMRI researchers have developed increasingly sensitive techniques for analyzing the information represented in BOLD activity. The most popular of these techniques is linear classification, a simple technique for decoding information about experimental stimuli or tasks from patterns of activity across an array of voxels. A more recent development is the voxel-based encoding model, which describes the information about the stimulus or task that is represented in the activity of single voxels. Encoding and decoding are complementary operations: encoding uses stimuli to predict activity while decoding uses activity to predict information about the stimuli. However, in practice these two operations are often confused, and their respective strengths and weaknesses have not been made clear. Here we use the concept of a linearizing feature space to clarify the relationship between encoding and decoding. We show that encoding and decoding operations can both be used to investigate some of the most common questions about how information is represented in the brain. However, focusing on encoding models offers two important advantages over decoding. First, an encoding model can in principle provide a complete functional description of a region of interest, while a decoding model can provide only a partial description. Second, while it is straightforward to derive an optimal decoding model from an encoding model it is much more difficult to derive an encoding model from a decoding model. We propose a systematic modeling approach that begins by estimating an encoding model for every voxel in a scan and ends by using the estimated encoding models to perform decoding.
• Encoding and decoding can be described in terms of a linearizing feature space.
• Encoding models can provide complete functional descriptions; decoding models cannot.
• Decoding models can relate activity directly to behavior; encoding models cannot.
• An encoding model is more easily converted to a decoding model than vice versa.
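The encoding direction described in this abstract — features predicting activity, evaluated per voxel on held-out data — can be illustrated with a minimal sketch. The simulated dimensions, noise level, and ridge parameter below are invented for illustration; real voxelwise modeling pipelines differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 200, 50, 10, 30

# Simulated stimulus feature space and voxel responses from a linear model
W_true = rng.standard_normal((n_feat, n_vox))
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

# Encoding: regularized linear regression from features to all voxels at once
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Prediction accuracy: per-voxel correlation between predicted and
# measured activity on held-out data
Y_pred = X_test @ W_hat
r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
```

Decoding runs in the opposite direction (activity predicting stimulus information); as the abstract notes, a decoder can be derived from estimated encoding models, whereas the reverse is much harder.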
Journal Article
A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes
by Olman, Cheryl A.; Naselaris, Thomas; Gallant, Jack L.
in Adult; Art galleries & museums; Brain Mapping - methods
2015
Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain–machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery.
• A model of representation in early visual cortex decodes mental images of complex scenes.
• Mental imagery depends directly upon the encoding of low-level visual features.
• Low-level visual features of mental images are encoded by activity in early visual cortex.
• Depictive theories of mental imagery are strongly supported by our results.
• Brain activity evoked by mental imagery can be used to guide internet image search.
Journal Article
Natural speech reveals the semantic maps that tile human cerebral cortex
by Gallant, Jack L.; Griffiths, Thomas L.; Theunissen, Frédéric E.
in 59/36; 631/378/116/2395; 631/378/2649/1594
2016
The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.
It has been proposed that language meaning is represented throughout the cerebral cortex in a distributed ‘semantic system’, but little is known about the details of this network; here, voxel-wise modelling of functional MRI data collected while subjects listened to natural stories is used to create a detailed atlas that maps representations of word meaning in the human brain.
A semantic atlas of the cerebral cortex
It is thought that the meanings of words and language are represented in a semantic system distributed across much of the cerebral cortex. However, little is known about the detailed functional and anatomical organization of this network. Alex Huth, Jack Gallant and colleagues set out to map the functional representations of semantic meaning in the human brain using voxel-based modelling of functional magnetic resonance imaging (fMRI) recordings made while subjects listened to natural narrative speech. They find that each semantic concept is represented in multiple semantic areas, and each semantic area represents multiple semantic concepts. The recovered semantic maps are largely consistent across subjects, however, providing the basis for a semantic atlas that can be used for future studies of language processing. An interactive version of the atlas can be explored at http://gallantlab.org/huth2016.
Journal Article
Voxelwise encoding models with non-spherical multivariate normal priors
by Nunez-Elizalde, Anwar O.; Gallant, Jack L.; Huth, Alexander G.
in Accuracy; Algorithms; Brain - physiology
2019
Predictive models for neural or fMRI data are often fit using regression methods that employ priors on the model parameters. One widely used method is ridge regression, which employs a spherical multivariate normal prior that assumes equal and independent variance for all parameters. However, a spherical prior is not always optimal or appropriate. There are many cases where expert knowledge or hypotheses about the structure of the model parameters could be used to construct a better prior. In these cases, non-spherical multivariate normal priors can be employed using a generalized form of ridge known as Tikhonov regression. Yet Tikhonov regression is only rarely used in neuroscience. In this paper we discuss the theoretical basis for Tikhonov regression, demonstrate a computationally efficient method for its application, and show several examples of how Tikhonov regression can improve predictive models for fMRI data. We also show that many earlier studies have implicitly used Tikhonov regression by linearly transforming the regressors before performing ridge regression.
• Theoretical basis for encoding models with multivariate normal (MVN) priors.
• Non-spherical MVN priors improve prediction accuracy in multiple settings.
• Joint model estimation with banded ridge regression improves prediction accuracy.
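The abstract's closing observation — that Tikhonov regression with a non-spherical prior is equivalent to ridge regression on linearly transformed regressors — can be verified numerically. This is a toy sketch under an invented AR(1)-style prior covariance; the equivalence uses the identity w = L b, where b is the ridge solution on X L and L is a Cholesky factor of the prior covariance Σ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 80, 6
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# A non-spherical prior covariance: nearby features are assumed correlated
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(Sigma)

# Tikhonov regression: MVN prior N(0, Sigma) on the weights
w_tik = np.linalg.solve(X.T @ X + np.linalg.inv(Sigma), X.T @ y)

# Equivalent ridge regression on linearly transformed regressors X @ L,
# mapping the ridge weights back through L
Xt = X @ L
b = np.linalg.solve(Xt.T @ Xt + np.eye(p), Xt.T @ y)
w_ridge = L @ b
```

The two weight vectors coincide, which is why earlier studies that transformed their regressors before running ridge were implicitly doing Tikhonov regression.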
Journal Article
Identifying natural images from human brain activity
by Prenger, Ryan J.; Naselaris, Thomas; Gallant, Jack L.
in Biological and medical sciences; Brain; Brain - physiology
2008
Reading the mind
Recent functional magnetic resonance imaging (fMRI) studies have shown that, based on patterns of activity evoked by different categories of visual images, it is possible to deduce simple features in the visual scene, or to which category it belongs. Kay et al. take this approach a tantalizing step further. Their newly developed decoding method, based on quantitative receptive field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas, can identify with high accuracy which specific natural image an observer saw, even for an image chosen at random from 1,000 distinct images. This prompts the thought that it may soon be possible to decode subjective perceptual experiences such as visual imagery and dreams, an idea previously restricted to the realm of science fiction.
Recent functional magnetic resonance imaging (fMRI) studies have shown that it is possible to deduce simple features in the visual scene or to which category it belongs. A decoding method based on quantitative receptive field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas has now been developed. These models make it possible to identify, out of a large set of completely novel complex images, which specific image was seen by an observer.
A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation[1,2], position[3] and object category[4,5] from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person's visual experience from measurements of brain activity alone.
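The identification scheme in this abstract — predict the activity pattern each candidate image should evoke, then pick the candidate whose predicted pattern best matches the measured one — reduces to a few lines. This sketch uses invented dimensions and plain least squares in place of the paper's receptive-field models.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_images, n_feat, n_vox = 300, 1000, 20, 40

# Fit a linear encoding model per voxel from training images
W_true = rng.standard_normal((n_feat, n_vox))
X_train = rng.standard_normal((n_train, n_feat))
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_vox))
W_hat = np.linalg.lstsq(X_train, Y_train, rcond=None)[0]

# Identification: which of 1,000 novel candidate images evoked this response?
X_cand = rng.standard_normal((n_images, n_feat))  # candidate image features
true_idx = 42
y_obs = X_cand[true_idx] @ W_true + rng.standard_normal(n_vox)

# Predict the activity pattern for every candidate image and choose the
# one whose prediction correlates best with the observed pattern
Y_pred = X_cand @ W_hat
scores = [np.corrcoef(y_obs, Y_pred[i])[0, 1] for i in range(n_images)]
identified = int(np.argmax(scores))
```

Note that this identifies among novel images never used for model fitting, which is what distinguishes the approach from classifiers trained on fixed stimulus categories.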
Journal Article
Pyrcca: Regularized Kernel Canonical Correlation Analysis in Python and Its Applications to Neuroimaging
by Gallant, Jack L.; Bilenko, Natalia Y.
in Analysis of covariance; Bioinformatics; Brain architecture
2016
In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
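The CCA at the heart of Pyrcca — finding maximally correlated projections of two sets of variables, such as two subjects' responses to the same movie — can be sketched with a whitening-plus-SVD solution. This is a NumPy-only illustration, not Pyrcca's implementation; the function, the regularization value, and the simulated "views" are invented for the example.

```python
import numpy as np

def cca(X, Y, reg=1e-3):
    """Regularized CCA via whitening + SVD.

    Returns the canonical correlations and the projection weights
    for each view, strongest component first.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root of a symmetric positive-definite matrix
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return s, Wx @ U, Wy @ Vt.T

rng = np.random.default_rng(4)
latent = rng.standard_normal(500)
# Two "views" (e.g. two subjects' responses) sharing one latent signal
X = np.column_stack([latent + 0.3 * rng.standard_normal(500) for _ in range(5)])
Y = np.column_stack([latent + 0.3 * rng.standard_normal(500) for _ in range(4)])
corrs, Ax, Ay = cca(X, Y)
```

The leading canonical correlation is high because both views are driven by the same latent signal, which is the logic behind Pyrcca's cross-subject comparison: shared functional response patterns surface as strong canonical components.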
Journal Article
Phonemic segmentation of narrative speech in human cerebral cortex
by Gallant, Jack L.; Deniz, Fatma; Theunissen, Frédéric E.
in 631/378/116/2395; 631/378/2619/2618; 631/378/2649/1594
2023
Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole-brain mapping obtained with fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we also have identified candidate regions within the speech cortical network where this joint processing occurs.
The neural dynamics underlying speech comprehension are not well understood. Here, the authors show that phonemic-to-lexical processing is localized to a large region of the temporal cortex, and that segmentation of the speech stream occurs mostly at the level of diphones.
Journal Article