21 results for "Feather, Jenelle"
Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
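The evaluation described above rests on a standard encoding-model analysis: regress a model stage's activations onto measured voxel responses and score held-out predictions. The sketch below illustrates that pipeline with ridge regression; all shapes and data are synthetic placeholders, not the study's actual stimuli or scans.

```python
# Minimal sketch of a regression-based encoding-model evaluation:
# ridge-regress a model stage's activations onto fMRI voxel responses
# and score held-out predictions. Data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_features, n_voxels = 165, 512, 100

activations = rng.standard_normal((n_sounds, n_features))   # one model stage
weights = rng.standard_normal((n_features, n_voxels))
voxels = activations @ weights + rng.standard_normal((n_sounds, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(activations, voxels, test_size=0.2,
                                          random_state=0)
model = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-voxel Pearson correlation between predicted and measured responses;
# the median across voxels is one common summary of model-brain fit.
r = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxelwise r = {np.median(r):.2f}")
```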
Connectivity precedes function in the development of the visual word form area
Before children can read, their brains have yet to develop selective responses to words. This study demonstrates that a child's connectivity pattern at age 5 can predict where their own word-selective cortex will later develop. This suggests that connectivity lays the groundwork for later functional development of cortex. What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development.
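The core analysis here predicts later function from earlier connectivity. As a rough illustration of that kind of fingerprint-based prediction, the sketch below classifies which voxels become word-selective from their connectivity fingerprints; the synthetic data and the logistic-regression setup are illustrative assumptions, not the paper's actual pipeline.

```python
# Hedged sketch of fingerprint-based prediction: classify which cortical
# voxels will later become word-selective from their (earlier) connectivity
# fingerprints. Data are synthetic placeholders with planted structure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_voxels, n_targets = 2000, 80       # voxels x connectivity targets

fingerprints = rng.standard_normal((n_voxels, n_targets))  # age-5 connectivity
signal = fingerprints[:, :5].sum(axis=1)                   # toy structure
labels = (signal > np.quantile(signal, 0.9)).astype(int)   # age-8 selectivity

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, fingerprints, labels, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f}")
```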
Model metamers reveal divergent invariances between biological and artificial neural networks
Deep neural network models of sensory systems are often proposed to learn representational transformations with invariances like those in the brain. To reveal these invariances, we generated ‘model metamers’, stimuli whose activations within a model stage are matched to those of a natural stimulus. Metamers for state-of-the-art supervised and unsupervised neural network models of vision and audition were often completely unrecognizable to humans when generated from late model stages, suggesting differences between model and human invariances. Targeted model changes improved human recognizability of model metamers but did not eliminate the overall human–model discrepancy. The human recognizability of a model’s metamers was well predicted by their recognizability by other models, suggesting that models contain idiosyncratic invariances in addition to those required by the task. Metamer recognizability dissociated from both traditional brain-based benchmarks and adversarial vulnerability, revealing a distinct failure mode of existing sensory models and providing a complementary benchmark for model assessment. The authors test artificial neural networks with stimuli whose activations are matched to those of a natural stimulus. These ‘model metamers’ are often unrecognizable to humans, demonstrating a discrepancy between human and model sensory systems.
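Metamer generation as described here is, at its core, gradient-based input optimization: start from noise and adjust the input until a chosen model stage's activations match those evoked by a natural stimulus. Below is a minimal PyTorch sketch with a toy stand-in network; the real studies used trained vision and audio models.

```python
# Minimal sketch of metamer generation: optimize a noise input so one model
# stage's activations match those evoked by a natural stimulus. The tiny
# ConvNet and random "natural" image are stand-ins, not the study's models.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
model.requires_grad_(False)                  # freeze weights; optimize input

natural = torch.rand(1, 3, 64, 64)           # stands in for a natural image
target = model(natural).detach()             # activations to be matched

metamer = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([metamer], lr=0.01)

for step in range(500):
    opt.zero_grad()
    loss = ((model(metamer) - target) ** 2).mean()
    loss.backward()
    opt.step()
    with torch.no_grad():                    # keep the image in valid range
        metamer.clamp_(0, 1)

print(f"final activation-matching loss: {loss.item():.4f}")
```

When matching succeeds at late stages, the optimized input can look (or sound) nothing like the original, which is exactly the human-model discrepancy the paper probes.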
Evaluating Machine Learning Models of Sensory Systems
We rely on our sensory systems to perceive and interact with the world, and understanding how these systems work is a central focus in neuroscience. A goal of our field is to build stimulus-computable models of sensory systems that reproduce brain responses and behavior. The past decade has given rise to models that capture complex behaviors such as image classification, word recognition, and texture perception. Yet, there are known discrepancies between such models and human observers, such as in the architectural components, learning mechanisms, and resulting representations, that must be rectified to obtain complete models of the brain. This dissertation investigates the representations in contemporary models of sensory systems, focusing on the auditory and visual systems. The first study explores the extent to which deep neural network audio models capture human fMRI responses to sound. Most tested models out-predicted previous hand-engineered models of auditory cortex and exhibited hierarchical brain-model correspondence. The second study investigates the invariances of visual and auditory models of perception using "model metamers", synthetic stimuli that produce the same activations in a model as a natural stimulus. Behavioral experiments on humans using these stimuli reveal that the invariances of most current computational neural network models of perception do not align with human perceptual invariances. Our experiments trace this discrepancy to invariances that are specific to individual models, and provide some guidance for how to eliminate them. The third study uses techniques similar to those used to generate model metamers, but applies them to auditory texture models with the aim of reducing their dimensionality. We found that previous hand-engineered models of auditory texture can be significantly reduced in dimensionality without compromising their ability to capture human perception. The fourth study investigates the representational geometry of neural networks trained with biologically-inspired stochasticity. Together, this work presents ways to compare the representations of neural networks to those of human perceptual systems, and suggests paths for future improvements of these models.
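The dimensionality-reduction idea in the third study can be illustrated with a simple principal-component analysis of a texture-statistic matrix; the low-rank synthetic data below are an assumption standing in for real texture statistics.

```python
# Hedged sketch of reducing a texture model's dimensionality: project a
# high-dimensional set of texture statistics onto top principal components
# and check how much variance a reduced model retains. Synthetic data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_textures, n_stats = 200, 1500                 # sounds x texture statistics
stats = rng.standard_normal((n_textures, 20)) @ rng.standard_normal((20, n_stats))
stats += 0.1 * rng.standard_normal((n_textures, n_stats))  # low-rank + noise

pca = PCA(n_components=50).fit(stats)
print(f"variance retained by 50 components: {pca.explained_variance_ratio_.sum():.2%}")
```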
Representational similarity precedes category selectivity in the developing ventral visual pathway
Many studies have investigated the development of face-, scene-, and body-selective regions in the ventral visual pathway. This work has primarily focused on comparing the size and univariate selectivity of these neural regions in children versus adults. In contrast, very few studies have investigated the developmental trajectory of more distributed activation patterns within and across neural regions. Here, we scanned both children (ages 5–7) and adults to test the hypothesis that distributed representational patterns arise before category selectivity (for faces, bodies, or scenes) in the ventral pathway. Consistent with this hypothesis, we found mature representational patterns in several ventral pathway regions (e.g., FFA, PPA, etc.), even in children who showed no hint of univariate selectivity. These results suggest that representational patterns emerge first in each region, perhaps forming a scaffold upon which univariate category selectivity can subsequently develop. More generally, our findings demonstrate an important dissociation between category selectivity and distributed response patterns, and raise questions about the relative roles of each in development and adult cognition.
• Visual representations are implemented in the ventral visual pathway via category-selective regions and distributed activation patterns.
• Which develops first in children: small, selective regions or large-scale activation patterns?
• We found that even children with no category-selective regions whatsoever still had mature distributed activation patterns.
• We speculate that these distributed representational structures serve as a foundation upon which category-selective regions are later built.
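Comparing distributed activation patterns between groups is typically done with representational similarity analysis: build a representational dissimilarity matrix (RDM) per subject and correlate RDMs across subjects. A minimal sketch on synthetic responses follows; the condition counts, noise levels, and correlation-distance choice are all assumptions, not the study's parameters.

```python
# Hedged sketch of a representational-pattern comparison: build RDMs for a
# child and an adult and correlate their condition-pair dissimilarities.
# Response data are synthetic placeholders sharing a common structure.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_conditions, n_voxels = 20, 300                # e.g., faces/bodies/scenes

shared = rng.standard_normal((n_conditions, n_voxels))
adult = shared + 0.3 * rng.standard_normal((n_conditions, n_voxels))
child = shared + 0.6 * rng.standard_normal((n_conditions, n_voxels))

adult_rdm = pdist(adult, metric="correlation")  # pairwise dissimilarities
child_rdm = pdist(child, metric="correlation")

rho, _ = spearmanr(adult_rdm, child_rdm)
print(f"child-adult RDM similarity (Spearman rho) = {rho:.2f}")
```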
An Open, Large-Scale, Collaborative Effort to Estimate the Reproducibility of Psychological Science
Reproducibility is a defining feature of science. However, because of strong incentives for innovation and weak incentives for confirmation, direct replication is rarely practiced or published. The Reproducibility Project is an open, large-scale, collaborative effort to systematically examine the rate and predictors of reproducibility in psychological science. So far, 72 volunteer researchers from 41 institutions have organized to openly and transparently replicate studies published in three prominent psychological journals in 2008. Multiple methods will be used to evaluate the findings, calculate an empirical rate of replication, and investigate factors that predict reproducibility. Whatever the result, a better understanding of reproducibility will ultimately improve confidence in scientific methodology and findings.
Brain-Model Evaluations Need the NeuroAI Turing Test
What makes an artificial system a good model of intelligence? The classical test proposed by Alan Turing focuses on behavior, requiring that an artificial agent's behavior be indistinguishable from that of a human. While behavioral similarity provides a strong starting point, two systems with very different internal representations can produce the same outputs. Thus, in modeling biological intelligence, the field of NeuroAI often aims to go beyond behavioral similarity and achieve representational convergence between a model's activations and the measured activity of a biological system. This position paper argues that the standard definition of the Turing Test is incomplete for NeuroAI, and proposes a stronger framework called the "NeuroAI Turing Test", a benchmark that extends beyond behavior alone and additionally requires models to produce internal neural representations that are empirically indistinguishable from those of a brain up to measured individual variability, i.e., the difference between a computational model and the brain is no more than the difference between one brain and another. While the brain is not necessarily the ceiling of intelligence, it remains the only universally agreed-upon example, making it a natural reference point for evaluating computational models. By proposing this framework, we aim to shift the discourse from loosely defined notions of brain inspiration to a systematic and testable standard centered on both behavior and internal representations, providing a clear benchmark for neuroscientific modeling and AI development.
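The proposed criterion can be phrased operationally: a model passes if its representational similarity to individual brains falls within the range of brain-to-brain similarities. The sketch below illustrates that comparison with RDM correlations on synthetic data; the similarity measure and noise levels are assumptions, not the paper's exact protocol.

```python
# Hedged sketch of the indistinguishability criterion: compare model-to-brain
# representational similarity against the brain-to-brain distribution.
# Similarity here is a simple RDM correlation on toy data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_stimuli, n_units = 30, 200

def rdm(responses):
    return pdist(responses, metric="correlation")

shared = rng.standard_normal((n_stimuli, n_units))
brains = [shared + 0.4 * rng.standard_normal(shared.shape) for _ in range(5)]
model = shared + 0.5 * rng.standard_normal(shared.shape)

brain_brain = [spearmanr(rdm(a), rdm(b))[0]
               for i, a in enumerate(brains) for b in brains[i + 1:]]
model_brain = [spearmanr(rdm(model), rdm(b))[0] for b in brains]

# The model "passes" if it is as similar to brains as brains are to each other.
print(f"brain-brain rho: {np.mean(brain_brain):.2f}, "
      f"model-brain rho: {np.mean(model_brain):.2f}")
```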
End-to-end Topographic Auditory Models Replicate Signatures of Human Auditory Cortex
The human auditory cortex is topographically organized. Neurons with similar response properties are spatially clustered, forming smooth maps for acoustic features such as frequency in early auditory areas, and modular regions selective for music and speech in higher-order cortex. Yet, evaluations of current computational models of auditory perception do not measure whether such topographic structure is present in a candidate model. Here, we show that cortical topography is not present in the previous best-performing models at predicting human auditory fMRI responses. To encourage the emergence of topographic organization, we adapt a cortical wiring-constraint loss originally designed for visual perception. The new class of topographic auditory models, TopoAudio, are trained to classify speech and environmental sounds from cochleagram inputs, with an added constraint that nearby units on a 2D cortical sheet develop similar tuning. Despite these additional constraints, TopoAudio achieves high accuracy on benchmark tasks, comparable to unconstrained non-topographic baseline models. Further, TopoAudio predicts fMRI responses in the brain as well as standard models do, but unlike standard models, TopoAudio develops smooth topographic maps for tonotopy and amplitude modulation (common properties of early auditory representation), as well as clustered response modules for music and speech (higher-order selectivity observed in the human auditory cortex). TopoAudio is the first end-to-end biologically grounded auditory model to exhibit emergent topography, and our results emphasize that a wiring-length constraint can serve as a general-purpose regularization tool to achieve biologically aligned representations.
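A wiring-style topographic regularizer of the general kind described here can be written as a penalty that pushes spatially nearby units toward similar tuning. The sketch below is an illustrative loss on a toy 2D sheet, not TopoAudio's exact formulation.

```python
# Hedged sketch of a topographic wiring-style regularizer: units get fixed
# positions on a 2D sheet, and a penalty encourages nearby units to have
# correlated activations. Illustrative only, not the paper's loss.
import torch

torch.manual_seed(0)
n_units, batch = 64, 32
side = int(n_units ** 0.5)

# Fixed (x, y) coordinates on an 8x8 cortical sheet.
ys, xs = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
coords = torch.stack([xs.flatten(), ys.flatten()], dim=1).float()
dist = torch.cdist(coords, coords)                 # pairwise unit distances
neighbor_w = torch.exp(-dist / 2.0)                # weight nearby pairs more

acts = torch.randn(batch, n_units)                 # a layer's activations

def topo_loss(acts, neighbor_w):
    z = (acts - acts.mean(0)) / (acts.std(0) + 1e-8)
    corr = (z.T @ z) / acts.shape[0]               # unit-unit correlation
    # Penalize low correlation between spatially close units.
    return (neighbor_w * (1.0 - corr)).mean()

print(f"topographic penalty: {topo_loss(acts, neighbor_w).item():.3f}")
```

In training, a term like this would be added to the task loss, trading a small amount of accuracy for spatial smoothness of tuning.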
A Spectral Theory of Neural Prediction and Alignment
The representations of neural networks are often compared to those of biological systems by performing regression between the neural network responses and those measured from biological systems. Many different state-of-the-art deep neural networks yield similar neural predictions, but it remains unclear how to differentiate among models that perform equally well at predicting neural responses. To gain insight into this, we use a recent theoretical framework that relates the generalization error from regression to the spectral properties of the model and the target. We apply this theory to the case of regression between model activations and neural responses and decompose the neural prediction error in terms of the model eigenspectra, alignment of model eigenvectors and neural responses, and the training set size. Using this decomposition, we introduce geometrical measures to interpret the neural prediction error. We test a large number of deep neural networks that predict visual cortical activity and show that there are multiple types of geometries that result in low neural prediction error as measured via regression. The work demonstrates that carefully decomposing representational metrics can provide interpretability of how models are capturing neural activity and points the way towards improved models of neural activity.
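The decomposition described above can be approximated in a few lines: eigendecompose the model-activation covariance, then measure how the neural responses project onto each model eigenvector. The sketch below uses synthetic data and a simplified alignment measure; the paper's theory involves additional terms such as training-set size.

```python
# Hedged sketch of a spectral view of neural prediction: eigendecompose the
# model-activation covariance and ask how neural response variance aligns
# with each model eigenvector. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_stimuli, n_features, n_neurons = 500, 100, 40

# Model activations with a decaying eigenspectrum; toy neural responses
# that depend only on the leading model directions.
acts = rng.standard_normal((n_stimuli, n_features)) * np.linspace(3, 0.1, n_features)
neural = acts[:, :10] @ rng.standard_normal((10, n_neurons))

acts_c = acts - acts.mean(0)
cov = acts_c.T @ acts_c / n_stimuli
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                  # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Alignment: fraction of neural variance carried by each model eigenvector.
proj = (acts_c @ eigvecs).T @ (neural - neural.mean(0))
alignment = (proj ** 2).sum(1)
alignment /= alignment.sum()
print("top-5 eigenvector alignment:", np.round(alignment[:5], 3))
```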