27 results for "Shinn, Maxwell"
Generative modeling of brain maps with spatial autocorrelation
Studies of large-scale brain organization have revealed interesting relationships between spatial gradients in brain maps across multiple modalities. Evaluating the significance of these findings requires establishing statistical expectations under a null hypothesis of interest. Through generative modeling of synthetic data that instantiate a specific null hypothesis, quantitative benchmarks can be derived for arbitrarily complex statistical measures. Here, we present a generative null model, provided as an open-access software platform, that generates surrogate maps with spatial autocorrelation (SA) matched to SA of a target brain map. SA is a prominent and ubiquitous property of brain maps that violates assumptions of independence in conventional statistical tests. Our method can simulate surrogate brain maps, constrained by empirical data, that preserve the SA of cortical, subcortical, parcellated, and dense brain maps. We characterize how SA impacts p-values in pairwise brain map comparisons. Furthermore, we demonstrate how SA-preserving surrogate maps can be used in gene set enrichment analyses to test hypotheses of interest related to brain map topography. Our findings demonstrate the utility of SA-preserving surrogate maps for hypothesis testing in complex statistical analyses, and underscore the need to disambiguate meaningful relationships from chance associations in studies of large-scale brain organization.
Highlights:
• Spatial autocorrelation can dramatically inflate p-values in brain map analyses.
• Null model generates surrogate brain maps matched to target spatial autocorrelation.
• Spatial autocorrelation drives spurious findings in gene set enrichment analyses.
• Surrogate maps can correct statistical analyses including gene set enrichment.
• Python-based package implements the generative model with neuroimaging functionality.
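The general idea behind SA-matched surrogates can be sketched in a few lines: shuffle the empirical map (destroying its spatial structure), re-impose distance-dependent smoothness, and then rank-remap onto the original values. The snippet below is a minimal illustration of that idea, not the published package's API; in the published method the amount of smoothing is tuned so the surrogate's variogram matches the target's, whereas here the kernel width is left as a free parameter for brevity.

```python
import numpy as np

def sa_matched_surrogate(target_map, dist, length_scale=20.0, seed=None):
    """Sketch: one surrogate map whose spatial autocorrelation roughly
    matches that of `target_map`.

    target_map   : (n,) values at n brain locations
    dist         : (n, n) pairwise distances between locations
    length_scale : smoothing kernel width (illustrative free parameter)
    """
    rng = np.random.default_rng(seed)
    n = target_map.size

    # 1. Randomly permute the empirical values (destroys SA, keeps the distribution).
    shuffled = rng.permutation(target_map)

    # 2. Re-impose distance-dependent smoothness with a Gaussian kernel.
    weights = np.exp(-(dist / length_scale) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)
    smoothed = weights @ shuffled

    # 3. Rank-remap onto the original values so the surrogate has exactly
    #    the same value distribution as the target map.
    surrogate = np.empty(n)
    surrogate[np.argsort(smoothed)] = np.sort(target_map)
    return surrogate

# Usage sketch: build a permutation-style null that preserves SA.
# coords = np.random.rand(500, 2) * 100
# dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
# nulls = [sa_matched_surrogate(my_map, dist, seed=i) for i in range(1000)]
```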
A flexible framework for simulating and fitting generalized drift-diffusion models
The drift-diffusion model (DDM) is an important decision-making model in cognitive neuroscience. However, innovations in model form have been limited by methodological challenges. Here, we introduce the generalized drift-diffusion model (GDDM) framework for building and fitting DDM extensions, and provide a software package which implements the framework. The GDDM framework augments traditional DDM parameters through arbitrary user-defined functions. Models are solved numerically by directly solving the Fokker-Planck equation using efficient numerical methods, yielding a 100-fold or greater speedup over standard methodology. This speed allows GDDMs to be fit to data using maximum likelihood on the full response time (RT) distribution. To test hypothesized decision-making mechanisms, we fit GDDMs within our framework to animal and human datasets from perceptual decision-making tasks, achieving better accuracy with fewer parameters than several DDMs implemented using the latest methodology. Overall, our framework will allow for decision-making model innovation and novel experimental designs.
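The core numerical idea is to propagate the full probability density of the decision variable forward in time rather than simulating many stochastic trials. Below is a bare-bones explicit finite-difference sketch of a Fokker-Planck solver for a constant-drift DDM with absorbing bounds, whose outgoing probability gives the two response-time distributions. It is an illustration of the technique, not the paper's solver or its package interface, and all parameter names are illustrative.

```python
import numpy as np

def ddm_fokker_planck(drift=0.5, noise=1.0, bound=1.0, dt=5e-5, dx=0.01, T=2.0):
    """Sketch: evolve p(x, t) under dp/dt = -drift*dp/dx + (noise**2/2)*d2p/dx2
    on [-bound, bound] with absorbing bounds.  The probability absorbed at each
    bound per step approximates the correct/error response-time densities.
    Note: this explicit scheme needs dt < dx**2 / noise**2 to stay stable."""
    x = np.arange(-bound, bound + dx, dx)
    p = np.zeros_like(x)
    p[np.argmin(np.abs(x))] = 1.0 / dx             # start all the mass at x = 0
    D = noise ** 2 / 2.0
    steps = int(T / dt)
    rt_upper, rt_lower = np.zeros(steps), np.zeros(steps)
    for t in range(steps):
        padded = np.pad(p, 1)                      # zero "ghost" cells outside the grid
        dpdx   = (padded[2:] - padded[:-2]) / (2 * dx)
        d2pdx2 = (padded[2:] - 2 * padded[1:-1] + padded[:-2]) / dx ** 2
        p = p + dt * (-drift * dpdx + D * d2pdx2)
        rt_upper[t] = p[-1] * dx / dt              # mass absorbed at the upper bound
        rt_lower[t] = p[0] * dx / dt               # mass absorbed at the lower bound
        p[0] = p[-1] = 0.0                         # absorbing boundary condition
    t_axis = (np.arange(steps) + 1) * dt
    return t_axis, rt_upper, rt_lower

# Usage sketch: the RT densities can then feed a full-distribution likelihood fit.
# t_axis, rt_up, rt_lo = ddm_fokker_planck(drift=0.8)
```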
A transcriptomic axis predicts state modulation of cortical interneurons
Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes [1-6], but it is not known whether these subtypes have correspondingly diverse patterns of activity in the living brain. Here we show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, which are organized by a single factor: position along the main axis of transcriptomic variation. We combined in vivo two-photon calcium imaging of mouse V1 with a transcriptomic method to identify mRNA for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 subclasses, 11 types and 35 subtypes using previously defined transcriptomic clusters [3]. Responses to visual stimuli differed significantly only between subclasses, with cells in the Sncg subclass uniformly suppressed, and cells in the other subclasses predominantly excited. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory subtypes that fired more in resting, oscillatory brain states had a smaller fraction of their axonal projections in layer 1, narrower spikes, lower input resistance and weaker adaptation as determined in vitro [7], and expressed more inhibitory cholinergic receptors. Subtypes that fired more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 subtypes shape state-dependent cortical processing. Two-photon imaging and in situ transcriptomic analysis of the primary visual cortex in mice show that a single transcriptomic axis correlates with the state modulation of cortical inhibitory neurons.
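The organizing quantity here is each cell's position along the first principal component of its gene-expression profile. A minimal sketch of that computation with placeholder data (illustrative arrays and variable names, not the study's actual pipeline):

```python
import numpy as np

# Placeholder data standing in for the study's measurements:
# expression:       (n_cells, n_genes) per-cell counts for the 72 profiled genes
# state_modulation: (n_cells,) index of how strongly each cell's activity tracks brain state
rng = np.random.default_rng(0)
expression = rng.poisson(2.0, size=(300, 72)).astype(float)
state_modulation = rng.normal(size=300)

# Standardize genes, then take the first principal component via SVD.
z = (expression - expression.mean(0)) / (expression.std(0) + 1e-9)
_, _, vt = np.linalg.svd(z, full_matrices=False)
pc1_position = z @ vt[0]      # each cell's coordinate on the main transcriptomic axis

# How well does position on that axis predict state modulation?
r = np.corrcoef(pc1_position, state_modulation)[0, 1]
print(f"correlation between transcriptomic PC1 and state modulation: r = {r:.2f}")
```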
Transient neuronal suppression for exploitation of new sensory evidence
In noisy but stationary environments, decisions should be based on the temporal integration of sequentially sampled evidence. This strategy has been supported by many behavioral studies and is qualitatively consistent with neural activity in multiple brain areas. By contrast, decision-making in the face of non-stationary sensory evidence remains poorly understood. Here, we trained monkeys to identify and respond via saccade to the dominant color of a dynamically refreshed bicolor patch that becomes informative after a variable delay. Animals’ behavioral responses were briefly suppressed after evidence changes, and many neurons in the frontal eye field displayed a corresponding dip in activity at this time, similar to that frequently observed after stimulus onset but sensitive to stimulus strength. Generalized drift-diffusion models revealed consistency of behavior and neural activity with brief suppression of motor output, but not with pausing or resetting of evidence accumulation. These results suggest that momentary arrest of motor preparation is important for dynamic perceptual decision making. While evidence is constantly changing during real-world decisions, little is known about how the brain deals with such changes. Here, the authors show that the brain strategically suppresses motor output via the frontal eye fields in response to stimulus changes.
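The three candidate mechanisms being compared can be expressed as small time-dependent hooks on a simulated accumulator. The sketch below contrasts "motor suppression" (responses briefly withheld), "pausing" (accumulation briefly halted), and "resetting" (accumulated evidence cleared) after an evidence change; it is an illustration of the distinction, not the study's model code, and all parameters are illustrative.

```python
import numpy as np

def simulate_trial(mechanism, change_t=0.5, window=0.15, drift=1.0,
                   noise=1.0, bound=1.0, dt=0.001, rng=None):
    """Sketch: one accumulator trial in which evidence becomes informative at change_t.
    mechanism: 'suppress' (motor output withheld briefly after the change),
               'pause'    (accumulation halted briefly after the change),
               'reset'    (accumulated evidence cleared at the change),
               'none'     (plain accumulation)."""
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    for step in range(int(5.0 / dt)):                 # cap trial length at 5 s
        t = step * dt
        in_window = change_t <= t < change_t + window
        mu = drift if t >= change_t else 0.0          # evidence only informative after the change
        gain = 0.0 if (mechanism == 'pause' and in_window) else 1.0
        if mechanism == 'reset' and step == int(change_t / dt):
            x = 0.0                                   # throw away pre-change accumulation
        x += gain * (mu * dt + noise * np.sqrt(dt) * rng.standard_normal())
        can_respond = not (mechanism == 'suppress' and in_window)
        if can_respond and abs(x) >= bound:
            return t, np.sign(x)                      # response time and choice
    return np.nan, 0.0                                # no response within the cap

# Usage sketch: compare RT distributions produced by the three mechanisms.
# rts = [simulate_trial('suppress')[0] for _ in range(2000)]
```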
Versatility of nodal affiliation to communities
Graph theoretical analysis of the community structure of networks attempts to identify the communities (or modules) to which each node affiliates. However, this is in most cases an ill-posed problem, as the affiliation of a node to a single community is often ambiguous. Previous solutions have attempted to identify all of the communities to which each node affiliates. Instead of taking this approach, we introduce versatility, V, as a novel metric of nodal affiliation: V ≈ 0 means that a node is consistently assigned to a specific community; V >> 0 means it is inconsistently assigned to different communities. Versatility works in conjunction with existing community detection algorithms, and it satisfies many theoretically desirable properties in idealised networks designed to maximise ambiguity of modular decomposition. The local minima of global mean versatility identified the resolution parameters of a hierarchical community detection algorithm that least ambiguously decomposed the community structure of a social (karate club) network and the mouse brain connectome. Our results suggest that nodal versatility is useful in quantifying the inherent ambiguity of modular decomposition.
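The underlying intuition is that a node is "versatile" when repeated runs of a stochastic community-detection algorithm keep assigning it differently. The sketch below is a rough proxy for that intuition rather than the paper's exact formula: it repeats label propagation on the karate-club network and scores each node by how ambiguous its co-assignments with the other nodes are.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import asyn_lpa_communities

G = nx.karate_club_graph()
nodes = list(G.nodes)
n, n_runs = len(nodes), 200

# coassign[i, j] = fraction of runs in which nodes i and j land in the same community
coassign = np.zeros((n, n))
for seed in range(n_runs):
    for community in asyn_lpa_communities(G, seed=seed):
        idx = [nodes.index(v) for v in community]
        coassign[np.ix_(idx, idx)] += 1.0 / n_runs

# Illustrative ambiguity score (not the formula defined in the paper): a node whose
# co-assignment frequencies hover near 0.5, rather than near 0 or 1, is ambiguously
# affiliated across runs.
ambiguity = 1.0 - 2.0 * np.abs(coassign - 0.5)     # 1 at 0.5, 0 at 0 or 1
np.fill_diagonal(ambiguity, 0.0)
versatility_proxy = ambiguity.mean(axis=1)

for v, score in sorted(zip(nodes, versatility_proxy), key=lambda t: -t[1])[:5]:
    print(f"node {v}: ambiguity proxy = {score:.2f}")
```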
Structural covariance networks are coupled to expression of genes enriched in supragranular layers of the human cortex
Complex network topology is characteristic of many biological systems, including anatomical and functional brain networks (connectomes). Here, we first constructed a structural covariance network (SCN) from MRI measures of cortical thickness in 296 healthy volunteers, aged 14-24 years. Next, we designed a new algorithm for matching sample locations from the Allen Brain Atlas to the nodes of the SCN. Subsequently, we used this to define transcriptomic brain networks by estimating gene co-expression between pairs of cortical regions. Finally, we explored the hypothesis that transcriptional networks and structural MRI connectomes are coupled. The transcriptional brain network (TBN) and the SCN were correlated across connection weights and showed qualitatively similar complex topological properties: assortativity, small-worldness, modularity, and a rich-club. In both networks, the weight of an edge was inversely related to the anatomical (Euclidean) distance between regions. There were differences between networks in degree and distance distributions: the transcriptional network had a less fat-tailed degree distribution and a less positively skewed distance distribution than the SCN. However, cortical areas connected to each other within modules of the SCN had significantly higher levels of whole genome co-expression than expected by chance. Nodes connected in the SCN had especially high levels of expression and co-expression of a human supragranular enriched (HSE) gene set that has been specifically located to supragranular layers of human cerebral cortex and is known to be important for large-scale, long-distance cortico-cortical connectivity. This coupling of brain transcriptome and connectome topologies was largely but not entirely accounted for by the common constraint of physical distance on both networks.
Highlights:
• Transcriptomic Brain Network (TBN) is defined as inter-regional gene co-expression.
• TBN has complex topological properties partially overlapped with structural networks.
• Structural modules have higher gene co-expression than expected by chance.
• Human Supragranular genes are highly expressed and coexpressed in structural hubs.
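Both networks reduce to inter-regional correlation matrices, one computed across subjects (structural covariance) and one across genes (co-expression). A compact sketch with placeholder arrays (illustrative sizes and variable names, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_regions, n_genes = 296, 308, 5000     # illustrative sizes

# thickness:  cortical thickness per subject and region (placeholder data)
thickness = rng.normal(2.5, 0.2, size=(n_subjects, n_regions))
# expression: gene expression per region (placeholder data)
expression = rng.normal(size=(n_regions, n_genes))

# Structural covariance network: correlate regions across subjects.
scn = np.corrcoef(thickness.T)

# Transcriptional brain network: correlate regions across genes (co-expression).
tbn = np.corrcoef(expression)

# Coupling between the two networks: correlate their edge weights.
iu = np.triu_indices(n_regions, k=1)
coupling = np.corrcoef(scn[iu], tbn[iu])[0, 1]
print(f"edge-weight correlation between SCN and TBN: r = {coupling:.2f}")
```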
Functional brain networks reflect spatial and temporal autocorrelation
High-throughput experimental methods in neuroscience have led to an explosion of techniques for measuring complex interactions and multi-dimensional patterns. However, whether sophisticated measures of emergent phenomena can be traced back to simpler, low-dimensional statistics is largely unknown. To explore this question, we examined resting-state functional magnetic resonance imaging (rs-fMRI) data using complex topology measures from network neuroscience. Here we show that spatial and temporal autocorrelation are reliable statistics that explain numerous measures of network topology. Surrogate time series with subject-matched spatial and temporal autocorrelation capture nearly all reliable individual and regional variation in these topology measures. Network topology changes during aging are driven by spatial autocorrelation, and multiple serotonergic drugs causally induce the same topographic change in temporal autocorrelation. This reductionistic interpretation of widely used complexity measures may help link them to neurobiology. Individual variation in fMRI-derived brain networks is reproduced in a model using only the smoothness (autocorrelation) of the fMRI time series. Smoothness has implications for aging and can be causally manipulated by psychedelic serotonergic drugs.
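The comparison rests on surrogate time series that preserve only two low-dimensional statistics: temporal autocorrelation (how smooth each region's signal is in time) and spatial autocorrelation (how similar regions are to one another). A rough sketch of matched surrogates, assuming an AR(1) model for the temporal part and shared-signal mixing for the spatial part (an illustration, not the paper's exact generative model):

```python
import numpy as np

def matched_surrogates(ts, n_surrogates=1, seed=None):
    """ts: (n_regions, n_timepoints) fMRI time series.
    Returns surrogates whose per-region lag-1 temporal autocorrelation and
    region-by-region correlation structure roughly match the data."""
    rng = np.random.default_rng(seed)
    n_regions, n_time = ts.shape
    z = (ts - ts.mean(1, keepdims=True)) / ts.std(1, keepdims=True)

    # Temporal autocorrelation: per-region lag-1 AR coefficient.
    phi = np.array([np.corrcoef(z[i, :-1], z[i, 1:])[0, 1] for i in range(n_regions)])

    # Spatial autocorrelation: Cholesky factor of the inter-regional correlation matrix.
    C = np.corrcoef(z) + 1e-6 * np.eye(n_regions)
    L = np.linalg.cholesky(C)

    out = []
    for _ in range(n_surrogates):
        innov = L @ rng.standard_normal((n_regions, n_time))   # spatially correlated noise
        surr = np.zeros_like(innov)
        surr[:, 0] = innov[:, 0]
        for t in range(1, n_time):                             # AR(1) filtering in time
            surr[:, t] = phi * surr[:, t - 1] + np.sqrt(1 - phi ** 2) * innov[:, t]
        out.append(surr)
    return out

# Usage sketch: compute graph-topology measures on the data and on the surrogates,
# then ask how much of the former is already explained by the latter.
```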
Refinement type contracts for verification of scientific investigative software
Our scientific knowledge is increasingly built on software output. User code which defines data analysis pipelines and computational models is essential for research in the natural and social sciences, but little is known about how to ensure its correctness. The structure of this code and the development process used to build it limit the utility of traditional testing methodology. Formal methods for software verification have seen great success in ensuring code correctness but generally require more specialized training, development time, and funding than is available in the natural and social sciences. Here, we present a Python library which uses lightweight formal methods to provide correctness guarantees without the need for specialized knowledge or substantial time investment. Our package provides runtime verification of function entry and exit condition contracts using refinement types. It allows checking hyperproperties within contracts and offers automated test case generation to supplement online checking. We co-developed our tool with a medium-sized (≈3,000 LOC) software package which simulates decision-making in cognitive neuroscience. In addition to helping us locate trivial bugs earlier on in the development cycle, our tool was able to locate four bugs which may have been difficult to find using traditional testing methods. It was also able to find bugs in user code which did not contain contracts or refinement type annotations. This demonstrates how formal methods can be used to verify the correctness of scientific software which is difficult to test with mainstream approaches.
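The idea of entry/exit contracts with refinement-type-style predicates can be made concrete with a small decorator. The snippet below is a generic illustration of the approach, not the actual API of the library described here; all names are illustrative.

```python
import functools

def contract(requires=None, ensures=None):
    """Sketch of a runtime contract decorator: `requires` predicates check the
    arguments on entry, `ensures` predicates check the return value on exit.
    Predicates act like lightweight refinement types (e.g. 'a probability' is
    'a float in [0, 1]')."""
    requires, ensures = requires or [], ensures or []

    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for pred in requires:
                assert pred(*args, **kwargs), f"entry contract violated in {fn.__name__}"
            result = fn(*args, **kwargs)
            for pred in ensures:
                assert pred(result), f"exit contract violated in {fn.__name__}"
            return result
        return wrapper
    return decorate

# Example: the return value is refined to be a valid probability.
@contract(requires=[lambda hits, trials: trials > 0 and 0 <= hits <= trials],
          ensures=[lambda p: 0.0 <= p <= 1.0])
def hit_rate(hits, trials):
    return hits / trials

print(hit_rate(7, 10))      # passes both contracts
# hit_rate(7, 0)            # would raise: entry contract violated
```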
Phantom oscillations in principal component analysis
Principal component analysis (PCA) is a dimensionality reduction technique that is known for being simple and easy to interpret. Principal components are often interpreted as low-dimensional patterns in high-dimensional data. However, this simple interpretation of PCA relies on several unstated assumptions that are difficult to satisfy. When these assumptions are violated, non-oscillatory data may have oscillatory principal components. Here, we show that two common properties of data violate these assumptions and cause oscillatory principal components: smoothness, and shifts in time or space. These two properties are present in nearly all neuroscience data. We show how the oscillations that they produce, which we call "phantom oscillations", impact data analysis. We also show that traditional cross-validation does not detect phantom oscillations, so we suggest procedures that do. Our findings are supported by a collection of mathematical proofs. Collectively, our work demonstrates that patterns which emerge from high-dimensional data analysis may not faithfully represent the underlying data.
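The core phenomenon is easy to reproduce: apply PCA to a matrix of smooth but unrelated time series and the leading components come out looking like sinusoids. A small, self-contained demonstration on synthetic data (not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 200, 500

# Smooth, non-oscillatory data: independent random walks, lightly low-pass filtered.
walks = np.cumsum(rng.standard_normal((n_trials, n_time)), axis=1)
kernel = np.hanning(25); kernel /= kernel.sum()
smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, walks)

# PCA via SVD of the trial-mean-centered data.
centered = smooth - smooth.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Plotting vt[0], vt[1], vt[2] shows smooth, sinusoid-like curves even though
# every individual trial is a non-oscillatory random walk; higher components
# oscillate faster, a harmonic-like pattern ("phantom oscillations").
for k in range(3):
    sign_changes = int(np.sum(np.diff(np.sign(vt[k])) != 0))
    print(f"PC{k + 1}: {sign_changes} sign changes across {n_time} time points")
```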