Search Results

3,037 results for "Charles, Adam S"
Volumetric two-photon imaging of neurons using stereoscopy (vTwINS)
vTwINS enables high-speed volumetric calcium imaging via a V-shaped point spread function and a dedicated data-processing algorithm. Song et al. apply this strategy to image population activity in the mouse visual cortex and hippocampus. Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large-scale recording of neural activity in vivo. Here, we introduce volumetric two-photon imaging of neurons using stereoscopy (vTwINS), a volumetric calcium imaging method that uses an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced 'image pairs' in the resulting 2D image, and the separation distance between projections is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a modified orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrated vTwINS by imaging neural population activity in the mouse primary visual cortex and hippocampus. Our results demonstrated that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame rate.
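
To make the demixing idea concrete: each neuron appears as two laterally offset copies of its footprint, so candidate sources can be built as footprint pairs whose offset encodes depth and selected greedily against the movie. The sketch below is a minimal, matching-pursuit-style illustration of that idea, not the authors' algorithm; the footprint model, the `disparities` grid, and all names are hypothetical.

```python
import numpy as np

def greedy_pair_demix(movie, footprints, disparities, n_sources):
    """Greedy, matching-pursuit-style selection of paired sources.

    movie       : (T, H, W) two-photon movie
    footprints  : list of (H, W) candidate single-projection masks
    disparities : candidate pixel offsets between the two projections
                  (offset assumed proportional to depth in the volume)
    """
    T, H, W = movie.shape
    residual = movie.reshape(T, -1).astype(float).copy()
    # Dictionary of paired footprints: each mask plus a laterally shifted copy.
    atoms = []
    for i, f in enumerate(footprints):
        for d in disparities:
            pair = f + np.roll(f, d, axis=1)              # mask + x-shifted copy
            atoms.append((i, d, pair.reshape(-1) / np.linalg.norm(pair)))
    selected, traces = [], []
    for _ in range(n_sources):
        # Pick the paired atom whose time course explains the most residual energy.
        scores = [np.linalg.norm(residual @ a) for _, _, a in atoms]
        i, d, a = atoms[int(np.argmax(scores))]
        trace = residual @ a                               # demixed fluorescence trace
        residual -= np.outer(trace, a)                     # explain away this source
        selected.append((i, d))                            # d encodes inferred depth
        traces.append(trace)
    return selected, np.array(traces)
```
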
Multi-day neuron tracking in high-density electrophysiology recordings using earth mover’s distance
Accurate tracking of the same neurons across multiple days is crucial for studying changes in neuronal activity during learning and adaptation. Advances in high-density extracellular electrophysiology recording probes, such as Neuropixels, provide a promising avenue to accomplish this goal. Identifying the same neurons in multiple recordings is, however, complicated by non-rigid movement of the tissue relative to the recording sites (drift) and loss of signal from some neurons. Here, we propose a neuron tracking method that can identify the same cells independent of the firing statistics used by most existing methods. Our method is based on between-day non-rigid alignment of spike-sorted clusters. We verified cell identity in mice using measured visual receptive fields. This method succeeds on datasets separated by 1 to 47 days, with an 84% average recovery rate.
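
Conceptually, once clusters are localized on the probe for each day, matching them becomes a transport/assignment problem over their positions. A minimal sketch under that simplification follows (with one unit of mass per cluster, the earth mover's distance reduces to a linear assignment; the feature choice, the `max_dist_um` threshold, and the omission of the paper's non-rigid drift correction are all simplifications, not the published method):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_units_across_days(pos_day1, pos_day2, max_dist_um=50.0):
    """Match spike-sorted clusters between two days by estimated probe position.

    pos_day1, pos_day2 : (n_units, 2) arrays of (x, depth) positions in microns.
    Returns a list of (unit_day1, unit_day2) pairs below the distance threshold.
    """
    # Pairwise Euclidean distances form the transport cost matrix.
    cost = np.linalg.norm(pos_day1[:, None, :] - pos_day2[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)        # minimum-cost matching
    # Discard matches that moved implausibly far (putative lost or new units).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist_um]
```
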
Decomposed Linear Dynamical Systems (dLDS) models reveal instantaneous, context-dependent dynamic connectivity in C. elegans
Mounting evidence indicates that neural “tuning” can be highly variable within an individual across time and across individuals. Furthermore, modulatory effects can change the relationship between neurons as a function of behavioral or other conditions, meaning that the changes in activity (the derivative) may be as important as the activity itself. Current computational models cannot capture the nonstationarity and variability of neural coding, preventing the quantitative evaluation of these effects. We therefore present a novel approach to analyze these effects in a well-studied organism, C. elegans, leveraging recent advances in dynamical systems modeling: decomposed Linear Dynamical Systems (dLDS). Our approach enables the discovery of multiple parallel neural processes on different timescales using a set of linear operators that can be recombined in different ratios. Our model identifies “dynamic connectivity”, describing patterns of dynamic neural interactions in time. We use these patterns to identify instantaneous, context-dependent, hierarchical roles of neurons; discover the underlying variability of neural representations even under seemingly discrete behaviors; and learn an aligned latent space underlying multiple worms’ activity. By analyzing individual worms and neurons, we found that (1) changes in interneuron connectivity mediate efficient task-switching and (2) changes in sensory neuron connectivity reveal a mechanism of adaptation. Dynamic connectivity maps from decomposed linear dynamical systems models of C. elegans reveal how sensory neuron connectivity shifts throughout adaptation and how large-scale interneuron connectivity shifts during changes in environment and behavior.
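
The core modeling assumption is that the effective dynamics at each time point are a weighted combination of a small dictionary of linear operators, with the time-varying weights read out as "dynamic connectivity". The numerical sketch below illustrates only that idea; the ridge least-squares coefficient estimate stands in for the sparse inference used in dLDS, and all names and shapes are illustrative.

```python
import numpy as np

def dlds_step(x_t, operators, coeffs_t):
    """One step of a decomposed linear dynamical system (conceptual sketch).

    x_t       : (d,) latent state at time t
    operators : (K, d, d) dictionary of linear dynamics operators F_1..F_K
    coeffs_t  : (K,) time-varying mixing coefficients c_1(t)..c_K(t)
    """
    # Effective dynamics are a per-timestep linear combination of the operators.
    A_t = np.tensordot(coeffs_t, operators, axes=1)       # sum_k c_k(t) * F_k
    return A_t @ x_t

def infer_coeffs(x_t, x_next, operators, ridge=1e-3):
    """Ridge least-squares estimate of c(t) from consecutive states (illustrative;
    the actual model uses a sparsity-promoting solver)."""
    # Each operator applied to x_t gives one regressor for predicting x_{t+1}.
    basis = np.stack([F @ x_t for F in operators], axis=1)  # (d, K)
    gram = basis.T @ basis + ridge * np.eye(basis.shape[1])
    return np.linalg.solve(gram, basis.T @ x_next)
```
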
Cross-modality supervised image restoration enables nanoscale tracking of synaptic plasticity in living mice
Learning is thought to involve changes in glutamate receptors at synapses, submicron structures that mediate communication between neurons in the central nervous system. Due to their small size and high density, synapses are difficult to resolve in vivo, limiting our ability to directly relate receptor dynamics to animal behavior. Here we developed a combination of computational and biological methods to overcome these challenges. First, we trained a deep-learning image-restoration algorithm that combines the advantages of ex vivo super-resolution and in vivo imaging modalities to overcome limitations specific to each optical system. When applied to in vivo images from transgenic mice expressing fluorescently labeled glutamate receptors, this restoration algorithm super-resolved synapses, enabling the tracking of behavior-associated synaptic plasticity with high spatial resolution. This method demonstrates how image-enhancement models can learn from ex vivo data and imaging techniques to improve in vivo imaging resolution. XTC is a supervised deep-learning-based image-restoration approach that is trained with images from different modalities and applied to an in vivo modality with no ground truth. XTC’s capabilities are demonstrated in synapse tracking in the mouse brain.
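
One common way to set up this kind of cross-modality supervision, sketched below purely for illustration (not necessarily the XTC pipeline), is to synthesize degraded inputs from ex vivo super-resolution images, train a restoration network against the clean targets, and then apply the trained network to in vivo frames that have no ground truth; `restorer` and `degrade` are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

def train_step(restorer, optimizer, ex_vivo_batch, degrade):
    """One supervised restoration step on ex vivo data (illustrative setup only).

    restorer      : any image-to-image network (e.g. a small U-Net)
    ex_vivo_batch : (B, C, H, W) super-resolution target images
    degrade       : callable that simulates in vivo-like blur/noise on the targets
    """
    optimizer.zero_grad()
    inputs = degrade(ex_vivo_batch)                       # synthetic low-quality inputs
    restored = restorer(inputs)                           # network reconstruction
    loss = nn.functional.l1_loss(restored, ex_vivo_batch) # supervise with clean target
    loss.backward()
    optimizer.step()
    return loss.item()

# At deployment, the trained network is applied directly to real in vivo frames,
# for which no ground-truth super-resolution image exists.
```
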
Detecting and correcting false transients in calcium imaging
Population recordings of calcium activity are a major source of insight into neural function. Large datasets require automated processing, but this can introduce errors that are difficult to detect. Here we show that popular time-course estimation algorithms often contain substantial misattribution errors affecting 10–20% of transients. Misattribution, in which fluorescence is ascribed to the wrong cell, arises when overlapping cells and processes are imperfectly defined or not identified. To diagnose misattribution, we develop metrics and visualization tools for evaluating large datasets. To correct time courses, we introduce a robust estimator that explicitly accounts for contaminating signals. In one hippocampal dataset, removing contamination reduced the number of place cells by 15%, and 19% of place fields shifted by over 10 cm. Our methods are compatible with other cell-finding techniques, empowering users to diagnose and correct a potentially widespread problem that could alter scientific conclusions. SEUDO is a tool for detecting and correcting errors introduced by automated processing of calcium imaging data.
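
The gist of robust time-course estimation is to fit each frame with the known cell footprints plus extra regressors that can absorb fluorescence from unidentified sources, so contamination is not forced onto the wrong cell. The sketch below is a simplified stand-in for that idea, not SEUDO itself: a quadratic penalty replaces the sparsity penalty used in practice, and the construction of `blob_profiles` is left to the user.

```python
import numpy as np
from scipy.optimize import nnls

def robust_timecourse(frame, cell_profiles, blob_profiles, blob_penalty=0.1):
    """Estimate one frame's cell activities while soaking up contamination.

    frame         : (P,) vectorized fluorescence image
    cell_profiles : (P, N) spatial footprints of the identified cells
    blob_profiles : (P, M) generic small footprints covering unexplained sources
    Returns (cell_activities, total_contamination_weight).
    """
    # Augment the design matrix with contamination regressors so fluorescence
    # from unidentified sources is not attributed to the wrong cell.
    design = np.hstack([cell_profiles, blob_profiles])
    # Penalize blob usage so blobs only absorb signal the cells cannot explain.
    penalty = np.concatenate([np.zeros(cell_profiles.shape[1]),
                              blob_penalty * np.ones(blob_profiles.shape[1])])
    coeffs, _ = nnls(np.vstack([design, np.diag(penalty)]),
                     np.concatenate([frame, np.zeros(design.shape[1])]))
    n_cells = cell_profiles.shape[1]
    return coeffs[:n_cells], coeffs[n_cells:].sum()
```

A large contamination weight on a given frame flags a transient that may be misattributed and worth inspecting.
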
Review of data processing of functional optical microscopy for neuroscience
Functional optical imaging in neuroscience is rapidly growing with the development of optical systems and fluorescence indicators. To realize the potential of these massive spatiotemporal datasets for relating neuronal activity to behavior and stimuli and uncovering local circuits in the brain, accurate automated processing is increasingly essential. We cover recent computational developments in the full data processing pipeline of functional optical microscopy for neuroscience data and discuss ongoing and emerging challenges.
Locomotion engages context-dependent motor strategies for head stabilization in primates
Flexible motor control is essential for navigating complex, unpredictable environments. Although movement execution is often associated with stereotyped patterns of neural and muscular activation, the degree to which these patterns are conserved versus flexibly reorganized to meet task demands across diverse contextual changes has not been well characterized. Here we recorded head and body kinematics alongside muscle activity in rhesus monkeys during head stabilization—crucial for maintaining gaze and balance—while walking on a treadmill at various speeds, and during overground locomotion in the presence or absence of enhanced autonomic arousal. Dimensionality reduction analyses revealed a flexible control strategy during treadmill walking: a stable activation structure that scaled with speed. In contrast, overground walking evoked heightened muscle engagement and more substantial changes in organization. This pattern largely persisted even during elevated arousal, with larger pupil size linked to stronger but structurally preserved muscle recruitment. Together these findings demonstrate that the brain dynamically adapts motor coordination to context even for automatic behaviors, underscoring the need to examine control strategies in a wide range of conditions.
CUSP: Complex Spike Sorting from Multi-electrode Array Recordings with U-net Sequence-to-Sequence Prediction
Complex spikes (CSs) in cerebellar Purkinje cells convey unique signals complementary to simple spike (SS) action potentials, but are infrequent and variable in waveform. Their variability and low spike counts, combined with recording artifacts such as electrode drift, make automated detection challenging. We introduce CUSP (CS sorting via U-net Sequence Prediction), a fully automated deep learning framework for CS sorting in high-density multi-electrode array recordings. CUSP uses a U-Net architecture with hybrid self-attention inception blocks to integrate local field potential and action potential signals and outputs CS event probabilities in a sequence-to-sequence manner. Detected events are clustered and paired with concurrently detected SSs to reconstruct the complete Purkinje cell activity. Trained on cerebellar Neuropixels recordings in rhesus macaques, CUSP achieves human-expert performance (F1 = 0.83 ± 0.03) and even captures valid CS events overlooked during manual annotation. CUSP outperforms traditional and state-of-the-art CS and SS sorting algorithms on CS detection. It remains robust to waveform variability, spikelet composition, and electrode drift, enabling accurate CS tracking in long-term recordings. In contrast, existing methods often show false-positive biases or degrade under drift. CUSP provides a scalable, robust framework for analyzing burst-like or dynamically complex spike patterns. Its generalizability makes it valuable for large-scale cerebellar datasets and other neural systems, such as hippocampal pyramidal cells, where complex bursts are critical for computation. By combining expert-level accuracy with automation, CUSP offers a broadly applicable solution for studying information coding across circuits.
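
The sequence-to-sequence framing means the network maps multi-channel voltage traces to a per-sample complex-spike probability at the same temporal resolution. The toy 1-D U-Net below illustrates only that input/output contract and the encoder-decoder skip structure; it omits CUSP's hybrid self-attention inception blocks and LFP/AP fusion, and every layer size is arbitrary.

```python
import torch
import torch.nn as nn

class TinyUNet1d(nn.Module):
    """Toy 1-D U-Net mapping multi-channel traces to per-sample CS probabilities
    (a structural sketch, not the CUSP model). Assumes an even number of samples."""
    def __init__(self, in_ch=8, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv1d(in_ch, base, 7, padding=3), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(base, base * 2, 7, padding=3), nn.ReLU())
        self.pool = nn.MaxPool1d(2)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.dec = nn.Sequential(nn.Conv1d(base * 3, base, 7, padding=3), nn.ReLU())
        self.head = nn.Conv1d(base, 1, 1)                  # per-timestep CS logit

    def forward(self, x):                                   # x: (batch, channels, time)
        e1 = self.enc1(x)                                   # full-resolution features
        e2 = self.enc2(self.pool(e1))                       # half-resolution features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # upsample + skip connection
        return torch.sigmoid(self.head(d))                  # (batch, 1, time) probabilities
```
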
Continuous partitioning of neuronal variability
Neurons exhibit substantial trial-to-trial variability in response to repeated stimuli, posing a major challenge for understanding the information content of neural spike trains. In visual cortex, responses show greater-than-Poisson variability, whose origins and structure remain unclear. To address this puzzle, we introduce a continuous, doubly stochastic model of spike train variability that partitions neural responses into a smooth stimulus-driven component and a time-varying stochastic gain process. We applied this model to spike trains from four visual areas (LGN, V1, V2, and MT) and found that the gain process is well described by an exponentiated power law, with increasing amplitude and slower decay at higher levels of the visual hierarchy. The model also provides analytical expressions for the Fano factor of binned spike counts as a function of timescale, linking observed variability to underlying modulatory dynamics. Together, these results establish a principled framework for characterizing neural variability across cortical processing stages.
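
A doubly stochastic model of this kind multiplies a smooth stimulus-driven rate by a slowly varying stochastic gain, which makes the Fano factor of binned counts grow with bin size. The simulation sketch below illustrates that qualitative behavior with an Ornstein-Uhlenbeck log-gain standing in for the paper's exponentiated power-law gain process; all parameter values are arbitrary.

```python
import numpy as np

def doubly_stochastic_counts(rate_hz, gain_sd, gain_tau_s, duration_s,
                             bin_s, dt=1e-3, rng=None):
    """Simulate a gain-modulated Poisson spike train and return binned counts.

    A constant stimulus drive `rate_hz` is multiplied by a slowly varying
    stochastic gain (OU log-gain with timescale `gain_tau_s`), and spikes are
    drawn as an inhomogeneous Poisson process at resolution `dt`.
    """
    rng = rng or np.random.default_rng(0)
    n = int(duration_s / dt)
    log_gain = np.zeros(n)
    for t in range(1, n):
        log_gain[t] = (log_gain[t - 1] * (1 - dt / gain_tau_s)
                       + gain_sd * np.sqrt(2 * dt / gain_tau_s) * rng.standard_normal())
    rate = rate_hz * np.exp(log_gain)                  # gain-modulated firing rate
    spikes = rng.poisson(rate * dt)                    # spike counts per dt step
    per_bin = int(bin_s / dt)
    counts = spikes[: n - n % per_bin].reshape(-1, per_bin).sum(axis=1)
    return counts

# The Fano factor (variance / mean of binned counts) increases with bin size
# when a slow multiplicative gain process is present.
for bin_s in (0.05, 0.2, 1.0):
    c = doubly_stochastic_counts(20.0, 0.5, 1.0, 120.0, bin_s)
    print(f"bin = {bin_s:.2f} s, Fano factor = {c.var() / c.mean():.2f}")
```
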