52 result(s) for "Stringer, Carsen"
Cellpose 2.0: how to train your own model
Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500–1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100–200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0. Cellpose 2.0 improves cell segmentation by offering pretrained models that can be fine-tuned using a human-in-the-loop training pipeline and fewer than 1,000 user-annotated regions of interest.
Cellpose: a generalist algorithm for cellular segmentation
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly. Cellpose is a generalist, deep learning-based approach for segmenting structures in a wide range of image types. Cellpose does not require parameter adjustment or model retraining and outperforms established methods on 2D and 3D datasets.
Spontaneous behaviors drive multidimensional, brainwide activity
How is it that groups of neurons dispersed through the brain interact to generate complex behaviors? Three papers in this issue present brain-scale studies of neuronal activity and dynamics (see the Perspective by Huk and Hart). Allen et al. found that in thirsty mice, there is widespread neural activity related to stimuli that elicit licking and drinking. Individual neurons encoded task-specific responses, but every brain area contained neurons with different types of response. Optogenetic stimulation of thirst-sensing neurons in one area of the brain reinstated drinking and neuronal activity across the brain that previously signaled thirst. Gründemann et al. investigated the activity of mouse basal amygdala neurons in relation to behavior during different tasks. Two ensembles of neurons showed orthogonal activity during exploratory and nonexploratory behaviors, possibly reflecting different levels of anxiety experienced in these areas. Stringer et al. analyzed spontaneous neuronal firing, finding that neurons in the primary visual cortex encoded both visual information and motor activity related to facial movements. The variability of neuronal responses to visual stimuli in the primary visual area is mainly related to arousal and reflects the encoding of latent behavioral states. Science, this issue p. eaav3932, p. eaav8736, p. eaav7893; see also p. 236. Neurons in the primary visual cortex encode both visual information and motor activity. Neuronal populations in sensory cortex produce variable responses to sensory stimuli and exhibit intricate spontaneous activity even without external sensory input. Cortical variability and spontaneous activity have been variously proposed to represent random noise, recall of prior experience, or encoding of ongoing behavioral and cognitive variables.
Recording more than 10,000 neurons in mouse visual cortex, we observed that spontaneous activity reliably encoded a high-dimensional latent state, which was partially related to the mouse’s ongoing behavior and was represented not just in visual cortex but also across the forebrain. Sensory inputs did not interrupt this ongoing signal but added onto it a representation of external stimuli in orthogonal dimensions. Thus, visual cortical population activity, despite its apparently noisy structure, reliably encodes an orthogonal fusion of sensory and multidimensional behavioral information.
Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation
Advances in microscopy hold great promise for allowing quantitative and precise measurement of morphological and molecular phenomena at the single-cell level in bacteria; however, the potential of this approach is ultimately limited by the availability of methods to faithfully segment cells independent of their morphological or optical characteristics. Here, we present Omnipose, a deep neural network image-segmentation algorithm. Unique network outputs such as the gradient of the distance field allow Omnipose to accurately segment cells on which current algorithms, including its predecessor, Cellpose, produce errors. We show that Omnipose achieves unprecedented segmentation performance on mixed bacterial cultures, antibiotic-treated cells and cells of elongated or branched morphology. Furthermore, the benefits of Omnipose extend to non-bacterial subjects, varied imaging modalities and three-dimensional objects. Finally, we demonstrate the utility of Omnipose in the characterization of extreme morphological phenotypes that arise during interbacterial antagonism. Our results distinguish Omnipose as a powerful tool for characterizing diverse and arbitrarily shaped cell types from imaging data. Omnipose is a deep neural network algorithm for image segmentation that improves upon existing approaches by solving the challenging problem of accurately segmenting morphologically diverse cells from images acquired with any modality.
Facemap: a framework for modeling neural activity based on orofacial tracking
Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracker and a deep neural network encoder for predicting neural activity. Our algorithm for tracking mouse orofacial behaviors was more accurate than existing pose estimation tools, while the processing speed was several times faster, making it a powerful tool for real-time experimental interventions. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used the keypoints as inputs to a deep neural network which predicts the activity of ~50,000 simultaneously-recorded neurons and, in visual cortex, we doubled the amount of explained variance compared to previous methods. Using this model, we found that the neuronal activity clusters that were well predicted from behavior were more spatially spread out across cortex. We also found that the deep behavioral features from the model had stereotypical, sequential dynamics that were not reversible in time. In summary, Facemap provides a stepping stone toward understanding the function of the brain-wide neural signals and their relation to behavior. Facemap is a data analysis framework for tracking keypoints on mouse faces and relating them to large-scale neural activity. Both of these steps use state-of-the-art convolutional neural networks to achieve high precision and fast processing speeds.
Vagal sensory neurons mediate the Bezold–Jarisch reflex and induce syncope
Visceral sensory pathways mediate homeostatic reflexes, the dysfunction of which leads to many neurological disorders [1]. The Bezold–Jarisch reflex (BJR), first described [2,3] in 1867, is a cardioinhibitory reflex that is speculated to be mediated by vagal sensory neurons (VSNs) and that also triggers syncope. However, the molecular identity, anatomical organization, physiological characteristics and behavioural influence of cardiac VSNs remain mostly unknown. Here we leveraged single-cell RNA-sequencing data and HYBRiD tissue clearing [4] to show that VSNs that express neuropeptide Y receptor Y2 (NPY2R) predominantly connect the heart ventricular wall to the area postrema. Optogenetic activation of NPY2R VSNs elicits the classic triad of BJR responses—hypotension, bradycardia and suppressed respiration—and causes an animal to faint. Photostimulation during high-resolution echocardiography and laser Doppler flowmetry with behavioural observation revealed a range of phenotypes reflected in clinical syncope, including reduced cardiac output, cerebral hypoperfusion, pupil dilation and eye-roll. Large-scale Neuropixels brain recordings and machine-learning-based modelling showed that this manipulation causes the suppression of activity across a large distributed neuronal population that is not explained by changes in spontaneous behavioural movements. Additionally, bidirectional manipulation of the periventricular zone had a push–pull effect, with inhibition leading to longer syncope periods and activation inducing arousal. Finally, ablating NPY2R VSNs specifically abolished the BJR. Combined, these results demonstrate a genetically defined cardiac reflex that recapitulates characteristics of human syncope at physiological, behavioural and neural network levels.
The molecular mechanisms underlying the Bezold–Jarisch reflex and syncope (fainting) involve vagal sensory neurons that express neuropeptide Y receptor Y2, the deletion of which in animal models abolishes the Bezold–Jarisch reflex.
Cellpose3: one-click image restoration for improved cellular segmentation
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as ‘one-click’ buttons inside the graphical interface of Cellpose as well as in the Cellpose API. Cellpose3 employs deep-learning-based approaches for image restoration to improve cellular segmentation and shows strong generalized performance even on images degraded by noise, blurring or undersampling.
A simplified minimodel of visual cortical neurons
Artificial neural networks (ANNs) have been shown to predict neural responses in primary visual cortex (V1) better than classical models. However, this performance often comes at the expense of simplicity and interpretability. Here we introduce a new class of simplified ANN models that can predict over 70% of the response variance of V1 neurons. To achieve this high performance, we first recorded a new dataset of over 29,000 neurons responding to up to 65,000 natural image presentations in mouse V1. We found that ANN models required only two convolutional layers for good performance, with a relatively small first layer. We further found that we could make the second layer small without loss of performance, by fitting individual “minimodels” to each neuron. Similar simplifications applied for models of monkey V1 neurons. We show that the minimodels can be used to gain insight into how stimulus invariance arises in biological neurons. Mathematical models of V1 seek to explain the response properties of V1 neurons, often with more complex models providing more accurate predictions. Here, the authors show that deep neural network models of mouse and monkey V1 can be dramatically simplified to a two-layer “minimodel” while retaining high accuracy.
Inhibitory control of correlated intrinsic variability in cortical networks
Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model directly to multi-neuron recordings from different rodent species, sensory modalities, and behavioral states. The model generated correlated variability without external noise and accurately reproduced the diverse activity patterns in our recordings. Analysis of the model parameters suggested that differences in noise correlations across recordings were due primarily to differences in the strength of feedback inhibition. Further analysis of our recordings confirmed that putative inhibitory neurons were indeed more active during desynchronized cortical states with weak noise correlations. Our results demonstrate that network models with intrinsically-generated variability can accurately reproduce the activity patterns observed in multi-neuron recordings and suggest that inhibition modulates the interactions between intrinsic dynamics and sensory inputs to control the strength of noise correlations. Our brains contain billions of neurons, which are continually producing electrical signals to relay information around the brain. Yet most of our knowledge of how the brain works comes from studying the activity of one neuron at a time. Recently, studies of multiple neurons have shown that they tend to be active together in short bursts called “up” states, which are followed by periods in which they are less active called “down” states. When we are sleeping or under a general anesthetic, the neurons may be completely silent during down states, but when we are awake the difference in activity between the two states is usually less extreme. However, it is still not clear how the neurons generate these patterns of activity. 
To address this question, Stringer et al. studied the activity of neurons in the brains of awake and anesthetized rats, mice and gerbils. The experiments recorded electrical activity from many neurons at the same time and found a wide range of different activity patterns. A computational model based on these data suggests that differences in the degree to which some neurons suppress the activity of other neurons may account for this variety. Increasing the strength of these inhibitory signals in the model decreased the fluctuations in electrical activity across entire areas of the brain. Further analysis of the experimental data supported the model’s predictions by showing that inhibitory neurons – which act to reduce electrical activity in other neurons – were more active when there were fewer fluctuations in activity across the brain. The next step following on from this work would be to develop ways to build computer models that can mimic the activity of many more neurons at the same time. The models could then be used to interpret the electrical activity produced by many different kinds of neuron. This will enable researchers to test more sophisticated hypotheses about how the brain works.
A modular chemigenetic calcium indicator for multiplexed in vivo functional imaging
Genetically encoded fluorescent calcium indicators allow cellular-resolution recording of physiology. However, bright, genetically targetable indicators that can be multiplexed with existing tools in vivo are needed for simultaneous imaging of multiple signals. Here we describe WHaloCaMP, a modular chemigenetic calcium indicator built from bright dye-ligands and protein sensor domains. Fluorescence change in WHaloCaMP results from reversible quenching of the bound dye via a strategically placed tryptophan. WHaloCaMP is compatible with rhodamine dye-ligands that fluoresce from green to near-infrared, including several that efficiently label the brain in animals. When bound to a near-infrared dye-ligand, WHaloCaMP shows a 7× increase in fluorescence intensity and a 2.1-ns increase in fluorescence lifetime upon calcium binding. We use WHaloCaMP1a to image Ca2+ responses in vivo in flies and mice, to perform three-color multiplexed functional imaging of hundreds of neurons and astrocytes in zebrafish larvae and to quantify Ca2+ concentration using fluorescence lifetime imaging microscopy (FLIM). WHaloCaMP is a chemigenetic calcium indicator that can be combined with different rhodamine dyes for multiplexed or FLIM imaging in vivo, as demonstrated for calcium imaging in neuronal cultures, brain slices, Drosophila, zebrafish larvae and the mouse brain.