34,260 result(s) for "Discrimination Learning"
Dopamine D2 receptors in discrimination learning and spine enlargement
Dopamine D2 receptors (D2Rs) are densely expressed in the striatum and have been linked to neuropsychiatric disorders such as schizophrenia [1, 2]. High-affinity binding of dopamine suggests that D2Rs detect transient reductions in dopamine concentration (the dopamine dip) during punishment learning [3–5]. However, the nature and cellular basis of D2R-dependent behaviour are unclear. Here we show that tone reward conditioning induces marked stimulus generalization in a manner that depends on dopamine D1 receptors (D1Rs) in the nucleus accumbens (NAc) of mice, and that discrimination learning refines the conditioning using a dopamine dip. In NAc slices, a narrow dopamine dip (as short as 0.4 s) was detected by D2Rs to disinhibit adenosine A2A receptor (A2AR)-mediated enlargement of dendritic spines in D2R-expressing spiny projection neurons (D2-SPNs). Plasticity-related signalling by Ca2+/calmodulin-dependent protein kinase II and A2ARs in the NAc was required for discrimination learning. By contrast, extinction learning did not involve dopamine dips or D2-SPNs. Treatment with methamphetamine, which dysregulates dopamine signalling, impaired discrimination learning and spine enlargement, and these impairments were reversed by a D2R antagonist. Our data show that detection of dopamine dips by D2R-expressing striatal neurons refines the generalized reward learning mediated by D1Rs.
A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments
Object recognition is among the basic survival skills of human beings and other animals. To date, artificial intelligence (AI) assisted high-performance object recognition is primarily visual-based, empowered by the rapid development of sensing and computational capabilities. Here, we report a tactile-olfactory sensing array, inspired by the natural sense-fusion system of the star-nosed mole, that permits real-time acquisition of the local topography, stiffness, and odor of a variety of objects without visual input. The tactile-olfactory information is processed by a bioinspired olfactory-tactile associated machine-learning algorithm, essentially mimicking the biological fusion procedures in the neural system of the star-nosed mole. Aiming to achieve human identification during rescue missions in challenging environments such as dark or buried scenarios, our tactile-olfactory intelligent sensing system could classify 11 typical objects with an accuracy of 96.9% in a simulated rescue scenario at a fire department test site. The tactile-olfactory bionic sensing system required no visual input and showed superior tolerance to environmental interference, highlighting its great potential for robust object recognition in difficult environments where other methods fall short. For object recognition in lightless environments, the authors propose an olfactory-tactile machine-learning approach inspired by the star-nosed mole's neural system, and show how bionic flexible sensor arrays allow real-time acquisition of an object's form and odor on touch.
Social place-cells in the bat hippocampus
Different sets of neurons encode the spatial position and orientation of an organism. However, social animals need to know the position of other individuals for social interactions, observational learning, and group navigation. Surprisingly, very little is known about how the position of other animals is represented in the brain. Danjo et al. and Omer et al. now report the discovery of a subgroup of neurons in hippocampal area CA1 that encodes the presence of conspecifics in rat and bat brains, respectively. Science, this issue p. 213, p. 218. A subpopulation of bat hippocampal CA1 neurons represents the spatial position of another bat. Social animals have to know the spatial positions of conspecifics. However, it is unknown how the position of others is represented in the brain. We designed a spatial observational-learning task, in which an observer bat mimicked a demonstrator bat while we recorded hippocampal dorsal-CA1 neurons from the observer bat. A neuronal subpopulation represented the position of the other bat in allocentric coordinates. About half of these "social place-cells" also represented the observer's own position; that is, they were place cells. The representation of the demonstrator bat did not reflect self-movement or trajectory planning by the observer. Some neurons also represented the position of inanimate moving objects; however, their representation differed from that of the demonstrator bat. This suggests a role for hippocampal CA1 neurons in social-spatial cognition.
Spatial host–microbiome sequencing reveals niches in the mouse gut
Mucosal and barrier tissues, such as the gut, lung or skin, are composed of a complex network of cells and microbes forming a tight niche that prevents pathogen colonization and supports host–microbiome symbiosis. Characterizing these networks at high molecular and cellular resolution is crucial for understanding homeostasis and disease. Here we present spatial host–microbiome sequencing (SHM-seq), an all-sequencing-based approach that captures tissue histology, polyadenylated RNAs and bacterial 16S sequences directly from a tissue by modifying spatially barcoded glass surfaces to enable simultaneous capture of host transcripts and hypervariable regions of the 16S bacterial ribosomal RNA. We applied our approach to the mouse gut as a model system, used a deep learning approach for data mapping and detected spatial niches defined by cellular composition and microbial geography. We show that subpopulations of gut cells express specific gene programs in different microenvironments characteristic of regional commensal bacteria and impact host–bacteria interactions. SHM-seq should enhance the study of native host–microbe interactions in health and disease. Spatial host–microbiome sequencing simultaneously profiles microbes and host transcriptomes from mouse colons.
Visual discrimination and amodal completion in zebrafish
While zebrafish represent an important model for the study of the visual system, visual perception in this species is still less investigated than in other teleost fish. In this work, we validated two versions of a visual discrimination learning task for zebrafish, based on the motivation to reach food and companions. Using this task, we investigated the ability of zebrafish to discriminate between two different shape pairs (i.e., disk vs. cross and full vs. amputated disk). Once zebrafish were successfully trained to discriminate a full from an amputated disk, we also tested their ability to visually complete partially occluded objects (amodal completion). After training, animals were presented with two amputated disks. In these test stimuli, another shape was either exactly juxtaposed or only placed close to the missing sectors of the disk. Only the former stimulus should elicit amodal completion: in human observers, it causes the impression that the other shape is occluding the missing sector of the disk, which is thus perceived as a complete, although partially hidden, disk. In line with our predictions, fish reinforced on the full disk chose the stimulus eliciting amodal completion, while fish reinforced on the amputated disk chose the other stimulus. This represents the first demonstration of amodal completion perception in zebrafish. Moreover, our results also indicate that a specific shape pair (disk vs. cross) might be particularly difficult for this species to discriminate, confirming previous reports obtained with different procedures.
Summary statistics in auditory perception
Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
The odour of an unfamiliar stressed or relaxed person affects dogs’ responses to a cognitive bias test
Dogs can discriminate stressed from non-stressed human odour samples, but the effect on their cognition is unstudied. Using a cognitive bias task, we tested how human odours affect dogs’ likelihood of approaching a food bowl placed at three ambiguous locations (“near-positive”, “middle” and “near-negative”) between trained “positive” (rewarded) and “negative” (unrewarded) locations. Using odour samples collected from three unfamiliar volunteers during stressful and relaxing activities, we tested eighteen dogs under three conditions: no odour, stress odour and relaxed odour, with the order of test odours counterbalanced across dogs. When exposed to stress odour during session three, dogs were significantly less likely to approach a bowl placed at one of the three ambiguous locations (near-negative) compared to no odour, indicating possible risk-reduction behaviours in response to the smell of human stress. Dogs’ learning of trained positive and negative locations improved with repeated testing and was significant between sessions two and three only when exposed to stress odour during session three, suggesting odour influenced learning. This is the first study to show that without visual or auditory cues, olfactory cues of human stress may affect dogs’ cognition and learning, which, if true, could have important consequences for dog welfare and working performance.
A deep learning method for simultaneous denoising and missing wedge reconstruction in cryogenic electron tomography
Cryogenic electron tomography is a technique for imaging biological samples in 3D. A microscope collects a series of 2D projections of the sample, and the goal is to reconstruct the 3D density of the sample, called the tomogram. Reconstruction is difficult because the 2D projections are noisy and cannot be recorded from all directions, resulting in a missing wedge of information. Tomograms conventionally reconstructed with filtered back-projection suffer from noise and strong artefacts due to the missing wedge. Here, we propose a deep-learning approach for simultaneous denoising and missing wedge reconstruction called DeepDeWedge. The algorithm requires no ground truth data and is based on fitting a neural network to the 2D projections using a self-supervised loss. DeepDeWedge is simpler than current state-of-the-art approaches for denoising and missing wedge reconstruction, performs competitively, and produces tomograms that are more thoroughly denoised and have higher overall contrast. The authors propose DeepDeWedge, a deep-learning method that improves the visual quality of 3D images obtained with cryogenic electron tomography by removing noise and artefacts due to missing data.
Distinct learning-induced changes in stimulus selectivity and interactions of GABAergic interneuron classes in visual cortex
How learning enhances neural representations for behaviorally relevant stimuli via activity changes of cortical cell types remains unclear. We simultaneously imaged responses of pyramidal cells (PYR) along with parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) inhibitory interneurons in primary visual cortex while mice learned to discriminate visual patterns. Learning increased selectivity for task-relevant stimuli of PYR, PV and SOM subsets but not VIP cells. Strikingly, PV neurons became as selective as PYR cells, and their functional interactions reorganized, leading to the emergence of stimulus-selective PYR–PV ensembles. Conversely, SOM activity became strongly decorrelated from the network, and PYR–SOM coupling before learning predicted selectivity increases in individual PYR cells. Thus, learning differentially shapes the activity and interactions of multiple cell classes: while SOM inhibition may gate selectivity changes, PV interneurons become recruited into stimulus-specific ensembles and provide more selective inhibition as the network becomes better at discriminating behaviorally relevant stimuli.
Stimulus intensity and temporal configuration interact during bimodal learning and memory in honey bees
Multimodal integration is a core neural process with a keen relevance during ecological tasks requiring learning and memory, such as foraging. The benefits of learning multimodal signals imply solving whether the components come from a single event. This challenge presumably depends on the timing and intensity of the stimuli. Here, we used simultaneous and alternate presentations of olfactory and visual stimuli, at low and high intensities, to understand how temporal and intensity variations affect the learning of a bimodal stimulus and its components. We relied on conditioning of the proboscis extension response (PER) to train honey bees on an appetitive learning task with precisely controlled bimodal stimuli, training bees with stimuli at different synchronicity and intensity levels. We found that synchronicity, order of presentation, and intensity significantly impacted the probability of exhibiting conditioned PER responses and the latency of the conditioned responses. At low intensities, synchronous bimodal inputs produced maximal multisensory enhancement, while asynchronous temporal orders led to lower performances. At high intensities, the relative advantage of the synchronous stimulation diminished, and asynchronous stimuli produced similar performances. Memory retention was higher for the olfactory component and bimodal stimuli compared to the visual component, irrespective of the training's temporal configuration. Bees retained the asynchronous bimodal configuration to a lesser extent than the synchronous one, depending on the stimulus intensity. We conclude that time (synchrony), order of presentation, and intensity have interdependent effects on bee learning and memory performance, which suggests caution when assessing the independent effects of each factor.