2,072 result(s) for "Auditory discrimination learning"
Dissecting neural computations in the human auditory pathway using deep neural networks for speech
The human auditory system extracts rich linguistic abstractions from speech signals. Traditional approaches to understanding this complex process have used linear feature-encoding models, with limited success. Artificial neural networks excel in speech recognition tasks and offer promising computational models of speech processing. We used speech representations in state-of-the-art deep neural network (DNN) models to investigate neural coding from the auditory nerve to the speech cortex. Representations in hierarchical layers of the DNN correlated well with the neural activity throughout the ascending auditory system. Unsupervised speech models performed at least as well as other purely supervised or fine-tuned models. Deeper DNN layers were better correlated with the neural activity in the higher-order auditory cortex, with computations aligned with phonemic and syllabic structures in speech. Accordingly, DNN models trained on either English or Mandarin predicted cortical responses in native speakers of each language. These results reveal convergence between DNN model representations and the biological auditory pathway, offering new approaches for modeling neural coding in the auditory cortex. Using direct intracranial recordings and modern speech AI models, Li and colleagues show representational and computational similarities between deep neural networks for self-supervised speech learning and the human auditory pathway.
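The layer-wise encoding analysis this abstract describes can be sketched in a few lines (a minimal illustration on synthetic data: the array shapes, the ridge penalty, and the in-sample correlation metric are assumptions for the sketch, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: DNN layer activations and neural responses over T time bins.
T = 500
neural = rng.standard_normal((T, 8))                       # 8 recording channels
layer_feats = [rng.standard_normal((T, d)) for d in (32, 64, 128)]
# Make deeper layers carry progressively more of the "neural" signal.
for depth, X in enumerate(layer_feats):
    neural += 0.3 * depth * X[:, :8]

def encoding_score(X, Y, alpha=10.0):
    """Ridge-regress Y on X; return mean Pearson r across channels (in-sample)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    W = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ Yc)
    pred = Xc @ W
    return float(np.mean([np.corrcoef(pred[:, e], Yc[:, e])[0, 1]
                          for e in range(Y.shape[1])]))

scores = [encoding_score(X, neural) for X in layer_feats]  # one score per layer
```

A real analysis would cross-validate the regression and compare scores per brain region; here the point is only the shape of the comparison: one encoding score per DNN layer, with the best-scoring layer taken as most aligned with the recorded activity.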
Non-invasive, opsin-free mid-infrared modulation activates cortical neurons and accelerates associative learning
Neurostimulant drugs and magnetic/electrical stimulation techniques can overcome attention deficits, but they provide little benefit in boosting the learning capabilities of healthy subjects. Here, we report a stimulation technique, mid-infrared modulation (MIM), that delivers mid-infrared light energy through the opened skull, or even non-invasively through a thinned intact skull, and can activate brain neurons in vivo without introducing any exogenous gene. Using c-Fos immunohistochemistry, in vivo single-cell electrophysiology, and two-photon Ca2+ imaging in mice, we demonstrate that MIM significantly induces firing activity of neurons in the targeted cortical area. Moreover, mice that receive MIM targeted to the auditory cortex during an auditory associative learning task exhibit a faster learning speed (~50% faster) than control mice. Together, these results demonstrate the potential of this non-invasive, opsin-free MIM technique for modulating neuronal activity. Neurostimulant drugs and magnetic/electrical stimulation techniques have shown limited effects on the learning capabilities of healthy subjects. The authors show that, without introducing an exogenous gene, mid-infrared light can modulate the firing activity of neurons in vivo and accelerate learning in mice.
Multimodal deep learning models for early detection of Alzheimer’s disease stage
Most current Alzheimer’s disease (AD) and mild cognitive impairment (MCI) studies use a single data modality to make predictions, such as AD stage. Fusing multiple data modalities can provide a holistic view for AD staging analysis. We therefore use deep learning (DL) to integrally analyze imaging (magnetic resonance imaging (MRI)), genetic (single nucleotide polymorphisms (SNPs)), and clinical test data to classify patients into AD, MCI, and controls (CN). We use stacked denoising auto-encoders to extract features from clinical and genetic data, and 3D convolutional neural networks (CNNs) for imaging data. We also develop a novel data interpretation method that identifies the top-performing features learned by the deep models via clustering and perturbation analysis. Using the Alzheimer’s disease neuroimaging initiative (ADNI) dataset, we demonstrate that deep models outperform shallow models, including support vector machines, decision trees, random forests, and k-nearest neighbors. In addition, we demonstrate that integrating multi-modality data outperforms single-modality models in terms of accuracy, precision, recall, and mean F1 scores. Our models identified the hippocampus, the amygdala, and the Rey Auditory Verbal Learning Test (RAVLT) as the most distinguishing features, consistent with the known AD literature.
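The claim that fused modalities outperform single ones can be illustrated with a toy early-fusion experiment (synthetic data and a hand-rolled logistic regression standing in for the paper's autoencoder/CNN pipeline; every size and name below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5                                    # subjects, features per modality
y = rng.integers(0, 2, n) * 2 - 1                # labels in {-1, +1}

# Three synthetic "modalities" (imaging, genetic, clinical stand-ins),
# each carrying a weak copy of the label plus independent noise.
modalities = [y[:, None] * 0.5 + rng.standard_normal((n, d)) for _ in range(3)]

def logreg_train_acc(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression; returns training accuracy."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))             # P(y = +1)
        w -= lr * X.T @ (p - (y + 1) / 2) / len(y)
    return float(np.mean(np.sign(X @ w) == y))

single = [logreg_train_acc(X, y) for X in modalities]
fused = logreg_train_acc(np.hstack(modalities), y)   # early fusion: concatenate
```

Because the fused design matrix contains strictly more label-relevant signal than any single modality, the fused classifier separates the classes better, which is the qualitative pattern the abstract reports.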
Association between healthy lifestyle and memory decline in older adults: 10 year, population based, prospective cohort study
Objective: To identify an optimal lifestyle profile to protect against memory loss in older individuals. Design: Population based, prospective cohort study. Setting: Participants from areas representative of the north, south, and west of China. Participants: Individuals aged 60 years or older who had normal cognition and underwent apolipoprotein E (APOE) genotyping at baseline in 2009. Main outcome measures: Participants were followed up until death, discontinuation, or 26 December 2019. Six healthy lifestyle factors were assessed: a healthy diet (adherence to the recommended intake of at least 7 of 12 eligible food items), regular physical exercise (≥150 min of moderate intensity or ≥75 min of vigorous intensity, per week), active social contact (≥twice per week), active cognitive activity (≥twice per week), never or previously smoked, and never drinking alcohol. Participants were categorised into the favourable group if they had four to six healthy lifestyle factors, into the average group for two to three factors, and into the unfavourable group for zero to one factor. Memory function was assessed using the World Health Organization/University of California-Los Angeles Auditory Verbal Learning Test, and global cognition was assessed via the Mini-Mental State Examination. Linear mixed models were used to explore the impact of lifestyle factors on memory in the study sample. Results: 29 072 participants were included (mean age of 72.23 years; 48.54% (n=14 113) were women; and 20.43% (n=5939) were APOE ε4 carriers). Over the 10 year follow-up period (2009-19), participants in the favourable group had slower memory decline than those in the unfavourable group (by 0.028 points/year, 95% confidence interval 0.023 to 0.032, P<0.001). APOE ε4 carriers with favourable (0.027, 95% confidence interval 0.023 to 0.031) and average (0.014, 0.010 to 0.019) lifestyles exhibited a slower memory decline than those with unfavourable lifestyles. Among people who were not carriers of APOE ε4, similar results were observed among participants in the favourable (0.029 points/year, 95% confidence interval 0.019 to 0.039) and average (0.019, 0.011 to 0.027) groups compared with those in the unfavourable group. APOE ε4 status and lifestyle profiles did not show a significant interaction effect on memory decline (P=0.52). Conclusion: A healthy lifestyle is associated with slower memory decline, even in the presence of the APOE ε4 allele. This study might offer important information to protect older adults against memory decline. Trial registration: ClinicalTrials.gov NCT03653156.
Sensory cortex plasticity supports auditory social learning
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance. Social learning through observing conspecifics can facilitate the acquisition of behaviors. Here, the authors show in Mongolian gerbils that auditory cortex is necessary for social learning of an auditory discrimination task, and that social exposure improves neuronal coding of auditory task cues.
A GRU–CNN model for auditory attention detection using microstate and recurrence quantification analysis
Attention is a cognitive ability that plays a crucial role in perception, helping humans concentrate on specific objects in the environment while discarding others. In this paper, auditory attention detection (AAD) is investigated using different dynamic features extracted from multichannel electroencephalography (EEG) signals while listeners attend to a target speaker in the presence of a competing talker. To this end, microstate and recurrence quantification analysis are used to extract features that reflect changes in brain state during cognitive tasks. An optimized feature set is then determined by selecting significant features based on classification performance. The classifier is a hybrid sequential model that combines Gated Recurrent Units (GRU) and a Convolutional Neural Network (CNN) in a unified framework for accurate attention detection. The proposed AAD method shows that the selected feature set yields the most discriminative features for classification, and it achieves the best performance compared with state-of-the-art AAD approaches from the literature across various measures. The current study is the first to validate the use of microstate and recurrence quantification parameters to differentiate auditory attention using reinforcement learning without access to stimuli.
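Of the two feature families named in this abstract, recurrence quantification is the easier to sketch; the snippet below computes the simplest such measure, the recurrence rate, from a time-delay embedding (the embedding dimension, delay, and threshold heuristic are illustrative choices, not the paper's settings):

```python
import numpy as np

def recurrence_rate(x, dim=3, delay=2, eps=None):
    """Fraction of embedded point pairs closer than eps (recurrence rate)."""
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: rows are points (x[t], x[t+delay], ..., x[t+(dim-1)*delay]).
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.2 * dists.std()          # ad-hoc threshold, for illustration only
    return float((dists < eps).mean())

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 400)
rr_periodic = recurrence_rate(np.sin(t))               # structured signal
rr_noise = recurrence_rate(rng.standard_normal(400))   # unstructured signal
```

A periodic signal repeatedly revisits the same region of state space, so its recurrence rate is far higher than white noise's; in an AAD setting, measures of this kind would be computed per EEG channel and passed to the classifier as features.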
A role for the thalamus in danger evoked awakening during sleep
Sleep involves a relative disconnection from the environment, yet sensory stimuli can still trigger awakenings. The mechanism underlying sensory vigilance and stimulus discrimination during sleep remains unclear. Here, we show that neutral auditory stimuli evoked responses across parallel auditory and non-auditory pathways, including the auditory cortex and thalamus, the hippocampus, and the centro-medial thalamus (CMT). Using a convolutional neural network, we identified CMT activity as the most discriminant hub for auditory-evoked sleep-to-wake transitions among all recorded structures. Furthermore, we found that prior associative learning of auditory cues with danger (conditioned stimulus, CS+) resulted in increased awakening upon CS+ exposure during NREM, but not REM, sleep. These sleep-to-wake transitions were blocked by optogenetic silencing of CMT neurons during CS+ exposure in sleeping mice. Altogether, these results suggest a central role for CMT neurons in the residual processing of behaviorally relevant information in the sleeping brain, with the CMT functioning as a major hub for awakening in response to danger. The extent to which the sleeping brain can discern safety from danger is poorly understood. In mice, the authors show that centro-medial thalamic neurons detect threat-related sounds and selectively trigger awakenings during NREM, but not REM, sleep.
Single cell plasticity and population coding stability in auditory thalamus upon associative learning
Cortical and limbic brain areas are regarded as centres for learning. However, how thalamic sensory relays participate in plasticity upon associative learning, yet support stable long-term sensory coding, remains unknown. Using a miniature microscope imaging approach, we monitor the activity of populations of auditory thalamus (medial geniculate body) neurons in freely moving mice during fear conditioning. We find that single cells exhibit mixed selectivity and heterogeneous plasticity patterns to auditory and aversive stimuli upon learning, which is conserved in amygdala-projecting medial geniculate body neurons. Activity in amygdala-projecting auditory thalamus neurons stabilizes single-cell plasticity in the total medial geniculate body population and is necessary for fear memory consolidation. In contrast to individual cells, population-level encoding of auditory stimuli remained stable across days. Our data identify the auditory thalamus as a site of complex neuronal plasticity in fear learning upstream of the amygdala, ideally positioned to drive plasticity in cortical and limbic brain areas. These findings suggest that the medial geniculate body’s role goes beyond that of a simple relay, balancing experience-dependent, diverse single-cell plasticity with consistent ensemble-level representations of the sensory environment to support stable auditory perception with minimal affective bias. How thalamic sensory relays participate in plasticity upon associative fear learning while maintaining stable long-term sensory coding remains unknown. The authors show that auditory thalamus neurons exhibit heterogeneous plasticity patterns after learning while population-level encoding of auditory stimuli remains stable across days.
Inhibitory effect of tDCS on auditory evoked response: Simultaneous MEG-tDCS reveals causal role of right auditory cortex in pitch learning
• We tested pitch learning using a microtonal melody discrimination task.
• tDCS over the right AC disrupted pitch learning, but tDCS over the left AC did not.
• MEG showed that tDCS over the right or left AC decreased the N1m amplitude of the ipsilateral AC.
• The N1m changes and pitch learning were correlated only in the right AC tDCS group.
A body of literature has demonstrated that the right auditory cortex (AC) plays a dominant role in fine pitch processing. However, our understanding is relatively limited about whether this asymmetry extends to perceptual learning of pitch. There is also a lack of causal evidence regarding the role of the right AC in pitch learning. We addressed these points with anodal transcranial direct current stimulation (tDCS), adapting a previous behavioral study in which anodal tDCS over the right AC was shown to block improvement on a microtonal pitch pattern learning task over 3 days. To address the physiological changes associated with tDCS, we recorded MEG data simultaneously with tDCS on the first day, and measured behavioral thresholds on the following two consecutive days. We tested three groups of participants who received anodal tDCS over the right AC, the left AC, or sham tDCS, and measured the N1m auditory evoked response before, during, and after tDCS. Our data show that anodal tDCS of the right AC disrupted pitch discrimination learning up to two days after its application, whereas learning was unaffected by left-AC or sham tDCS. Although tDCS reduced the amplitude of the N1m ipsilateral to the stimulated hemisphere on both left and right, only right-AC N1m amplitude reductions were associated with the degree to which pitch learning was disrupted. This brain-behavior relationship confirms a causal link between right AC physiological responses and fine pitch processing, and provides neurophysiological insight concerning the mechanisms of action of tDCS on the auditory system.
Dynamics of nonlinguistic statistical learning: From neural entrainment to the emergence of explicit knowledge
Humans are highly attuned to patterns in the environment. This ability to detect environmental patterns, referred to as statistical learning, plays a key role in many diverse aspects of cognition. However, the spatiotemporal neural mechanisms underlying implicit statistical learning, and how these mechanisms may relate or give rise to explicit learning, remain poorly understood. In the present study, we investigated these different aspects of statistical learning by using an auditory nonlinguistic statistical learning paradigm combined with magnetoencephalography. Twenty-four healthy volunteers were exposed to structured and random tone sequences, and statistical learning was quantified by neural entrainment. Already early during exposure, participants showed strong entrainment to the embedded tone patterns. A significant increase in entrainment over exposure was detected only in the structured condition, reflecting the trajectory of learning. While source reconstruction revealed a wide range of brain areas involved in this process, entrainment in areas around the left pre-central gyrus as well as right temporo-frontal areas significantly predicted behavioral performance. Sensor level results confirmed this relationship between neural entrainment and subsequent explicit knowledge. These results give insights into the dynamic relation between neural entrainment and explicit learning of triplet structures, suggesting that these two aspects are systematically related yet dissociable. Neural entrainment reflects robust, implicit learning of underlying patterns, whereas the emergence of explicit knowledge, likely built on the implicit encoding of structure, varies across individuals and may depend on factors such as sufficient exposure time and attention.
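The frequency-tagging logic behind measuring entrainment to hidden triplets can be sketched with synthetic signals (a toy simulation: the rates, duration, and effect size below are invented for illustration and are not the study's parameters):

```python
import numpy as np

fs = 100.0                      # sampling rate, Hz
tone_rate = 3.0                 # tone presentation rate, Hz
triplet_rate = tone_rate / 3    # one hidden triplet per three tones
t = np.arange(0, 60, 1 / fs)    # 60 s of signal -> 1/60 Hz frequency resolution

rng = np.random.default_rng(2)
# Simulated MEG-like signals: both conditions entrain to the tone rate;
# only the structured condition gains power at the triplet rate.
random_cond = np.sin(2 * np.pi * tone_rate * t) + rng.standard_normal(t.size)
structured_cond = random_cond + 0.8 * np.sin(2 * np.pi * triplet_rate * t)

def power_at(sig, freq):
    """Spectral power at the FFT bin nearest to freq."""
    spec = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return float(spec[np.argmin(np.abs(freqs - freq))])

trip_random = power_at(random_cond, triplet_rate)
trip_structured = power_at(structured_cond, triplet_rate)
```

The entrainment contrast is exactly this: power (or, with trials, inter-trial coherence) at the triplet rate in structured versus random streams, with the tone-rate peak present in both conditions serving as a sanity check.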