Catalogue Search | MBRL
Explore the vast range of titles available.
948 result(s) for "Pitch Perception - physiology"
Piano training enhances the neural processing of pitch and improves speech perception in Mandarin-speaking children
by Liu, Li; Gong, Chen Chen; Geiser, Eveline
in Attention - physiology; Auditory discrimination; Biological Sciences
2018
Musical training confers advantages in speech-sound processing, which could play an important role in early childhood education. To understand the mechanisms of this effect, we used event-related potential and behavioral measures in a longitudinal design. Seventy-four Mandarin-speaking children aged 4–5 years were pseudorandomly assigned to piano training, reading training, or a no-contact control group. Six months of piano training improved behavioral auditory word discrimination in general, as well as word discrimination based on vowels, compared with the controls. The reading group showed similar trends. However, the piano group demonstrated unique advantages over the reading and control groups in consonant-based word discrimination and in enhanced positive mismatch responses (pMMRs) to lexical tone and musical pitch changes. The improved word discrimination based on consonants correlated with the enhancements in musical pitch pMMRs among the children in the piano group. In contrast, all three groups improved equally on general cognitive measures, including tests of IQ, working memory, and attention. The results suggest strengthened common sound processing across domains as an important mechanism underlying the benefits of musical training on language processing. In addition, although we failed to find far-transfer effects of musical training to general cognition, the near-transfer effects to speech perception establish the potential for musical training to help children improve their language skills. Piano training was not inferior to reading training on direct tests of language function, and it even seemed superior to reading training in enhancing consonant discrimination.
Journal Article
Cortical entrainment to music and its modulation by expertise
by Doelling, Keith B.; Poeppel, David
in Acoustic Stimulation; Auditory Perception - physiology; Biological Sciences
2015
Recent studies establish that cortical oscillations track naturalistic speech in a remarkably faithful way. Here, we test whether such neural activity, particularly low-frequency (<8 Hz; delta–theta) oscillations, similarly entrain to music and whether experience modifies such a cortical phenomenon. Music of varying tempi was used to test entrainment at different rates. In three magnetoencephalography experiments, we recorded from nonmusicians, as well as musicians with varying years of experience. Recordings from nonmusicians demonstrate cortical entrainment that tracks musical stimuli over a typical range of tempi, but not at tempi below 1 note per second. Importantly, the observed entrainment correlates with performance on a concurrent pitch-related behavioral task. In contrast, the data from musicians show that entrainment is enhanced by years of musical training, at all presented tempi. This suggests a bidirectional relationship between behavior and cortical entrainment, a phenomenon that has not previously been reported. Additional analyses focus on responses in the beta range (∼15–30 Hz)—often linked to delta activity in the context of temporal predictions. Our findings provide evidence that the role of beta in temporal predictions scales to the complex hierarchical rhythms in natural music and enhances processing of musical content. This study builds on important findings on brainstem plasticity and represents a compelling demonstration that cortical neural entrainment is tightly coupled to both musical training and task performance, further supporting a role for cortical oscillatory activity in music perception and cognition.
Journal Article
Sensory–motor networks involved in speech production and motor control: An fMRI study
by Shebek, Rachel; Hansen, Daniel R.; Howard, Matthew A.
in Acoustic Stimulation; Adult; Auditory feedback
2015
Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory–motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or randomly perturbed by an upward (+600 cents) pitch-shift stimulus. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking.
• Auditory feedback plays a key role in speech production and motor control.
• Humans vocally compensate for pitch perturbations in their voice auditory feedback.
• Vocal pitch motor control involves a complex sensory–motor network in the brain.
• Functional networks of speech motor control are not affected in patients with epilepsy.
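The +600-cent perturbation described above can be made concrete with the standard cents-to-ratio conversion (1200 cents per octave); a minimal sketch, with the function name `cents_to_ratio` chosen here for illustration rather than taken from the study:

```python
import math

def cents_to_ratio(cents: float) -> float:
    """Convert a pitch interval in cents to a frequency ratio
    (1200 cents = one octave, i.e. a doubling of frequency)."""
    return 2.0 ** (cents / 1200.0)

# A +600-cent perturbation, as in the paradigm above, is a tritone:
# it multiplies the feedback frequency by 2**(600/1200) = sqrt(2).
shift = cents_to_ratio(600)   # ≈ 1.4142
```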
Journal Article
Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception
by McDermott, Josh H.; Gonzalez, Ray; Saddler, Mark R.
in 631/378/116; 631/378/2619; 631/477/2811
2021
Perception is thought to be shaped by the environments for which organisms are optimized. These influences are difficult to test in biological organisms but may be revealed by machine perceptual systems optimized under different conditions. We investigated environmental and physiological influences on pitch perception, whose properties are commonly linked to peripheral neural coding limits. We first trained artificial neural networks to estimate fundamental frequency from biologically faithful cochlear representations of natural sounds. The best-performing networks replicated many characteristics of human pitch judgments. To probe the origins of these characteristics, we then optimized networks given altered cochleae or sound statistics. Human-like behavior emerged only when cochleae had high temporal fidelity and when models were optimized for naturalistic sounds. The results suggest pitch perception is critically shaped by the constraints of natural environments in addition to those of the cochlea, illustrating the use of artificial neural networks to reveal underpinnings of behavior.
The neural and computational mechanisms underpinning pitch perception remain unclear. Here, the authors trained deep neural networks to estimate the fundamental frequency of sounds and found that human pitch perception depends on precise spike timing in the auditory nerve, but is also adapted to the statistical tendencies of natural sounds.
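The study above trains deep networks on cochlear representations; as a minimal, purely illustrative sketch of the underlying task (fundamental-frequency estimation), here is a classical autocorrelation-based estimator, not the authors' model:

```python
import math

def estimate_f0(signal, sample_rate, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency by peak-picking the autocorrelation.
    A classical baseline for the F0-estimation task the networks above
    are trained on; real pitch trackers are considerably more robust."""
    n = len(signal)
    lag_min = int(sample_rate / fmax)          # shortest candidate period
    lag_max = int(sample_rate / fmin)          # longest candidate period
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        corr = sum(signal[i] * signal[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Example: a 200 Hz sine sampled at 8 kHz for 100 ms
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr // 10)]
f0 = estimate_f0(tone, sr)   # ≈ 200 Hz
```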
Journal Article
Tone Language Speakers and Musicians Share Enhanced Perceptual and Cognitive Abilities for Musical Pitch: Evidence for Bidirectionality between the Domains of Language and Music
2013
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals outperform English-speaking listeners in the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.
Journal Article
Do 'leaders' in change sound different from 'laggers'? The perceptual similarity of New Zealand English voices
2025
Work on covariation in New Zealand English has revealed groups of speakers characterised by their back vowel spaces and status as 'leaders' or 'laggers' across a set of ongoing vowel changes. We investigate whether listeners hear speakers from different groups as perceptually distinct. We conduct a perception task in which New Zealanders rate the similarity of pairs of speakers. We use the results to create a two-dimensional perceptual similarity space by means of Multi-Dimensional Scaling, and test whether speakers are organised within this space according to their back vowels, leader-lagger status, speed, or mean pitch. Results indicate that higher-pitched and faster speakers are perceptually distinct from lower-pitched and slower speakers. Leaders are perceptually distinct from laggers if they are not markedly higher pitched. A Generalised Additive Mixed Model fit to the trial-by-trial ratings shows order effects, revealing that perception of similarity is not symmetrical. The model results also support the perceptual relevance of speaker speed, pitch, and leader-lagger status.
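The two-dimensional similarity space described above is built with Multi-Dimensional Scaling; a minimal sketch of classical (Torgerson) MDS, assuming a symmetric dissimilarity matrix as input (the study's exact MDS variant is not specified here, and the toy data are invented):

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions from an
    n x n symmetric dissimilarity matrix d, preserving pairwise distances
    as well as the top-k eigenvalues of the double-centred Gram matrix allow."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (d ** 2) @ j               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:k]          # k largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy example: three "speakers" where A and B are rated alike and C differs
d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
coords = classical_mds(d)   # 3 points in a 2-D perceptual space
```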
Journal Article
Rivalry between pitch and timbre in auditory stream segregation
2025
Two rapidly alternating tones with different pitches may be perceived as one integrated stream when the pitch differences are small or as two separated streams when the pitch differences are large. Likewise, timbre differences between two tones may also cause such sequential stream segregation. Moreover, the effects of pitch and timbre on stream segregation may cancel each other out, which is called a trade-off. However, how timbre differences caused by specific patterns of spectral shapes interact with pitch differences and affect stream segregation has been largely unexplored. Therefore, we used stripe tones, in which stripe-like spectral patterns of harmonic complex tones were realized by grouping harmonic components into several bands based on harmonic numbers and removing the components in every other band. Here, we show that 2- and 4-band stimuli elicited distinctive stream segregation against pitch proximity. By contrast, pitch separations dominated stream segregation for 16-band stimuli. The results for 8-band stimuli most clearly showed the trade-off between pitch and timbre on stream segregation. These results suggest that stimuli with a small number (≤4) of bands elicit strong stream segregation due to sharp timbral contrasts between stripe-like spectral patterns, and that the auditory system is limited in integrating blocks of frequency components dispersed over frequency and time.
Journal Article
Musical aptitude moderates ease and vividness but not frequency of the speech-to-song illusion
2025
Repetitions of a spoken phrase can induce a perceptual illusion in which speech transforms into song, known as the speech-to-song illusion. Speech acoustics that share certain pitch and timing properties with songs seem to be involved in facilitating the illusion, with a recent proposal suggesting that the illusion hinges on the individual ability to detect musical features latently present in speech. The current study tests this proposal by manipulating pitch and timing features of spoken phrases and examining how musical aptitude of listeners (specifically, their sensitivity to disruptions of musical melody and beat timing) moderates their experience of the illusion. The results show that the illusion is perceived by everyone regardless of their musical aptitude, with phrases that contain stable pitch and long periods of high sonority transforming more frequently. However, musical aptitude does moderate the speed and the strength of the illusion. Listeners with lower beat perception ability experience the illusion faster, which suggests involvement of temporal distortion processes during repetitions. Listeners with higher melody perception ability experience the illusion more strongly, which indicates involvement of musical pitch extraction rather than pitch distortion. These findings contribute new evidence on the complexity of an illusory experience in the auditory domain.
Journal Article
Cortical tracking of rhythm in music and speech
by Harding, Eleanor E.; Large, Edward W.; Kotz, Sonja A.
in Adult; Brain Mapping - methods; Cerebral Cortex - physiology
2019
Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of comparable rhythmic structure is similar across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded electroencephalograms (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
• Rhythm is reflected in the acoustic envelope of music and speech.
• Cerebral-acoustic coherence (CACoh) measured EEG phase locking with stimulus envelopes.
• CACoh showed cortical tracking of music and speech rhythm.
• CACoh was greater in music than speech, driven by highly trained musicians.
• Musical training correlated positively with music CACoh, but not speech CACoh.
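At its core, cerebral-acoustic coherence is a magnitude-squared coherence between a neural signal and the stimulus envelope; a minimal Welch-style sketch with synthetic signals (all names, parameters, and the toy data are illustrative, not the authors' pipeline):

```python
import numpy as np

def ms_coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence between two signals: average cross- and
    auto-spectra over non-overlapping Hann-windowed segments, then normalize.
    A bare-bones stand-in for CACoh-type measures; real pipelines differ in
    filtering, windowing, and single-trial handling."""
    segs = len(x) // nperseg
    win = np.hanning(nperseg)
    sxx = syy = sxy = 0
    for s in range(segs):
        xs = np.fft.rfft(win * x[s * nperseg:(s + 1) * nperseg])
        ys = np.fft.rfft(win * y[s * nperseg:(s + 1) * nperseg])
        sxx = sxx + np.abs(xs) ** 2
        syy = syy + np.abs(ys) ** 2
        sxy = sxy + xs * np.conj(ys)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    coh = np.abs(sxy) ** 2 / (sxx * syy + 1e-12)
    return freqs, coh

# Shared 4 Hz rhythm buried in independent noise in each "channel"
rng = np.random.default_rng(0)
fs = 128
t = np.arange(fs * 8) / fs
rhythm = np.sin(2 * np.pi * 4 * t)
eeg = rhythm + rng.standard_normal(t.size)      # stand-in for the EEG signal
env = rhythm + rng.standard_normal(t.size)      # stand-in for the envelope
freqs, coh = ms_coherence(eeg, env, fs)          # coherence peaks near 4 Hz
```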
Journal Article
Variance aftereffect within and between sensory modalities for visual and auditory domains
by Ishiguchi, Akira; Ueda, Sachiyo; Yakushijin, Reiko
in Adaptation; Adaptation, Physiological; Adult
2024
We can efficiently grasp various features of the outside world using summary statistics. Among these statistics, variance is an index of information homogeneity or reliability. Previous research has shown that visual variance information in the context of spatial integration is encoded directly as a unique feature, and that currently perceived variance can be distorted by that of preceding stimuli. In this study, we focused on variance perception in temporal integration. We investigated whether any variance aftereffects occurred in visual size and auditory pitch. Furthermore, to examine the mechanism of cross-modal variance perception, we also investigated whether variance aftereffects occur between different modalities. Four experimental conditions (combinations of the sensory modalities of adaptor and test: visual-to-visual, visual-to-auditory, auditory-to-auditory, and auditory-to-visual) were conducted. Participants observed a sequence of visual or auditory stimuli perturbed in size or pitch with a certain variance and performed a variance classification task before and after the variance adaptation phase. We found that in visual size, within-modality adaptation to small or large variance resulted in a variance aftereffect, indicating that variance judgments are biased in the direction away from that of the adapting stimulus. In auditory pitch, within-modality adaptation to small variance caused a variance aftereffect. For cross-modal combinations, adaptation to small variance in visual size resulted in a variance aftereffect; however, the effect was weak, and variance aftereffects did not occur in the other conditions. These findings indicate that the variance information of sequentially presented stimuli is encoded independently in the visual and auditory domains.
Journal Article