Catalogue Search | MBRL
35 results for "Timbre Perception - physiology"
Rivalry between pitch and timbre in auditory stream segregation
2025
Two rapidly alternating tones with different pitches may be perceived as one integrated stream when the pitch differences are small, or as two separate streams when the pitch differences are large. Likewise, timbre differences between two tones may also cause such sequential stream segregation. Moreover, the effects of pitch and timbre on stream segregation may cancel each other out, which is called a trade-off. However, how timbre differences caused by specific patterns of spectral shapes interact with pitch differences and affect stream segregation has been largely unexplored. Therefore, we used stripe tones, in which stripe-like spectral patterns of harmonic complex tones were realized by grouping harmonic components into several bands based on harmonic numbers and removing the components in every other band. Here, we show that 2- and 4-band stimuli elicited distinctive stream segregation against pitch proximity. By contrast, pitch separations dominated stream segregation for 16-band stimuli. The results for 8-band stimuli most clearly showed the trade-off between pitch and timbre on stream segregation. These results suggest that stimuli with a small number (≤ 4) of bands elicit strong stream segregation due to sharp timbral contrasts between stripe-like spectral patterns, and that the auditory system appears to be limited in integrating blocks of frequency components dispersed over frequency and time.
Journal Article
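The stripe-tone construction described in the abstract above (harmonics grouped into bands by harmonic number, with every other band removed) can be sketched in a few lines. The fundamental frequency, harmonic count, band count, and duration below are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def stripe_tone(f0=220.0, n_harmonics=32, n_bands=4, dur=0.5, sr=44100):
    """Synthesize an illustrative 'stripe tone': group the harmonics of a
    complex tone into equal-sized bands by harmonic number and keep only
    every other band, producing a stripe-like spectral pattern."""
    t = np.arange(int(dur * sr)) / sr
    band_size = n_harmonics // n_bands
    tone = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        band = (h - 1) // band_size      # which band this harmonic falls in
        if band % 2 == 0:                # keep alternating bands only
            tone += np.sin(2 * np.pi * f0 * h * t)
    return tone / np.max(np.abs(tone))   # normalize to unit peak

signal = stripe_tone()
```

With 32 harmonics in 4 bands, harmonics 1–8 and 17–24 are kept while 9–16 and 25–32 are removed, so the spectrum shows two "stripes" separated by a silent band.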
Combination of absolute pitch and tone language experience enhances lexical tone perception
by Lau, Joseph C. Y.; Maggu, Akshay R.; Waye, Mary M. Y.
in 631/378/2619; 631/378/2649; 631/378/2649/1594
2021
Absolute pitch (AP), a unique ability to name or produce pitch without any reference, is known to be influenced by genetic and cultural factors. AP and tone language experience are both known to promote lexical tone perception. However, the effects of the combination of AP and tone language experience on lexical tone perception are currently not known. In the current study, using behavioral (Categorical Perception) and electrophysiological (Frequency Following Response) measures, we investigated the effect of the combination of AP and tone language experience on lexical tone perception. We found that the Cantonese speakers with AP outperformed the Cantonese speakers without AP on Categorical Perception and Frequency Following Responses of lexical tones, suggesting an additive effect due to the combination of AP and tone language experience. These findings suggest a role of basic sensory pre-attentive auditory processes towards pitch encoding in AP. Further, these findings imply a common mechanism underlying pitch encoding in AP and tone language perception.
Journal Article
Auditory cortex activity related to perceptual awareness versus masking of tone sequences
2021
• Awareness related negativity (ARN) is evoked by repeated tones heard in a tone cloud.
• This study addressed whether this activity is related to single tones or tone sequences.
• No activity enhancement was observed when tone frequency was provided in retrospect.
• Prominent enhancement was observed when listeners had to identify a whole tone sequence.
• ARN is related to perceptual awareness of a tone sequence under multi-tone masking.
Sequences of repeating tones can be masked by other tones of different frequency. Nevertheless, when these tone sequences are perceived, each tone of the sequence evokes a prominent neural response in the auditory cortex. When the targets are detected based on their isochrony, participants know that they are listening to the target once they have detected it. To explore whether the neural activity is more closely related to this detection task or to perceptual awareness, this magnetoencephalography (MEG) study used targets that could only be identified with cues provided after or before the masked target. In experiment 1, multiple mono-tone streams with jittered inter-stimulus intervals were used, and the tone frequency of the target was indicated by a cue. Results showed no differential auditory cortex activity between hit and miss trials with post-stimulus cues. A late negative response for hit trials was only observed for pre-stimulus cues, suggesting a task-related component. Since experiment 1 provided no evidence for a link between a difference response and tone awareness, experiment 2 was designed to probe whether detection of tone streams was linked to a difference response in auditory cortex. Random-tone sequences were presented in the presence of a multi-tone masker, and each sequence was repeated without the masker thereafter. Results showed a prominent difference wave for hit compared to miss trials in experiment 2, evoked by targets in the presence of the masker. These results suggest that perceptual awareness of tone streams is linked to neural activity in auditory cortex.
Journal Article
On the Relationship Between General Auditory Sensitivity and Speech Perception: An Examination of Pitch and Lexical Tone Perception in 4- to 6-Year-Old Children
2020
Purpose: Theoretical models and substantial research have proposed that general auditory sensitivity is a developmental foundation for speech perception and language acquisition. Nonetheless, controversies exist about the effectiveness of general auditory training in improving speech and language skills. This research investigated the relationships among general auditory sensitivity, phonemic speech perception, and word-level speech perception via the examination of pitch and lexical tone perception in children.
Method: Forty-eight typically developing 4- to 6-year-old Cantonese-speaking children were tested on the discrimination of the pitch patterns of lexical tones in synthetic stimuli, discrimination of naturally produced lexical tones, and identification of lexical tone in familiar words.
Results: The findings revealed that accurate lexical tone discrimination and identification did not necessarily entail the accurate discrimination of nonlinguistic stimuli that followed the pitch levels and pitch shapes of lexical tones. Although pitch discrimination and tone discrimination abilities were strongly correlated, accuracy in pitch discrimination was lower than that in tone discrimination, and nonspeech pitch discrimination ability did not precede linguistic tone discrimination in the developmental trajectory.
Conclusions: Contradicting the theoretical models, the findings of this study suggest that general auditory sensitivity and speech perception may not be causally or hierarchically related. The finding that accuracy in pitch discrimination is lower than that in tone discrimination suggests that comparable nonlinguistic auditory perceptual ability may not be necessary for accurate speech perception and language learning. The results cast doubt on the use of nonlinguistic auditory perceptual training to improve children's speech, language, and literacy abilities.
Journal Article
Timbral brightness perception investigated through multimodal interference
2024
Brightness is among the most studied aspects of timbre perception. Psychoacoustically, sounds described as "bright" versus "dark" typically exhibit a high versus low frequency emphasis in the spectrum. However, relatively little is known about the neurocognitive mechanisms that facilitate these "metaphors we listen with." Do they originate in universal magnitude representations common to more than one sensory modality? Triangulating three different interaction paradigms, we used speeded classification to investigate whether intramodal, crossmodal, and amodal interference occurs when timbral brightness, as modeled by the centroid of the spectral envelope, and pitch height, visual brightness, or numerical value processing are semantically congruent versus incongruent. In four online experiments varying in priming strategy, onset timing, and response deadline, 189 total participants were presented with a baseline stimulus (a pitch, gray square, or numeral) and then, after being primed with a bright or dark synthetic harmonic tone, asked to quickly identify a target stimulus as higher/lower, brighter/darker, or greater/less than the baseline. Results suggest that timbral brightness modulates the perception of pitch and possibly visual brightness, but not numerical value. Semantically incongruent pitch height/timbral brightness pairings produced significantly slower reaction times (RTs) and higher error rates than congruent pairings. In the visual task, incongruent pairings of gray squares and tones elicited slower RTs than congruent pairings (in two experiments). No interference was observed in the number comparison task. These findings shed light on the embodied and multimodal nature of experiencing timbre.
Journal Article
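The abstract above models timbral brightness as the centroid of the spectral envelope. A minimal numpy sketch of that measure (the amplitude-weighted mean frequency of the magnitude spectrum) might look like:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum,
    a standard acoustic correlate of timbral brightness."""
    mag = np.abs(np.fft.rfft(signal))                 # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)    # bin frequencies in Hz
    return np.sum(freqs * mag) / np.sum(mag)
```

For a pure tone, the centroid coincides with the tone's frequency; energy shifted toward higher harmonics raises the centroid, and such sounds tend to be judged "brighter."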
Auditory and vibrotactile interactions in perception of timbre acoustic features
by Zatorre, Robert; Albouy, Philippe; Sharp, Andréanne
in 631/378/2619; 631/378/2620; 631/378/2649/1723
2025
Recently, there has been increasing interest in developing auditory-to-vibrotactile sensory devices. However, the potential of these technologies is constrained by our limited understanding of which features of complex sounds can be perceived through vibrations. The present study aimed to investigate the vibrotactile perception of acoustic features related to timbre, an essential component for identifying environmental, speech, and musical sounds. Discrimination thresholds were measured for six features: three spectral (number of harmonics, harmonic roll-off ratio, even-harmonic attenuation) and three temporal (attack time, amplitude modulation depth, and amplitude modulation frequency) using auditory, vibrotactile, and combined auditory + vibrotactile stimulation in 31 adult humans with normal tactile and auditory sensitivity. Results revealed that all spectral and temporal features can be reliably discriminated via vibrotactile stimulation alone. However, for spectral features, vibrotactile thresholds were significantly higher (i.e., worse) than auditory thresholds, whereas for temporal features only the vibrotactile amplitude modulation frequency threshold was significantly higher. With simultaneous auditory and tactile presentation, thresholds significantly improved for attack time and amplitude modulation depth, but not for any of the spectral acoustic features. These results suggest that vibrotactile temporal cues have a more straightforward potential for assisting auditory perception, while vibrotactile spectral cues may require specialized signal processing schemes.
Journal Article
Vocal Emotion Perception and Musicality—Insights from EEG Decoding
by Schweinberger, Stefan R.; Nussbaum, Christine; Lehnen, Johannes M.
in Acoustic Stimulation; Acoustics; Adult
2025
Musicians have an advantage in recognizing vocal emotions compared to non-musicians, a performance advantage often attributed to enhanced early auditory sensitivity to pitch. Yet a previous ERP study only detected group differences from 500 ms onward, suggesting that conventional ERP analyses might not be sensitive enough to detect early neural effects. To address this, we re-analyzed EEG data from 38 musicians and 39 non-musicians engaged in a vocal emotion perception task. Stimuli were generated using parameter-specific voice morphing to preserve emotional cues in either the pitch contour (F0) or timbre. By employing a neural decoding framework with a Linear Discriminant Analysis classifier, we tracked the evolution of emotion representations over time in the EEG signal. Converging with the previous ERP study, our findings reveal that musicians—but not non-musicians—exhibited significant emotion decoding between 500 and 900 ms after stimulus onset, a pattern observed for F0-Morphs only. These results suggest that musicians’ superior vocal emotion recognition arises from more effective integration of pitch information during later processing stages rather than from enhanced early sensory encoding. Our study also demonstrates the potential of neural decoding approaches using EEG brain activity as a biological sensor for unraveling the temporal dynamics of voice perception.
Journal Article
Contributions of fundamental frequency and timbre to vocal emotion perception and their electrophysiological correlates
by Schweinberger, Stefan R.; Schirmer, Annett; Nussbaum, Christine
in Acoustic properties; Acoustics; Auditory Perception - physiology
2022
Our ability to infer a speaker’s emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.
Journal Article
Effect of Vibrotactile Stimulation on Auditory Timbre Perception for Normal-Hearing Listeners and Cochlear-Implant Users
by Marozeau, Jeremy; Verma, Tushar; Aker, Scott C.
in Acoustic Stimulation - methods; Auditory Perception - physiology; Cochlear Implants
2023
The study tests the hypothesis that vibrotactile stimulation can affect timbre perception. A multidimensional scaling experiment was conducted. Twenty listeners with normal hearing and nine cochlear implant (CI) users were asked to judge the dissimilarity of a set of synthetic sounds that varied in attack time and amplitude modulation depth. The listeners were simultaneously presented with vibrotactile stimuli, which also varied in attack time and amplitude modulation depth. The results showed that alterations to the temporal waveform of the tactile stimuli affected the listeners' dissimilarity judgments of the audio. A three-dimensional analysis revealed evidence of crossmodal processing in which the audio and tactile features combined accounted for the dissimilarity judgments. For the normal-hearing listeners, 86% of the first dimension was explained by audio impulsiveness and 14% by tactile impulsiveness; 75% of the second dimension was explained by audio roughness, or fast amplitude modulation, while its tactile counterpart explained 25%. Interestingly, the third dimension revealed a combination of 43% audio impulsiveness and 57% tactile amplitude modulation. For the CI listeners, the first dimension was mostly accounted for by tactile roughness and the second by audio impulsiveness. This experiment shows that the perception of timbre can be affected by tactile input and could lead to the development of new audio-tactile devices for people with hearing impairment.
Journal Article
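The multidimensional scaling analysis in the study above recovers a low-dimensional stimulus space from pairwise dissimilarity judgments. A minimal classical (Torgerson) MDS sketch, with a toy set of points standing in for listeners' rating data, might look like:

```python
import numpy as np

def classical_mds(d, k=3):
    """Classical (Torgerson) MDS: embed n items in k dimensions from an
    n x n symmetric matrix of pairwise dissimilarities d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered squared dissimilarities
    vals, vecs = np.linalg.eigh(b)        # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]      # keep the k largest
    vals = np.clip(vals[top], 0.0, None)  # guard against tiny negative values
    return vecs[:, top] * np.sqrt(vals)

# Toy example: distances among 4 points in a plane are recovered exactly
# (up to rotation/reflection) by a 2-D embedding.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
coords = classical_mds(d, k=2)
```

In a perceptual study the matrix `d` would instead hold averaged dissimilarity ratings, and the recovered dimensions would then be interpreted against acoustic and tactile stimulus features, as in the regression of dimensions onto impulsiveness and roughness reported above.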
Something in the Way She Sings: Enhanced Memory for Vocal Melodies
2012
Across species, there is considerable evidence of preferential processing for biologically significant signals such as conspecific vocalizations and the calls of individual conspecifics. Surprisingly, music cognition in human listeners is typically studied with stimuli that are relatively low in biological significance, such as instrumental sounds. The present study explored the possibility that melodies might be remembered better when presented vocally rather than instrumentally. Adults listened to unfamiliar folk melodies, with some presented in familiar timbres (voice and piano) and others in less familiar timbres (banjo and marimba). They were subsequently tested on recognition of previously heard melodies intermixed with novel melodies. Melodies presented vocally were remembered better than those presented instrumentally even though they were liked less. Factors underlying the advantage for vocal melodies remain to be determined. In line with its biological significance, vocal music may evoke increased vigilance or arousal, which in turn may result in greater depth of processing and enhanced memory for musical details.
Journal Article