65 result(s) for "Smit, Eline A."
Musical aptitude moderates ease and vividness but not frequency of the speech-to-song illusion
Repetitions of a spoken phrase can induce a perceptual illusion in which speech transforms into song, known as the speech-to-song illusion. Speech acoustics that share certain pitch and timing properties with songs seem to be involved in facilitating the illusion, with a recent proposal suggesting that the illusion hinges on the individual ability to detect musical features latently present in speech. The current study tests this proposal by manipulating pitch and timing features of spoken phrases and examining how musical aptitude of listeners (specifically, their sensitivity to disruptions of musical melody and beat timing) moderates their experience of the illusion. The results show that the illusion is perceived by everyone regardless of their musical aptitude, with phrases that contain stable pitch and long periods of high sonority transforming more frequently. However, musical aptitude does moderate the speed and the strength of the illusion. Listeners with lower beat perception ability experience the illusion faster, which suggests involvement of temporal distortion processes during repetitions. Listeners with higher melody perception ability experience the illusion more strongly, which indicates involvement of musical pitch extraction rather than pitch distortion. These findings contribute new evidence on the complexity of an illusory experience in the auditory domain.
Evidence for a universal association of auditory roughness with musical stability
We provide evidence that the roughness of chords—a psychoacoustic property resulting from unresolved frequency components—is associated with perceived musical stability (operationalized as finishedness) in participants with differing levels and types of exposure to Western or Western-like music. Three groups of participants were tested in a remote cloud forest region of Papua New Guinea (PNG), and two groups in Sydney, Australia (musicians and non-musicians). Unlike prominent prior studies of consonance/dissonance across cultures, we framed the concept of consonance as stability rather than as pleasantness. We find a negative relationship between roughness and musical stability in every group, including the PNG community with minimal experience of musical harmony. The effect of roughness is stronger for the Sydney participants, particularly musicians. We find an effect of harmonicity—a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as produced by human vowel sounds)—only in the Sydney musician group, which indicates this feature’s effect is mediated via a culture-dependent mechanism. In sum, these results underline the importance of both universal and cultural mechanisms in music cognition, and they suggest powerful implications for understanding the origin of pitch structures in Western tonal music, as well as for new musical forms that align with humans’ perceptual and cognitive biases. They also highlight the importance of how consonance/dissonance is operationalized and explained to participants—particularly those with minimal prior exposure to musical harmony.
Explaining L2 Lexical Learning in Multiple Scenarios: Cross-Situational Word Learning in L1 Mandarin L2 English Speakers
Adults commonly struggle with perceiving and recognizing the sounds and words of a second language (L2), especially when the L2 sounds do not have a counterpart in the learner’s first language (L1). We examined how L1 Mandarin L2 English speakers learned pseudo English words within a cross-situational word learning (CSWL) task previously presented to monolingual English and bilingual Mandarin-English speakers. CSWL is ambiguous because participants are not provided with direct mappings of words and object referents. Rather, learners discern word-object correspondences through tracking multiple co-occurrences across learning trials. The monolinguals and bilinguals tested in previous studies showed lower performance for pseudo words that formed vowel minimal pairs (e.g., /dit/-/dɪt/) than pseudo words which formed consonant minimal pairs (e.g., /bɔn/-/pɔn/) or non-minimal pairs which differed in all segments (e.g., /bɔn/-/dit/). In contrast, L1 Mandarin L2 English listeners struggled to learn all word pairs. We explain this seemingly contradictory finding by considering the multiplicity of acoustic cues in the stimuli presented to all participant groups. Stimuli were produced in infant-directed speech (IDS) in order to compare performance by children and adults and because previous research had shown that IDS enhances L1 and L2 acquisition. We propose that the suprasegmental pitch variation in the vowels typical of IDS stimuli might be perceived as lexical tone distinctions by tonal language speakers who cannot fully inhibit their L1 activation, resulting in high lexical competition and diminished learning during an ambiguous word learning task. Our results are in line with the Second Language Linguistic Perception (L2LP) model, which proposes that fine-grained acoustic information from multiple sources and the ability to switch between language modes affect non-native phonetic and lexical development.
Tuning the Musical Mind: Next Steps in Solving the Puzzle of the Cognitive Transfer of Musical Training to Language and Back
A growing body of research has been studying cognitive benefits that arise from music training in childhood or adulthood. Many studies focus specifically on the cognitive transfer of music training to language skill, with the aim of preventing language deficits and disorders and improving speech. However, predicted transfer effects are not always documented and not all findings replicate. While we acknowledge the important work that has been done in this field, we highlight the limitations of the persistent dichotomy between musicians and nonmusicians and argue that future research would benefit from a movement towards skill-based continua of musicianship instead of the currently widely practiced dichotomization of participants into groups of musicians and nonmusicians. Culturally situated definitions of musicianship as well as higher awareness of language diversity around the world are key to the understanding of potential cognitive transfers from music to language (and back). We outline a gradient approach to the study of the musical mind and suggest the next steps that could be taken to advance the field.
The role of native language and beat perception ability in the perception of speech rhythm
The perception of rhythm has been studied across a range of auditory signals, with speech presenting one of the particularly challenging cases to capture and explain. Here, we asked if rhythm perception in speech is guided by perceptual biases arising from native language structures, if it is shaped by the cognitive ability to perceive a regular beat, or a combination of both. Listeners of two prosodically distinct languages - English and French - heard sentences (spoken in their native and the foreign language, respectively) and compared the rhythm of each sentence to its drummed version (presented at inter-syllabic, inter-vocalic, or isochronous intervals). While English listeners tended to map sentence rhythm onto inter-vocalic and inter-syllabic intervals in this task, French listeners showed a perceptual preference for inter-vocalic intervals only. The native language tendency was equally apparent in the listeners' foreign language and was enhanced by individual beat perception ability. These findings suggest that rhythm perception in speech is shaped primarily by listeners' native language experience with a lesser influence of innate cognitive traits.
Perceived Emotions of Harmonic Cadences
Harmonic cadences are chord progressions that play an important structural role in Western classical music – they demarcate musical phrases and contribute to the tonality. This study examines participants’ ratings of the perceived arousal and valence of a variety of harmonic cadences. Manipulations included the type of cadence (authentic, plagal, half, and deceptive), its mode (major or minor), its average pitch height (the transposition of the cadence), the presence of a single tetrad (a dissonant four-tone chord), and the mode (major or minor) of the cadence’s final chord. With the exception of average pitch height, the manipulations had only small effects on arousal. However, the perceived valence of major cadences was substantially higher than for minor cadences, and average pitch height had a medium-sized positive effect. Plagal cadences, the inclusion of a tetrad, and ending on a minor chord all had weak negative effects on valence. The present findings are discussed in light of contemporary music theory and music psychology, as knowledge of how specific acoustic components and musical structures impact emotion perception in music is important for performance practice and music-based therapies.
The Need for Composite Models of Music Perception
In the article “Consonance preferences within an unconventional tuning system,” Friedman and colleagues (2021) examine consonance ratings of a large range of dyads and triads from the Bohlen-Pierce chromatic just (BPCJ) scale. The study is designed as a replication of a recent paper by Bowling, Purves, and Gill (2018), which proposes that perception of consonance in dyads, triads, and tetrads can be predicted by their harmonic similarity to human vocalisations. In this commentary, we would like to correct some interpretations regarding Friedman et al.’s (2021) discussion of our paper (Smit, Milne, Dean, & Weidemann, 2019), as well as express some concerns regarding the statistical methods used. We also propose a stronger emphasis on the use of, as named by Friedman et al., composite models as a range of recent evidence strongly suggests that no single acoustic measure can fully predict the complex experience of consonance.