871 results for "Dean, Roger T."
Emotional responses in Papua New Guinea show negligible evidence for a universal effect of major versus minor music
Music is a vital part of most cultures and has a strong impact on emotions [1–5]. In Western cultures, emotive valence is strongly influenced by major and minor melodies and harmony (chords and their progressions) [6–13]. Yet, how pitch and harmony affect our emotions, and to what extent these effects are culturally mediated or universal, is hotly debated [2, 5, 14–20]. Here, we report an experiment conducted in a remote cloud forest region of Papua New Guinea, across several communities with similar traditional music but differing levels of exposure to Western-influenced tonal music. One hundred and seventy participants were presented with pairs of major and minor cadences (chord progressions) and melodies, and chose which of them made them happier. The experiment was repeated by 60 non-musicians and 19 musicians in Sydney, Australia. Bayesian analyses show that, for cadences, there is strong evidence that greater happiness was reported for major than minor in every community except one: the community with minimal exposure to Western-like music. For melodies, there is strong evidence that greater happiness was reported for those with higher mean pitch (major melodies) than those with lower mean pitch (minor melodies) in only one of the three PNG communities and in both Sydney groups. The results show that the emotive valence of major and minor is strongly associated with exposure to Western-influenced music and culture, although we cannot exclude the possibility of universality.
Perception of affect in unfamiliar musical chords
This study investigates the role of extrinsic and intrinsic predictors in the perception of affect in mostly unfamiliar musical chords from the Bohlen-Pierce microtonal tuning system. Extrinsic predictors are derived, in part, from long-term statistical regularities in music; for example, the prevalence of a chord in a corpus of music that is relevant to a participant. Conversely, intrinsic predictors make no use of long-term statistical regularities in music; for example, psychoacoustic features inherent in the music, such as roughness. Two types of affect were measured for each chord: pleasantness/unpleasantness and happiness/sadness. We modelled the data with a number of novel and well-established intrinsic predictors, namely roughness, harmonicity, spectral entropy and average pitch height; and a single extrinsic predictor, 12-TET Dissimilarity, which was estimated by the chord's smallest distance to any 12-tone equally tempered chord. Musical sophistication was modelled as a potential moderator of the above predictors. Two experiments were conducted, each using slightly different tunings of the Bohlen-Pierce musical system: a just intonation version and an equal-tempered version. It was found that, across both tunings and across both affective responses, all the tested intrinsic features and 12-TET Dissimilarity have consistent influences in the expected direction. These results contrast with much current music perception research, which tends to assume the dominance of extrinsic over intrinsic predictors. This study highlights the importance of both intrinsic characteristics of the acoustic signal itself, as well as extrinsic factors, such as 12-TET Dissimilarity, on perception of affect in music.
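The extrinsic predictor above is defined only as the chord's smallest distance to any 12-tone equally tempered chord; the exact metric is not specified here. A minimal sketch, assuming a per-note distance in cents to the nearest 12-TET pitch (12-TET pitches form a uniform 100-cent grid) summed over the chord:

```python
import math

def cents_to_nearest_12tet(cents):
    """Distance in cents from a pitch to the nearest 12-TET pitch.
    12-TET pitches lie on multiples of 100 cents, so this is at most 50."""
    return abs(((cents + 50) % 100) - 50)

def dissimilarity_12tet(chord_cents):
    """Distance from a chord to the nearest 12-TET chord, taken here as the
    sum of per-note distances (the 100-cent grid is uniform, so the nearest
    12-TET chord can be found note by note)."""
    return sum(cents_to_nearest_12tet(c) for c in chord_cents)

# Equal-tempered Bohlen-Pierce divides the 3:1 "tritave" into 13 equal steps.
bp_step = 1200 * math.log2(3) / 13        # ≈ 146.3 cents
chord = [0.0, 3 * bp_step, 6 * bp_step]   # a BP chord, in cents above its root
print(round(dissimilarity_12tet(chord), 1))   # → 61.1
```

The study may weight notes differently or use another norm; the point is only that greater deviation from the 100-cent grid yields greater dissimilarity, and that 12-TET chords themselves score zero.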
Evidence for a universal association of auditory roughness with musical stability
We provide evidence that the roughness of chords—a psychoacoustic property resulting from unresolved frequency components—is associated with perceived musical stability (operationalized as finishedness) in participants with differing levels and types of exposure to Western or Western-like music. Three groups of participants were tested in a remote cloud forest region of Papua New Guinea (PNG), and two groups in Sydney, Australia (musicians and non-musicians). Unlike prominent prior studies of consonance/dissonance across cultures, we framed the concept of consonance as stability rather than as pleasantness. We find a negative relationship between roughness and musical stability in every group including the PNG community with minimal experience of musical harmony. The effect of roughness is stronger for the Sydney participants, particularly musicians. We find an effect of harmonicity—a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as produced by human vowel sounds)—only in the Sydney musician group, which indicates this feature’s effect is mediated via a culture-dependent mechanism. In sum, these results underline the importance of both universal and cultural mechanisms in music cognition, and they suggest powerful implications for understanding the origin of pitch structures in Western tonal music, as well as for possibilities for new musical forms that align with humans’ perceptual and cognitive biases. They also highlight the importance of how consonance/dissonance is operationalized and explained to participants—particularly those with minimal prior exposure to musical harmony.
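Roughness has standard computational models; the study's own implementation is not reproduced here, but one widely used approximation (Sethares's parameterization of the Plomp-Levelt dissonance curves, using its published constants) conveys the idea: roughness is zero at unison, peaks for small frequency separations, and vanishes for wide intervals.

```python
import math

def pair_roughness(f1, f2):
    """Roughness of two pure tones, per Sethares's approximation of the
    Plomp-Levelt dissonance curves (constants from Sethares, 1993)."""
    f_min, f_max = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * f_min + 19)    # scales the curve with register
    x = s * (f_max - f_min)
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

def chord_roughness(freqs):
    """Summed pairwise roughness of a chord's pure-tone components."""
    return sum(pair_roughness(freqs[i], freqs[j])
               for i in range(len(freqs))
               for j in range(i + 1, len(freqs)))

# A semitone clash near A4 is far rougher than an octave:
print(round(pair_roughness(440, 466), 3))   # near the roughness peak
print(round(pair_roughness(440, 880), 3))   # ≈ 0 for wide intervals
```

Real chords would be decomposed into their partials (with amplitude weights) before summing; the sketch keeps pure tones for clarity.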
Practice-led Research, Research-led Practice in the Creative Arts
This book addresses one of the most exciting and innovative developments within higher education: the rise in prominence of the creative arts and the accelerating recognition that creative practice is a form of research.
Acoustic Intensity Causes Perceived Changes in Arousal Levels in Music: An Experimental Investigation
Listener perceptions of changes in the arousal expressed by classical music have been found to correlate with changes in sound intensity/loudness over time. This study manipulated the intensity profiles of different pieces of music in order to test the causal nature of this relationship. Listeners (N = 38) continuously rated their perceptions of the arousal expressed by each piece. An extract from Dvořák's Slavonic Dance Opus 46 No 1 was used to create a variant in which the direction of change in intensity was inverted, while other features were retained. Even though it was only intensity that was inverted, perceived arousal was also inverted. The original intensity profile was also superimposed on three new pieces of music. The time variation in the perceived arousal of all pieces was similar to their intensity profile. Time series analyses revealed that intensity variation was a major influence on the arousal perception in all pieces, in spite of their stylistic diversity.
Origins of 1/f noise in human music performance from short-range autocorrelations related to rhythmic structures
1/f fluctuations have been described in numerous physical and biological processes. This noise structure describes an inverse relationship between the intensity and frequency of events in a time series (for example reflected in power spectra), and is believed to indicate long-range dependence, whereby events at one time point influence events many observations later. 1/f noise has been identified in rhythmic behaviors, such as music, and is typically attributed to long-range correlations. However, short-range dependence in musical performance is a well-established finding, and past research has suggested that 1/f can arise from multiple continuing short-range processes. We tested this possibility using simulations and time-series modeling, complemented by traditional analyses using power spectra and detrended fluctuation analysis (as often adopted more recently). Our results show that 1/f-type fluctuations in musical contexts may be explained by short-range models involving multiple time lags, and the temporal ranges in which rhythmic hierarchies are expressed are apt to create these fluctuations through such short-range autocorrelations. We also analyzed gait, heartbeat, and resting-state EEG data, demonstrating the coexistence of multiple short-range processes and 1/f fluctuation in a variety of phenomena. This suggests that 1/f fluctuation might not indicate long-range correlations, and points to its likely origins in musical rhythm and related structures.
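The central claim, that 1/f-type spectra can emerge from superposed short-range processes, can be illustrated without reproducing the paper's simulations. The sketch below sums the analytic spectra of unit-variance AR(1) processes (each strictly short-range) with log-spaced correlation times; the aggregate log-log spectral slope comes out near -1, the signature of 1/f noise.

```python
import math

def ar1_psd(f, a):
    """Power spectral density of a unit-variance AR(1) process
    x[t] = a*x[t-1] + noise, at normalized frequency f (cycles/sample)."""
    w = 2 * math.pi * f
    return (1 - a * a) / (1 + a * a - 2 * a * math.cos(w))

# Short-range components: AR(1) processes with log-spaced correlation times.
taus = [10 ** (k / 10) for k in range(41)]     # 1 ... 10^4 samples
coeffs = [math.exp(-1.0 / t) for t in taus]    # all |a| < 1: short-range

# Aggregate spectrum, sampled between the components' corner frequencies.
freqs = [10 ** (-3 + 1.5 * k / 199) for k in range(200)]
spectrum = [sum(ar1_psd(f, a) for a in coeffs) for f in freqs]

# Least-squares slope of log S vs log f: near -1 means 1/f-type noise.
xs = [math.log10(f) for f in freqs]
ys = [math.log10(s) for s in spectrum]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
print(round(slope, 2))   # close to -1
```

Each component alone has a flat spectrum below its corner frequency and decays as 1/f² above it; only their superposition approximates 1/f across the band, which is the intuition behind the paper's short-range account.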
What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics
Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
Musical Expertise and the Ability to Imagine Loudness
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant's imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
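Dynamic time warping, one of the two similarity measures named above, aligns two profiles while tolerating local timing differences. A minimal textbook implementation (not the study's code) illustrates why it suits comparing imagined and reference loudness profiles:

```python
def dtw_distance(x, y):
    """Dynamic time warping distance between two 1-D profiles.

    Allows locally stretched or compressed timing, so two loudness
    curves with the same shape but different pacing still score
    as similar."""
    n, m = len(x), len(y)
    INF = float("inf")
    # cost[i][j] = DTW distance between x[:i] and y[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch x
                                 cost[i][j - 1],      # stretch y
                                 cost[i - 1][j - 1])  # match step
    return cost[n][m]

# A crescendo and the same crescendo performed more slowly:
reference = [1, 2, 3, 4, 5]
imagined = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
print(dtw_distance(reference, imagined))         # → 0.0 (same shape)
print(dtw_distance(reference, [5, 4, 3, 2, 1]))  # → 12.0 (inverted shape)
```

A plain point-by-point correlation would penalize the slower performance; DTW recovers the shared shape, which is the property needed when a participant's imagined tempo drifts relative to the reference recording.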
How Different Are Our Perceptions of Equal-Tempered and Microtonal Intervals? A Behavioural and EEG Survey
For listeners familiar with Western twelve-tone equal-tempered (12-TET) music, a novel microtonal tuning system is expected to present additional processing challenges. We aimed to determine whether this was the case, focusing on the extent to which our perceptions can be considered bottom-up (psychoacoustic and primarily perceptual) and top-down (dependent on familiarity and cognitive processing). We elicited both overt response ratings, and covert event-related potentials (ERPs), so as to compare subjective impressions of sounds with the neurophysiological processing of the acoustic signal. We hypothesised that microtonal intervals are perceived differently from 12-TET intervals, and that the responses of musicians (n = 10) and non-musicians (n = 10) are distinct. Two-note chords were presented comprising 12-TET intervals (consonant and dissonant) or microtonal (quarter tone) intervals, and ERP, subjective roughness ratings, and liking ratings were recorded successively. Musical experience mediated the perception of differences between dissonant and microtone intervals, with non-musicians giving similar ratings for each, and musicians preferring dissonant over the less commonly used microtonal intervals, rating them as less rough. ERP response amplitude was greater for consonant intervals than other intervals. Musical experience interacted with interval type, suggesting that musical expertise facilitates the sensory and perceptual discrimination of microtonal intervals from 12-TET intervals, and an increased ability to categorize such intervals. Non-musicians appear to have perceived microtonal intervals as instances of neighbouring 12-TET intervals.
Exploring the Comprehensibility of Ten Different Musical Notation Systems and Underlying Factors
Numerous systems of musical notation have been developed to address some of the complexities associated with conventional Staff notation, such as translating it into physical movements and memorizing the meaning of its symbols. Surprisingly, there has been little empirical research assessing and comparing the comprehensibility of conventional versus alternative notation methods. In this study, three main features were assessed for 10 different musical notation systems: discriminability (the ease of visually distinguishing pitch or duration changes in notation), iconicity (extent of resemblance between melodies and notation), and complexity. A total of 213 valid responses were collected in an online experiment. Participants completed two tasks, visual discriminability and melody-notation matching. They also provided complexity ratings for different notational systems. Multilevel Bayesian regression models show strong evidence that Figurenotes, Numbered notation, and Piano Roll notation have a relatively high level of discriminability, while Figurenotes, Proportional notation, Staff notation, and Piano Roll notation have a relatively high level of iconicity. Piano Roll notation was rated the least complex musical notation system. Differences in the results across pitch and duration dimensions, age, and musical sophistication were also found. Importantly, we also examined the effects of the different visual variables used by the notational systems (color, position, shape): changes in position have the highest discriminability, iconicity, and the lowest complexity. Qualitative analysis for some open questions also supported Piano Roll notation as being the most favorable musical notation, especially among novices.