4,552 result(s) for "Musical expression"
Touching the audience: musical haptic wearables for augmented and participatory live music performances
This paper introduces musical haptic wearables for audiences (MHWAs), a class of wearable devices for musical applications targeting audiences of live music performances. MHWAs are characterized by embedded intelligence, wireless connectivity to local and remote networks, a system to deliver haptic stimuli, and tracking of gestures and/or physiological parameters. They aim to enrich musical experiences by leveraging the sense of touch as well as providing new capabilities for creative participation. The embedded intelligence enables communication with other external devices, processes input data, and generates music-related haptic stimuli. We validate our vision with two concert-experiments. The first involved a duo of electronic music performers and twenty audience members. Half of the audience used an armband-based MHWA prototype delivering vibro-tactile feedback in response to the performers' actions on their digital musical instruments, while the other half served as a control group. In the second experiment, a smart mandolin performer played live for twelve audience members wearing a gilet-based MHWA, which provided vibro-tactile sensations in response to the performed music. Overall, results from both experiments suggest that MHWAs have the potential to enrich the experience of listening to live music in terms of arousal, valence, enjoyment, and engagement. Nevertheless, the audio-haptic experience was not homogeneous across participants, who could be grouped into those appreciative of the vibrations and those less so. The main causes of a lack of appreciation were the unpleasantness of the vibrations in certain parts of the body and a lack of comprehension of the relation between what was felt and what was heard.
Based on the reported results, we offer suggestions for practitioners interested in designing wearables that enrich the musical experience of live audiences via the sense of touch. These suggestions point towards the need for personalization mechanisms, for systems that minimize the latency between the sound and the vibrations, and for a period of adaptation to the vibrations.
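The signal path described above (performed music in, vibro-tactile stimuli out) can be sketched minimally. The frame size, the RMS-amplitude mapping, and the 8-bit PWM output range below are illustrative assumptions, not the prototypes' actual design.

```python
# Sketch of an amplitude-driven audio-to-vibration mapping for a haptic
# wearable. Assumed: mono audio samples in [-1, 1], vibration motor driven
# by an 8-bit PWM duty cycle.
def audio_to_vibration(samples, frame=256, max_pwm=255):
    """Map the RMS amplitude of each audio frame to a motor PWM level."""
    levels = []
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        rms = (sum(s * s for s in chunk) / frame) ** 0.5
        levels.append(min(max_pwm, int(rms * max_pwm)))
    return levels
```

Consistent with the abstract's findings, such a mapping would in practice also need per-user intensity scaling and low end-to-end latency to be perceived as coherent with the music.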
A Review of Music and Emotion Studies: Approaches, Emotion Models, and Stimuli
The field of music and emotion research has grown rapidly and diversified during the last decade. This has led to a certain degree of confusion and inconsistency between competing notions of emotions, data, and results. The present review of 251 studies describes the focus of prevalent research approaches, methods, and models of emotion, and documents the types of musical stimuli used over the past twenty years. Although self-report approaches to emotion are the most common way of dealing with music and emotions, the use of multiple approaches is becoming increasingly popular. A large majority (70%) of the studies employed variants of the discrete or the dimensional emotion models. A large proportion of stimuli rely on a relatively modest number of familiar classical examples. The evident shortcomings of these prevalent patterns in music and emotion studies are highlighted, and concrete plans of action for future studies are suggested.
Music and movement share a dynamic structure that supports universal expressions of emotion
Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures.
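The generator described above derives matching music and movement from one shared feature set. A minimal sketch of that idea follows; the feature values and the two rendering mappings are invented for illustration and are not the study's estimated parameters.

```python
# Shared dynamic features (all in [0, 1] except direction, which is +/-1).
# The per-emotion values below are invented placeholders.
EMOTION_FEATURES = {
    "happy":    {"rate": 0.9, "jitter": 0.2,  "direction": +1, "step_size": 0.7, "spikiness": 0.3},
    "sad":      {"rate": 0.2, "jitter": 0.1,  "direction": -1, "step_size": 0.3, "spikiness": 0.2},
    "angry":    {"rate": 0.8, "jitter": 0.6,  "direction": -1, "step_size": 0.9, "spikiness": 0.9},
    "peaceful": {"rate": 0.3, "jitter": 0.05, "direction": +1, "step_size": 0.2, "spikiness": 0.1},
}

def to_music(f):
    """Render the shared features as musical parameters."""
    return {
        "tempo_bpm": 60 + 120 * f["rate"],
        "interval_semitones": round(1 + 6 * f["step_size"]),
        "contour": "ascending" if f["direction"] > 0 else "descending",
        "dissonance": f["spikiness"],
    }

def to_movement(f):
    """Render the same features as movement parameters."""
    return {
        "speed": f["rate"],
        "step_length": f["step_size"],
        "direction": "upward" if f["direction"] > 0 else "downward",
        "spikiness": f["spikiness"],
    }
```

Because both renderers read the same feature vector, any emotion-specific combination is expressed equivalently in sound and in motion, which is the structural claim the cross-cultural experiments test.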
Rudolf Arnheim: Perceptive dynamics in musical expression
A pupil of Köhler and von Hornbostel in Berlin, Arnheim published an article in The Musical Quarterly in 1984 in which he applied the principles of visual composition to musical form. In a painting, for example, the forces of visual composition are essential for aesthetic enjoyment; in music, sounds are essential because they always occur in time, and this constitutes the main dynamic vector of music. Starting with the tetrachord of ancient Greek music and analysing the relationships between notes in the major and minor scales, Arnheim identifies the basis to which every note in the Western musical tradition must relate: the tonal centre, the attractive force that determines the perceptual dynamics of musical expression. Two examples, taken from Schubert and Béla Bartók, show how the structure of the melodic construction is essential in generating different states of mind; Arnheim, however, does not stop at the simple elicitation of an emotion, but situates the musical structure in a more general framework of perceptual patterns belonging to any sphere, mental or physical, through which we experience our reality.
Empathy and Emotional Contagion as a Link Between Recognized and Felt Emotions in Music Listening
Previous studies have shown that there is a difference between recognized and induced emotion in music listening. In this study, empathy is tested as a possible moderator between recognition and induction, and is itself moderated by music preference evaluations and other individual and situational features. Preference was also tested to determine whether it had an effect on measures of emotion independently of emotional expression. A web-based experiment gathered emotion, empathy, and preference ratings from 3,164 music listeners in a between-subjects design embedded in a music-personality test. Stimuli were a sample of 23 musical excerpts (each 30 seconds long, five randomly assigned to each participant) from various musical styles chosen to represent different emotions and preferences. Listeners in the recognition rating condition rated measures of valence and arousal significantly differently than listeners in the felt rating condition. Empathy ratings were shown to modulate this relationship: when empathy was present, the difference between the two rating types was reduced. Furthermore, we confirmed preference as one major predictor of empathy ratings. Emotional contagion was tested and confirmed as an additional direct effect of emotional expression on induced emotions. This study is among the first to explicitly test empathy and emotional contagion during music listening, helping to explain the often-reported emotional response to music in everyday life.
Micro-variations in timing and loudness affect music-evoked mental imagery
Music can shape the vividness, sentiment, and content of directed mental imagery. Yet, the role of specific musical features in these effects remains elusive. One important aspect of human musical performances is the presence of micro-variations: small deviations in timbre, pitch, and timing driven by motor and attentional processes. These variations enhance perceived “naturalness” compared to mechanical playing without them. Here, we investigated whether random micro-variation, as opposed to mechanical playing, affects mental imagery characteristics. One hundred participants performed a directed mental imagery task in which they imagined a journey, accompanied either by drumming with micro-variation, drumming without micro-variation, or silence. Participants rated the vividness, distance, and time travelled of their imagined content, alongside giving free-format content responses. A Bayesian multilevel regression model showed that repetitive quasi-isochronous drumming enhanced mental imagery vividness, with a stronger effect observed when the drumming contained random micro-variation. Drumming with random micro-variation also increased imagined distance and time travelled compared with silence. Furthermore, individual traits in absorption, ability to imagine vividly, and level of musical training interacted with auditory conditions to further shape mental imagery characteristics. The findings have implications for the use of music to support imagery in creative, recreational, and therapeutic settings.
The Sense of Music
The fictional Dr. Strabismus sets out to write a new comprehensive theory of music. But music's tendency to deconstruct itself, combined with the complexities of postmodernism, dooms him to failure. This is the parable that frames The Sense of Music, a novel treatment of music theory that reinterprets the modern history of Western music in the terms of semiotics. Based on the assumption that music cannot be described without reference to its meaning, Raymond Monelle proposes that works of the Western classical tradition be analyzed in terms of temporality, subjectivity, and topic theory. Critical of the abstract analysis of musical scores, Monelle argues that the score does not reveal music's sense. That sense--what a piece of music says and signifies--can be understood only with reference to history, culture, and the other arts. Thus, music is meaningful in that it signifies cultural temporalities and themes, from the traditional manly heroism of the hunt to military power to postmodern "polyvocality." This theoretical innovation allows Monelle to describe how the Classical style of the eighteenth century--which he reads as a balance of lyric and progressive time--gave way to the Romantic need for emotional realism. He argues that irony and ambiguity subsequently eroded the domination of personal emotion in Western music as well as literature, killing the composer's subjectivity along with that of the author. This leaves Dr. Strabismus suffering from the postmodern condition, and Raymond Monelle with an exciting, controversial new approach to understanding music and its history.
The Music USE (MUSE) Questionnaire: An Instrument to Measure Engagement in Music
Active engagement with music has been associated with cognitive, emotional, and social benefits, although measures of musicianship are typically limited to music training. A self-report questionnaire was developed to assess both the quality and quantity of different forms of music use, with eight music background items and a further 124 items testing music engagement. Analysis of the engagement items with an initial sample (N = 210; mean age = 37.55 years, SD = 11.31) generated four reliable engagement styles (Cognitive and Emotional Regulation, Engaged Production, Social Connection, Dance and Physical Exercise). Analysis of an independent sample with a refined 50-item scale (N = 124; mean age = 22.78 years, SD = 6.17) supported the findings, further differentiating between “Physical Exercise” and “Dance.” Taken together with the eight music background items, the Music USE (MUSE) questionnaire can be used in a 58-item or a reduced 32-item format. Validity was demonstrated in relationships between music background indices, styles of music engagement, demographics, the brief Music Experience Questionnaire (Werner, Swope, & Heide, 2006), and the Emotion Regulation Questionnaire (Gross & John, 2003). The MUSE offers researchers a sensitive approach to exploring the benefits of music engagement by encapsulating both quality and quantity dimensions of music use.
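Scoring such a questionnaire reduces to averaging item responses within each engagement-style subscale. The sketch below assumes invented item-to-subscale assignments; the real MUSE item keys are not reproduced here.

```python
# Hypothetical subscale scoring for a MUSE-style engagement questionnaire.
# The item-to-subscale assignments are invented for illustration only.
SUBSCALES = {
    "cognitive_emotional_regulation": [1, 5, 9],
    "engaged_production": [2, 6, 10],
    "social_connection": [3, 7, 11],
    "dance_physical_exercise": [4, 8, 12],
}

def score(responses):
    """responses: dict mapping item number to a 1-5 Likert rating.
    Returns the mean rating for each engagement style."""
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }
```

Averaging (rather than summing) keeps subscales with different item counts on a common 1-5 scale, which simplifies comparing engagement styles within one respondent.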
Age and Musical Expertise Influence Emotion Recognition in Music
We investigated how age and musical expertise influence emotion recognition in music. Musically trained and untrained participants from two age cohorts, young and middle-aged adults (N = 80), were presented with music excerpts expressing happiness, peacefulness, sadness, and fear/threat. Participants rated how much each excerpt expressed the four emotions on 10-point scales. The intended emotions were consistently perceived, but responses varied across groups. Advancing age was associated with selective decrements in the recognition of sadness and fear/threat, a finding consistent with previous research (Lima & Castro, 2011a); the recognition of happiness and peacefulness remained stable. Years of music training were associated with enhanced recognition accuracy. These effects were independent of domain-general cognitive abilities and personality traits, but they were echoed in differences in how efficiently music structural cues (e.g., tempo, mode) were relied upon. Thus, age and musical expertise are experiential factors explaining individual variability in emotion recognition in music.
An evaluation of linear and non-linear models of expressive dynamics in classical piano and symphonic music
Expressive interpretation forms an important but complex aspect of music, particularly in Western classical music. Modeling the relation between musical expression and structural aspects of the score being performed is an ongoing line of research. Prior work has shown that some simple numerical descriptors of the score (capturing dynamics annotations and pitch) are effective for predicting expressive dynamics in classical piano performances. Nevertheless, these features have only been tested in a very simple linear regression model. In this work, we explore the potential of non-linear and temporal modeling of expressive dynamics. Using a set of descriptors that capture different types of structure in the musical score, we compare linear models with several non-linear models in a large-scale evaluation on three corpora, involving both piano and orchestral music. To the best of our knowledge, this is the first study in which models of musical expression are evaluated on both types of music. We show that, in addition to being more accurate, non-linear models describe interactions between numerical descriptors that linear models do not.
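The limitation the authors address (score descriptors tested only in a simple linear model) can be illustrated with synthetic data: an ordinary linear fit on two hypothetical descriptors misses their interaction, while the same least-squares machinery on an expanded basis captures it. The descriptor names and coefficients below are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized score descriptors: a dynamics-annotation level
# and a pitch-height feature (names are illustrative, not from the corpora).
dyn = rng.normal(size=n)
pitch = rng.normal(size=n)

# Simulated expressive loudness with an interaction between the descriptors,
# which a purely linear model cannot represent.
loudness = 0.5 * dyn + 0.2 * pitch + 0.8 * dyn * pitch \
    + rng.normal(scale=0.1, size=n)

def fit_predict(X, y):
    """Least-squares fit with an intercept; returns in-sample predictions."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

def mse(pred):
    return float(np.mean((loudness - pred) ** 2))

linear_pred = fit_predict(np.column_stack([dyn, pitch]), loudness)
# "Non-linear" model: the same solver on a basis that adds the interaction.
nonlin_pred = fit_predict(np.column_stack([dyn, pitch, dyn * pitch]), loudness)
```

The interaction-aware fit reaches a far lower mean squared error, mirroring the paper's finding that non-linear models capture interactions between descriptors that linear models cannot.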