6,131 results for "Musical chords"
Processing of hierarchical syntactic structure in music
Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.
The basis of musical consonance as revealed by congenital amusia
Some combinations of musical notes sound pleasing and are termed "consonant," but others sound unpleasant and are termed "dissonant." The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely believed to be the product of "beating": interference between frequency components in the cochlea that is more pronounced in dissonant than in consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, have also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics' preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, nor could they discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead point to harmonicity as a foundation of music preferences, and illustrate how amusia may be used to investigate normal auditory function.
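The two acoustic properties this study isolates, beating and harmonicity, can be made concrete numerically. Below is a minimal sketch, not the study's stimuli or measures: the beating rate of two pure tones is simply their frequency difference, and a crude harmonicity score asks how well a set of partials fits an integer-multiple series above some fundamental.

```python
import numpy as np

def beat_rate(f1, f2):
    """Beating rate (Hz) between two pure tones: |f1 - f2|."""
    return abs(f1 - f2)

def harmonicity(partials, f0_grid=np.arange(50.0, 500.0, 0.5)):
    """Crude harmonicity score: largest fraction of partials lying within
    1% of an integer multiple of any candidate fundamental."""
    best = 0.0
    for f0 in f0_grid:
        ratios = np.asarray(partials) / f0
        hits = np.abs(ratios - np.round(ratios)) < 0.01 * ratios
        best = max(best, hits.mean())
    return best

print(harmonicity([200, 400, 600, 800]))   # harmonic tone: ~1.0
print(harmonicity([200, 413, 622, 839]))   # inharmonic tone: lower
print(beat_rate(440, 446))                 # 6 Hz beating
```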
A corpus analysis of rock harmony
In this study, we report a corpus analysis of rock harmony. As a corpus, we used Rolling Stone magazine's list of the ‘500 Greatest Songs of All Time’; we took the 20 top-ranked songs from each decade (the 1950s through the 1990s), creating a set of 100 songs. Both authors analysed all 100 songs by hand, using conventional Roman numeral symbols. Agreement between the two sets of analyses was over 90 per cent. The analyses were encoded using a recursive notation, similar to a context-free grammar, allowing repeating sections to be encoded succinctly. The aggregate data were then subjected to a variety of statistical analyses. We examined the frequency of different chords and chord transitions. The results showed that IV is the most common chord after I and is especially common preceding the tonic. Other results concern the frequency of different root motions, patterns of co-occurrence between chords, and changes in harmonic practice across time.
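The corpus statistics reported here (chord frequencies and transition frequencies) are straightforward to compute once each song is reduced to a sequence of Roman-numeral symbols. A minimal sketch over a made-up toy corpus, not the authors' encoded data:

```python
from collections import Counter

# Toy corpus: each song as a list of Roman-numeral chord symbols
# (hypothetical data, not the authors' Rolling Stone corpus).
corpus = [
    ["I", "IV", "I", "V", "I"],
    ["I", "bVII", "IV", "I"],
    ["vi", "IV", "I", "V", "I"],
]

chord_counts = Counter(c for song in corpus for c in song)
transitions = Counter(t for song in corpus
                      for t in zip(song, song[1:]))

print(chord_counts.most_common(3))    # chord frequencies
print(transitions[("IV", "I")])       # how often IV precedes the tonic
```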
On Musical Dissonance
Psychoacoustic theories of dissonance often follow Helmholtz and attribute it to partials (fundamental frequencies or overtones) near enough in frequency to affect the same region of the basilar membrane and therefore to cause roughness, i.e., rapid beating. In contrast, tonal theories attribute dissonance to violations of harmonic principles embodied in Western music. We propose a dual-process theory that embeds roughness within tonal principles. The theory predicts the robust increasing trend in the dissonance of triads: major < minor < diminished < augmented. Previous experiments used too few chords for a comprehensive test of the theory, and so Experiment 1 examined the rated dissonance of all 55 possible three-note chords, and Experiment 2 examined a representative sample of 48 of the possible four-note chords. The participants' ratings concurred reliably and corroborated the dual-process theory. Experiment 3 showed that, as the theory predicts, consonant chords are rated as less dissonant when they occur in a tonal sequence (the cycle of fifths) than in a random sequence, whereas this manipulation has no reliable effect on dissonant chords outside common musical practice.
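For a concrete handle on "roughness, i.e., rapid beating," the sketch below applies one standard psychoacoustic roughness model, Sethares' parameterization of the Plomp-Levelt curve, to the partials of two triads. This is an illustrative stand-in, not the paper's dual-process theory, which layers tonal principles on top of roughness; under the pure roughness model the augmented triad should already score higher than the major triad.

```python
import itertools
import math

def pair_roughness(f1, a1, f2, a2):
    """Plomp-Levelt roughness of one partial pair (Sethares' 1993 fit)."""
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)   # critical-bandwidth scaling
    d = abs(f2 - f1)
    return a1 * a2 * (math.exp(-3.5 * s * d) - math.exp(-5.75 * s * d))

def chord_roughness(fundamentals, n_partials=6):
    """Total pairwise roughness over the first harmonics of each note."""
    partials = [(k * f, 1.0 / k)              # 1/k amplitude roll-off
                for f in fundamentals for k in range(1, n_partials + 1)]
    return sum(pair_roughness(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in itertools.combinations(partials, 2))

major = [261.63, 329.63, 392.00]       # C E G
augmented = [261.63, 329.63, 415.30]   # C E G#
print(chord_roughness(major) < chord_roughness(augmented))  # expected: True
```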
Evolution of music by public choice
Music evolves as composers, performers, and consumers favor some musical variants over others. To investigate the role of consumer selection, we constructed a Darwinian music engine consisting of a population of short audio loops that sexually reproduce and mutate. This population evolved for 2,513 generations under the selective influence of 6,931 consumers who rated the loops' aesthetic qualities. We found that the loops quickly evolved into music attributable, in part, to the evolution of aesthetically pleasing chords and rhythms. Later, however, evolution slowed. Applying the Price equation, a general description of evolutionary processes, we found that this stasis was mostly attributable to a decrease in the fidelity of transmission. Our experiment shows how cultural dynamics can be explained in terms of competing evolutionary forces.
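The Price equation invoked here decomposes the change in a population's mean trait value into a selection term and a transmission term, Δz̄ = Cov(w, z)/w̄ + E(w·Δz)/w̄, where z_i is individual i's trait value, w_i its number of offspring, and Δz_i the mean parent-to-offspring change in the trait. A minimal numerical check of that identity on made-up data:

```python
import numpy as np

# Hypothetical parent population: trait values z, offspring counts w,
# and mean parent-to-offspring trait changes dz (toy numbers).
z = np.array([0.2, 0.5, 0.8, 0.4])
w = np.array([1.0, 3.0, 2.0, 0.0])
dz = np.array([0.05, -0.02, 0.01, 0.0])

w_bar = w.mean()
selection = np.cov(w, z, bias=True)[0, 1] / w_bar   # Cov(w, z) / w_bar
transmission = np.mean(w * dz) / w_bar              # E(w * dz) / w_bar

# Direct computation of the offspring generation's mean trait:
z_offspring = np.sum(w * (z + dz)) / np.sum(w)
print(np.isclose(selection + transmission, z_offspring - z.mean()))  # True
```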
Musicianship Modulates Cortical Effects of Attention on Processing Musical Triads
Background: Many studies have demonstrated the benefits of long-term music training (i.e., musicianship) on the neural processing of sound, including simple tones and speech. However, the effects of musicianship on the encoding of simultaneously presented pitches, in the form of complex musical chords, are less well established. Presumably, musicians’ stronger familiarity and active experience with tonal music might enhance harmonic pitch representations, perhaps in an attention-dependent manner. Additionally, attention might influence chordal encoding differently across the auditory system. To this end, we explored the effects of long-term music training and attention on the processing of musical chords at the brainstem and cortical levels. Method: Young adult participants were separated into musician and nonmusician groups based on the extent of formal music training. While recording EEG, listeners heard isolated musical triads that differed only in the chordal third: major, minor, and detuned (third 4% sharper than in the major chord). Participants were asked to correctly identify chords via key press during active stimulus blocks and watched a silent movie during passive blocks. We logged behavioral identification accuracy and reaction times and calculated information transfer based on the behavioral chord confusion patterns. EEG data were analyzed separately to distinguish between cortical (event-related potential, ERP) and subcortical (frequency-following response, FFR) evoked responses. Results: We found musicians were (expectedly) more accurate, though not faster, than nonmusicians in chordal identification. For subcortical FFRs, responses showed stimulus chord effects but no group differences. However, for cortical ERPs, whereas musicians displayed P2 (~150 ms) responses that were invariant to attention, nonmusicians displayed reduced P2 during passive listening. Listeners’ degree of behavioral information transfer (i.e., success in distinguishing chords) was also better in musicians and correlated with their neural differentiation of chords in the ERPs (but not high-frequency FFRs). Conclusions: Our preliminary results suggest long-term music training strengthens even the passive cortical processing of musical sounds, supporting more automated brain processing of musical chords with less reliance on attention. Our results also suggest that the degree to which listeners can behaviorally distinguish chordal triads is directly related to their neural specificity to musical sounds primarily at cortical rather than subcortical levels. FFR attention effects were likely not observed due to the use of high-frequency stimuli (>220 Hz), which restrict FFRs to brainstem sources.
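For concreteness, the three stimulus classes differ only in the frequency of the chordal third. The sketch below synthesizes sine-tone versions on an assumed 220 Hz root with equal-tempered intervals; these are illustrative parameters, not the study's actual stimuli:

```python
import numpy as np

def triad_wave(root_hz, third_semitones, detune=1.0, sr=44100, dur=0.5):
    """Sine-tone triad: root, third (optionally detuned), perfect fifth."""
    t = np.arange(int(sr * dur)) / sr
    third = root_hz * 2 ** (third_semitones / 12) * detune
    fifth = root_hz * 2 ** (7 / 12)
    return sum(np.sin(2 * np.pi * f * t) for f in (root_hz, third, fifth)) / 3

root = 220.0                                 # assumed root frequency
major   = triad_wave(root, 4)                # major third
minor   = triad_wave(root, 3)                # minor third
detuned = triad_wave(root, 4, detune=1.04)   # third 4% sharper than major
```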
Generalized Voice-Leading Spaces
Western musicians traditionally classify pitch sequences by disregarding the effects of five musical transformations: octave shift, permutation, transposition, inversion, and cardinality change. We model this process mathematically, showing that it produces 32 equivalence relations on chords, 243 equivalence relations on chord sequences, and 32 families of geometrical quotient spaces, in which both chords and chord sequences are represented. This model reveals connections between music-theoretical concepts, yields new analytical tools, unifies existing geometrical representations, and suggests a way to understand similarity between chord types.
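The count of 32 = 2^5 arises because each of the five transformations can independently be factored out or not; the 243 = 3^5 relations on chord sequences arise because each transformation can additionally be applied uniformly to a whole sequence or individually to its chords. The sketch below computes a canonical form for a chord under a chosen subset of the octave (O), permutation (P), and transposition (T) equivalences; it is a toy encoding for illustration, not the authors' formal construction, and inversion and cardinality change would be handled analogously.

```python
def normal_form(chord, O=True, P=True, T=False):
    """Canonical representative of a chord (pitches in semitones) under a
    chosen subset of the O/P/T equivalences. Toy encoding: inversion (I)
    and cardinality change (C) would be factored out the same way."""
    pts = list(chord)
    if O:                             # octave: reduce to pitch classes mod 12
        pts = [p % 12 for p in pts]
    if P:                             # permutation: forget the ordering
        pts = sorted(pts)
    if T and not O:                   # transposition: translate first note to 0
        pts = [p - pts[0] for p in pts]
    if T and O:                       # transposition on the pitch-class circle:
        # canonicalize over all rotations, keeping the smallest tuple
        rots = [pts[r:] + [p + 12 for p in pts[:r]] for r in range(len(pts))]
        pts = min(tuple(p - rot[0] for p in rot) for rot in rots)
    return tuple(pts)

# C major (60, 64, 67) and A major (57, 61, 64) share an OPT normal form:
print(normal_form([60, 64, 67], T=True) == normal_form([57, 61, 64], T=True))
# True
```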
Automatic Transcription of Melody, Bass Line, and Chords in Polyphonic Music
A method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music is proposed. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions.
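A frame-wise pitch-salience estimator of the general kind used as a front-end here can be sketched as follows: take a windowed short-time spectrum and, for each candidate fundamental, sum harmonically weighted magnitudes at its harmonic positions. The parameters below (f0 grid, 1/h weighting, Hann window) are assumptions for illustration, not the authors' estimator:

```python
import numpy as np

def pitch_salience(frame, sr, f0_min=55.0, f0_max=880.0, n_harm=8):
    """Salience of each candidate f0 in one audio frame: the weighted sum
    of spectral magnitude at the first n_harm harmonic positions."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    f0s = np.geomspace(f0_min, f0_max, 200)
    salience = np.zeros(len(f0s))
    for i, f0 in enumerate(f0s):
        for h in range(1, n_harm + 1):
            b = int(round(h * f0 * len(frame) / sr))   # nearest FFT bin
            if b < len(spec):
                salience[i] += spec[b] / h             # 1/h harmonic weight
    return f0s, salience

# Toy test: a 220 Hz sawtooth-like frame should peak near 220 Hz.
sr = 16000
t = np.arange(2048) / sr
frame = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 6))
f0s, sal = pitch_salience(frame, sr)
print(round(f0s[np.argmax(sal)]))   # ~220
```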
Geometry of Musical Chords
A musical chord can be represented as a point in a geometrical space called an orbifold. Line segments represent mappings from the notes of one chord to those of another. Composers in a wide range of styles have exploited the non-Euclidean geometry of these spaces, typically by using short line segments between structurally similar chords. Such line segments exist only when chords are nearly symmetrical under translation, reflection, or permutation. Paradigmatically consonant and dissonant chords possess different near-symmetries and suggest different musical uses.
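The "short line segments" correspond to efficient voice leadings, in which each voice moves by a small interval. For two chords of equal size, a minimal voice leading can be found by brute force over one-to-one note assignments; the sketch below uses a taxicab metric on pitch classes, a simplification of the paper's continuous geometry:

```python
from itertools import permutations

def pc_distance(a, b):
    """Shortest pitch-class move from a to b, in semitones (0..6)."""
    d = (b - a) % 12
    return min(d, 12 - d)

def min_voice_leading(chord1, chord2):
    """Minimal total motion (taxicab) over all one-to-one note mappings
    between two equal-size chords of pitch classes."""
    best = None
    for perm in permutations(chord2):
        total = sum(pc_distance(a, b) for a, b in zip(chord1, perm))
        if best is None or total < best[0]:
            best = (total, list(zip(chord1, perm)))
    return best

# C major -> E minor needs only one semitone of total motion (C -> B):
print(min_voice_leading([0, 4, 7], [4, 7, 11]))
# (1, [(0, 11), (4, 4), (7, 7)])
```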
The Generalized Tonnetz
This article relates two categories of music-theoretical graphs, in which points represent notes and chords, respectively. It unifies previous work by Brower, Callender, Cohn, Douthett, Gollin, O’Connell, Quinn, Steinbach, and myself, while also introducing new models of voice-leading structure—including a three-note octahedral Tonnetz and tetrahedral models of four-note diatonic and chromatic chords.
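One small, familiar instance of such a chord graph can be built concretely: take the 24 major and minor triads as nodes and connect triads related by the neo-Riemannian P, L, and R operations, each of which moves a single voice. A minimal sketch using the standard P/L/R definitions, not the article's general construction:

```python
def triad(root, quality):
    """Pitch-class set of a major ("M") or minor ("m") triad."""
    third = 4 if quality == "M" else 3
    return frozenset({root % 12, (root + third) % 12, (root + 7) % 12})

def plr_neighbors(root, quality):
    """P, L, R transforms of a triad; each changes exactly one pitch class."""
    if quality == "M":
        return [(root, "m"),             # P: C -> c
                ((root + 4) % 12, "m"),  # L: C -> e
                ((root + 9) % 12, "m")]  # R: C -> a
    return [(root, "M"),                 # P: c -> C
            ((root - 4) % 12, "M"),      # L: c -> Ab
            ((root - 9) % 12, "M")]      # R: c -> Eb

# Build the 24-node chord graph; every edge swaps exactly one pitch class.
nodes = [(r, q) for r in range(12) for q in "Mm"]
edges = {frozenset({n, m}) for n in nodes for m in plr_neighbors(*n)}
assert all(len(triad(*a) ^ triad(*b)) == 2 for a, b in edges)
print(len(nodes), len(edges))   # 24 nodes, 36 edges
```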