18,994 results for "Melody"
Multi-mmlg: a novel framework of extracting multiple main melodies from MIDI files
As an essential part of music, the main melody is the cornerstone of music information retrieval (MIR). Mainstream methods in the MIR sub-field of main melody extraction assume that the main melody is unique. This assumption does not hold in general, especially for music with multiple main melodies, such as a symphony or music with many harmonies; conventional methods therefore ignore some main melodies in the music. To solve this problem, we propose a deep learning-based Multiple Main Melodies Generator (Multi-MMLG) framework that automatically predicts potential main melodies from a MIDI file. The framework consists of two stages: (1) main melody classification using a proposed MIDIXLNet model, and (2) conditional prediction using a modified MuseBERT model. Experimental results suggest that the proposed MIDIXLNet model increases the accuracy of main melody classification from 89.62% to 97.37% while requiring fewer parameters (71.8 million) than previous state-of-the-art approaches. We also conduct ablation experiments on the Multi-MMLG framework; in the best-case scenario, it predicts meaningful multiple main melodies for a piece of music.
Reexamining the Association between Aesthetic Sensitivity to Musical and Visual Complexity
[...] each visual feature (balance, contour, symmetry, and complexity) uniquely contributed to predicting individual liking ratings (see also Clemente, Friberg, & Holzapfel, 2023). Together, these findings powerfully demonstrate the importance of taking AS into account in predicting and explaining aesthetic judgments of musical and visual stimuli, providing fertile ground for additional research and potentially helping to account for past inconsistencies in the results of studies aimed at uncovering general principles of evaluative preference. Whereas the first component of the latter composite is akin to the number of elements in a visual figure, in that it involves computing the number of notes per unit time, the second component captures the redundancy of the notes that appear within a given melody. If so, it would be well in line with Nadal et al.'s (2010) contention that inconsistencies between studies testing the relationship between complexity and aesthetic judgments may result from differences in how complexity is defined and measured (see also Van Geert & Wagemans, 2020).
Beethoven's Ukraine Connection: New Light on the Creation of his Flute Variations Opp. 105 and 107
An excellent initial account of the origins of these sets of variations was published by C. B. Oldman in 1951, but it takes into account neither the two new sources nor most of Beethoven's other manuscript material, which had not then come to light.1 An updated account of the composition of these works is therefore desirable; in any case, it is hard to discover from the existing literature how all the sources relate to each other. THE SCHEIDE MANUSCRIPT The Scheide Collection in Princeton University Library is well known to Beethoven scholars as the location of his famous Scheide Sketchbook, which contains sketches for numerous works from the period 1815-16.2 This sketchbook tends to overshadow three other Beethoven sketch sources found in the same collection. The manuscript and transcription may therefore have passed directly from Sindrini's family to Lucien Goldschmidt (1912-92), who in 1982 was a rare-book dealer in New York, before they entered the Scheide Collection. [...] the left hand at the start of the third sketch variation resembles the accompaniment pattern in Variation 3. Among the fifteen, one group is actually numbered as far as Variation 9, but little of this material filtered through to the final version.12 In the Diabelli Variations, his initial draft in 1819 showed 23 variations with more to come, but one of the 23 was later discarded.13 And in the second movement of his String Quartet Op. 127, he planned at one stage to alternate slow variations in A flat with quicker ones in C major, but discarded those in C major.
Emotionally consistent music melody generation algorithm integrating prompt perception and hyper-network optimization
This paper proposes a stable emotion-consistent music melody generation algorithm (ECM-HPO) that integrates cue awareness and hyper-network optimization. The algorithm addresses three core challenges in intelligent melody generation: emotion-consistency defects, insufficient understanding of user intent, and limited dynamic adaptability. It contains three core innovations: (1) an emotion-consistency enhancement mechanism, which constructs a unified emotion space by combining audio features, text cues, and historical melody windows, and optimizes the macro-emotional profile and micro-note expression under differentiable music theory constraints; (2) a cue-aware melody-guided generation module, which establishes a unified encoding framework for heterogeneous cues, uses multi-scale cross-attention to achieve semantic alignment, and injects the cue vector into the generation process as prior information through a conditional decoder; (3) a hyper-network-optimized dynamic generation architecture, which adopts a three-level hyper-network (context awareness → parameter generation → dynamic loading) to predict generator weights on demand, and introduces a music-specific search space that respects music theory constraints such as harmony and rhythm density. Experiments on the constructed EMD-Melody dataset show that ECM-HPO significantly outperforms baseline methods (TGM, Transformer, EAM, CDM, and LSOTAM) on multiple indicators, achieving a melody contour smoothness (PCS) of 0.910, a rhythm consistency (RC) of 0.891, and an emotion recognition accuracy (ERA) of 92.4%. Ablation experiments verify the contribution of each module; the PCS of the complete model improves by 15.19% over the basic model. Cross-style tests further confirm the robustness of the algorithm.
Mel2Word: A Text-Based Melody Representation for Symbolic Music Analysis
The purpose of this research is to present a natural language processing-based approach to symbolic music analysis. We propose Mel2Word, a text-based representation encoding pitch and rhythm information, together with a new NLP-based melody segmentation algorithm. We first show how to create a melody dictionary using Byte Pair Encoding (BPE), which finds and merges the most frequent pairs that appear in a collection of melodies in a data-driven manner. The dictionary is then used to tokenize, or segment, a given melody. Using various symbolic melody datasets, we conduct an exploratory analysis and evaluate the classification performance of melody representation models on the MTC-ANN dataset, including a comparison with existing segmentation algorithms. The results show that the proposed model significantly improves classification performance over various melodic features and several existing segmentation algorithms.
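The BPE procedure the abstract describes (repeatedly merging the most frequent adjacent token pair) can be sketched as follows. This is a minimal illustration, not the Mel2Word implementation: the token encoding (interval/rhythm symbols such as "u2") and all function names are assumptions.

```python
from collections import Counter

def most_frequent_pair(seqs):
    """Count adjacent token pairs across all melodies; return the commonest."""
    counts = Counter()
    for seq in seqs:
        counts.update(zip(seq, seq[1:]))
    return counts.most_common(1)[0][0] if counts else None

def merge_pair(seq, pair):
    """Replace every occurrence of `pair` with one concatenated token."""
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            merged.append(seq[i] + seq[i + 1])
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

def learn_bpe(melodies, num_merges):
    """Learn a merge list (the melody 'dictionary') from tokenized melodies."""
    seqs = [list(m) for m in melodies]
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(seqs)
        if pair is None:
            break
        merges.append(pair)
        seqs = [merge_pair(s, pair) for s in seqs]
    return merges

# Toy melodies as pitch-interval symbols (hypothetical encoding):
melodies = [["u2", "d1", "u2", "d1", "r"], ["u2", "d1", "u2", "u2", "d1"]]
merges = learn_bpe(melodies, 2)  # first merge is the frequent ("u2", "d1") motif
```

The learned merges play the role of the melody dictionary: applying them in order to a new melody segments it into the same data-driven sub-phrases.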
A stochastic process model of melody generation in popular music composition and its contribution to compositional innovation
Digital generation of musical melodies is a new avenue for improving popular music composition, but because composition is stochastic in nature, the stochastic process of melody generation must be modeled and analyzed. Melody generation is modeled with a classical Markov chain, and the Markov algorithm is then improved by adding constraints that combine the generated melody and rhythm. The subjective scores of the generated melodies were verified using three sampling methods; the melodies generated by the present method improved their subjective scores by about 21.95%, 31.88%, and 30%, respectively, compared to the traditional Markov model. In terms of phrase relevance, interval characteristics, and number of short rhythms, the overall melodic performance of this method is about 1 to 1.7 times higher than that of the traditional Markov and attentional_rnn models. This shows that the method can indeed generate high-quality melodies for popular music and provide impetus for compositional innovation.
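A constrained first-order Markov chain of the kind the abstract describes can be sketched in a few lines. The interval-size constraint here stands in for the paper's melody/rhythm constraints and is an assumption, as are the toy corpus and all names.

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """First-order Markov chain: count transitions between MIDI pitches."""
    trans = defaultdict(lambda: defaultdict(int))
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            trans[a][b] += 1
    return trans

def generate(trans, start, length, max_leap=4, rng=None):
    """Sample a melody, rejecting transitions that leap more than
    `max_leap` semitones (a toy stand-in for the paper's constraints)."""
    rng = rng or random.Random(0)
    mel = [start]
    for _ in range(length - 1):
        choices = {p: c for p, c in trans[mel[-1]].items()
                   if abs(p - mel[-1]) <= max_leap}
        if not choices:
            break
        pitches = list(choices)
        weights = [choices[p] for p in pitches]
        mel.append(rng.choices(pitches, weights=weights)[0])
    return mel

corpus = [[60, 62, 64, 62, 60], [60, 64, 62, 60, 67]]
model = train_markov(corpus)
melody = generate(model, 60, 8)  # every step is at most a major third
```

Because sampling is weighted by transition counts, frequent motions in the corpus dominate the output, while the constraint filter removes transitions a plain Markov chain would allow.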
Universality and diversity in human song
It is unclear whether there are universal patterns to music across cultures. Mehr et al. examined ethnographic data and observed music in every society sampled (see the Perspective by Fitch and Popescu). For songs specifically, three dimensions characterize more than 25% of the performances studied: formality of the performance, arousal level, and religiosity. There is more variation in musical behavior within societies than between societies, and societies show similar levels of within-society variation in musical behavior. At the same time, one-third of societies significantly differ from average for any given dimension, and half of all societies differ from average on at least one dimension, indicating variability across cultures. Science, this issue p. eaax0868; see also p. 944. Songs exhibit universal patterns across cultures. What is universal about music, and what varies? We built a corpus of ethnographic text on musical behavior from a representative sample of the world's societies, as well as a discography of audio recordings. The ethnographic corpus reveals that music (including songs with words) appears in every society observed; that music varies along three dimensions (formality, arousal, religiosity), more within societies than across them; and that music is associated with certain behavioral contexts such as infant care, healing, dance, and love. The discography—analyzed through machine summaries, amateur and expert listener ratings, and manual transcriptions—reveals that acoustic features of songs predict their primary behavioral context; that tonality is widespread, perhaps universal; that music varies in rhythmic and melodic complexity; and that elements of melodies and rhythms found worldwide follow power laws.
Joint Detection and Classification of Singing Voice Melody Using Convolutional Recurrent Neural Networks
Singing melody extraction essentially involves two tasks: one is detecting the activity of a singing voice in polyphonic music, and the other is estimating the pitch of a singing voice in the detected voiced segments. In this paper, we present a joint detection and classification (JDC) network that conducts the singing voice detection and the pitch estimation simultaneously. The JDC network is composed of the main network that predicts the pitch contours of the singing melody and an auxiliary network that facilitates the detection of the singing voice. The main network is built with a convolutional recurrent neural network with residual connections and predicts pitch labels that cover the vocal range with a high resolution, as well as non-voice status. The auxiliary network is trained to detect the singing voice using multi-level features shared from the main network. The two optimization processes are tied with a joint melody loss function. We evaluate the proposed model on multiple melody extraction and vocal detection datasets, including cross-dataset evaluation. The experiments demonstrate how the auxiliary network and the joint melody loss function improve the melody extraction performance. Furthermore, the results show that our method outperforms state-of-the-art algorithms on the datasets.
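The "joint melody loss" tying the two networks together can be illustrated as a weighted sum of the main network's pitch-classification loss and the auxiliary network's voice-detection loss. This is a conceptual sketch only: the weight `alpha`, the plain-list probabilities, and the function names are assumptions, not the paper's formulation.

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood of the correct class."""
    return -math.log(probs[target])

def joint_melody_loss(pitch_probs, pitch_target,
                      voice_probs, voice_target, alpha=0.5):
    """Weighted sum of pitch-classification loss (main network) and
    voice-detection loss (auxiliary network); `alpha` is an assumed weight."""
    return (cross_entropy(pitch_probs, pitch_target)
            + alpha * cross_entropy(voice_probs, voice_target))

# One frame: distribution over 3 pitch labels (incl. non-voice) and a
# binary voice/non-voice distribution.
loss = joint_melody_loss([0.7, 0.2, 0.1], 0, [0.9, 0.1], 0)
```

Optimizing both terms at once lets gradients from the detection task shape the shared features used for pitch estimation, which is the effect the ablation in the paper measures.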
A Century of Angels: The First Hundred Years of a French Carol and Its English Counterparts
Earliest Known Publication The French carol first appeared with the incipit "Les anges dans nos campagnes" in 1842 in Choix de cantiques sur des airs nouveaux by Louis Lambillotte (1797-1855), and this is often considered its earliest publication.2 But new research shows that the carol was published in various forms at least nine times before 1842. The discovery in Roche's books thus extends the proven history of the carol by a generation. The full title of the carol, "Noël du Gloria in Excelsis," appears in Roche's table of carols. The sole exception found in the nineteenth century to this pattern of stanza non-addition is a version published in Boston in 1899 with novel initial and final stanzas largely underived from earlier sources.9 If this pattern can be relied upon to establish priority, then Roche's 1805 version should be regarded as primary not only by the date of its publication but also by the number of its stanzas. The music for "Les anges dans nos campagnes" is attributed in Lambillotte 1842 to "W. M.", sometimes interpreted as Wilfrid Moreau of Poitiers.16 More likely this should read Wulfran Moreau (1827-1905), who, though only about fifteen years old in 1842, was from Poitiers and later was professor of rhetoric at Montmorillon and a published composer of religious and school music.17 But the attribution to W. M. may apply to the arrangement for three voices rather than to the melody itself.
The Musicality of Non-Musicians: An Index for Assessing Musical Sophistication in the General Population
Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication' which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.