117 result(s) for "Friberg, Anders"
Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews
Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colour. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R2. To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour association with the perceived emotion in the music. The mixed method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
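The cross-validated R² reported in the abstract above can be illustrated with a minimal sketch. All data and variable names here are hypothetical, and a simple one-predictor least-squares line stands in for the study's partial least squares models: each observation is predicted from a model fitted to all the *other* observations, and R² is then computed from those held-out predictions.

```python
import math
import random


def loo_cv_r2(xs, ys):
    """Leave-one-out cross-validated R^2 for a simple least-squares line.

    Each point is predicted by a line fitted to the remaining points;
    R^2 is computed from those held-out predictions. This is only an
    illustration of the cross-validation idea -- the study itself used
    partial least squares regression with many predictors.
    """
    n = len(xs)
    preds = []
    for i in range(n):
        tx = [x for j, x in enumerate(xs) if j != i]
        ty = [y for j, y in enumerate(ys) if j != i]
        mx, my = sum(tx) / len(tx), sum(ty) / len(ty)
        sxx = sum((x - mx) ** 2 for x in tx)
        sxy = sum((x - mx) * (y - my) for x, y in zip(tx, ty))
        b = sxy / sxx              # slope of the training-fold fit
        a = my - b * mx            # intercept of the training-fold fit
        preds.append(a + b * xs[i])
    ybar = sum(ys) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot


# Synthetic example: a noisy linear relation between a hypothetical
# audio feature and a hypothetical colour patch parameter.
random.seed(1)
xs = [i / 10 for i in range(30)]
ys = [2.0 * x + 0.5 + random.gauss(0, 0.2) for x in xs]
cv_r2 = loo_cv_r2(xs, ys)
```

Because every prediction comes from a model that never saw the point being predicted, cross-validated R² guards against the overfitting that inflates ordinary R², which is why the abstract reports it for the parsimonious emotion-mediated models.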
Experience-dependent modulation of right anterior insula and sensorimotor regions as a function of noise-masked auditory feedback in singers and nonsingers
Previous studies on vocal motor production in singing suggest that the right anterior insula (AI) plays a role in experience-dependent modulation of feedback integration. Specifically, when somatosensory input was reduced via anesthesia of the vocal fold mucosa, right AI activity was downregulated in trained singers. In the current fMRI study, we examined how masking of auditory feedback affects pitch-matching accuracy and corresponding brain activity in the same participants. We found that pitch-matching accuracy was unaffected by masking in trained singers yet declined in nonsingers. The brain region with the most differential activation pattern was the right AI, which was upregulated during masking in singers but downregulated in nonsingers. Likewise, its functional connectivity with inferior parietal, frontal, and voice-relevant sensorimotor areas was increased in singers yet decreased in nonsingers. These results indicate that singers relied more on somatosensory feedback, whereas nonsingers depended more critically on auditory feedback. When comparing auditory vs. somatosensory feedback involvement, the right AI emerged as the only region for correcting intended vocal output by modulating what is heard or felt as a function of singing experience. We propose the right AI as a key node in the brain's singing network for the integration of signals of salience across multiple sensory and cognitive domains to guide vocal behavior.
Idealized Computational Models for Auditory Receptive Fields
We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
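The classical Gammatone filter mentioned above has the well-known impulse response g(t) = t^(n-1) · exp(-2πbt) · cos(2πft) for t ≥ 0: a gamma-distribution envelope modulating a tone at the centre frequency. A minimal sketch follows (parameter values are illustrative, and this is the standard filter, not the paper's generalized family):

```python
import math


def gammatone_ir(t, f, b, n=4):
    """Impulse response of a standard Gammatone filter:

        g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t),  t >= 0.

    f : centre frequency in Hz
    b : bandwidth parameter in Hz
    n : filter order; higher n sharpens spectral selectivity at the
        cost of temporal delay -- the trade-off the abstract's
        generalized family exposes as an extra degree of freedom.
    """
    if t < 0:
        return 0.0  # time-causal: no response before the impulse
    return (t ** (n - 1)) * math.exp(-2 * math.pi * b * t) * math.cos(2 * math.pi * f * t)


# Illustrative parameters: a 1 kHz channel sampled at 44.1 kHz.
# The gamma envelope t^(n-1) * exp(-2*pi*b*t) peaks at
# t = (n - 1) / (2 * pi * b), which quantifies the temporal delay.
f, b, n = 1000.0, 125.0, 4
t_peak = (n - 1) / (2 * math.pi * b)
samples = [gammatone_ir(k / 44100.0, f, b, n) for k in range(256)]
```

Banks of such filters at different centre frequencies yield a cochlea-like time-frequency decomposition, which is the first-layer transformation the axiomatic framework derives.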
Hedonic and incentive signals for body weight control
Here we review the emerging neurobiological understanding of the role of the brain’s reward system in the regulation of body weight in health and in disease. Common obesity is characterized by the over-consumption of palatable/rewarding foods, reflecting an imbalance in the relative importance of hedonic versus homeostatic signals. The popular ‘incentive salience theory’ of food reward recognises not only a hedonic/pleasure component (‘liking’) but also an incentive motivation component (‘wanting’ or ‘reward-seeking’). Central to the neurobiology of the reward mechanism is the mesoaccumbal dopamine system that confers incentive motivation not only for natural rewards such as food but also for artificial rewards (e.g., addictive drugs). Indeed, this mesoaccumbal dopamine system receives and integrates information about the incentive (rewarding) value of foods with information about metabolic status. Problematic over-eating likely reflects a changing balance in the control exerted by hypothalamic versus reward circuits and/or it could reflect an allostatic shift in the hedonic set point for food reward. Certainly, for obesity to prevail, metabolic satiety signals such as leptin and insulin fail to regain control of appetitive brain networks, including those involved in food reward. On the other hand, metabolic control could reflect increased signalling by the stomach-derived orexigenic hormone, ghrelin. We have shown that ghrelin activates the mesoaccumbal dopamine system and that central ghrelin signalling is required for reward from both chemical drugs (e.g., alcohol) and also from palatable food. Future therapies for problematic over-eating and obesity may include drugs that interfere with incentive motivation, such as ghrelin antagonists.
Visual Perception of Expressiveness in Musicians' Body Movements
Musicians often make gestures and move their bodies expressing a musical intention. In order to explore to what extent emotional intentions can be conveyed through musicians' movements, participants watched and rated silent video clips of musicians performing the emotional intentions Happy, Sad, Angry, and Fearful. In the first experiment participants rated emotional expression and movement character of marimba performances. The results showed that the intentions Happiness, Sadness, and Anger were well communicated, whereas Fear was not. Showing selected parts of the player only slightly influenced the identification of the intended emotion. In the second experiment participants rated the same emotional intentions and movement character for performances on bassoon and soprano saxophone. The ratings from the second experiment confirmed that Fear was not communicated whereas Happiness, Sadness, and Anger were recognized. The rated movement cues were similar in the two experiments and were analogous to their audio counterpart in music performance.
Commentary on Polak: How short is the shortest metric subdivision?
This commentary relates to the target paper by Polak on the shortest metric subdivision by presenting measurements on West African drum music. It provides new evidence that the perceptual lower limit of tone duration is within the range 80-100 ms. Although obtained with fairly basic measurement techniques in combination with a musical analysis of the content, the original results in this study represent a valuable addition to the literature. Considering the relevance for music listening, further research would be valuable for determining and understanding the nature of this perceptual limit.
A method for the second-site screening of K-Ras in the presence of a covalently attached first-site ligand
K-Ras is a well-validated cancer target but is considered to be “undruggable” due to the lack of suitable binding pockets. We previously discovered small molecules that bind weakly to K-Ras but wanted to improve their binding affinities by identifying ligands that bind near our initial hits that we could link together. Here we describe an approach for identifying second site ligands that uses a cysteine residue to covalently attach a compound for tight binding to the first site pocket followed by a fragment screen for binding to a second site. This approach could be very useful for targeting Ras and other challenging drug targets.
Personality Traits Bias the Perceived Quality of Sonic Environments
There have been few empirical investigations of how individual differences influence the perception of the sonic environment. The present study included the Big Five traits and noise sensitivity as personality factors in two listening experiments (n = 43, n = 45). Recordings of urban and restaurant soundscapes that had been selected based on their type were rated for Pleasantness and Eventfulness using the Swedish Soundscape Quality Protocol. Multivariate multiple regression analysis showed that ratings depended on the type and loudness of both kinds of sonic environments and that the personality factors made a small yet significant contribution. Univariate models explained 48% (cross-validated adjusted R2) of the variation in Pleasantness ratings of urban soundscapes, and 35% of Eventfulness. For restaurant soundscapes the percentages explained were 22% and 21%, respectively. Emotional stability and noise sensitivity were notable predictors whose contribution to explaining the variation in quality ratings was between one-tenth and nearly half of the soundscape indicators, as measured by squared semipartial correlation. Further analysis revealed that 36% of noise sensitivity could be predicted by broad personality dimensions, replicating previous research. Our study lends empirical support to the hypothesis that personality traits have a significant though comparatively small influence on the perceived quality of sonic environments.
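The squared semipartial (part) correlation used in the abstract above as an effect-size measure can be sketched as follows. The data and variable roles here are hypothetical; the formula removes the control variable from the predictor only, so the result is the unique share of outcome variance the predictor explains beyond the control.

```python
import math


def pearson(a, b):
    """Pearson product-moment correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)


def squared_semipartial(y, x, z):
    """Squared semipartial correlation of predictor x with outcome y,
    partialling the control z out of x only: the unique proportion of
    variance in y attributable to x beyond z."""
    r_yx, r_yz, r_xz = pearson(y, x), pearson(y, z), pearson(x, z)
    sr = (r_yx - r_yz * r_xz) / math.sqrt(1 - r_xz ** 2)
    return sr ** 2


# Hypothetical toy data: y = quality rating, x = a personality score,
# z = a soundscape indicator (e.g., loudness). Values are invented.
y = [2.0, 1.0, 2.0, 5.0]
x = [1.0, 2.0, 3.0, 4.0]
z = [1.0, -1.0, -1.0, 1.0]
unique_share = squared_semipartial(y, x, z)
```

When the predictor and the control are uncorrelated, the semipartial correlation reduces to the ordinary correlation, so the squared value equals the predictor's plain R² contribution; the interesting case is correlated predictors, where it isolates each one's unique share, as in the abstract's comparison of personality factors against soundscape indicators.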
Expressive Timing Facilitates the Neural Processing of Phrase Boundaries in Music: Evidence from Event-Related Potentials
The organization of sound into meaningful units is fundamental to the processing of auditory information such as speech and music. In expressive music performance, structural units or phrases may become particularly distinguishable through subtle timing variations highlighting musical phrase boundaries. As such, expressive timing may support the successful parsing of otherwise continuous musical material. By means of the event-related potential technique (ERP), we investigated whether expressive timing modulates the neural processing of musical phrases. Musicians and laymen listened to short atonal scale-like melodies that were presented either isochronously (deadpan) or with expressive timing cues emphasizing the melodies' two-phrase structure. Melodies were presented in an active and a passive condition. Expressive timing facilitated the processing of phrase boundaries as indicated by decreased N2b amplitude and enhanced P3a amplitude for target phrase boundaries and larger P2 amplitude for non-target boundaries. When timing cues were lacking, task demands increased especially for laymen, as reflected by reduced P3a amplitude. In line with this, the N2b occurred earlier for musicians in both conditions, indicating generally faster target detection compared to laymen. Importantly, the elicitation of a P3a-like response to phrase boundaries marked by a pitch leap during passive exposure suggests that expressive timing information is automatically encoded and may lead to an involuntary allocation of attention towards significant events within a melody. We conclude that subtle timing variations in music performance prepare the listener for musical key events by directing and guiding attention towards their occurrences. That is, expressive timing facilitates the structuring and parsing of continuous musical material even when the auditory input is unattended.
The EBNA-2 N-Terminal Transactivation Domain Folds into a Dimeric Structure Required for Target Gene Activation
Epstein-Barr virus (EBV) is a γ-herpesvirus that may cause infectious mononucleosis in young adults. In addition, epidemiological and molecular evidence links EBV to the pathogenesis of lymphoid and epithelial malignancies. EBV has the unique ability to transform resting B cells into permanently proliferating, latently infected lymphoblastoid cell lines. Epstein-Barr virus nuclear antigen 2 (EBNA-2) is a key regulator of viral and cellular gene expression for this transformation process. The N-terminal region of EBNA-2 comprising residues 1-58 appears to mediate multiple molecular functions including self-association and transactivation. However, it remains to be determined if the N-terminus of EBNA-2 directly provides these functions or if these activities merely depend on the dimerization involving the N-terminal domain. To address this issue, we determined the three-dimensional structure of the EBNA-2 N-terminal dimerization (END) domain by heteronuclear NMR-spectroscopy. The END domain monomer comprises a small fold of four β-strands and an α-helix which form a parallel dimer by interaction of two β-strands from each protomer. A structure-guided mutational analysis showed that hydrophobic residues in the dimer interface are required for self-association in vitro. Importantly, these interface mutants also displayed severely impaired self-association and transactivation in vivo. Moreover, mutations of solvent-exposed residues or deletion of the α-helix do not impair dimerization but strongly affect the functional activity, suggesting that the EBNA-2 dimer presents a surface that mediates functionally important intra- and/or intermolecular interactions. Our study shows that the END domain is a novel dimerization fold that is essential for functional activity. Since this specific fold is a unique feature of EBNA-2 it might provide a novel target for anti-viral therapeutics.