121 results for "Rodriguez-Fornells, Antoni"
Structural neuroplasticity in expert pianists depends on the age of musical training onset
In the last decade, several studies have investigated the neuroplastic changes induced by long-term musical training. Here we investigated structural brain differences in expert pianists compared to non-musician controls, as well as the effect of the age of onset (AoO) of piano playing. Differences with non-musicians and the effect of sensitive periods in musicians have been studied previously, but importantly, this is the first time in which the age of onset of music training was assessed in a group of musicians playing the same instrument, while controlling for the amount of practice. We recruited a homogeneous group of expert pianists who differed in their AoO but not in their lifetime or present amount of training, and compared them to an age-matched group of non-musicians. A subset of the pianists also completed a scale-playing task in order to control for performance skill level differences. Voxel-based morphometry analysis was used to examine gray-matter differences at the whole-brain level. Pianists showed greater gray matter (GM) volume in bilateral putamen (extending also to hippocampus and amygdala), right thalamus, bilateral lingual gyri and left superior temporal gyrus, but a GM volume shrinkage in the right supramarginal, right superior temporal and right postcentral gyri, when compared to non-musician controls. These results reveal a complex pattern of plastic effects due to sustained musical training: a network involved in reinforcement learning showed increased GM volume, while areas related to sensorimotor control, auditory processing and score-reading presented a reduction in the volume of GM. Behaviorally, early-onset pianists showed higher temporal precision in their piano performance than late-onset pianists, especially in the left hand. Furthermore, early onset of piano playing was associated with smaller GM volume in the right putamen and better piano performance (mainly in the left hand). Our results, therefore, reveal for the first time in a single large dataset of healthy pianists the link between onset of musical practice, behavioral performance, and putaminal gray matter structure. In summary, skill-related plastic adaptations may include decreases and increases in GM volume, dependent on an optimization of the system caused by an early start of musical training. We believe our findings enrich the plasticity discourse and shed light on the neural basis of expert skill acquisition.
Highlights:
• We scanned and compared 36 professional pianists and 17 non-musicians.
• Pianists, differing in the age of onset of training, performed a scale-playing task.
• Musicianship elicited a pattern of increases and decreases of gray matter.
• Piano performance was better in early-onset pianists, especially in the left hand.
• The earlier the onset, the smaller the amount of gray matter in the right putamen.
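As a rough illustration of the kind of whole-brain voxel-based morphometry (VBM) group comparison described in this abstract, the sketch below fits a mass-univariate second-level GLM on per-subject gray-matter maps with nilearn. The file names, design-matrix coding and output path are assumptions for illustration; only the group sizes (36 pianists, 17 non-musicians) come from the abstract, and the original study may have used different software and covariates (e.g., total intracranial volume or amount of practice).

```python
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# Hypothetical inputs: one smoothed, modulated gray-matter map per subject
# (36 pianists followed by 17 non-musician controls, as in the abstract).
gm_maps = [f"sub-{i:02d}_gm_smoothed.nii.gz" for i in range(1, 54)]
group = [1] * 36 + [0] * 17  # 1 = pianist, 0 = control

# Design matrix: a signed group regressor plus an intercept.
design = pd.DataFrame({
    "pianist_vs_control": [2 * g - 1 for g in group],
    "intercept": [1] * len(group),
})

# Mass-univariate second-level GLM; the contrast tests pianists > controls.
model = SecondLevelModel().fit(gm_maps, design_matrix=design)
z_map = model.compute_contrast("pianist_vs_control", output_type="z_score")
z_map.to_filename("pianists_vs_controls_zmap.nii.gz")
```

Negating the same contrast would give the controls > pianists direction reported for the supramarginal, superior temporal and postcentral findings.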
Dopamine modulates the reward experiences elicited by music
Understanding how the brain translates a structured sequence of sounds, such as music, into a pleasant and rewarding experience is a fascinating question which may be crucial to better understand the processing of abstract rewards in humans. Previous neuroimaging findings point to a challenging role of the dopaminergic system in music-evoked pleasure. However, there is a lack of direct evidence showing that dopamine function is causally related to the pleasure we experience from music. We addressed this problem through a double-blind, within-subject pharmacological design in which we directly manipulated dopaminergic synaptic availability while healthy participants (n = 27) were engaged in music listening. We orally administered to each participant a dopamine precursor (levodopa), a dopamine antagonist (risperidone), and a placebo (lactose) in three different sessions. We demonstrate that levodopa and risperidone led to opposite effects in measures of musical pleasure and motivation: while the dopamine precursor levodopa, compared with placebo, increased the hedonic experience and music-related motivational responses, risperidone led to a reduction of both. This study shows a causal role of dopamine in musical pleasure and indicates that dopaminergic transmission might play different or additive roles than the ones postulated in affective processing so far, particularly in abstract cognitive activities.
Effect and safety of listening to music or audiobooks as a coadjuvant treatment for chronic pain patients under opioid treatment: a study protocol for an open-label, parallel-group, randomised, controlled, proof-of-concept clinical trial in a tertiary hospital in the Barcelona South Metropolitan area
Background: Chronic non-cancer pain (CNCP) treatment's primary goal is to maintain physical and mental functioning while improving quality of life. Opioid use in CNCP patients has increased in recent years, and non-pharmacological interventions such as music listening have been proposed to counter it. Unlike other auditive stimuli, music can activate emotional-regulating and reward-regulating circuits, making it a potential tool to modulate attentional processes and regulate mood. This study's primary objective is to provide the first evidence on the distinct (separate) effects of music listening as a coadjuvant maintenance analgesic treatment in CNCP patients undergoing opioid analgesia.
Methods and analysis: This will be a single-centre, phase II, open-label, parallel-group, proof-of-concept randomised clinical trial with CNCP patients under a minimum 4-week regular opioid treatment. We plan to include 70 consecutive patients, who will be randomised (1:1) to either the experimental group (active music listening) or the control group (active audiobook listening). For 28 days, both groups will listen daily (for at least 30 min and up to 1 hour) to preset playlists tailored to individual preferences. Pain intensity scores at each visit, the changes (differences) from baseline and the proportions of responders according to various definitions based on pain intensity differences will be described and compared between study arms. We will apply longitudinal data assessment methods (mixed generalised linear models), taking the patient as a cluster, to assess and compare the endpoints' evolution. We will also use the mediation analysis framework to adjust for the effects of additional therapeutic measures and obtain estimates of effect with a causal interpretation.
Ethics and dissemination: The study protocol has been reviewed, and ethics approval has been obtained from the Bellvitge University Hospital Institutional Review Board, L'Hospitalet de Llobregat, Barcelona, Spain. The results from this study will be actively disseminated through manuscript publications and conference presentations.
Trial registration number: NCT05726266.
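To make the planned longitudinal analysis more concrete, here is a minimal sketch of a mixed model that takes the patient as a cluster (random intercept), using statsmodels. The data file, column names and 0-100 pain scale are hypothetical; the protocol's actual mixed generalised linear models and the mediation analysis would be specified in its statistical analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per patient and visit, with columns
# patient_id, visit (0 = baseline, 1..n = follow-ups), arm ("music"/"audiobook")
# and pain_vas (pain intensity, 0-100).
df = pd.read_csv("pain_scores_long.csv")

# A random intercept per patient approximates "taking the patient as a cluster";
# the visit-by-arm interaction captures diverging pain trajectories between groups.
model = smf.mixedlm("pain_vas ~ visit * arm", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```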
Enriched Music-supported Therapy for chronic stroke patients: a study protocol of a randomised controlled trial
Background: Residual motor deficits of the upper limb in patients with chronic stroke are common and have a negative impact on autonomy, participation and quality of life. Music-Supported Therapy (MST) is an effective intervention to enhance motor and cognitive function, emotional well-being and quality of life in chronic stroke patients. We have adapted the original MST training protocol to a home-based intervention, which incorporates increased training intensity and variability, group sessions, and optimisation of learning to promote autonomy and motivation.
Methods: A randomised controlled trial will be conducted to test the effectiveness of this enriched MST (eMST) protocol in improving motor functions, cognition, emotional well-being and quality of life of chronic stroke patients when compared to a program of home-based exercises utilizing the Graded Repetitive Arm Supplementary Program (GRASP). Sixty stroke patients will be recruited and randomly allocated to an eMST group (n = 30) or a control GRASP intervention group (n = 30). Patients will be evaluated before and after a 10-week intervention, as well as at 3-month follow-up. The primary outcome of the study is the functionality of the paretic upper limb measured with the Action Research Arm Test. Secondary outcomes include other motor and cognitive functions, emotional well-being and quality of life measures, as well as self-regulation and self-efficacy outcomes.
Discussion: We hypothesize that patients treated with eMST will show larger improvements in their motor and cognitive functions, emotional well-being and quality of life than patients treated with a home-based GRASP intervention.
Trial registration: The trial has been registered at ClinicalTrials.gov and identified as NCT04507542 on 8 August 2020.
Statistical learning and prosodic bootstrapping differentially affect neural synchronization during speech segmentation
Neural oscillations constitute an intrinsic property of functional brain organization that facilitates the tracking of linguistic units at multiple time scales through brain-to-stimulus alignment. This ubiquitous neural principle has been shown to facilitate speech segmentation and word learning based on statistical regularities. However, there is no common agreement yet on whether speech segmentation is mediated by a transition of neural synchronization from syllable to word rate, or whether the two time scales are concurrently tracked. Furthermore, it is currently unknown whether syllable transition probability contributes to speech segmentation when lexical stress cues can be directly used to extract word forms. Using Inter-Trial Coherence (ITC) analyses in combination with Event-Related Potentials (ERPs), we showed that speech segmentation based on both statistical regularities and lexical stress cues was accompanied by concurrent neural synchronization to syllables and words. In particular, ITC at the word rate was generally higher in structured compared to random sequences, and this effect was particularly pronounced in the flat condition. Furthermore, ITC at the syllable rate dynamically increased across the blocks of the flat condition, whereas a similar modulation was not observed in the stressed condition. Notably, in the flat condition ITC at both time scales correlated with each other, and changes in neural synchronization were accompanied by a rapid reconfiguration of the P200 and N400 components with a close relationship between ITC and ERPs. These results highlight distinct computational principles governing neural synchronization to pertinent linguistic units while segmenting speech under different listening conditions.
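For readers unfamiliar with inter-trial coherence, the sketch below shows how ITC at assumed syllable and word rates could be estimated from epoched EEG with MNE-Python. The epoch file, the exact presentation rates (~4 Hz syllables, ~1.3 Hz words) and the wavelet parameters are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical epochs time-locked to the onset of each speech sequence.
epochs = mne.read_epochs("speech_segmentation-epo.fif")

# Frequencies spanning the assumed word (~1.3 Hz) and syllable (~4 Hz) rates.
freqs = np.arange(1.0, 8.5, 0.5)
n_cycles = 2.0  # short wavelets so the lowest frequencies fit inside the epochs

# With return_itc=True, tfr_morlet returns both evoked power and ITC, where
# ITC measures phase consistency across trials (0 = none, 1 = perfect).
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
                        return_itc=True, average=True)

# Average ITC in the frequency bins closest to the word and syllable rates.
for label, rate in [("word", 1.3), ("syllable", 4.0)]:
    idx = int(np.argmin(np.abs(freqs - rate)))
    print(label, itc.data[:, idx, :].mean())
```

Comparing these averages between structured and random (or flat and stressed) epoch subsets would mirror the kind of contrast the abstract reports.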
Foreign speech sound discrimination and associative word learning lead to a fast reconfiguration of resting-state networks
Highlights:
• Learning the meaning of novel words is a multistep process.
• Word learning relies on discrimination, word-referent mapping and semantic abilities.
• We examined resting-state changes related to word discrimination and word learning.
• Foreign sound discrimination and word learning rapidly reconfigure neural networks.
Learning new words in an unfamiliar language is a complex endeavor that requires the orchestration of multiple perceptual and cognitive functions. Although the neural mechanisms governing word learning are becoming better understood, little is known about the predictive value of resting-state (RS) metrics for foreign word discrimination and word learning attainment. In addition, it is still unknown which of the multistep processes involved in word learning have the potential to rapidly reconfigure RS networks. To address these research questions, we recorded electroencephalography (EEG) from forty participants and examined scalp-based power spectra, source-based spectral density maps and functional connectivity metrics before (RS1), in between (RS2) and after (RS3) a series of tasks which are known to facilitate the acquisition of new words in a foreign language, namely word discrimination, word-referent mapping and semantic generalization. Power spectra at the scalp level consistently revealed a reconfiguration of RS networks as a function of foreign word discrimination (RS1 vs. RS2) and word learning (RS1 vs. RS3) tasks in the delta, lower and upper alpha, and upper beta frequency ranges. In contrast, functional reconfigurations at the source level were restricted to the theta (spectral density maps) and to the lower and upper alpha frequency bands (spectral density maps and functional connectivity). Notably, scalp RS changes related to the word discrimination tasks (difference between RS2 and RS1) correlated with word discrimination abilities (upper alpha band) and semantic generalization performance (theta and upper alpha bands), whereas functional changes related to the word learning tasks (difference between RS3 and RS1) correlated with word discrimination scores (lower alpha band). Taken together, these results highlight that foreign speech sound discrimination and word learning have the potential to rapidly reconfigure RS networks at multiple functional scales.
Violating body movement semantics: Neural signatures of self-generated and external-generated errors
How do we recognize ourselves as the agents of our actions? Do we use the same error detection mechanisms to monitor self-generated vs. externally imposed actions? Using event-related brain potentials (ERPs), we identified two different error-monitoring loops involved in providing a coherent sense of the agency of our actions. In the first ERP experiment, the participants were embodied in a virtual body (avatar) while performing an error-prone fast reaction time task. Crucially, in certain trials, participants were deceived regarding their own actions, i.e., the avatar movement did not match the participant's movement. Self-generated real errors and false (avatar) errors showed very different ERP signatures, with different processing latencies: while real errors showed a classical frontal-central error-related negativity (Ne/ERN), peaking 100 ms after error commission, false errors elicited a larger and delayed parietal negative component (at about 350-400 ms). The violation of the sense of agency elicited by false avatar errors showed a strong similarity to ERP signatures related to semantic or conceptual violations (N400 component). In a follow-up ERP control experiment, a subset of the same participants merely acted as observers of the avatar's correct and error movements. This experimental situation did not elicit the N400 component associated with agency violation. Thus, the results show a clear neural dissociation between internal and external error-monitoring loops responsible for distinguishing our self-generated errors from those imposed externally, opening new avenues for the study of the mental processes underlying the integration of internal and sensory feedback information while being actors of our own actions.
Highlights:
• Errors were induced in healthy humans fully embodied in a virtual body.
• Different neural signatures distinguished own errors vs. false avatar errors.
• False-induced avatar errors disrupted the feeling of agency of participants.
• Agency disruptions induced by avatar errors elicited an N400 parietal component.
The impact of musical pleasure and musical hedonia on verbal episodic memory
Music listening is one of the most pleasurable activities in our life. As a rewarding stimulus, pleasant music could induce long-term memory improvements for the items encoded in close temporal proximity. In the present study, we behaviourally investigated (1) whether musical pleasure and musical hedonia enhance verbal episodic memory, and (2) whether such enhancement takes place even when the pleasant stimulus is not present during the encoding. Participants (N = 100) were asked to encode words presented in different auditory contexts (highly and lowly pleasant classical music, and control white noise), played before and during (N = 49), or only before (N = 51), the encoding. The Barcelona Music Reward Questionnaire was used to measure participants' sensitivity to musical reward. Twenty-four hours later, participants' verbal episodic memory was tested (old/new recognition and remember/know paradigm). Results revealed that participants with high musical reward sensitivity showed increased recollection performance, especially for words encoded in a highly pleasant musical context. Furthermore, this effect persisted even when the auditory stimulus was not concurrently present during the encoding of target items. Taken together, these findings suggest that musical pleasure might constitute a helpful encoding context able to drive memory improvements via reward mechanisms.
Neural correlates of specific musical anhedonia
Although music is ubiquitous in human societies, there are some people for whom music holds no reward value despite normal perceptual ability and preserved reward-related responses in other domains. The study of these individuals with specific musical anhedonia may be crucial to better understand the neural correlates underlying musical reward. Previous neuroimaging studies have shown that musically induced pleasure may arise from the interaction between auditory cortical networks and mesolimbic reward networks. If such interaction is critical for music-induced pleasure to emerge, then those individuals who do not experience it should show alterations in the cortical-mesolimbic response. In the current study, we addressed this question using fMRI in three groups of 15 participants, each with different sensitivity to music reward. We demonstrate that the musically anhedonic participants showed selective reduction of activity for music in the nucleus accumbens (NAcc), but normal activation levels for a monetary gambling task. Furthermore, this group also exhibited decreased functional connectivity between the right auditory cortex and ventral striatum (including the NAcc). In contrast, individuals with a greater-than-average response to music showed enhanced connectivity between these structures. Thus, our results suggest that specific musical anhedonia may be associated with a reduction in the interplay between the auditory cortex and the subcortical reward network, indicating a pivotal role of this interaction for the enjoyment of music.
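As an illustration of the kind of seed-based functional connectivity analysis described here (auditory cortex to ventral striatum), the sketch below correlates a right auditory-cortex seed with every voxel using nilearn. The seed coordinate, sphere radius and file names are assumptions for illustration; the study itself may have used anatomically defined regions or a different connectivity model (e.g., psychophysiological interactions).

```python
import numpy as np
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker

func_img = "sub-01_task-music_bold.nii.gz"  # hypothetical preprocessed run

# Seed time series from a 6 mm sphere around an assumed right auditory-cortex
# coordinate (MNI, mm); standardize=True z-scores the extracted signal.
seed_masker = NiftiSpheresMasker([(54, -14, 8)], radius=6, standardize=True)
seed_ts = seed_masker.fit_transform(func_img)       # shape (n_timepoints, 1)

# Whole-brain voxel time series, also z-scored.
brain_masker = NiftiMasker(standardize=True)
brain_ts = brain_masker.fit_transform(func_img)     # (n_timepoints, n_voxels)

# Pearson correlation between the seed and every voxel: with z-scored signals,
# the dot product divided by the number of timepoints is the correlation.
seed_to_voxel_corr = np.dot(brain_ts.T, seed_ts) / brain_ts.shape[0]
corr_img = brain_masker.inverse_transform(seed_to_voxel_corr.T)
corr_img.to_filename("right_auditory_seed_correlation.nii.gz")
```

Group-level contrasts of such per-subject correlation maps (anhedonic vs. average vs. hyper-hedonic listeners) would correspond to the connectivity differences the abstract reports.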
Social interaction shapes and boosts second language learning: virtual reality can show us how
Social interaction can play a crucial role in how a second language (L2) is learned. In the current review, we examine theoretical frameworks and empirical studies demonstrating how social factors influence L2 learning, but we also identify gaps in the current literature. We propose using virtual reality (VR) as a methodology to fill these gaps with controlled, ecologically valid social simulations that can elucidate how social factors shape L2 learning.