Catalogue Search | MBRL
121 result(s) for "Grandjean, Didier"
Maternal speech decreases pain scores and increases oxytocin levels in preterm infants during painful procedures
2021
Preterm infants undergo early separation from parents and are exposed to frequent painful clinical procedures, with resultant short- and long-term effects on their neurodevelopment. We aimed to establish whether the mother's voice could provide an effective and safe analgesia for preterm infants and whether endogenous oxytocin (OXT) could be linked to pain modulation. Twenty preterm infants were exposed to three conditions in random order during a painful procedure: the mother's live speaking voice, her live singing voice, and standard care. OXT levels (pg/mL) in saliva and plasma cortisol levels were quantified, and the Premature Infant Pain Profile (PIPP) was blindly coded by trained psychologists. During the mother's live voice, PIPP scores significantly decreased, with a concomitant increase in OXT levels over baseline. The effect on pain perception was marginally significant for singing. No effects on cortisol levels were found. The mother's live voice modulated preterm infants' pain indicators. Endogenous OXT released during vocal contact is a promising protective mechanism during early painful interventions in at-risk populations.
Journal Article
Origin of the bright photoluminescence of few-atom silver clusters confined in LTA zeolites
by Banerjee, Dipanjan, Fron, Eduard, Baekelant, Wouter
in Cages, Clusters, Density functional theory
2018
Small silver clusters stabilized by organic materials or inorganic surfaces can exhibit bright photoluminescence, but the origin of this effect has been difficult to establish, in part because the materials are heterogeneous and contain many larger but inactive clusters. Grandjean et al. studied silver clusters in zeolites, using x-ray excited optical luminescence to monitor only the emissive structures (see the Perspective by Quintanilla and Liz-Marzán). Aided by theoretical calculations, they identified the electronic states of four-atom silver clusters bound with water molecules that produce bright green emission, thus identifying candidate materials for application in lighting, imaging, and therapeutics. Science, this issue p. 686; see also p. 645. The bright luminescence of Ag-LTA zeolites originates from long-lived triplet states in Ag₄(H₂O)₂ and Ag₄(H₂O)₄ clusters. Silver (Ag) clusters confined in matrices possess remarkable luminescence properties, but little is known about their structural and electronic properties. We characterized the bright green luminescence of Ag clusters confined in partially exchanged Ag–Linde Type A (LTA) zeolites by means of a combination of x-ray excited optical luminescence-extended x-ray absorption fine structure, time-dependent density functional theory calculations, and time-resolved spectroscopy. A mixture of tetrahedral Ag₄(H₂O)ₓ²⁺ (x = 2 and x = 4) clusters occupies the center of a fraction of the sodalite cages. Their optical properties originate from a confined two-electron superatom quantum system with hybridized Ag and water O orbitals delocalized over the cluster. Upon excitation, one electron of the s-type highest occupied molecular orbital is promoted to the p-type lowest unoccupied molecular orbitals and relaxes through enhanced intersystem crossing into long-lived triplet states.
Journal Article
Set the tone: Trustworthy and dominant novel voices classification using explicit judgement and machine learning techniques
2022
Prior research has established that valence-trustworthiness and power-dominance are the two main dimensions of voice evaluation at zero acquaintance. These impressions shape many of our interactions and high-impact decisions, so understanding this dynamic is crucial for many domains. Yet the relationship between the acoustical properties of novel voices and attributions of personality and attitudinal traits remains poorly understood. The fundamental problem of understanding vocal impressions and the decisions based on them is linked to the complex nature of the acoustical properties of voices. To disentangle this relationship, this study extends the line of research on the acoustical bases of vocal impressions in two ways. First, by attempting to replicate previous findings on the bi-dimensional nature of first impressions: using personality judgements and establishing a correspondence between acoustics and voice-first-impression (VFI) dimensions relative to sex (Study 1). Second (Study 2), by exploring the non-linear relationships between acoustical parameters and VFI by means of machine learning models. In accordance with the literature, a bi-dimensional projection comprising valence-trustworthiness and power-dominance evaluations was found to explain 80% of the VFI. In Study 1, brighter (high center of gravity), smoother (low shimmer), and louder (high minimum intensity) voices reflected trustworthiness, while vocal roughness (harmonics-to-noise ratio), energy in the high frequencies (Energy3250), pitch (Quantile 1, Quantile 5), and a lower range of pitch values reflected dominance. In Study 2, above-chance classification of vocal profiles was achieved by both Support Vector Machine (77.78%) and Random Forest (Out-Of-Bag = 36.14) classifiers, generally confirming that machine learning algorithms can predict first impressions from voices.
Hence, these results support a bi-dimensional structure of VFI, emphasize the usefulness of machine learning techniques for understanding vocal impressions, and shed light on the influence of sex on VFI formation.
Journal Article
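The classification step described in Study 2, mapping acoustic features of a voice to a first-impression label, can be illustrated with a deliberately simplified sketch. This is a toy nearest-centroid classifier, not the Support Vector Machine or Random Forest models the study actually used, and every feature value below is invented for illustration only.

```python
# Toy sketch: classify a voice as "trustworthy" vs "dominant" from acoustic
# features, in the spirit of Study 2's machine-learning approach.
# NOTE: nearest-centroid illustration, NOT the paper's SVM/Random-Forest
# pipeline; all feature values are invented.

from math import sqrt

# Hypothetical feature vectors: (spectral center of gravity in Hz,
# shimmer in %, minimum intensity in dB). Per the abstract, brighter,
# smoother, louder voices tended toward trustworthiness.
TRAIN = {
    "trustworthy": [(1800.0, 2.1, 52.0), (1750.0, 2.4, 50.0)],
    "dominant":    [(1100.0, 4.8, 44.0), (1050.0, 5.2, 43.0)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def classify(features):
    """Assign the label whose class centroid is nearest (Euclidean distance)."""
    cents = {label: centroid(vs) for label, vs in TRAIN.items()}
    return min(
        cents,
        key=lambda lab: sqrt(sum((f - c) ** 2 for f, c in zip(features, cents[lab]))),
    )

print(classify((1780.0, 2.2, 51.0)))  # near the trustworthy centroid
```

A real pipeline would standardize each feature before computing distances (here the Hz scale of the spectral centroid dominates), which is one reason the study's non-linear classifiers are better suited to this problem.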
Basal ganglia and cerebellum contributions to vocal emotion processing as revealed by high-resolution fMRI
by Grandjean, Didier, Péron, Julie, Frühholz, Sascha
2021
Until recently, research on the brain networks underlying the decoding and processing of emotional voice prosody focused on modulations in primary and secondary auditory cortices, ventral frontal and prefrontal cortices, and the amygdala. Growing interest in a specific role of the basal ganglia and cerebellum has recently brought these regions into the spotlight. In the present study, we aimed to characterize the role of these subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high-resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia, and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia, especially between the putamen and external globus pallidus, and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus, and subthalamic nucleus, and of several cerebellar lobules and nuclei to the efficient decoding of and response to vocal emotions.
Journal Article
Unfolding and dynamics of affect bursts decoding in humans
2018
The unfolding dynamics of the vocal expression of emotions are crucial for the decoding of the emotional state of an individual. In this study, we analyzed how much information is needed to decode a vocally expressed emotion using affect bursts, a gating paradigm, and linear mixed models. We showed that some emotions (fear, anger, disgust) were significantly better recognized at full duration than others (joy, sadness, neutral). As predicted, recognition improved when a greater proportion of the stimulus was presented. Emotion recognition curves for anger and disgust were best described by higher-order polynomials (second to third), while fear, sadness, neutral, and joy were best described by linear relationships. Acoustic features were extracted for each stimulus and subjected to a principal component analysis for each emotion. The principal components were successfully used to partially predict the accuracy of recognition (i.e., for anger, a component encompassing acoustic features such as fundamental frequency (f0) and jitter; for joy, pitch and loudness range). Furthermore, the impact of the principal components on the recognition of anger, disgust, and sadness changed as longer portions were presented. These results support the importance of studying the unfolding conscious recognition of emotional vocalizations to reveal the differential contributions of specific acoustical feature sets. It is likely that these effects are due to the relevance of threatening information to the human mind and are related to urgent motor responses when people are exposed to potential threats, as compared with emotions where no such urgent response is required (e.g., joy).
Journal Article
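The per-emotion principal component analysis step described above can be sketched in miniature. This toy example computes the first principal component of just two invented acoustic features (f0 and jitter) via the closed-form eigenvector of a 2x2 covariance matrix; the study extracted full acoustic feature sets, and a real pipeline would standardize features first so the Hz scale does not dominate.

```python
# Toy sketch of the PCA step: first principal component of two acoustic
# features. All numbers are invented; a real pipeline would z-score the
# features before PCA (here f0's larger variance dominates the component).

from math import atan2, cos, sin

# Hypothetical (f0 in Hz, jitter in %) per stimulus
samples = [(220.0, 1.1), (240.0, 1.4), (260.0, 1.6), (280.0, 1.9), (300.0, 2.1)]

def first_pc(data):
    """Leading eigenvector of the 2x2 sample covariance matrix (closed form)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # The leading eigenvector of [[sxx, sxy], [sxy, syy]] lies at this angle
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    return (cos(theta), sin(theta))

pc = first_pc(samples)
# Without standardization the component is dominated by f0's larger variance
print(pc)
```

With z-scored inputs the same closed form would weight f0 and jitter comparably, which is closer to how the study's principal components mixed several acoustic features into one predictor of recognition accuracy.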
The Perception and Mimicry of Facial Movements Predict Judgments of Smile Authenticity
2014
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar's neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow's feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.
Journal Article
Tuning the energetics and tailoring the optical properties of silver clusters confined in zeolites
2016
The integration of metal atoms and clusters in well-defined dielectric cavities is a powerful strategy to impart new properties to them that depend on the size and geometry of the confined space as well as on metal–host electrostatic interactions. Here, we unravel the dependence of the electronic properties of metal clusters on space confinement by studying the ionization potential of silver clusters embedded in four different zeolite environments over a range of silver concentrations. Extensive characterization reveals a strong influence of silver loading and host environment on the cluster ionization potential, which is also correlated to the cluster’s optical and structural properties. Through fine-tuning of the zeolite host environment, we demonstrate photoluminescence quantum yields approaching unity. This work extends our understanding of structure–property relationships of small metal clusters and applies this understanding to develop highly photoluminescent materials with potential applications in optoelectronics and bioimaging.
Zeolites encapsulating clusters of silver offer interesting optical properties. Here it is shown how the interactions between these clusters and the framework can be tuned to achieve photoluminescence quantum yields approaching unity.
Journal Article
Behavioral correlates of temporal attention biases during emotional prosody perception
2022
Emotional prosody perception (EPP) unfolds over time, given the intrinsically temporal nature of auditory stimuli, and has been shown to be modulated by spatial attention. Yet the influence of temporal attention (TA) on EPP remains largely unexplored. TA studies manipulate subjects' motor preparedness for an upcoming event: targets to discriminate in short, attended trials arrive quickly, while targets in long, unattended trials arrive at a later time point. Here we used a classic paradigm manipulating TA to investigate its influence on behavioral responses to EPP (n = 100) and found that the TA bias was associated with slower reaction times (RTs) for angry but not neutral prosody, and only during short trials. Importantly, TA biases in accuracy were observed only for angry voices, especially during short trials, suggesting that neutral stimuli are less subject to TA biases. Moreover, emotional facilitation, with faster RTs for angry than for neutral voices, was observed when the stimuli were temporally attended and during short trials, suggesting an influential role of TA during EPP. Together, these results demonstrate for the first time the major influence of TA on RTs and behavioral performance while discriminating emotional prosody.
Journal Article
Music in premature infants enhances high-level cognitive brain networks
by Grandjean, Didier, Hüppi, Petra S., Van De Ville, Dimitri
in Biological Sciences, Brain, Brain architecture
2019
Neonatal intensive care units are willing to apply environmental enrichment via music for preterm newborns. However, no evidence of an effect of music on preterm brain development has been reported to date. Using resting-state fMRI, we characterized a circuitry of interest consisting of three network modules interconnected by the salience network that displays reduced network coupling in preterm compared with full-term newborns. Interestingly, preterm infants exposed to music in the neonatal intensive care units have significantly increased coupling between brain networks previously shown to be decreased in premature infants: the salience network with the superior frontal, auditory, and sensorimotor networks, and the salience network with the thalamus and precuneus networks. Therefore, music exposure leads to functional brain architectures that are more similar to those of full-term newborns, providing evidence for a beneficial effect of music on the preterm brain.
Journal Article
Maternal and paternal infant directed speech is modulated by the child’s age in two and three person interactions
2025
Prosody in infant-directed speech (IDS) serves important functions for the infant’s attention, regulation, and emotional expression. However, how the structural characteristics of this vocal signal are influenced by the presence or absence of one or two parents at different infant ages remains under-investigated. This study aimed to identify the acoustic characteristics of parental vocalizations in 69 families during specific phases of the Lausanne Trilogue Play (LTP) setting. Vocalizations were analyzed in both two-person contexts (mother-baby or father-baby interacting with the infant individually) and three-person contexts (mother-baby or father-baby interactions in the presence of the other parent) at three time points: when the infant was 3, 9, and 18 months old. Videos of interactions were coded, and the parental vocalizations were extracted. Five components of acoustic features related to the prosodic aspects of speech were extracted for subsequent analysis: intensity and its variability, pitch and pitch variability, formant amplitude, the intensity of specific speech frequency bands affecting sound timbre, and the rate of voiced and unvoiced segments per second. The study demonstrated a main effect of infant age on parental acoustic prosodic characteristics, along with significant interactions between infant age and interaction context (two- versus three-person) and between infant age and parental role (mother versus father). Across contexts and parental roles, intensity, pitch, and their variability consistently increased from 3 to 9 months. By 9 months, distinct prosodic patterns emerged, including a reduced syllable rate and formant amplitude, along with an increase in pauses. The mother’s voice exhibited a steady increase in intensity, as well as in pitch and intensity variability. 
Interestingly, when comparing parents across the two contexts, IDS in the three-person context was characterized by a higher syllable rate and fewer pauses, with the most pronounced changes observed at 9 months of age. The development of prosodic characteristics in IDS is not constant across age and is influenced by complex interactions between age phases, parental gender, and contextual factors, with a dynamic adaptation of communication strategies in three-person contexts. The current study underscores the importance of taking a comprehensive perspective when analyzing infant-directed speech within an interactive context involving both fathers and mothers in two- and three-person settings.
Journal Article