Catalogue Search | MBRL
11,351 result(s) for "Speech - physiology"
A brain for speech : a view from evolutionary neuroanatomy
This book discusses the evolution of the human brain and the origin of speech and language. It covers past and present perspectives on the contentious issue of how the language capacity was acquired. Divided into two parts, this insightful work covers several characteristics of the human brain, including the language-specific network, the size of the human brain, its lateralization of functions, and interhemispheric integration, in particular the phonological loop. Aboitiz argues that it is the phonological loop that allowed us to increase our vocal memory capacity and to generate a shared semantic space that gave rise to modern language. The second part examines the neuroanatomy of the monkey brain, vocal-learning birds such as parrots, emerging evidence of vocal-learning capacities in mammals, mirror neurons, and the ecological and social context in which speech evolved in our early ancestors. The book's interdisciplinary scope will appeal to scholars of psychology, neuroscience, linguistics, biology and history. -- Back cover
Musical intervention enhances infants’ neural processing of temporal structure in music and speech
by Zhao, T. Christina; Kuhl, Patricia K.
in Auditory Perception - physiology; Babies; Brain - physiology
2016
Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants’ neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants’ neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants’ neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants’ ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.
Journal Article
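The mismatch response (MMR) described in the abstract above is, at its core, a simple subtraction: the averaged neural response to rare "deviant" stimuli minus the averaged response to frequent "standards" in an oddball sequence. The sketch below illustrates that computation on synthetic data; it is not the authors' MEG pipeline, and all trial counts, epoch lengths, and amplitudes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical oddball design: ~15% of trials are deviants.
n_trials, n_samples = 200, 100            # invented trial count and epoch length
is_deviant = rng.random(n_trials) < 0.15

# Synthetic single-trial epochs (0-500 ms): deviants carry an extra
# negative deflection peaking near 200 ms, mimicking a mismatch response.
t = np.linspace(0, 0.5, n_samples)
epochs = rng.normal(0, 1.0, (n_trials, n_samples))
epochs[is_deviant] += -2.0 * np.exp(-((t - 0.2) ** 2) / 0.002)

# MMR = average deviant response minus average standard response.
mmr = epochs[is_deviant].mean(axis=0) - epochs[~is_deviant].mean(axis=0)
peak_latency_ms = 1000 * t[np.argmin(mmr)]
print(f"MMR peak amplitude: {mmr.min():.2f} (a.u.) at ~{peak_latency_ms:.0f} ms")
```

A larger (more negative) MMR to temporal-structure violations is the effect the intervention group showed in both auditory and prefrontal regions.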
Multiple talker processing in autistic adult listeners
2024
Accommodating talker variability is a complex and multi-layered cognitive process. It involves shifting attention to the vocal characteristics of the talker as well as to the linguistic content of their speech. Because voice and phonological processing are interdependent, multi-talker environments typically incur additional processing costs compared to single-talker environments. A failure or inability to efficiently distribute attention over multiple acoustic cues in the speech signal may have detrimental consequences for language learning. Yet no studies have examined effects of multi-talker processing in populations with atypical perceptual, social and language processing for communication, including autistic people. Employing a classic word-monitoring task, we investigated effects of talker variability in Australian English autistic (n = 24) and non-autistic (n = 28) adults. Listeners responded to target words (e.g., apple, duck, corn) in randomised sequences of words. Half of the sequences were spoken by a single talker and the other half by multiple talkers. Results revealed that autistic participants' sensitivity scores to accurately spotted target words did not differ from those of non-autistic participants, regardless of whether the words were spoken by a single talker or by multiple talkers. As expected, the non-autistic group showed the well-established processing cost associated with talker variability (e.g., slower response times). Remarkably, autistic listeners' response times did not differ across single- and multi-talker conditions, indicating that they did not show perceptual processing costs when accommodating talker variability. The present findings have implications for theories of autistic perception and of speech and language processing.
Journal Article
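The two measures this study compares across groups are sensitivity to target words and the response-time cost of talker variability. A standard way to compute sensitivity in such a detection task is signal-detection d′; the sketch below shows that calculation with made-up counts (the specific numbers, and the log-linear correction used to avoid infinite z-scores, are illustrative assumptions, not the paper's reported analysis).

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a simple log-linear correction
    so that rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Invented counts: 40 targets and 160 distractors per condition.
single_talker = d_prime(hits=38, misses=2, false_alarms=4, correct_rejections=156)
multi_talker = d_prime(hits=37, misses=3, false_alarms=5, correct_rejections=155)

# The classic talker-variability cost shows up as slower responses in the
# multi-talker condition; these group means are hypothetical.
rt_single_ms, rt_multi_ms = 520.0, 575.0
print(f"d' single={single_talker:.2f}, multi={multi_talker:.2f}, "
      f"RT cost={rt_multi_ms - rt_single_ms:.0f} ms")
```

In the study's terms, the surprising result was that the autistic group's RT cost was absent while d′ remained comparable across groups.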
CDP-choline and galantamine, a personalized α7 nicotinic acetylcholine receptor targeted treatment for the modulation of speech MMN indexed deviance detection in healthy volunteers: a pilot study
by Knott, Verner; Blais, Crystal M.; Smith, Dylan
in Acetylcholine receptors (nicotinic); Allosteric properties; Choline
2020
Rationale: The combination of CDP-choline, an α7 nicotinic acetylcholine receptor (α7 nAChR) agonist, with galantamine, a positive allosteric modulator of nAChRs, is believed to counter the fast desensitization rate of the α7 nAChRs and may be of interest for schizophrenia (SCZ) patients. Beyond the positive and negative clinical symptoms, deficits in early auditory prediction-error processes are also observed in SCZ. Regularity violations activate these mechanisms, which are indexed by electroencephalography-derived mismatch negativity (MMN) event-related potentials (ERPs) in response to auditory deviance. Objectives/methods: This pilot study in thirty-three healthy humans assessed the effects of an optimized α7 nAChR strategy combining CDP-choline (500 mg) with galantamine (16 mg) on speech-elicited MMN amplitude and latency measures. The randomized, double-blinded, placebo-controlled, and counterbalanced design with a baseline stratification method allowed for assessment of individual response differences. Results: The acute CDP-choline/galantamine treatment increased MMN generation in individuals with low baseline MMN amplitude for frequency, intensity, duration, and vowel deviants. Conclusions: These results, observed primarily at temporal recording sites overlying the auditory cortex, implicate α7 nAChRs in the enhancement of speech deviance detection and warrant further examination with respect to dysfunctional auditory deviance processing in individuals with SCZ.
Journal Article
The calming effect of a new wearable device during the anticipation of public speech
2017
We assessed the calming effect of doppel, a wearable device that delivers heartbeat-like tactile stimulation on the wrist. We tested whether the use of doppel would have a calming effect on physiological arousal and subjective reports of state anxiety during the anticipation of public speech, a validated experimental task that is known to induce anxiety. Two groups of participants were tested in a single-blind design. Both groups wore the device on their wrist during the anticipation of public speech, and were given the cover story that the device was measuring blood pressure. For only one group, the device was turned on and delivered a slow heartbeat-like vibration. Participants in the doppel active condition displayed smaller increases in skin conductance responses relative to baseline and reported lower anxiety levels compared to the control group. Therefore the presence, as opposed to the absence, of a slow rhythm, here instantiated as an auxiliary slow heartbeat delivered through doppel, had a significant calming effect on physiological arousal and subjective experience during a socially stressful situation. This finding is discussed in relation to past research on responses and entrainment to rhythms, and their effects on arousal and mood.
Journal Article
Speech rhythms and their neural foundations
2020
The recognition of spoken language has typically been studied by focusing on either words or their constituent elements (for example, low-level features or phonemes). More recently, the 'temporal mesoscale' of speech has been explored, specifically regularities in the envelope of the acoustic signal that correlate with syllabic information and that play a central role in production and perception processes. The temporal structure of speech at this scale is remarkably stable across languages, with a preferred range of rhythmicity of 2–8 Hz. Importantly, this rhythmicity is required by the processes underlying the construction of intelligible speech. Much current work focuses on audio-motor interactions in speech, highlighting behavioural and neural evidence that demonstrates how properties of perceptual and motor systems, and their relation, can underlie the mesoscale speech rhythms. The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscillator, a conjecture that aligns well with current proposals highlighting the fundamental role of neural oscillations in perception and cognition. The findings also show motor theories (of speech) in a different light, placing new mechanistic constraints on accounts of the action–perception interface. Syllables play a central role in speech production and perception; in this Review, Poeppel and Assaneo outline how a simple biophysical model of the speech production system as an oscillator explains the remarkably stable rhythmic structure of spoken language.
Journal Article
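The 2–8 Hz rhythmicity the review describes lives in the amplitude envelope of the acoustic signal: the slow modulation that tracks syllables. A minimal sketch of how one can see such a peak, using a synthetic amplitude-modulated tone rather than real speech (the 5 Hz "syllable rate" and sampling rate are assumptions for illustration):

```python
import numpy as np

fs = 1000                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)                # 10 s of signal
syllable_rate = 5.0                         # Hz, a plausible syllabic rate
envelope = 1 + 0.8 * np.sin(2 * np.pi * syllable_rate * t)  # slow amplitude envelope
carrier = np.sin(2 * np.pi * 200 * t)       # stand-in for the acoustic carrier
signal = envelope * carrier

# Recover the envelope by rectification, then inspect its spectrum.
rectified = np.abs(signal)
spectrum = np.abs(np.fft.rfft(rectified - rectified.mean()))
freqs = np.fft.rfftfreq(len(rectified), 1 / fs)

# Restrict to the 1-20 Hz modulation range before picking the peak.
band = (freqs >= 1) & (freqs <= 20)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(f"dominant modulation rate: {peak_hz:.1f} Hz")
```

For real speech the same analysis (with a proper envelope extraction, e.g. Hilbert transform plus low-pass filtering) yields the broad 2–8 Hz modulation peak the review cites as stable across languages.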
Neural language networks at birth
by Anwander, Alfred; Lohmann, Gabriele; Baldoli, Cristina
in Acoustic Stimulation - methods; Adult; Adults
2011
The ability to learn language is a human trait. In adults and children, brain imaging studies have shown that auditory language activates a bilateral frontotemporal network with a left-hemispheric dominance. It is an open question whether these activations represent the complete neural basis for language present at birth. Here we demonstrate that in 2-day-old infants, the language-related neural substrate is fully active in both hemispheres, with a preponderance in the right auditory cortex. Functional and structural connectivities within this neural network, however, are immature, with strong connectivities only between the two hemispheres, contrasting with the adult pattern of prevalent intrahemispheric connectivities. Thus, although the brain responds to spoken language even at birth, thereby providing a strong biological basis for acquiring language, the progressive maturation of intrahemispheric functional connectivity is still to be established through language exposure as the brain develops.
Journal Article
Spatiotemporal dynamics of auditory attention synchronize with speech
by Maess, Burkhard; Wöstmann, Malte; Herrmann, Björn
in Acoustic Stimulation; Adult; Attention - physiology
2016
Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content.
Journal Article
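The hemispheric alpha-power lateralization this abstract describes is commonly quantified as a normalized index, (ipsilateral − contralateral) / (ipsilateral + contralateral), computed over time so its fluctuation can be related to the speech rate. A hedged sketch with invented power values (not the study's data or exact analysis):

```python
import numpy as np

def alpha_lateralization_index(power_ipsi, power_contra):
    """Normalized lateralization of alpha power: positive values mean
    alpha is higher in the hemisphere ipsilateral to the attended ear."""
    power_ipsi = np.asarray(power_ipsi, dtype=float)
    power_contra = np.asarray(power_contra, dtype=float)
    return (power_ipsi - power_contra) / (power_ipsi + power_contra)

# Hypothetical alpha power (a.u.) in successive time windows; the study's
# point is that this index is not steady but fluctuates at the speech rate.
ipsi = [1.4, 1.6, 1.3, 1.7]     # hemisphere ipsilateral to attended ear
contra = [1.0, 0.9, 1.1, 0.8]   # contralateral hemisphere
ali = alpha_lateralization_index(ipsi, contra)
print(np.round(ali, 2))
```

The depth of this index's modulation at the speech rate is what predicted stream-specific comprehension in the study.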
The cocktail-party problem revisited: early processing and selection of multi-talker speech
How do we recognize what one person is saying when others are speaking at the same time? This review summarizes the broad body of research in psychoacoustics, auditory scene analysis, and attention that this question has stimulated, all dealing with early processing and selection of speech. Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and "unmasking" resulting from binaural listening. Psychoacoustic models have been developed that can predict these effects accurately, albeit using computational approaches rather than approximations of neural processing. Grouping—the segregation and streaming of sounds—represents a subsequent processing stage that interacts closely with attention. Sounds can be easily grouped—and subsequently selected—using primitive features such as spatial location and fundamental frequency. More complex processing is required when lexical, syntactic, or semantic information is used. Whereas it is now clear that such processing can take place preattentively, there is also evidence that the processing depth depends on the task-relevancy of the sound. This is consistent with the presence of a feedback loop in attentional control, triggering enhancement of to-be-selected input. Despite recent progress, many issues remain unresolved: there is a need for integrative models that are neurophysiologically plausible, for research into grouping based on cues other than spatial or voice-related ones, for studies explicitly addressing endogenous and exogenous attention, for an explanation of the remarkable sluggishness of attention focused on dynamically changing sounds, and for research elucidating the distinction between binaural speech perception and sound localization.
Journal Article