Search Results

Filters:
  • Discipline
  • Is Peer Reviewed
  • Item Type
  • Subject
  • Year (From / To)
  • More Filters
26 results for "Gick, Bryan"
Aero-tactile integration in speech perception
Feel the noise: When we listen to human speech we use a combination of the senses: the ears, obviously, and the eyes, to see how a speaker's face changes the perception of consonant sounds. Experiments seeking to add the sense of touch to the mix have until now been inconclusive. Many languages use an expulsion of air to change vowel or consonant sounds; in English it distinguishes a sound like 'da' from the microphone-popping 'pa'. Bryan Gick and Donald Derrick take that 'puff of air' as the starting point for a study of whether the sense of touch can contribute to what we 'hear'. They applied small, inaudible air puffs to the skin of volunteers who were simultaneously listening to a series of consonant sounds. Air puffs aimed at either the hand or the neck made it more likely that aspirated sounds would be heard, so 'b' was misheard as 'p' following an air puff. This work could prove useful in the future development of audio and telecommunication aids for the hearing impaired.

Auditory perception can be enhanced or interfered with by visual information from a speaker's face, but previous studies of whether tactile information influences speech perception have been limited. Here, by applying inaudible air puffs to participants' skin, mimicking the tiny bursts of aspiration produced by certain speech sounds, the authors find that syllables are more likely to be heard as aspirated, demonstrating that tactile information is also integrated in auditory perception.

Visual information from a speaker's face can enhance [1] or interfere with [2] accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies [3, 4], and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities [5]. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration. However, previous studies have found an influence of tactile input on speech perception only under limited circumstances, either where perceivers were aware of the task [6, 7] or where they had received training to establish a cross-modal mapping [8-10]. Here we show that perceivers integrate naturalistic tactile information during auditory speech perception without previous training. Drawing on the observation that some speech sounds produce tiny bursts of aspiration (such as English 'p') [11], we applied slight, inaudible air puffs to participants' skin at one of two locations: the right hand or the neck. Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to mishear 'b' as 'p'). These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information.
Gait change in tongue movement
During locomotion, humans switch gaits from walking to running, and horses from walking to trotting to cantering to galloping, as they increase their movement rate. It is unknown whether gait change leading to a wider movement rate range is limited to locomotive-type behaviours, or instead is a general property of any rate-varying motor system. The tongue during speech provides a motor system that can address this gap. In controlled speech experiments, using phrases containing complex tongue-movement sequences, we demonstrate distinct gaits in tongue movement at different speech rates. As speakers widen their tongue-front displacement range, they gain access to wider speech-rate ranges. At the widest displacement ranges, speakers also produce categorically different patterns for their slowest and fastest speech. Speakers with the narrowest tongue-front displacement ranges show one stable speech-gait pattern, and speakers with widest ranges show two. Critical fluctuation analysis of tongue motion over the time-course of speech revealed these speakers used greater effort at the beginning of phrases—such end-state-comfort effects indicate speech planning. Based on these findings, we expect that categorical motion solutions may emerge in any motor system, providing that system with access to wider movement-rate ranges.
Postural adaptation to microgravity underlies fine motor impairment in astronauts’ speech
Understanding the role of anti-gravity behaviour in fine motor control is crucial to achieving a unified theory of motor control. We compare speech from astronauts before and immediately after microgravity exposure to evaluate the role of anti-gravity posture during fine motor skills. Here we show a generalized lowering of vowel space after space travel, which suggests a generalized postural shift of the articulators. Biomechanical modelling of gravitational effects on the vocal tract supports this analysis—the jaw and tongue are pulled down in 1g, but movement trajectories of the tongue are otherwise unaffected. These results demonstrate the role of anti-gravity posture in fine motor behaviour and provide a basis for the unification of motor control models across domains.
Cortical control of posture in fine motor skills: evidence from inter-utterance rest position
The vocal tract continuously employs tonic muscle activity in the maintenance of postural configurations. Gamma-band activity in the sensorimotor cortex underlies transient movements during speech production, yet little is known about the neural control of postural states in the vocal tract. Simultaneously, there is evidence that sensorimotor beta-band activations contribute to a system of inhibition and state maintenance that is integral to postural control in the body. Here we use electrocorticography to assess the contribution of sensorimotor beta-band activity during speech articulation and postural maintenance, and demonstrate that beta-band activity corresponds to the inhibition of discrete speech movements and the maintenance of tonic postural states in the vocal tract. Our findings identify consistencies between the neural control of posture in speech and what is previously reported in gross motor contexts, providing support for a unified theory of postural control across gross and fine motor skills.
Bilinguals Use Language-Specific Articulatory Settings
Purpose: Previous work has shown that monolingual French and English speakers use distinct articulatory settings, the underlying articulatory posture of a language. In the present article, the authors report on an experiment in which they investigated articulatory settings in bilingual speakers. The authors first tested the hypothesis that in order to sound native-like, bilinguals must use distinct, language-specific articulatory settings in monolingual mode. The authors then tested the hypothesis that in bilingual mode, a bilingual individual's articulatory setting is identical to the monolingual-mode setting of 1 of his or her languages. Method: Eight French-English bilinguals each read 90 English and 90 French sentences, and the authors measured their interspeech posture (ISP) using optical tracking of the lips and jaw and ultrasound imaging of the tongue. Results: Results show that bilingual speakers who are perceived as native in both languages exhibit distinct, language-specific ISPs, and those who are not perceived as native in one or more languages do not. In bilingual mode, bilinguals use an ISP that is equivalent to the monolingual-mode ISP of their currently most used language. The most balanced bilingual used a French lip ISP but an English tongue-tip ISP. Conclusion: Results support the claim that bilinguals who sound native in each of their languages have distinct articulatory settings for each language.
Speaking Tongues Are Actively Braced
Purpose: Bracing of the tongue against opposing vocal-tract surfaces such as the teeth or palate has long been discussed in the context of biomechanical, somatosensory, and aeroacoustic aspects of tongue movement. However, previous studies have tended to describe bracing only in terms of contact (rather than mechanical support), and only in limited phonetic contexts, supporting a widespread view of bracing as an occasional state, peculiar to specific sounds or sound combinations. Method: The present study tests the pervasiveness and effortfulness of tongue bracing in continuous English speech passages using electropalatography and 3-D biomechanical simulations. Results: The tongue remains in continuous contact with the upper molars during speech, with only rare exceptions. Use of the term bracing (rather than merely "contact") is supported here by biomechanical simulations showing that lateral bracing is an active posture requiring dedicated muscle activation; further, loss of lateral contact for onset /l/ allophones is found to be consistently accompanied by contact of the tongue blade against the anterior palate. In the rare instances where direct evidence for contact is lacking (only in a minority of low vowel and postvocalic /l/ tokens), additional biomechanical simulations show that lateral contact is maintained against pharyngeal structures dorsal to the teeth. Conclusion: Taken together, these results indicate that tongue bracing is both pervasive and active in running speech and essential in understanding tongue movement control.
Spatial and Temporal Properties of Gestures in North American English /r/
Systematic syllable-based variation has been observed in the relative spatial and temporal properties of supralaryngeal gestures in a number of complex segments. Generally, more anterior gestures tend to appear at syllable peripheries while less anterior gestures occur closer to syllable peaks. Because previous studies compared only two gestures, it is not clear how to characterize the gestures, nor whether timing offsets are categorical or gradient. North American English /r/ is an unusually complex segment, having three supralaryngeal constrictions, but technological limitations have hindered simultaneous study of all three. A novel combination of M-mode ultrasound and optical tracking was used to measure gestural relations in productions of /r/ by nine speakers of Canadian English. Results show a front-to-back timing pattern in syllable-initial position: Lip then tongue blade (TB), then tongue root (TR). In syllable-final position TR and Lip are followed by TB. There was also a reduction in magnitude affecting Lip and TB gestures in syllable-final position and TR in syllable-initial position. These findings are not wholly consistent with any theory advanced thus far to explain syllable-based allophonic variation. It is proposed that the relative magnitude of gestures is a better predictor of timing than relative anteriority or an assigned phonological classification.
The Quantal Larynx: The Stable Regions of Laryngeal Biomechanics and Implications for Speech Production
Purpose: Recent proposals suggest that (a) the high dimensionality of speech motor control may be reduced via modular neuromuscular organization that takes advantage of intrinsic biomechanical regions of stability and (b) computational modeling provides a means to study whether and how such modularization works. In this study, the focus is on the larynx, a structure that is fundamental to speech production because of its role in phonation and numerous articulatory functions. Method: A 3-dimensional model of the larynx was created using the ArtiSynth platform (http://www.artisynth.org). This model was used to simulate laryngeal articulatory states, including inspiration, glottal fricative, modal prephonation, plain glottal stop, vocal-ventricular stop, and aryepiglotto-epiglottal stop and fricative. Results: Speech-relevant laryngeal biomechanics is rich with "quantal" or highly stable regions within muscle activation space. Conclusions: Quantal laryngeal biomechanics complement a modular view of speech control and have implications for the articulatory-biomechanical grounding of numerous phonetic and phonological phenomena.
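The paper's central idea, that muscle activation space contains broad plateaus where the articulatory output barely changes, can be illustrated with a toy computation. The sketch below is an assumption-laden stand-in: the sigmoid output function, the threshold, and all names are invented for illustration and do not come from the ArtiSynth model used in the study.

```python
# Hypothetical illustration of "quantal" (stable) regions in a muscle
# activation space. The toy output function below stands in for a full
# biomechanical simulation; it is a sigmoid-like mapping chosen only
# because it produces plateaus at the extremes.
import numpy as np

def glottal_aperture(activation):
    """Toy mapping from a single muscle activation (0-1) to an
    articulatory output, with plateaus at low and high activation."""
    return 1.0 / (1.0 + np.exp(-12.0 * (activation - 0.5)))

activations = np.linspace(0.0, 1.0, 201)
outputs = glottal_aperture(activations)

# A region is "quantal" where the output barely changes as the input
# changes, i.e. where |d(output)/d(activation)| is small.
sensitivity = np.abs(np.gradient(outputs, activations))
stable = sensitivity < 0.5  # threshold chosen for illustration only

# Report contiguous stable regions of activation space.
edges = np.flatnonzero(np.diff(stable.astype(int)))
bounds = np.concatenate(([0], edges + 1, [len(activations)]))
for lo, hi in zip(bounds[:-1], bounds[1:]):
    if stable[lo]:
        print(f"stable region: activation {activations[lo]:.2f}"
              f" to {activations[hi - 1]:.2f}")
```

Run as-is, this flags two stable regions (roughly the bottom and top quarters of the activation range), mirroring how distinct laryngeal states can each sit in their own insensitive patch of activation space.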
The Use of Ultrasound in Remediation of North American English /r/ in 2 Adolescents
Purpose: Ultrasound can provide images of the tongue during speech production. The present study set out to examine the potential utility of ultrasound in remediation of North American English /r/. Method: The participants were 2 Canadian English-speaking adolescents who had not yet acquired /r/. The study included an initial period without ultrasound and 13 treatment sessions, each 1 hr long, using ultrasound. Speech samples were recorded at screening and immediately before and after treatment. Samples were analyzed acoustically and with listener judgments. Ultrasound images were obtained before, during, and after the treatment period. Results: Three speech-language pathologists unfamiliar with the participants rated significantly more posttreatment tokens as accurate [r]s in single words and some phrases. Acoustic analyses showed an expected lowering of the third formant after treatment. A qualitative observation of posttreatment ultrasound images for accurate [r] tokens showed tongue shapes to be more similar to those of typical adults than had been observed before treatment. Participants needed continued practice of their newly acquired skills in sentences and conversation. Conclusion: Two-dimensional dynamic ultrasound appears to have potential utility for remediation of /r/ in speakers with residual /r/ impairment. Further research is needed with larger numbers of participants to establish the relative efficacy of ultrasound in treatment. Key Words: ultrasound, remediation of /r/, residual articulation disorder
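As a rough illustration of the acoustic analysis mentioned above, the sketch below estimates the mean third formant (F3) of a recording before and after treatment. It assumes Praat-style formant tracking via the parselmouth Python library; the file names and sampling choices are hypothetical, and the study's exact measurement procedure is not given in the abstract.

```python
# A minimal sketch of checking the reported F3 lowering acoustically
# with Praat via the parselmouth library. File names and the sampling
# grid are assumptions, not details from the study.
import numpy as np
import parselmouth

def mean_f3(wav_path):
    """Estimate the mean third formant (Hz) over a recording."""
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg(maximum_formant=5500)
    # Sample F3 at 100 points, skipping the edges of the recording.
    times = np.linspace(0.05, snd.duration - 0.05, 100)
    f3 = [formants.get_value_at_time(3, t) for t in times]
    return np.nanmean(f3)  # frames without a measurable F3 return NaN

# Hypothetical pre-/post-treatment recordings of the same word list.
before = mean_f3("r_tokens_pretreatment.wav")
after = mean_f3("r_tokens_posttreatment.wav")
print(f"mean F3 before: {before:.0f} Hz, after: {after:.0f} Hz")
# A lower post-treatment F3 is the acoustic signature of a more
# accurate North American English /r/.
```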