Catalogue Search | MBRL
15 result(s) for "Casserly, Elizabeth D."
Mirrors and toothaches: commonplace manipulations of non-auditory feedback availability change perceived speech intelligibility
by Marino, Francesca R.; Casserly, Elizabeth D.
in multisensory integration; Neuroscience; somatosensory feedback
2024
This paper investigates the impact of two non-technical speech feedback perturbations outside the auditory modality: topical application of commercially available benzocaine to reduce somatosensory feedback from speakers’ lips and tongue tip, and the presence of a mirror to provide fully detailed visual self-feedback. In Experiment 1, speakers were recorded under normal quiet conditions (i.e., baseline), then again with benzocaine application plus auditory degradation, and finally with the addition of mirror feedback. Speech produced under normal and both feedback-altered conditions was assessed via naïve listeners’ intelligibility discrimination judgments. Listeners judged speech produced under bisensory degradation to be less intelligible than speech from the un-degraded baseline, and with a greater degree of difference than previously observed with auditory-only degradation. The introduction of mirror feedback, however, did not result in relative improvements in intelligibility. Experiment 2, therefore, assessed the effect of a mirror on speech intelligibility in isolation, with no other sensory feedback manipulations. Speech was recorded at baseline and then again in front of a mirror, and relative intelligibility was discriminated by naïve listeners. Speech produced with mirror feedback was judged as less intelligible than baseline tokens, indicating a negative impact of visual self-feedback in the absence of other sensory manipulations. The results of both experiments demonstrate that relatively accessible manipulations of non-auditory sensory feedback can produce speech-relevant effects, and that those effects are perceptible to naïve listeners.
Journal Article
The Viability of Media Interviews as Materials for Auditory Training
2019
Purpose: Rehabilitative auditory training for people with hearing loss faces 2 primary challenges: generalization of learning to novel contexts and user adherence to training goals. We hypothesized that using interview excerpts from popular media as training materials would have the potential to positively influence both of these areas. Interviews contain predictable, structured complexity that promotes perceptual generalization and are also designed to be engaging for consumers. This study tested the viability of such popular media interviews as training materials, comparing their effectiveness to that obtained with sentence transcription training. Method: Young adults with normal hearing (N = 60) completed 1 hr of transcription training using noise-vocoded materials, simulating acoustic perception through an 8-channel cochlear implant. Participants completed pre- and posttraining assessments of vocoded speech perception in quiet and in noise, along with posttraining high-variability sentence recognition and cued isolated word recognition. Scores on all tests were compared across 4 randomly assigned groups differing in training materials: audiovisual interviews, audio-only interviews, isolated sentences, and undegraded isolated sentences (providing an untrained control comparison group). Results: Recognition in quiet and in noise improved with both types of interview-based training, and interview training groups outperformed the control group on all generalization tests. Participants in the audiovisual interview group also reported significantly higher, more sustained engagement in a retrospective survey. Conclusions: Media interviews appear to be at least as effective as isolated sentences for transcription-based auditory training in simulated hearing loss settings with young adults and may improve engagement and generalization of benefit in auditory training applications.
Journal Article
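Several of these records describe training materials that were noise-vocoded to simulate acoustic perception through an 8-channel cochlear implant. As a rough illustration of that class of manipulation only, the Python (NumPy/SciPy) sketch below splits a signal into log-spaced bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise; the band edges, filter orders, and envelope cutoff are illustrative assumptions, not parameters taken from these studies.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0, env_cutoff=30.0):
        # Replace the fine structure of `signal` with band-limited noise,
        # keeping only per-channel amplitude envelopes (illustrative parameters).
        edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges in Hz
        env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        noise = np.random.randn(len(signal))
        out = np.zeros(len(signal))
        for low, high in zip(edges[:-1], edges[1:]):
            band_sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
            band = sosfiltfilt(band_sos, signal)                         # analysis band
            env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0, None)   # smoothed envelope
            carrier = sosfiltfilt(band_sos, noise)                       # noise in the same band
            out += env * carrier                                         # envelope-modulated noise
        # Match the overall RMS of the input so listening levels stay comparable.
        out *= np.sqrt(np.mean(signal ** 2) / (np.mean(out ** 2) + 1e-12))
        return out

Passing a recorded sentence and its sample rate through noise_vocode(x, fs) preserves the temporal envelope of the speech while discarding spectral detail, which is roughly the kind of degraded input the training studies listed here ask listeners to learn to transcribe.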
Auditory Training With Multiple Talkers and Passage-Based Semantic Cohesion
2017
Purpose: Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level top-down processing to produce more generalized learning. Method: Normal-hearing participants (N = 64) were randomly assigned to 1 of 4 auditory training protocols using noise-vocoded speech simulating the processing of an 8-channel cochlear implant: sentence-based single-talker training, training with 24 different talkers, passage-based transcription training, and a control (transcribing unvocoded sentence materials). In all cases, participants completed 2 pretests under cochlear implant simulation, 1 hr of training, and 5 posttests to assess perceptual learning and cross-context generalization. Results: Performance above the control was seen in all 3 experimental groups for sentence recognition in quiet. In addition, the multitalker training method generalized to a context word-recognition task, and the passage training method caused gains in sentence recognition in noise. Conclusion: The gains of the multitalker and passage training groups over the control suggest that, with relatively small modifications, improvements to the generalized outcomes of auditory training protocols may be possible.
Journal Article
Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings
by Pisoni, David B.; Casserly, Elizabeth D.
in Acknowledgment; Acoustic Stimulation - instrumentation; Acoustic Stimulation - methods
2015
Purpose: Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results: Subjects receiving interactive training showed significant learning on the sentence recognition in quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions: Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task.
Journal Article