1,592 results for "Hearing Experiments."
Comparison of Supervised-Learning Models and Auditory Discrimination of Infant Cries for the Early Detection of Developmental Disorders
Infant cry classification can be performed in two ways: computational classification of cries or auditory discrimination by human listeners. This article compares both approaches. An auditory listening experiment examined whether various listener groups (naive listeners, parents, nurses/midwives, and therapists) were able to distinguish auditorily between healthy and pathological cries and to differentiate the various pathologies from each other. Listeners were trained by listening to cries of healthy infants and of infants with cleft lip and palate, hearing impairment, laryngomalacia, asphyxia, and brain damage. After training, listeners allocated 18 infant cries to the cry groups. Multiple supervised-learning classification models were trained on the cries' acoustic properties, and their accuracy was compared with that of the human listeners. With a Kappa value of 0.491, listeners allocated the cries to the healthy group and the five pathological groups with moderate performance. With a sensitivity of 0.64 and a specificity of 0.89, listeners identified whether a cry was pathological more reliably than they separated the individual pathologies from one another. Generalized linear mixed models found no significant differences in classification accuracy between the listener groups, but significant differences between the pathological cry types. The supervised-learning classification models performed significantly better than the human listeners in classifying infant cries, reaching an overall Kappa value of up to 0.837.
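The abstract does not name the model family or the acoustic feature set, but the comparison it reports maps onto a standard supervised-learning pipeline. Below is a minimal sketch, assuming precomputed acoustic features and using Cohen's kappa, the agreement metric the abstract quotes; the feature matrix, labels, and classifier choice are placeholders, not the paper's data or method.

```python
# Sketch of a supervised cry-classification pipeline: acoustic features in,
# one of six cry classes out, Cohen's kappa as the agreement metric.
# The features and data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per cry, columns standing in for
# acoustic properties (e.g., fundamental-frequency statistics, durations).
X = rng.normal(size=(300, 12))
# Six classes: healthy plus five pathologies (coded 0..5).
y = rng.integers(0, 6, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Chance-corrected agreement, comparable to the abstract's reported values
# (0.491 for listeners, up to 0.837 for the models).
kappa = cohen_kappa_score(y_test, clf.predict(X_test))
print(f"kappa = {kappa:.3f}")
```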
Impact of Structural Parameters on the Auditory Perception of Musical Sounds in Closed Spaces: An Experimental Study
This study investigates the impact of structural parameters (volume, shape, and wall absorption coefficient) of closed spaces on the auditory perception of three different types of musical sound. Using binaural auralization and room impulse response (RIR) measurement, the paper first verifies the reliability of the ODEON software for simulating simplified closed-space auditory scenes. Then, 96 binaural music signals are synthesized for eight simulated closed spaces with different structural parameters. Finally, an auditory perception experiment is conducted on the synthesized binaural signals using the pair-comparison method, and an analysis of variance is performed on the experimental results. It is concluded that (1) a hemispherical cabin with a small volume and a large wall sound absorption coefficient is most suitable for a single instrument, such as the flute or violin, and (2) a cabin with a large volume is suitable for music with multiple instruments, such as a symphony, but the walls should not be totally reflective. The experimental scheme and results of the current study provide guidance for designing the inner structure of a concert hall to achieve preferable auditory perception in practice.
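The synthesis step the abstract describes (producing binaural music signals for simulated rooms) is conventionally done by convolving an anechoic recording with a binaural room impulse response exported from the room-acoustics simulation. A minimal sketch of that auralization step, with hypothetical file names standing in for ODEON output:

```python
# Convolve an anechoic music recording with a simulated binaural room
# impulse response (BRIR) to obtain the binaural stimulus presented to
# listeners. File names are hypothetical examples.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

music, fs = sf.read("anechoic_flute.wav")        # mono anechoic recording
brir, fs_brir = sf.read("hemisphere_brir.wav")   # 2-channel BRIR (left, right)
assert fs == fs_brir, "sample rates must match"

# Convolve the dry signal with each ear's impulse response.
left = fftconvolve(music, brir[:, 0])
right = fftconvolve(music, brir[:, 1])
binaural = np.stack([left, right], axis=1)

# Normalize to avoid clipping before writing the stimulus file.
binaural /= np.max(np.abs(binaural))
sf.write("flute_hemisphere_binaural.wav", binaural, fs)
```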
Headphone screening to facilitate web-based auditory experiments
Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants but sacrifice control over sound presentation and are therefore not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining whether online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing.
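The stimulus construction is concrete enough to sketch. Assuming the common variant of this paradigm (three pure-tone intervals, one attenuated as the correct "quietest" answer, one antiphase across the stereo channels so that it partially cancels over loudspeakers), the following sketch generates one trial; the frequency, level difference, and durations are illustrative assumptions, not the paper's exact values.

```python
# Three pure-tone intervals: normal, attenuated (the correct answer over
# headphones), and antiphase across channels (partially cancels over
# loudspeakers and is then mistaken for the quietest). Parameter values
# are illustrative assumptions.
import numpy as np
import soundfile as sf

fs = 44100
dur, gap = 1.0, 0.5
t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * 200.0 * t)

def interval(gain_db=0.0, antiphase=False):
    """Return a (samples, 2) stereo tone; invert one channel if antiphase."""
    g = 10 ** (gain_db / 20)
    left = g * tone
    right = -left if antiphase else left
    return np.stack([left, right], axis=1)

silence = np.zeros((int(fs * gap), 2))
# Interval order would be randomized per trial; fixed here for clarity.
trial = np.concatenate(
    [interval(), silence, interval(gain_db=-6.0), silence,
     interval(antiphase=True)], axis=0
)
sf.write("headphone_check_trial.wav", 0.5 * trial, fs)
```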
Brain-informed speech separation (BISS) for enhancement of target speaker in multitalker speech perception
Highlights: multi-talker speech perception is challenging for people with hearing loss; automatic speech separation cannot help without first identifying the target speaker; we used the brain signal of listeners to jointly identify and extract target speech; this method eliminates the need for separating sound sources or knowing their number; we show the efficacy of this method in both normal-hearing and hearing-impaired subjects. Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of "neuro-steered" hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a strong candidate for neuro-steered hearing-assistive devices.
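The abstract does not detail the network, but the core idea (conditioning a speech-separation front-end directly on a brain-decoded representation of the attended speech) can be illustrated with a toy mask-estimation model. The sketch below concatenates a decoded attended-speech envelope to each frame of the mixture spectrogram; the shapes, layers, and conditioning scheme are illustrative assumptions, not the paper's architecture.

```python
# Toy brain-informed mask estimator: the network sees the mixture magnitude
# spectrogram plus a brain-decoded envelope of the attended speech and
# predicts a mask for the attended talker. Illustrative only.
import torch
import torch.nn as nn

class BissMaskNet(nn.Module):
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        # Input per frame: mixture magnitude spectrum + 1 envelope value.
        self.rnn = nn.LSTM(n_freq + 1, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_freq)

    def forward(self, mix_mag, envelope):
        # mix_mag:  (batch, frames, n_freq) mixture magnitude spectrogram
        # envelope: (batch, frames) brain-decoded attended-speech envelope
        x = torch.cat([mix_mag, envelope.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)
        mask = torch.sigmoid(self.out(h))   # per-bin mask in [0, 1]
        return mask * mix_mag               # estimate of the attended talker

# Shape check with random tensors standing in for real data.
net = BissMaskNet()
mix = torch.rand(4, 100, 257)
env = torch.rand(4, 100)
print(net(mix, env).shape)  # torch.Size([4, 100, 257])
```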
Prevention of acquired sensorineural hearing loss in mice by in vivo Htra2 gene editing
Background: Aging, noise, infection, and ototoxic drugs are the major causes of acquired sensorineural hearing loss in humans, but treatment options are limited. CRISPR/Cas9 technology has tremendous potential to become a new therapeutic modality for acquired, non-inherited sensorineural hearing loss. Here, we develop CRISPR/Cas9 strategies to prevent aminoglycoside-induced deafness, a common type of acquired non-inherited sensorineural hearing loss, by disrupting the Htra2 gene in the inner ear, which is involved in apoptosis but has not been investigated in cochlear hair cell protection. Results: The results indicate that adeno-associated virus (AAV)-mediated delivery of the CRISPR/SpCas9 system ameliorates neomycin-induced apoptosis, promotes hair cell survival, and significantly improves hearing function in neomycin-treated mice. The protective effect of the AAV-CRISPR/Cas9 system in vivo is sustained up to 8 weeks after neomycin exposure. For more efficient delivery of the whole CRISPR/Cas9 system, we also explore the AAV-CRISPR/SaCas9 system to prevent neomycin-induced deafness. The in vivo editing efficiency of the SaCas9 system is 1.73% on average. We observe significant improvements in auditory brainstem response thresholds in the injected ears compared with the non-injected ears. At 4 weeks after neomycin exposure, the protective effect of the AAV-CRISPR/SaCas9 system is still evident, with improvements in the auditory brainstem response threshold of up to 50 dB at 8 kHz. Conclusions: These findings demonstrate safe and effective prevention of aminoglycoside-induced deafness via Htra2 gene editing and support further development of CRISPR/Cas9 technology for the treatment of non-inherited hearing loss as well as other non-inherited diseases.
Harmonicity aids hearing in noise
Hearing in noise is a core problem in audition, and a challenge for hearing-impaired listeners, yet the underlying mechanisms are poorly understood. We explored whether harmonic frequency relations, a signature property of many communication sounds, aid hearing in noise for normal hearing listeners. We measured detection thresholds in noise for tones and speech synthesized to have harmonic or inharmonic spectra. Harmonic signals were consistently easier to detect than otherwise identical inharmonic signals. Harmonicity also improved discrimination of sounds in noise. The largest benefits were observed for two-note up-down “pitch” discrimination and melodic contour discrimination, both of which could be performed equally well with harmonic and inharmonic tones in quiet, but which showed large harmonic advantages in noise. The results show that harmonicity facilitates hearing in noise, plausibly by providing a noise-robust pitch cue that aids detection and discrimination.
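The stimulus manipulation described here (harmonic versus inharmonic spectra) is commonly implemented by jittering the frequency of each partial of a harmonic complex so that the components no longer share a common fundamental. A minimal sketch under that assumption, with an illustrative jitter range and SNR:

```python
# Harmonic complex tone vs. an "inharmonic" counterpart with randomly
# jittered partials, embedded in noise at a fixed SNR. The jitter range
# (+/-30% of f0) and the SNR are illustrative assumptions.
import numpy as np

fs = 44100
dur = 0.5
t = np.arange(int(fs * dur)) / fs
f0, n_partials = 200.0, 10
rng = np.random.default_rng(1)

def complex_tone(jitter=0.0):
    """Sum of equal-amplitude partials at (jittered) harmonic frequencies."""
    freqs = f0 * np.arange(1, n_partials + 1)
    freqs = freqs + rng.uniform(-jitter, jitter, n_partials) * f0
    sig = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return sig / np.max(np.abs(sig))

harmonic = complex_tone(jitter=0.0)
inharmonic = complex_tone(jitter=0.3)

# Embedding either tone in noise at a fixed SNR yields detection stimuli;
# the paper finds the harmonic version is easier to detect.
noise = rng.normal(size=t.size)
snr_db = -5.0
noise *= np.sqrt(np.mean(harmonic**2) / np.mean(noise**2)) * 10 ** (-snr_db / 20)
stimulus = harmonic + noise
```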
Evaluation of Hearing Aids in Everyday Life Using Ecological Momentary Assessment: What Situations Are We Missing?
Background: Ecological momentary assessment (EMA) is a method for evaluating hearing aids in everyday life that uses repeated smartphone-based questionnaires to assess a situation as it happens. Although ecologically valid and free of memory bias, this method may be prone to selection biases because questionnaires are skipped or the phone is not carried along in certain situations. Purpose: This investigation analyzed which situations are underrepresented in questionnaire responses and in objectively measured EMA data (e.g., sound level), and how such underrepresentation may depend on different triggers. Method: In an EMA study, 20 subjects with hearing impairment provided daily information on reasons for missing data, that is, skipped questionnaires or missing connections between their phone and hearing aids. Results: Participants often deliberately did not bring the study phone to social situations or skipped questionnaires because they considered it inappropriate, for example, during a church service or when engaged in conversation. They answered fewer questions in conversations with multiple partners and were more likely to postpone questionnaires when not in quiet environments. Conclusion: Data for social situations will likely be underrepresented in EMA. However, these situations are particularly important for the evaluation of hearing aids, as individuals with hearing impairment often have difficulties communicating in noisy situations. Thus, it is vital to optimize study designs to strike a balance between avoiding memory bias and enabling subjects to report retrospectively on situations where phone usage may be difficult. The implications for several applications of EMA are discussed. Supplemental Material: https://doi.org/10.23641/asha.12746849
Speech-in-noise discriminability after noise exposure: Insights from a gerbil model of acoustic trauma
Speech comprehension, especially in the presence of background sounds, is thought to decline as a consequence of noise-induced hearing loss. However, the connection between noise overexposure and deteriorated speech-in-noise perception despite normal audiometric thresholds (hidden hearing loss) is not yet clear. This study investigates speech-in-noise discrimination in young-adult Mongolian gerbils before and after an acoustic trauma to examine the link between noise exposure and speech-in-noise perception. Nine young-adult gerbils were trained to discriminate a deviant consonant-vowel-consonant combination (CVC) or vowel-consonant-vowel combination (VCV) in a sequence of CVC or VCV standards, respectively. The logatomes were spoken by different speakers and masked by a steady-state speech-shaped noise. After baseline behavioral data were obtained, the gerbils underwent an acoustic trauma and participated in the behavioral experiments again. Applying multidimensional scaling, response latencies were used to generate perceptual maps reflecting the gerbils' internal representations of the sounds pre- and post-trauma. To evaluate how the discrimination of vowels and consonants was altered after noise exposure, changes in response latencies between phoneme pairs were investigated in relation to their articulatory features. Numbers of intact inner hair cell synapses were counted, and auditory brainstem responses were measured to assess peripheral auditory function. Perceptual maps of vowels and consonants were very similar before and after noise exposure. Interestingly, the gerbils' overall vowel discrimination ability improved after the acoustic trauma, even though the gerbils suffered from noise-induced hearing loss with a temporary threshold shift for frequencies above 4 kHz. In contrast, there were only minor changes in the gerbils' consonant discrimination ability. Moreover, noise exposure had a differential influence on response latencies for vowel and consonant discriminations depending on the articulatory features. Altogether, the results show that an acoustic trauma followed by a temporary threshold shift is not necessarily linked to the speech-in-noise perception difficulties associated with hidden hearing loss.
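The perceptual-map analysis is a standard application of multidimensional scaling: pairwise response latencies are converted into dissimilarities and embedded in a low-dimensional space. A minimal sketch with hypothetical latencies; the latency-to-dissimilarity mapping (faster response = more dissimilar) is an assumption for illustration.

```python
# Build a 2-D perceptual map from pairwise response latencies via MDS.
# Latencies here are random placeholders, not the study's data.
import numpy as np
from sklearn.manifold import MDS

phonemes = ["a", "e", "i", "o", "u"]
rng = np.random.default_rng(2)

# Hypothetical mean response latencies (ms) for each deviant/standard pair.
lat = rng.uniform(200, 600, size=(5, 5))
lat = (lat + lat.T) / 2       # symmetrize
np.fill_diagonal(lat, 600)    # same sound: slowest response, least dissimilar

# Faster responses indicate easier discrimination, i.e., larger perceptual
# distance, so invert the latency scale to obtain dissimilarities.
dissim = lat.max() - lat
np.fill_diagonal(dissim, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for p, (x, y) in zip(phonemes, coords):
    print(f"{p}: ({x:.2f}, {y:.2f})")
```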