Catalogue Search | MBRL
Explore the vast range of titles available.
688 result(s) for "auditory spatial perception"
Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device
by Poradosu, Keinan; Maimon, Amber; Yizhar, Or
in Algorithms; auditory spatial perception; Experiments
2023
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to represent the areas outside the visual field of sighted individuals. In this initial proof-of-concept study, we tested the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes’ identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory (the remaining 270°) input for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance after a brief 1-h online training session and one on-site training session averaging 20 min. In some cases, they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
Journal Article
The plastic ear and perceptual relearning in auditory spatial perception
by
Carlile, Simon
in
adult functional plasticity
,
auditory accommodation
,
Auditory discrimination learning
2014
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments examining the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Journal Article
Effects of Stimulation Position and Frequency Band on Auditory Spatial Perception with Bilateral Bone Conduction
by Zheng, Chengshi; Cai, Juanjuan; Sang, Jinqiu
in Accuracy; Acoustic Stimulation; Bone Conduction - physiology
2022
Virtual sound localization tests were conducted to examine the effects of stimulation position (mastoid, condyle, supra-auricular, temple, and bone-anchored hearing aid implant position) and frequency band (low frequency, high frequency, and broadband) on bone-conduction (BC) horizontal localization. Non-individualized head-related transfer functions were used to reproduce virtual sound through bilateral BC transducers. Subjective experiments showed that stimulation at the mastoid gave the best localization performance, while the temple gave the worst. Stimulation at the mastoid and condyle did not differ significantly in localization accuracy from air-conduction (AC) headphones. However, binaural reproduction at all BC stimulation positions led to similar levels of front-back confusion (FBC), which were also comparable to those with AC headphones. Binaural BC reproduction with high-frequency stimulation led to significantly higher localization accuracy than with low-frequency stimulation. Measurements of transcranial attenuation (TA) showed that attenuation was larger at the condyle and mastoid and increased at high frequencies. The experiments imply that larger TAs may improve localization accuracy but do not improve FBC. The present study indicates that BC stimulation at the mastoid and condyle can effectively convey spatial information, especially with high-frequency stimulation.
Journal Article
Six Degrees of Auditory Spatial Separation
by Orchard-Mills, Emily; Fox, Alex; Leung, Johahn
in Adult; Auditory Perception; Auditory Threshold
2016
The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between “labeled line” and “opponent process” models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This “dipper” function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding.
Journal Article
Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss
by Pardhan, Shahina; Moore, Brian C. J.; Zahorik, Pavel
in Acoustic Stimulation; Anatomy; Auditory Diseases, Central - physiopathology
2016
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
Journal Article
Sensory reliability takes priority over the central tendency effect in temporal and spatial estimation
2025
Perception is influenced by contextual factors that help resolve sensory uncertainty. A well-known phenomenon, the central tendency effect, describes how perceptual estimates gravitate toward the mean of a distribution of stimuli, particularly when sensory input is unreliable. However, in multisensory contexts, it remains unclear whether this effect follows a generalized priority across modalities or might be influenced by task-relevant sensory dominance. We studied spatial and temporal estimation in the auditory and visual modalities, testing whether perceptual estimates are driven by a supra-modal prior or by modality reliability specific to the task, and applied Bayesian modeling to explain the results. Participants first performed baseline sessions using only one modality and then a third session in which the modalities were interleaved. In the interleaved session, we found that the changes in auditory and visual estimates were not towards a supra-modal (generalized) prior, but estimates related to the dominant modality (vision for space, audition for time) were stable, while estimates of the other sensory modality (audition for space, vision for time) were pulled towards the dominant modality’s prior. Bayesian modeling also confirmed that the best-fitting models were those in which priors were modality-specific rather than supra-modal. These results highlight that perceptual estimation favors sensory reliability over a general tendency to regress toward the mean, providing insights into how the brain integrates contextual information across modalities.
Journal Article
The addition of a spatial auditory cue improves spatial updating in a virtual reality navigation task
by Whitaker, Mirinda M.; Barhorst-Cates, Erica; Hullar, Timothy E.
in Addition; Adult; Auditory localization
2024
Auditory cues are integrated with vision and body-based self-motion cues for motion perception, balance, and gait, though limited research has evaluated their effectiveness for navigation. Here, we tested whether an auditory cue co-localized with a visual target could improve spatial updating in a virtual reality homing task. Participants navigated a triangular homing task with and without an easily localizable spatial audio signal co-located with the home location. The main outcome was unsigned angular error, defined as the absolute value of the difference between the participant’s turning response and the correct response towards the home location. Angular error was significantly reduced in the presence of spatial sound compared to a head-fixed identical auditory signal. Participants’ angular error was 22.79° in the presence of spatial audio and 30.09° in its absence. Those with the worst performance in the absence of spatial sound demonstrated the greatest improvement with the added sound cue. These results suggest that auditory cues may benefit navigation, particularly for those who demonstrated the highest level of spatial updating error in the absence of spatial sound.
Journal Article
Ongoing dynamics in large-scale functional connectivity predict perception
by D’Esposito, Mark; Kleinschmidt, Andreas; Sadaghiani, Sepideh
in Acoustic Stimulation; Algorithms; Auditory Perception - physiology
2015
Most brain activity occurs in an ongoing manner not directly locked to external events or stimuli. Regional ongoing activity fluctuates in unison with some brain regions but not others, and the degree of long-range coupling is called functional connectivity, often measured with correlation. Strength and spatial distributions of functional connectivity dynamically change in an ongoing manner over seconds to minutes, even when the external environment is held constant. Direct evidence for any behavioral relevance of these continuous large-scale dynamics has been limited. Here, we investigated whether ongoing changes in baseline functional connectivity correlate with perception. In a continuous auditory detection task, participants perceived the target sound in roughly one-half of the trials. Very long (22-40 s) interstimulus intervals permitted investigation of baseline connectivity unaffected by preceding evoked responses. Using multivariate classification, we observed that functional connectivity before the target predicted whether it was heard or missed. Using graph theoretical measures, we characterized the difference in functional connectivity between states that lead to hits vs. misses. Before misses compared with hits and task-free rest, connectivity showed reduced modularity, a measure of integrity of modular network structure. This effect was strongest in the default mode and visual networks and was caused by both reduced within-network connectivity and enhanced across-network connections before misses. The relation of behavior to prestimulus connectivity was dissociable from that of prestimulus activity amplitudes. In conclusion, moment-to-moment dynamic changes in baseline functional connectivity may shape subsequent behavioral performance. A highly modular network structure seems beneficial to perceptual efficiency.
Journal Article
The cocktail-party problem revisited: early processing and selection of multi-talker speech
How do we recognize what one person is saying when others are speaking at the same time? This review summarizes widespread research in psychoacoustics, auditory scene analysis, and attention, all dealing with early processing and selection of speech, which has been stimulated by this question. Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and “unmasking” resulting from binaural listening. Psychoacoustic models have been developed that can predict these effects accurately, albeit using computational approaches rather than approximations of neural processing. Grouping—the segregation and streaming of sounds—represents a subsequent processing stage that interacts closely with attention. Sounds can be easily grouped—and subsequently selected—using primitive features such as spatial location and fundamental frequency. More complex processing is required when lexical, syntactic, or semantic information is used. Whereas it is now clear that such processing can take place preattentively, there also is evidence that the processing depth depends on the task-relevancy of the sound. This is consistent with the presence of a feedback loop in attentional control, triggering enhancement of to-be-selected input. Despite recent progress, there are still many unresolved issues: there is a need for integrative models that are neurophysiologically plausible, for research into grouping based on other than spatial or voice-related cues, for studies explicitly addressing endogenous and exogenous attention, for an explanation of the remarkable sluggishness of attention focused on dynamically changing sounds, and for research elucidating the distinction between binaural speech perception and sound localization.
Journal Article
The Spatial Organization of Ascending Auditory Pathway Microstructural Maturation From Infancy Through Adolescence Using a Novel Fiber Tracking Approach
by Bodison, Stefanie C.; Cabeen, Ryan P.; Voelker, Courtney C. J.
in Adolescence; Adolescent; Adolescents
2024
Auditory perception is established through experience-dependent stimulus exposure during sensitive developmental periods; however, little is known regarding the structural development of the central auditory pathway in humans. The present study characterized the regional developmental trajectories of the ascending auditory pathway from the brainstem to the auditory cortex from infancy through adolescence using a novel diffusion MRI-based tractography approach and along-tract analyses. We used diffusion tensor imaging (DTI) and neurite orientation dispersion and density imaging (NODDI) to quantify the magnitude and timing of auditory pathway microstructural maturation. We found spatially varying patterns of white matter maturation along the length of the tract, with inferior brainstem regions developing earlier than thalamocortical projections and left hemisphere tracts developing earlier than the right. These results help to characterize the processes that give rise to functional auditory processing and may provide a baseline for detecting abnormal development. Using NODDI and DTI, we demonstrate that auditory pathway maturation is heterogeneous, with brainstem structures maturing prior to subcortical white matter.
Journal Article