1,308 results for "localization tasks"
Olfactory–trigeminal integration in the primary olfactory cortex
Humans naturally integrate signals from the olfactory and intranasal trigeminal systems. A tight interplay has been demonstrated between these two systems, and yet the neural circuitry mediating olfactory–trigeminal (OT) integration remains poorly understood. Using functional magnetic resonance imaging (fMRI), combined with psychophysics, this study investigated the neural mechanisms underlying OT integration. Fifteen participants with normal olfactory function performed a localization task with air-puff stimuli, phenylethyl alcohol (PEA; rose odor), or a combination thereof while being scanned. The ability to localize PEA to either nostril was at chance. Yet, its presence significantly improved the localization accuracy of weak, but not strong, air-puffs, when both stimuli were delivered concurrently to the same nostril, but not when different nostrils received the two stimuli. This enhancement in localization accuracy, exemplifying the principles of spatial coincidence and inverse effectiveness in multisensory integration, was associated with multisensory integrative activity in the primary olfactory (POC), orbitofrontal (OFC), superior temporal (STC), inferior parietal (IPC) and cingulate cortices, and in the cerebellum. Multisensory enhancement in most of these regions correlated with behavioral multisensory enhancement, as did increases in connectivity between some of these regions. We interpret these findings as indicating that the POC is part of a distributed brain network mediating integration between the olfactory and trigeminal systems. Practitioner Points: Psychophysical and neuroimaging study of olfactory–trigeminal (OT) integration. Behavior, cortical activity, and network connectivity show OT integration. OT integration obeys principles of inverse effectiveness and spatial coincidence. Behavioral and neural measures of OT integration are correlated.
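As a rough illustration of the inverse-effectiveness principle invoked above, the sketch below computes a simple multisensory enhancement index (the relative gain of the bimodal condition over the best unisensory condition) on invented accuracy values; the function name and all numbers are assumptions for illustration, not the study's analysis.

```python
# Hypothetical illustration of inverse effectiveness; all accuracy values
# are invented, not taken from the study.
def enhancement_index(multi: float, best_uni: float) -> float:
    """Relative multisensory gain: (multi - best_uni) / best_uni."""
    return (multi - best_uni) / best_uni

# Localization accuracy (proportion correct) per condition (made-up numbers).
weak_puff_alone, weak_puff_plus_odor = 0.55, 0.72
strong_puff_alone, strong_puff_plus_odor = 0.90, 0.92

weak_gain = enhancement_index(weak_puff_plus_odor, weak_puff_alone)
strong_gain = enhancement_index(strong_puff_plus_odor, strong_puff_alone)

# Inverse effectiveness predicts a larger relative gain for the weaker stimulus.
print(f"weak-stimulus gain:   {weak_gain:.2f}")
print(f"strong-stimulus gain: {strong_gain:.2f}")
```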
Localization of Moving Sound Stimuli under Conditions of Spatial Masking
The aim of this study was to investigate spatial masking of noise signals in the delayed motion paradigm. Spatial effects were created by interaural level differences (ILDs). Stationary maskers were located laterally or near the head midline, while test signals moved at different velocities from the head midline towards the ears, or in the opposite direction. The masking effect was measured by shifts in the perceived azimuthal positions of the starting and final points of signal trajectories, compared to their positions in silence. The perceived trajectories of all test signals shifted in the opposite direction from the masker. The masking effect was most pronounced in the spatial regions closest to the maskers and was stronger when the signal moved towards the masker, compared to moving away from it. The final points were perceptually shifted further than the starting points. Signal velocity and masker presentation side (left or right) did not change the degree of masking.
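For readers unfamiliar with ILD-based spatialization, here is a minimal sketch of how a moving stimulus of this kind can be synthesized: broadband noise whose lateral position is cued purely by a time-varying interaural level difference. The sample rate, duration, and 20 dB ILD range are assumptions, not the study's parameters.

```python
# Hedged sketch: noise burst whose azimuthal position is cued purely by an
# interaural level difference (ILD). Parameter values are assumptions.
import numpy as np

FS = 44_100          # sample rate (Hz)
DUR = 1.0            # signal duration (s)
n = int(FS * DUR)

noise = np.random.randn(n)

# ILD trajectory: from the head midline (0 dB) to 20 dB favoring the right
# ear, i.e. a signal travelling from the midline towards the right ear.
ild_db = np.linspace(0.0, 20.0, n)

# Split the level difference symmetrically between the two ears.
left = noise * 10 ** (-ild_db / 40)   # attenuated as the source moves right
right = noise * 10 ** (+ild_db / 40)  # boosted correspondingly

stereo = np.stack([left, right], axis=1)  # (n, 2) array, ready to write to WAV
```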
Selective effects of a brain tumor on the metric representation of the hand: a pre- versus post-surgery comparison
Body representation disorders are complex, varied, striking, and very disabling in most cases. Deficits of body representation have been described after lesions to multimodal and sensorimotor cortical areas. A few studies have reported the effects of tumors on the representation of the body, but little is known about the changes that follow tumor resection. Moreover, the impact of brain lesions on the representation of hand size has been investigated in only a few clinical cases. Hands are of special importance: no other body part has the hands' capacity for movement and interaction with the environment, and we use them for a multitude of daily activities. Studies with clinical populations can therefore add further knowledge about the way hands are represented. Here, we report a single case study of a patient (AM), an expert bodybuilder who underwent surgery to remove a glioblastoma in the left posterior prefrontal and precentral cortex at the level of the hand's motor region. Pre-surgery (20 days before) and post-surgery (4 months after) assessments did not show any motor or cognitive impairments. A hand localization task was used before and 12 months after surgery to measure possible changes in the metric representation of his right hand. Results showed a post-surgery modulation of the typically distorted hand representation, with an overall improvement in accuracy, especially on the width dimension. These findings support the direct involvement of sensorimotor areas in the implicit representation of body size and its relevance in defining specific dimensions of size representation.
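To make the "metric representation" measure concrete, the sketch below scores an implicit hand map from judged versus actual landmark positions, in the spirit of hand localization tasks; the coordinates, landmark names, and units are invented for illustration.

```python
# Illustrative sketch (invented data): quantifying the metric distortion of
# an implicit hand map from a landmark-localization task.
import numpy as np

# x = medio-lateral axis (width), y = proximo-distal axis (length); cm units.
actual = {"index_knuckle": (0.0, 0.0), "index_tip": (0.0, 7.5),
          "little_knuckle": (6.0, 0.0), "little_tip": (6.0, 5.5)}
judged = {"index_knuckle": (0.0, 0.0), "index_tip": (0.0, 5.8),
          "little_knuckle": (8.1, 0.0), "little_tip": (8.1, 4.1)}

def dist(a, b):
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

# Percent over/underestimation of hand width and index-finger length.
width_bias = 100 * (dist(judged["index_knuckle"], judged["little_knuckle"])
                    / dist(actual["index_knuckle"], actual["little_knuckle"]) - 1)
length_bias = 100 * (dist(judged["index_knuckle"], judged["index_tip"])
                     / dist(actual["index_knuckle"], actual["index_tip"]) - 1)

print(f"width bias:  {width_bias:+.0f}%")   # width typically overestimated
print(f"length bias: {length_bias:+.0f}%")  # length typically underestimated
```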
Enhancing Localization Performance with Extended Funneling Vibrotactile Feedback
This study extends the conventional 'funneling' method by introducing two extra locations beyond the virtual reality (VR) controller boundaries, a variant termed the extended funneling technique. Thirty-two participants engaged in a localization task, with their responses recorded using eye-tracking technology. They were asked to localize a virtual ping-pong ball as it bounced both within and outside their virtual hands on a virtual board. Both the experimental and control groups received simultaneous spatial audio and vibrotactile feedback; the experimental group received vibrotactile feedback with extended funneling, while the control group received vibrotactile feedback without funneling. The results indicate that the experimental group, benefiting from the extended funneling technique, demonstrated significantly higher accuracy (41.79%) in localizing the audio-vibrotactile stimuli than the control group (28.21%). No significant differences emerged in embodiment or workload scores. These findings highlight the effectiveness of extended funneling for enhancing the localization of sensory stimuli in VR.
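As background, classic funneling places a phantom vibration between two actuators by weighting their intensities; a minimal sketch of such an intensity law follows. The linear gain law and the saturation behavior for targets beyond the actuator span are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of intensity-based "funneling": drive two vibrotactile
# actuators so that a phantom vibration is felt between them. The linear
# gain law and the clamping for out-of-span targets are assumptions.
def funneling_gains(pos: float) -> tuple[float, float]:
    """pos = 0.0 at actuator A, 1.0 at actuator B; values outside [0, 1]
    stand for 'extended' target locations beyond the controllers.
    Returns (gain_a, gain_b), clamped to the drivable range [0, 1]."""
    gain_a = max(0.0, min(1.0, 1.0 - pos))
    gain_b = max(0.0, min(1.0, pos))
    return gain_a, gain_b

for p in (0.0, 0.5, 1.0, 1.3):  # the last target lies beyond actuator B
    print(p, funneling_gains(p))
```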
Understanding the Nature of the Body Model Underlying Position Sense
Accurate information about body structure and posture is fundamental for effective control of our actions. It is often assumed that healthy adults have accurate representations of their body. Although people are relatively good at visually recognizing their own body size and shape, the implicit spatial representation of the body is extremely distorted when measured in proprioceptive localization tasks. The aim of this thesis is to understand the nature of the spatial distortions of the body model measured in those localization tasks. In particular, we investigate the perceptual-cognitive components contributing to distortions of the implicit representation of the human hand, and compare those distortions with those found for objects in similar tasks.
Searching Through Alternating Sequences: Working Memory and Inhibitory Tagging Mechanisms Revealed Using the MILO Task
We used the Multi-Item Localisation (MILO) task to examine search through two sequences. In Sequential blocks of trials, six letters and six digits were touched in order. In Mixed blocks, participants alternated between letters and digits. These conditions mimic the A and B variants of the Trail Making Test (TMT). In both block types, targets either vanished or remained visible after being touched. There were two key findings. First, in Mixed blocks, reaction times exhibited a saw-tooth pattern, suggesting search for successive pairs of targets. Second, reaction time patterns for vanish and remain conditions were identical in Sequential blocks—indicating that participants could ignore past targets—but diverged in Mixed blocks. This suggests a breakdown of inhibitory tagging. These findings may help explain the elevated completion times observed in TMT-B, relative to TMT-A.
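To clarify the dependent measure, each reaction time in a MILO-style analysis is the interval between successive touches, and a saw-tooth appears as alternating short and long intervals across serial positions. The sketch below illustrates this on fabricated timestamps; none of the numbers come from the study.

```python
# Toy illustration (fabricated timestamps) of inter-target reaction times in
# a MILO-style task: each RT is the interval between successive touches.
import numpy as np

touch_times = np.array([0.0, 0.9, 2.1, 2.8, 4.1, 4.7, 6.0])  # seconds, invented
inter_target_rt = np.diff(touch_times)

# A simple saw-tooth signature: odd-position intervals systematically longer
# than even-position ones (participants search for successive pairs of targets).
even = inter_target_rt[0::2].mean()
odd = inter_target_rt[1::2].mean()
print(f"even-position RT: {even:.2f} s, odd-position RT: {odd:.2f} s")
```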
Spontaneous head movements support accurate horizontal auditory localization in a virtual visual environment
This study investigates the relationship between auditory localization accuracy in the horizontal plane and the spontaneous translation and rotation of the head in response to an acoustic stimulus from an invisible sound source. Although a number of studies have suggested that localization ability improves with head movements, most of them measured perceived source elevation and front-back disambiguation. We investigated the contribution of head movements to auditory localization in the anterior horizontal field in normal-hearing subjects. A virtual reality scenario presented through a head-mounted display was used to conceal visual cues during the test. Under this condition, we found that actively searching for the sound origin with head movements is not strictly necessary, yet is sufficient to achieve greater sound-source localization accuracy. This result may have important implications for the clinical assessment and training of adults and children affected by hearing and motor impairments.
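As an aside on how horizontal-plane accuracy is typically scored, the sketch below computes the signed angular error between a head-centred pointing response and the true source azimuth, given the head's yaw at response time. The coordinate conventions (degrees, 0 = straight ahead, positive = rightward) are assumptions, not the study's stated method.

```python
# Hedged sketch: scoring horizontal-plane localization as a signed angular
# error. Conventions assumed: azimuths in degrees, 0 = straight ahead,
# positive = rightward; not necessarily the study's own scoring.
def wrap_deg(a: float) -> float:
    """Wrap an angle to [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def azimuth_error(reported_head: float, true_world: float, head_yaw: float) -> float:
    """Signed error between a head-centred pointing response and the true
    world-centred source azimuth, given the head's yaw at response time."""
    true_head = wrap_deg(true_world - head_yaw)
    return wrap_deg(reported_head - true_head)

print(azimuth_error(reported_head=5.0, true_world=30.0, head_yaw=20.0))  # -5.0
```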
Deep Audio-visual Learning: A Survey
Audio-visual learning, which aims to exploit the relationship between the audio and visual modalities, has drawn considerable attention since deep learning began to be applied successfully. Researchers leverage these two modalities to improve the performance of tasks previously tackled with a single modality, or to address new, challenging problems. In this paper, we provide a comprehensive survey of recent developments in audio-visual learning. We divide current audio-visual learning tasks into four subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning. State-of-the-art methods, as well as the remaining challenges in each subfield, are discussed. Finally, we summarize the commonly used datasets and challenges.
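To ground one of the four subfields, the sketch below shows the skeleton of audio-visual correspondence learning: two encoders embed an audio clip and a video frame, and a classifier predicts whether the pair comes from the same video. The architectures, tensor shapes, and the use of PyTorch are illustrative assumptions, not a specific model from the survey.

```python
# Schematic sketch of audio-visual correspondence learning (not any specific
# model from the survey). Encoders are untrained placeholders.
import torch
import torch.nn as nn

audio_enc = nn.Sequential(nn.Flatten(), nn.Linear(128 * 64, 256), nn.ReLU())
video_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
head = nn.Linear(512, 1)  # correspondence logit for the concatenated pair

spec = torch.randn(8, 1, 128, 64)            # dummy log-mel spectrograms
frame = torch.randn(8, 3, 32, 32)            # dummy video frames
label = torch.randint(0, 2, (8, 1)).float()  # 1 = matching audio/frame pair

logit = head(torch.cat([audio_enc(spec), video_enc(frame)], dim=1))
loss = nn.functional.binary_cross_entropy_with_logits(logit, label)
loss.backward()  # one training step's gradients; optimizer omitted for brevity
```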