Search Results
17,410 result(s) for "Visual tasks"
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8-13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attentional performance remain poorly understood. Here, we tested the degree to which posterior alpha power tracked the locus of attention and the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention-demanding visual discrimination task that dissociated the direction of attention from the direction of the saccade used to indicate choice. On some trials, an endogenous cue predicted the target's location, while on others it contained no spatial information. When the target's location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target's location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex.
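The abstract reports contralateral decreases in alpha power without giving a formula; a common way to quantify such effects in the MEG literature is an alpha lateralization index, a normalized contrast of contralateral versus ipsilateral alpha power. A minimal NumPy sketch, with all variable names and values hypothetical:

```python
import numpy as np

def alpha_lateralization_index(power_contra: np.ndarray,
                               power_ipsi: np.ndarray) -> np.ndarray:
    """Normalized contrast of alpha-band (8-13 Hz) power between sensors
    contralateral and ipsilateral to the attended visual field.
    Negative values indicate lower contralateral alpha power, the
    pattern reported here for validly cued targets."""
    return (power_contra - power_ipsi) / (power_contra + power_ipsi)

# Hypothetical per-trial alpha power (arbitrary units) averaged over
# occipital sensors.
contra = np.array([1.8, 2.0, 1.7, 1.9, 2.1])
ipsi = np.array([2.4, 2.6, 2.3, 2.5, 2.7])
print(alpha_lateralization_index(contra, ipsi))  # all negative
```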
Weaker neural suppression in autism
Abnormal sensory processing has been observed in autism, including superior visual motion discrimination, but the neural basis for these sensory changes remains unknown. Leveraging well-characterized suppressive neural circuits in the visual system, we used behavioral and fMRI tasks to demonstrate a significant reduction in neural suppression in young adults with autism spectrum disorder (ASD) compared to neurotypical controls. MR spectroscopy measurements revealed no group differences in neurotransmitter signals. We show how a computational model that incorporates divisive normalization, as well as narrower top-down gain (that could result, for example, from a narrower window of attention), can explain our observations and divergent previous findings. Thus, weaker neural suppression is reflected in visual task performance and fMRI measures in ASD, and may be attributable to differences in top-down processing. Sensory hypersensitivity is common in autism spectrum disorders. Using functional MRI, psychophysics, and computational modeling, Schallmo et al. show that differences in visual motion perception in ASD are accompanied by weaker neural suppression in visual cortex.
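The authors' full model also includes a narrowed top-down gain term, which is not reproduced here; the sketch below shows only the canonical divisive-normalization equation (response = driven input divided by a suppressive pool), with all parameter values hypothetical:

```python
import numpy as np

def divisive_normalization(drive: np.ndarray, sigma: float = 1.0,
                           n: float = 2.0, gain: float = 1.0) -> np.ndarray:
    """Canonical divisive normalization: each unit's driven input is
    divided by a semi-saturation constant plus the pooled drive of a
    suppressive population. Shrinking the pool's contribution weakens
    suppression, qualitatively like the reduced suppression reported
    in ASD."""
    pooled = np.sum(drive ** n)
    return gain * drive ** n / (sigma ** n + pooled)

stimulus_drive = np.array([0.5, 1.0, 2.0, 4.0])  # hypothetical inputs
print(divisive_normalization(stimulus_drive))
```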
Frequency modulation of neural oscillations according to visual task demands
Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8–12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain’s rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1–3 Hz), theta (3–7 Hz), beta (15–30 Hz), or gamma (30–50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
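The key measured quantity here is the peak frequency of alpha oscillations. A minimal sketch of how such a peak can be estimated from a single channel's time series with SciPy's Welch spectrum, on synthetic data (the authors' actual MEG pipeline is not reproduced):

```python
import numpy as np
from scipy.signal import welch

def peak_alpha_frequency(x: np.ndarray, fs: float,
                         band=(8.0, 12.0)) -> float:
    """Frequency of maximal power within the alpha band of a Welch
    power spectral density estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))  # 0.25 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(psd[mask])])

# Synthetic 10.2 Hz oscillation in noise: 8 s sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 8.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.2 * t) + 0.5 * np.random.randn(t.size)
print(peak_alpha_frequency(x, fs))  # ~10.25, limited by 0.25 Hz resolution
```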
Alpha-Band Rhythms in Visual Task Performance: Phase-Locking by Rhythmic Sensory Stimulation
Oscillations are an important aspect of neuronal activity. Interestingly, oscillatory patterns are also observed in behaviour, such as in visual performance measures after the presentation of a brief sensory event in the visual or another modality. These oscillations in visual performance cycle at the typical frequencies of brain rhythms, suggesting that perception may be closely linked to brain oscillations. Here we investigated this link for a prominent rhythm of the visual system (the alpha-rhythm, 8-12 Hz) by applying rhythmic visual stimulation at alpha frequency (10.6 Hz), known to produce a resonance response in visual areas, and testing its effects on subsequent visual target discrimination. Our data show that rhythmic visual stimulation at 10.6 Hz 1) has specific behavioural consequences relative to stimulation at control frequencies (3.9 Hz, 7.1 Hz, 14.2 Hz), and 2) leads to alpha-band oscillations in visual performance measures, which 3) correlate in their precise frequency across individuals with resting alpha-rhythms recorded over parieto-occipital areas. The most parsimonious explanation for these three findings is entrainment (phase-locking) of ongoing, perceptually relevant alpha-band brain oscillations by rhythmic sensory events. These findings are in line with occipital alpha-oscillations underlying periodicity in visual performance, and suggest that rhythmic stimulation at the frequencies of intrinsic brain rhythms can be used to reveal the influence of these rhythms on task performance and to study their functional roles.
Involvement of the dorsal and ventral attention networks in visual attention span
Visual attention span (VAS) refers to the window size of multielement parallel processing within a short time, and plays an important role in higher-level cognition (e.g., reading), which requires encoding large amounts of incoming information. However, the neural mechanism underlying VAS remains a matter of debate. In the present study, a modified visual 1-back task was designed using nonverbal stimuli and nonverbal responses, in which possible influences of target presence and position were controlled to isolate VAS processing. A task-driven functional magnetic resonance imaging (fMRI) experiment was then performed with 30 healthy adult participants. Results of confirmatory and exploratory analyses consistently revealed that both the dorsal attention network (DAN) and the ventral attention network (VAN) were significantly activated during this visual simultaneous processing. In particular, stronger activation in the left superior parietal lobule (LSPL), as compared to the bilateral inferior frontal gyri (IFGs), suggested a greater involvement of the DAN than the VAN in VAS-related processing. In addition, activation in the temporoparietal junctions (TPJs) was suppressed during multielement processing only in the target-absent condition. The current results suggest the recruitment of the LSPL in covert attentional shifts and in top-down control of the distribution of VAS resources during rapid visual simultaneous processing, as well as the involvement of the bilateral IFGs (especially the RIFG) in both VAS processing and inhibitory control. These findings may inform the diagnosis of atypical attention and reading difficulties. In summary, a prospective visual 1-back task used during fMRI scanning was designed to examine the neural mechanism of VAS, a basic cognitive ability underpinning higher-level cognition such as reading, and analyses consistently revealed greater involvement of the DAN (e.g., the LSPL) than the VAN (e.g., the bilateral IFGs) in this visual simultaneous processing.
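As a concrete illustration of the task logic (not the authors' exact paradigm), a 1-back target is any item that repeats the immediately preceding one; a minimal sketch:

```python
def one_back_targets(stream):
    """Indices at which the current item matches the immediately
    preceding item, i.e. the target-present events of a 1-back task."""
    return [i for i in range(1, len(stream)) if stream[i] == stream[i - 1]]

# Hypothetical nonverbal stimulus stream (symbols stand in for shapes).
print(one_back_targets(["#", "@", "@", "&", "&", "&"]))  # [2, 4, 5]
```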
Human entorhinal cortex represents visual space using a boundary-anchored grid
When participants performed a visual search task, functional MRI responses in entorhinal cortex exhibited a sixfold periodic modulation by gaze-movement direction. The orientation of this modulation was determined by the shape and orientation of the bounded search space. These results indicate that human entorhinal cortex represents visual space using a boundary-anchored grid, analogous to that used by rodents to represent navigable space.
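The sixfold periodic modulation described here is typically quantified by fitting a sinusoid with 60-degree periodicity to responses as a function of movement direction. A minimal least-squares sketch on synthetic data (the study's actual fMRI analysis is not reproduced):

```python
import numpy as np

def sixfold_fit(directions_rad: np.ndarray, responses: np.ndarray):
    """Least-squares fit of r(theta) = a + b*cos(6*theta) + c*sin(6*theta).
    Returns the sixfold modulation amplitude sqrt(b^2 + c^2) and the
    preferred grid orientation (fit phase divided by 6)."""
    X = np.column_stack([np.ones_like(directions_rad),
                         np.cos(6 * directions_rad),
                         np.sin(6 * directions_rad)])
    a, b, c = np.linalg.lstsq(X, responses, rcond=None)[0]
    return np.hypot(b, c), np.arctan2(c, b) / 6.0

# Synthetic responses peaking every 60 degrees of gaze direction.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
y = 1.0 + 0.3 * np.cos(6 * (theta - 0.2)) + 0.05 * np.random.randn(theta.size)
amp, ori = sixfold_fit(theta, y)
print(amp, ori)  # ~0.3 and ~0.2 rad
```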
Enhancing Cognition with Video Games: A Multiple Game Training Study
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, the benefits of playing other types of video games are under-investigated. We examined whether playing non-action games also improves cognition, comparing the transfer effects of an action game with those of non-action games that place different cognitive demands on players. We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day, five days a week, over four weeks (20 hours total). Games included action, spatial memory, match-3, hidden-object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess transfer effects: an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task assessing multiple-object tracking and cognitive control, and a complex verbal span task. Action game playing eliminated the attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory, and hidden-object games improved visual search performance, while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training. Cognitive improvements were not limited to action game training alone, and different games enhanced different aspects of cognition. We conclude that frequently training specific cognitive abilities in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general, broad cognitive systems such as executive attentional control, but instead to frequent use of specific cognitive processes during game play. Thus, many improvements to cognition from video game training may be attributed to near-transfer effects.
Rapid and Reversible Recruitment of Early Visual Cortex for Touch
The loss of vision has been associated with enhanced performance in non-visual tasks such as tactile discrimination and sound localization. Current evidence suggests that these functional gains are linked to the recruitment of the occipital visual cortex for non-visual processing, but the neurophysiological mechanisms underlying these crossmodal changes remain uncertain. One possible explanation is that visual deprivation is associated with an unmasking of non-visual input into visual cortex. We investigated the effect of sudden, complete and prolonged visual deprivation (five days) in normally sighted adult individuals while they were immersed in an intensive tactile training program. Following the five-day period, blindfolded subjects performed better on a Braille character discrimination task. In the blindfold group, serial fMRI scans revealed an increase in BOLD signal within the occipital cortex in response to tactile stimulation after five days of complete visual deprivation. This increase in signal was no longer present 24 hours after blindfold removal. Finally, reversible disruption of occipital cortex function on the fifth day (by repetitive transcranial magnetic stimulation; rTMS) impaired Braille character recognition ability in the blindfold group but not in non-blindfolded controls. This disruptive effect was no longer evident once the blindfold had been removed for 24 hours. Overall, our findings suggest that sudden and complete visual deprivation in normally sighted individuals can lead to profound, but rapidly reversible, neuroplastic changes by which the occipital cortex becomes engaged in processing of non-visual information. The speed and dynamic nature of the observed changes suggests that normally inhibited or masked functions in the sighted are revealed by visual loss. The unmasking of pre-existing connections and shifts in connectivity represent rapid, early plastic changes, which presumably can lead, if sustained and reinforced, to slower developing, but more permanent structural changes, such as the establishment of new neural connections in the blind.
Multi-Task Visual Perception for Object Detection and Semantic Segmentation in Intelligent Driving
With the rapid development of intelligent driving vehicles, multi-task visual perception based on deep learning has emerged as a key technological pathway toward safe vehicle navigation in real traffic scenarios. However, due to the high-precision and high-efficiency requirements of intelligent driving vehicles in practical driving environments, multi-task visual perception remains challenging. Existing methods typically adopt multi-task learning networks to handle multiple tasks concurrently. Although they achieve remarkable results, better performance can be obtained by tackling remaining problems such as underutilized high-resolution features and underexploited non-local contextual dependencies. In this work, we propose YOLOPv3, an efficient anchor-based multi-task visual perception network capable of simultaneously handling traffic object detection, drivable area segmentation, and lane detection. Compared to prior works, we make two essential improvements. On the one hand, we propose architectural enhancements that exploit multi-scale high-resolution features and non-local contextual dependencies to improve network performance. On the other hand, we propose optimization improvements aimed at enhancing network training, enabling YOLOPv3 to reach optimal performance via straightforward end-to-end training. Experimental results on the BDD100K dataset demonstrate that YOLOPv3 sets a new state of the art (SOTA): 96.9% recall and 84.3% mAP50 in traffic object detection, 93.2% mIoU in drivable area segmentation, and 88.3% accuracy and 28.0% IoU in lane detection. In addition, YOLOPv3 maintains competitive inference speed against the lightweight YOLOP. YOLOPv3 thus stands as a robust solution for multi-task visual perception problems. The code and trained models have been released on GitHub.
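For readers unfamiliar with the reported metrics, both mAP50 and mIoU rest on intersection-over-union: a detection counts toward mAP50 when its box overlaps the ground truth with IoU >= 0.5. A minimal, self-contained sketch of box IoU (illustrative only, not code from the YOLOPv3 release):

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Used as the match criterion in detection
    metrics such as mAP50 (threshold IoU >= 0.5)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143
```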