489 result(s) for "grouping perception"
A Dual Multi-Head Contextual Attention Network for Hyperspectral Image Classification
A hyperspectral image (HSI) contains 3-D cube data, and capturing multi-head self-attention from both the spatial and spectral domains is a preferable means of learning discriminative features, provided the burden in model optimization and computation stays low. In this paper, we design a dual multi-head contextual self-attention (DMuCA) network for HSI classification with as few parameters and as little computation as possible. To effectively capture rich contextual dependencies from both domains, we decouple spatial and spectral contextual attention into two sub-blocks, SaMCA and SeMCA, where depth-wise convolution is employed to contextualize the input keys in the pure dimension. Thereafter, multi-head local attentions are implemented as group processing when the keys are alternately concatenated with the queries. In particular, in the SeMCA block, we group the spatial pixels by even sampling and create multi-head channel attention on each sampling set, reducing the number of training parameters and avoiding an increase in storage. In addition, the static contextual keys are fused with the dynamic attentional features in each block to strengthen the model's capacity for data representation. Finally, the decoupled sub-blocks are weighted and summed for 3-D attention perception of the HSI. The DMuCA module is then plugged into a ResNet to perform HSI classification. Extensive experiments demonstrate that our proposed DMuCA achieves excellent results over several state-of-the-art attention mechanisms with the same backbone.
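The core idea of the abstract above, decoupling attention into a spatial branch and a spectral branch and then fusing them by a weighted sum, can be illustrated with a toy single-head NumPy sketch. This is not the paper's implementation: the depth-wise key contextualization, multi-head grouping, static/dynamic key fusion, and ResNet integration are all omitted, and the function names and the `alpha` fusion weight here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(cube):
    # cube: (H, W, C) hyperspectral patch. Tokens are the H*W pixels,
    # features are the C spectral bands (the "SaMCA-like" branch).
    h, w, c = cube.shape
    x = cube.reshape(h * w, c)
    scores = x @ x.T / np.sqrt(c)          # pairwise pixel similarity
    return (softmax(scores) @ x).reshape(h, w, c)

def spectral_attention(cube):
    # Tokens are the C spectral bands, features are the H*W pixel
    # responses (the "SeMCA-like" branch).
    h, w, c = cube.shape
    x = cube.reshape(h * w, c).T           # (C, H*W)
    scores = x @ x.T / np.sqrt(h * w)      # pairwise band similarity
    return (softmax(scores) @ x).T.reshape(h, w, c)

def dual_attention(cube, alpha=0.5):
    # Weighted sum of the two decoupled branches, echoing the paper's
    # final fusion of sub-blocks into a 3-D attention response.
    return alpha * spatial_attention(cube) + (1 - alpha) * spectral_attention(cube)
```

Running `dual_attention` on a random `(H, W, C)` patch returns a cube of the same shape, with each output pixel (or band) a convex combination of the inputs along the attended axis.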
Action relations facilitate the identification of briefly-presented objects
The link between perception and action allows us to interact fluently with the world. Objects which ‘afford’ an action elicit a visuomotor response, facilitating compatible responses. In addition, positioning objects to interact with one another appears to facilitate grouping, indicated by patients with extinction being better able to identify interacting objects (e.g. a corkscrew going towards the top of a wine bottle) than the same objects when positioned incorrectly for action (Riddoch, Humphreys, Edwards, Baker, & Willson, Nature Neuroscience, 6, 82–89, 2003). Here, we investigate the effect of action relations on the perception of normal participants. We found improved identification of briefly-presented objects when in correct versus incorrect co-locations for action. For the object that would be ‘active’ in the interaction (the corkscrew), this improvement was enhanced when it was oriented for use by the viewer’s dominant hand. In contrast, the position-related benefit for the ‘passive’ object was stronger when the objects formed an action-related pair (corkscrew and bottle) compared with an unrelated pair (corkscrew and candle), and it was reduced when spatial cues disrupted grouping between the objects. We propose that these results indicate two separate effects of action relations on normal perception: a visuomotor response to objects which strongly afford an action; and a grouping effect between objects which form action-related pairs.
The effects of visual and auditory synchrony on human foraging
Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a novel human foraging task. Human foraging tasks are a recent extension of the classic visual search paradigm in which multiple targets must be located on a given trial, making it possible to capture a wide range of performance metrics. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony, in contrast to predictions from the so-called “pip-and-pop” effect (Van der Burg et al., Journal of Experimental Psychology, 1053-1065, 2008). In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 s, with the same synchrony conditions as in Experiments 1 and 2. Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects the local grouping of the synchronized targets. Importantly, there was no additional benefit for sound synchrony, even when the foraging task was quite difficult (Experiment 3).
Twice Upon a Time: Multiple Concurrent Temporal Recalibrations of Audiovisual Speech
Audiovisual timing perception can recalibrate following prolonged exposure to asynchronous auditory and visual inputs. It has been suggested that this might contribute to achieving perceptual synchrony for auditory and visual signals despite differences in physical and neural signal times for sight and sound. However, given that people can be concurrently exposed to multiple audiovisual stimuli with variable neural signal times, a mechanism that recalibrates all audiovisual timing percepts to a single timing relationship could be dysfunctional. In the experiments reported here, we showed that audiovisual temporal recalibration can be specific for particular audiovisual pairings. Participants were shown alternating movies of male and female actors containing positive and negative temporal asynchronies between the auditory and visual streams. We found that audiovisual synchrony estimates for each actor were shifted toward the preceding audiovisual timing relationship for that actor and that such temporal recalibrations occurred in positive and negative directions concurrently. Our results show that humans can form multiple concurrent estimates of appropriate timing for audiovisual synchrony.
Temporal ventriloquism along the path of apparent motion: speed perception under different spatial grouping principles
The coordination of intramodal perceptual grouping and crossmodal interactions plays a critical role in constructing coherent multisensory percepts. However, the basic principles underlying such coordinating mechanisms still remain unclear. By taking advantage of an illusion called temporal ventriloquism and its influences on perceived speed, we investigated how audiovisual interactions in time are modulated by the spatial grouping principles of vision. In our experiments, we manipulated the spatial grouping principles of proximity, uniform connectedness, and similarity/common fate in apparent motion displays. Observers compared the speed of apparent motions across different sound timing conditions. Our results revealed that the effects of sound timing (i.e., temporal ventriloquism effects) on perceived speed also existed in visual displays containing more than one object and were modulated by different spatial grouping principles. In particular, uniform connectedness was found to modulate these audiovisual interactions in time. The effect of sound timing on perceived speed was smaller when horizontal connecting bars were introduced along the path of apparent motion. When the objects in each apparent motion frame were not connected or connected with vertical bars, the sound timing was more influential compared to the horizontal bar conditions. Overall, our findings here suggest that the effects of sound timing on perceived speed exist in different spatial configurations and can be modulated by certain intramodal spatial grouping principles such as uniform connectedness.
Perceptual Grouping in Autism Spectrum Disorder: An Exploratory Magnetoencephalography Study
Visual information is organised according to visual grouping principles. In visual grouping tasks individuals with ASD have shown equivocal performance. We explored neural correlates of Gestalt grouping in individuals with and without ASD. Neuromagnetic activity of individuals with (n = 15) and without (n = 18) ASD was compared during a visual grouping task testing grouping by proximity versus similarity. Individuals without ASD showed stronger evoked responses with earlier peaks in response to both grouping types, indicating an earlier neuronal differentiation between grouping principles in individuals without ASD. In contrast, individuals with ASD showed particularly prolonged processing of grouping by similarity, suggesting a high demand on neural resources. The neuronal processing differences found could explain the less efficient grouping performance observed behaviourally in ASD.
Incremental grouping of image elements in vision
One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. We here present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first process is base grouping and relies on neurons that are tuned to feature conjunctions. Base grouping is fast and occurs in parallel across the visual scene, but not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. The spread of enhanced neuronal activity corresponds to the labeling of image elements with object-based attention.
Visual search of illusory contours: The role of illusory contour clarity
Illusory contours demonstrate an important function of the visual system—object inference from incomplete boundaries, which can arise from factors such as low luminance, camouflage, or occlusion. Illusory contours can be perceived with varying degrees of clarity depending on the features of their inducers. The present study aimed to evaluate whether illusory contour clarity influences visual search efficiency. Experiment 1 compared visual search performance for Kanizsa illusory stimuli and nonillusory inducer stimuli when manipulating inducer size as a clarity factor. Experiment 2 examined the effects of illusory contour clarity on visual search by manipulating the number of rings with missing arcs (i.e., line ends) comprising the inducers, for both illusory and nonillusory stimuli. To investigate whether surface alterations had an impact on visual search in Experiment 1, Experiment 3 examined search performance for Kanizsa-like stimuli formed from “smoothed” inducers compared with standard Kanizsa figures. The results of Experiments 1 and 2 indicated that while Kanizsa figures produced inefficient search, this was not contingent on the clarity of the illusory contours. Experiment 3 suggested that surface alterations of Kanizsa figures did impact visual search performance. Together, the results indicated that illusory contour clarity did not have much bearing on search performance. In certain conditions, Kanizsa figures even facilitated search compared with nonillusory stimuli, suggesting that rather than contour inference, surface features might have greater relevance in guiding visual attention.
Topographic signatures of global object perception in human visual cortex
Our visual system readily groups dynamic fragmented input into global objects. How the brain represents global object perception, however, remains unclear. To address this question, we recorded brain responses using functional magnetic resonance imaging whilst observers viewed a dynamic bistable stimulus that could either be perceived globally (i.e., as a grouped and coherently moving shape) or locally (i.e., as ungrouped and incoherently moving elements). We further estimated population receptive fields and used these to back-project the brain activity measured during stimulus perception into visual space via a searchlight procedure. Global perception resulted in universal suppression of responses in lower visual cortex accompanied by widespread enhancement in higher object-sensitive cortex. However, follow-up experiments indicated that higher object-sensitive cortex is suppressed if global perception lacks shape grouping, and that grouping-related suppression can be diffusely confined to stimulated sites and accompanied by background enhancement once stimulus size is reduced. These results speak to a non-generic involvement of higher object-sensitive cortex in perceptual grouping and point to an enhancement-suppression mechanism mediating the perception of figure and ground.
• Lower visual cortex activity to grouped vs ungrouped dynamic stimuli is suppressed.
• When grouping a shape, activity in higher object-sensitive cortex is enhanced.
• Without shape grouping, activity in higher object-sensitive cortex is suppressed.
• Grouping-related suppression can be diffusely confined to stimulated cortical sites.
Ensemble perception during multiple-object tracking
Multiple-object tracking studies consistently reveal attentive tracking limits of approximately three to five items. How do factors such as visual grouping and ensemble perception impact these capacity limits? Which heuristics lead to the perception of multiple objects as a group? This work investigates the role of grouping on multiple-object tracking ability, and more specifically, in identifying the heuristics that lead to the formation and perception of ensembles within dynamic contexts. First, we show that group tracking limits are approximately four groups of objects and are independent of the number of items that compose the groups. Further, we show that group tracking performance declines as inter-object spacing increases. We also demonstrate the role of group rigidity in tracking performance in that disruptions to common fate negatively impact ensemble tracking ability. The findings from this work contribute to our overall understanding of the perception of dynamic groups of objects. They characterize the properties that determine the formation and perception of dynamic object ensembles. In addition, they inform development and design decisions considering cognitive limitations involving tracking groups of objects.