4,290 result(s) for "Visual adaptation"
Investigating orientation adaptation following naturalistic film viewing
Humans display marked changes to their perceptual experience of a stimulus following prolonged or repeated exposure to a preceding stimulus. A well-studied example of such perceptual adaptation is the tilt-aftereffect. Here, prolonged exposure to one orientation leads to a shift in the perception of subsequent orientations. Such a capacity to adapt suggests the tuning of the visual system can change over time in response to our current visual environment. However, it remains unclear to what extent adaptation occurs in response to statistical regularities of features present in naturalistic scenes, such as oriented contrast. We therefore investigated orientation adaptation in response to natural viewing of filtered live-action film stimuli. Within a session, participants freely viewed 45 min of a film which had been filtered to include increased contrast energy within a specified orientation band (0°, 45°, 90°, or 135°; i.e., the adaptor). To measure adaptation effects, the film was intermittently interrupted to have participants perform a simple orientation judgement task. Having participants complete behavioural trials throughout the testing session, including 45 min of total adaptation time, allowed investigation of the accumulation of response biases and changes in such biases over the course of the session. We found very little evidence of adaptation across our conditions. Indeed, in the very few conditions where significant adaptation was observed, these effects were much weaker than those observed under typical tilt-aftereffect paradigms. Further, within a single session, we observed inconsistent development of adaptation effects. The current findings therefore suggest very minimal and, where present, inconsistent effects of adaptation in response to naturalistic viewing conditions. 
The divergence of our results from those predicted by prior studies using minimalistic stimuli suggests that further barriers to understanding perceptual adaptation as experienced in nature remain to be addressed.
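The tilt-aftereffect measured in this study is commonly explained by gain adaptation in orientation-tuned neural populations. As an illustration only (not the authors' model), a minimal sketch: units tuned across 0–180°, an adaptor that reduces the gain of nearby units, and a population-vector read-out whose estimate of the test orientation is repelled away from the adaptor.

```python
import numpy as np

def perceived_orientation(test_deg, adaptor_deg=None, n_units=180,
                          tuning_width=20.0, adapt_strength=0.3):
    """Toy population model of the tilt-aftereffect.

    Orientation-tuned units (0-180 deg) respond to a test grating;
    adaptation reduces the gain of units tuned near the adaptor,
    biasing the population read-out away from the adaptor.
    All parameter values here are illustrative assumptions.
    """
    prefs = np.linspace(0, 180, n_units, endpoint=False)

    def circ_dist(a, b):
        # orientation space wraps at 180 degrees
        d = np.abs(a - b) % 180
        return np.minimum(d, 180 - d)

    gain = np.ones(n_units)
    if adaptor_deg is not None:
        gain -= adapt_strength * np.exp(-circ_dist(prefs, adaptor_deg)**2
                                        / (2 * tuning_width**2))

    resp = gain * np.exp(-circ_dist(prefs, test_deg)**2
                         / (2 * tuning_width**2))

    # population-vector decode in doubled-angle space (orientation is 180-periodic)
    ang = np.deg2rad(2 * prefs)
    decoded = np.rad2deg(np.arctan2((resp * np.sin(ang)).sum(),
                                    (resp * np.cos(ang)).sum())) / 2
    return decoded % 180
```

With no adaptor the decode returns the test orientation veridically; after adapting near the test orientation, the decoded value shifts away from the adaptor, the classic repulsive aftereffect.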
Brain representations for acquiring and recalling visual–motor adaptations
Humans readily learn and remember new motor skills, a process that likely underlies adaptation to changing environments. During adaptation, the brain develops new sensory–motor relationships, and if consolidation occurs, a memory of the adaptation can be retained for extended periods. Considerable evidence exists that multiple brain circuits participate in acquiring new sensory–motor memories; less is known, however, about the networks engaged in recalling these memories and about whether the same circuits participate in both their formation and recall. To address these issues, we assessed brain activation with functional MRI while young healthy adults learned and recalled new sensory–motor skills by adapting to world-view rotations of visual feedback that guided hand movements. We found cerebellar activation related to adaptation rate, likely reflecting changes related to overall adjustments to the visual rotation. A set of parietal and frontal regions, including inferior and superior parietal lobules, premotor area, supplementary motor area and primary somatosensory cortex, exhibited non-linear learning-related activation that peaked in the middle of the adaptation phase. Activation in some of these areas, including the inferior parietal lobule, intra-parietal sulcus and somatosensory cortex, likely reflected actual learning, since the activation correlated with learning after-effects. Lastly, we identified several structures having recall-related activation, including the anterior cingulate and the posterior putamen, since the activation correlated with recall efficacy. These findings demonstrate dynamic aspects of brain activation patterns related to formation and recall of a sensory–motor skill, such that non-overlapping brain regions participate in distinctive behavioral events.
• Humans adapt to world-based visual distortions.
• Cerebellum exhibits activation directly related to error reduction.
• Frontal–parietal network demonstrates learning-related activation.
• Putamen exhibits recall-related activation.
• Dynamic activation during motor learning and recall across brain structures.
Spatial frequency adaptation modulates population receptive field sizes
The spatial tuning of neuronal populations in the early visual cortical regions is related to the spatial frequency (SF) selectivity of neurons. However, there has been no direct investigation into how this relationship is reflected in population receptive field (pRF) sizes despite the common application of pRF mapping in visual neuroscience. We hypothesised that adaptation to high/low SF would decrease the sensitivity of neurons with respectively small/large receptive field sizes, resulting in a change in pRF sizes as measured by functional magnetic resonance imaging (fMRI). To test this hypothesis, we first quantified the SF aftereffect using a psychophysical paradigm where human observers made SF judgments following adaptation to high/low SF noise patterns. We then incorporated the same adaptation technique into a standard pRF mapping procedure to investigate the spatial tuning of the early visual cortex following SF adaptation. Results showed that adaptation to a low/high SF resulted in smaller/larger pRFs, respectively, as hypothesised. Our results provide the most direct evidence to date that the spatial tuning of the visual cortex, as measured by pRF mapping, is related to the SF selectivity of visual neural populations. This has implications for various domains of visual processing, including size perception and visual acuity.
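In standard pRF mapping, each voxel's response is modelled as the overlap between the stimulus aperture and a 2-D Gaussian receptive field whose size parameter (σ) is what this study reports as shifting after adaptation. A minimal sketch of that forward prediction (the function name, grid, and extent are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np

def prf_prediction(stim_frames, x0, y0, sigma, extent=10.0):
    """Predicted fMRI time course for a 2-D Gaussian pRF (before HRF convolution).

    stim_frames: (T, H, W) binary stimulus apertures in visual-field coordinates
    spanning [-extent, extent] degrees; (x0, y0) is the pRF centre and sigma
    its size, both in degrees.
    """
    T, H, W = stim_frames.shape
    xs = np.linspace(-extent, extent, W)
    ys = np.linspace(-extent, extent, H)
    X, Y = np.meshgrid(xs, ys)
    prf = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * sigma**2))
    prf /= prf.sum()  # unit volume, so predictions are comparable across sizes
    # response at each time point = overlap of the aperture with the pRF
    return (stim_frames * prf).sum(axis=(1, 2))
```

Fitting inverts this prediction: search over (x0, y0, σ) for the parameters whose predicted time course best matches the measured one. A post-adaptation change in the best-fitting σ is the kind of effect the abstract describes.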
Prediction error and repetition suppression have distinct effects on neural representations of visual information
Predictive coding theories argue that recent experience establishes expectations in the brain that generate prediction errors when violated. Prediction errors provide a possible explanation for repetition suppression, where evoked neural activity is attenuated across repeated presentations of the same stimulus. The predictive coding account argues repetition suppression arises because repeated stimuli are expected, whereas non-repeated stimuli are unexpected and thus elicit larger neural responses. Here, we employed electroencephalography in humans to test the predictive coding account of repetition suppression by presenting sequences of visual gratings with orientations that were expected either to repeat or change in separate blocks of trials. We applied multivariate forward modelling to determine how orientation selectivity was affected by repetition and prediction. Unexpected stimuli were associated with significantly enhanced orientation selectivity, whereas selectivity was unaffected for repeated stimuli. Our results suggest that repetition suppression and expectation have separable effects on neural representations of visual feature information.
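Multivariate forward modelling of the kind named here typically projects sensor patterns onto a basis of idealised orientation channels: channel-to-sensor weights are estimated from training trials by least squares, then inverted on held-out trials to recover channel responses whose peak indexes orientation selectivity. A schematic sketch (the channel count and basis shape are common conventions, assumed here rather than taken from the paper):

```python
import numpy as np

def make_basis(oris_deg, n_chans=6):
    """Idealised orientation channels: rectified sinusoids, 180-deg periodic."""
    centers = np.arange(0, 180, 180 / n_chans)
    d = np.deg2rad(oris_deg[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * d), 0) ** (n_chans - 1)

def fit_forward_model(train_data, train_oris, n_chans=6):
    """Estimate channel->sensor weights; return a function that inverts them.

    train_data: (trials, sensors) EEG patterns; train_oris: degrees (0-180).
    """
    C = make_basis(train_oris, n_chans)                 # (trials, channels)
    W, *_ = np.linalg.lstsq(C, train_data, rcond=None)  # (channels, sensors)

    def invert(test_data):
        # recover estimated channel responses for held-out trials
        Ct, *_ = np.linalg.lstsq(W.T, test_data.T, rcond=None)
        return Ct.T                                     # (trials, channels)
    return invert
```

In an analysis like the one described, enhanced orientation selectivity for unexpected stimuli would appear as a higher, sharper peak in the recovered channel response profile centred on the presented orientation.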
Predictably manipulating photoreceptor light responses to reveal their role in downstream visual responses
Computation in neural circuits relies on the judicious use of nonlinear circuit components. In many cases, multiple nonlinear components work collectively to control circuit outputs. Separating the contributions of these different components is difficult, and this limits our understanding of the mechanistic basis of many important computations. Here, we introduce a tool that permits the design of light stimuli that predictably alter rod and cone phototransduction currents – including stimuli that compensate for nonlinear properties such as light adaptation. This tool, based on well-established models for the rod and cone phototransduction cascade, permits the separation of nonlinearities in phototransduction from those in downstream circuits. This will allow, for example, direct tests of how adaptation in rod and cone phototransduction affects downstream visual signals and perception.
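The principle the tool exploits, that adaptation acts as a slow stimulus-dependent gain change which a model can predict and therefore compensate, can be illustrated with a toy divisive gain-control model. This is an illustration only, not the rod/cone phototransduction cascade model the paper is built on, and the parameter names are assumptions:

```python
import numpy as np

def adapted_response(stim, tau=0.2, dt=0.001, i_half=100.0):
    """Toy Weber-like light adaptation (illustrative, not a biophysical model).

    The response is the stimulus passed through a saturating nonlinearity
    whose half-saturation point grows with a low-pass-filtered copy of the
    stimulus, so steps from brighter backgrounds produce smaller responses.
    """
    a = i_half  # adaptation state, initialised at the half-saturating intensity
    out = np.empty(len(stim), dtype=float)
    for t, s in enumerate(stim):
        out[t] = s / (s + a + i_half)  # divisive, saturating nonlinearity
        a += dt / tau * (s - a)        # adaptation tracks the mean light level
    return out
```

Given such a model, one can search for a stimulus whose predicted output matches a target waveform, which is the sense in which the paper's tool lets light stimuli "compensate for" nonlinearities like light adaptation.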
Visual adaptation changes the susceptibility to the fission illusion
In the sound-induced flash illusion (SiFI), participants incorrectly perceive the number of visual flashes as equal to the number of auditory beeps when the two are presented within 100 ms. Although previous studies found that repetition suppression can reduce an individual’s perceptual sensitivity to the SiFI, there is not yet a consensus on how visual adaptation affects it. In the present study, we presented prolonged adapting visual stimuli before the audiovisual stimuli to investigate whether the bottom-up factor of adaptation affects the SiFI. The adapting visual stimuli consisted of one or two identical visual stimuli lasting 2 minutes in succession, followed by the audiovisual stimuli. Both adaptation conditions showed SiFI effects. For the fission illusion, accuracy after adapting to double flashes was significantly lower than after adapting to a single flash; our analyses indicated that this pattern could be attributed to a lower d′ after double-flash adaptation. For the fusion illusion, however, accuracy, discriminability and criterion did not differ significantly between the two adaptation conditions, owing to the instability of that illusion. Thus, the present study indicates that the reduced perceptual sensitivity produced by visual adaptation can enhance the fission illusion in multisensory integration.
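The d′ (discriminability) and criterion measures used in that analysis come from standard signal-detection theory: z-transform the hit and false-alarm rates, then take their difference (sensitivity) and their negated mean (criterion). A minimal implementation, with a common log-linear correction for extreme rates (the correction choice is an assumption; the study does not state its exact procedure):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and criterion (c) from trial counts.

    Adds 0.5 to each cell (log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d = z(hr) - z(far)
    c = -0.5 * (z(hr) + z(far))
    return d, c
```

A lower d′ in one adaptation condition, as reported for double-flash adaptation, means the hit and false-alarm distributions are less separable there, independent of any shift in response bias.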
Flexible retinomorphic vision sensors with scotopic and photopic adaptation for a fully flexible neuromorphic machine vision system
Bioinspired neuromorphic machine vision system (NMVS) that integrates retinomorphic sensing and neuromorphic computing into one monolithic system is regarded as the most promising architecture for visual perception. However, the large intensity range of natural lights and complex illumination conditions in actual scenarios always require the NMVS to dynamically adjust its sensitivity according to the environmental conditions, just like the visual adaptation function of the human retina. Although some opto‐sensors with scotopic or photopic adaptation have been developed, NMVSs, especially fully flexible NMVSs, with both scotopic and photopic adaptation functions are rarely reported. Here we propose an ion‐modulation strategy to dynamically adjust the photosensitivity and time‐varying activation/inhibition characteristics depending on the illumination conditions, and develop a flexible ion‐modulated phototransistor array based on MoS2/graphdiyne heterostructure, which can execute both retinomorphic sensing and neuromorphic computing. By controlling the intercalated Li+ ions in graphdiyne, both scotopic and photopic adaptation functions are demonstrated successfully. A fully flexible NMVS consisting of front‐end retinomorphic vision sensors and a back‐end convolutional neural network is constructed based on the as‐fabricated 28 × 28 device array, demonstrating high recognition accuracies for both dim and bright images and robust flexibility. This effort for fully flexible and monolithic NMVS paves the way for its applications in wearable scenarios. A flexible phototransistor array that can execute both retinomorphic sensing and neuromorphic computing is developed, demonstrating both scotopic and photopic adaptation functions.
Based on this device array, a fully flexible neuromorphic machine vision system consisting of front‐end retinomorphic vision sensors and back‐end convolutional neural networks is constructed, demonstrating high recognition accuracies for dim and bright images.
The Role of Temporal and Spatial Attention in Size Adaptation
One of the most important tasks for the visual system is to construct an internal representation of the spatial properties of objects, including their size. Size perception includes a combination of bottom-up (retinal inputs) and top-down (e.g., expectations) information, which makes the estimates of object size malleable and susceptible to numerous contextual cues. For example, it has been shown that size perception is prone to adaptation: brief previous presentations of larger or smaller adapting stimuli at the same region of space changes the perceived size of a subsequent test stimulus. Large adapting stimuli cause the test to appear smaller than its veridical size and vice versa. Here, we investigated whether size adaptation is susceptible to attentional modulation. First, we measured the magnitude of adaptation aftereffects for a size discrimination task. Then, we compared these aftereffects (on average 15-20%) with those measured while participants were engaged, during the adaptation phase, in one of the two highly demanding central visual tasks: Multiple Object Tracking (MOT) or Rapid Serial Visual Presentation (RSVP). Our results indicate that deploying visual attention away from the adapters did not significantly affect the distortions of perceived size induced by adaptation, with accuracy and precision in the discrimination task being almost identical in all experimental conditions. Taken together, these results suggest that visual attention does not play a key role in size adaptation, in line with the idea that this phenomenon can be accounted for by local gain control mechanisms within area V1.
Causal role of the frontal eye field in attention-induced ocular dominance plasticity
Previous research has found that prolonged eye-based attention can bias ocular dominance: if one eye views a regular movie for an extended period while the opposite eye views a backward movie of the same episode, perceptual ocular dominance shifts towards the eye that viewed the backward movie. Yet it remains unclear whether eye-based attention plays a causal role in this phenomenon. To address this issue, the present study used both functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS). We found robust activation of the frontal eye field (FEF) and intraparietal sulcus (IPS) when participants were watching the dichoptic movie while focusing their attention on the regular movie. Interestingly, we found a robust attention-induced ocular dominance shift when the cortical function of the vertex or IPS was transiently inhibited by continuous theta burst stimulation (cTBS), yet the effect was attenuated to a negligible extent when cTBS was delivered to FEF. A control experiment verified that the attenuation of the ocular dominance shift after inhibitory stimulation of FEF was not due to any impact of the cTBS on the binocular rivalry measurement of ocular dominance. These findings suggest that the fronto-parietal attentional network is involved in controlling eye-based attention in the ‘dichoptic-backward-movie’ adaptation paradigm, and that within this network FEF plays a crucial causal role in generating the attention-induced ocular dominance shift.
Visual mode switching learned through repeated adaptation to color
When the environment changes, vision adapts to maintain accurate perception. For repeatedly encountered environments, learning to adjust more rapidly would be beneficial, but past work remains inconclusive. We tested if the visual system can learn such visual mode switching for a strongly color-tinted environment, where adaptation causes the dominant hue to fade over time. Eleven observers wore bright red glasses for five 1-hr periods per day, for 5 days. Color adaptation was measured by asking observers to identify ‘unique yellow’, appearing neither reddish nor greenish. As expected, the world appeared less and less reddish during the 1-hr periods of glasses wear. Critically, across days the world also appeared significantly less reddish immediately upon donning the glasses. These results indicate that the visual system learned to rapidly adjust to the reddish environment, switching modes to stabilize color vision. Mode switching likely provides a general strategy to optimize perceptual processes.