9,686 result(s) for "Optic flow"
Intermittent control and retinal optic flow when maintaining a curvilinear path
The topic of how humans navigate using vision has been studied for decades. Research has identified that emergent patterns of retinal optic flow from gaze behavior may play an essential role in human curvilinear locomotion; however, the link to control has remained poorly understood. Lately, it has been shown that human locomotor behavior is corrective, formed from intermittent decisions and responses. A simulated virtual reality experiment was conducted in which fourteen participants drove through a texture-rich, simplistic road environment with left and right curve bends. The goal was to investigate how human intermittent lateral control can be associated with retinal optic flow-based cues and vehicular heading as sources of information. This work reconstructs dense retinal optic flow using a numerical estimation of optic flow combined with measured gaze behavior. By combining retinal optic flow with the drivable lane surface, a cross-correlational relation to intermittent steering behavior could be observed. In addition, a novel method of identifying constituent ballistic corrections using particle swarm optimization was demonstrated to analyze the incremental correction-based behavior. Through time delay analysis, our results show a human response time of approximately 0.14 s for retinal optic flow-based cues and 0.44 s for heading-based cues, measured from stimulus onset to steering correction onset. These response times were further delayed by 0.17 s when the vehicle-fixed steering wheel was visibly removed. In contrast to classical continuous control strategies, our findings support and argue for the intermittency property in human neuromuscular control of muscle synergies, through the principle of satisficing behavior: to actuate only when there is a perceived need for it.
This is aligned with the human sustained sensorimotor model, which uses readily available information and internal models to produce informed responses through evidence accumulation to initiate appropriate ballistic correction, even amidst another correction.
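The time delay analysis this abstract mentions can be illustrated with a minimal sketch (an illustrative assumption, not the authors' pipeline): the response delay between a stimulus signal and a steering signal is taken as the lag that maximizes their cross-correlation.

```python
# Illustrative sketch: estimate a response delay as the lag that
# maximizes the cross-correlation between stimulus and response.

def best_lag(stimulus, response, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    def xcorr(lag):
        return sum(stimulus[i] * response[i + lag]
                   for i in range(len(stimulus) - lag))
    return max(range(max_lag + 1), key=xcorr)

# Synthetic example: the response is the stimulus shifted by 7 samples.
stim = [0.0] * 50
stim[10] = 1.0
stim[30] = 1.0
resp = [0.0] * 7 + stim[:-7]
print(best_lag(stim, resp, 20))  # 7
```

With real steering data one would first de-mean and normalize both signals; the toy impulse signals above keep the idea visible.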
Control and recalibration of path integration in place cells using optic flow
Hippocampal place cells are influenced by both self-motion (idiothetic) signals and external sensory landmarks as an animal navigates its environment. To continuously update a position signal on an internal ‘cognitive map’, the hippocampal system integrates self-motion signals over time, a process that relies on a finely calibrated path integration gain that relates movement in physical space to movement on the cognitive map. It is unclear whether idiothetic cues alone, such as optic flow, exert sufficient influence on the cognitive map to enable recalibration of path integration, or if polarizing position information provided by landmarks is essential for this recalibration. Here, we demonstrate both recalibration of path integration gain and systematic control of place fields by pure optic flow information in freely moving rats. These findings demonstrate that the brain continuously rebalances the influence of conflicting idiothetic cues to fine-tune the neural dynamics of path integration, and that this recalibration process does not require a top-down, unambiguous position signal from landmarks. Using a closed-loop virtual reality system, the authors show that optic flow cues can causally drive and recalibrate the hippocampal place cell system in the absence of an absolute spatial reference frame defined by external landmarks.
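The path-integration-gain recalibration described above can be sketched as a simple update rule (the rule, names, and learning rate are hypothetical stand-ins for illustration, not the paper's model): movement on the cognitive map equals a gain times physical movement, and the gain drifts toward the value implied by the optic flow.

```python
# Hypothetical sketch: the path integration gain is nudged toward the
# ratio of visually signaled movement to physically executed movement.

def recalibrate(gain, physical_step, visual_step, rate=0.1):
    """Move the gain a fraction `rate` toward visual_step / physical_step."""
    target = visual_step / physical_step
    return gain + rate * (target - gain)

gain = 1.0
# Optic flow consistently signals 20% more movement than self-motion cues:
for _ in range(50):
    gain = recalibrate(gain, physical_step=1.0, visual_step=1.2)
print(round(gain, 3))  # converges toward 1.2
```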
Axes of self-motion and object motion shape how we perceive world-relative motion
When we move through the environment, the direction of objects in the optic array changes, producing an optic flow. To perceive world-relative object motion during self-motion, complex flow vectors are decomposed during a process called flow parsing. The real world and realistic VR environments contain abundant depth and distance cues, including size and binocular disparities. When targets move in various directions, the distance signals potentially aid in the flow parsing process. We designed two experiments with our wide-field stereoscopic environment. Participants observed target motions during visually simulated self-motion and indicated the direction of target motion with respect to a scene depicting a large room (Experiment 1) or a cluster of 3D objects (Experiment 2). Forward-backward and left-right target motions, as well as self-motions, were simulated. Optic flow and motion vectors were controlled across conditions to examine cues to target distance and motion in depth, such as binocular disparity and object size, and the change in these signals (e.g. looming, change in disparity, interocular velocity difference). During left-right locomotion through both environments, flow parsing gains were significantly lower for left-right than for forward-backward moving targets. However, during forward-backward locomotion, left-right moving targets yielded significantly higher flow parsing gains than forward-backward moving targets. Overall, flow parsing gains were higher when self-motion and target motion were orthogonal to each other than when they were parallel. These findings provide evidence that depth and distance cues are integrated in perceiving world-relative object motion during self-motion. Availability of such signals improves the effectiveness of flow parsing.
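The core flow-parsing idea this abstract builds on can be sketched minimally (an illustrative assumption, not the authors' implementation): world-relative object motion is recovered by subtracting the flow component attributable to self-motion from the object's retinal motion vector.

```python
# Illustrative sketch of flow parsing: world-relative object motion is
# the retinal motion minus the local flow caused by self-motion.

def parse_flow(retinal_motion, self_motion_flow):
    """Subtract the self-motion flow vector at the object's location."""
    return tuple(r - s for r, s in zip(retinal_motion, self_motion_flow))

# Retinal motion of a target (deg/s) and the local flow that forward
# self-motion produces at that retinal location:
retinal = (3.0, 1.0)
ego = (2.0, 1.0)
print(parse_flow(retinal, ego))  # (1.0, 0.0): leftover world-relative motion
```

The experiments' "gain" measure can be thought of as how completely this subtraction is carried out by the observer.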
Background optic flow modulates responses of multiple descending interneurons to object motion in locusts
Animals flying within natural environments are constantly challenged with complex visual information. Therefore, it is necessary to understand the impact of the visual background on the motion detection system. Locusts possess a well-identified looming detection pathway, comprising the lobula giant movement detector (LGMD) and the descending contralateral movement detector (DCMD). The LGMD/DCMD pathway responds preferentially to objects on a collision course, and the response of this pathway is affected by the background complexity. However, multiple other neurons are also responsive to looming stimuli. In this study, we presented looming stimuli against different visual backgrounds to a rigidly tethered locust, and simultaneously recorded the neural activity with a multichannel electrode. We found that the number of spike-sorted units that responded to looms was not affected by the visual background. However, the peak times of these units were delayed, and the rise phase was shortened in the presence of a flow field background. Dynamic factor analysis (DFA) revealed that fewer types of common trends were present among the units responding to looming stimuli against the flow field background, and the response onset time was delayed among the common trends as well. These results suggest that background complexity affects the response of multiple motion-sensitive neurons, yet the animal is still capable of responding to potentially hazardous visual stimuli.
Molecular and functional dissection using CaMPARI-seq reveals the neuronal organization for dissociating optic flow-dependent behaviors
Optic flow processing is critical for the visual control of body and eye movements in many animals. Rotational and translational binocular optic flow patterns need to be clearly distinguished to induce different behavior outputs. However, the specific neuron types and their connectivity involved in this computation remain unclear. Here, we developed a method to link the functional labeling using a photoconvertible calcium indicator called CaMPARI2 and single-cell RNA sequencing (CaMPARI-seq) to investigate the transcriptional profile of the pretectum, a center for processing optic flow in larval zebrafish. Using this technique, we identified a pretectal cluster expressing tcf7l2, which can be further classified into molecularly distinct subclusters. In vivo calcium imaging and cell ablation revealed that nkx1.2lb-positive pretectal neurons are commissural inhibitory neurons required for the optomotor response but not for the optokinetic response. Our genetic and functional dissection using CaMPARI-seq uncovered the neuronal organization essential for dissociating different optic flow-dependent behaviors. In this study, the authors develop CaMPARI-seq to link neural activity with molecular profiles in larval zebrafish. They identify inhibitory pretectal neurons required for optomotor response, revealing how the brain distinguishes optic flow to guide behavior.
Precision and temporal dynamics in heading perception assessed by continuous psychophysics
It is a well-established finding that more informative optic flow (e.g., faster, denser, or presented over a larger portion of the visual field) yields decreased variability in heading judgements. Current models of heading perception further predict faster processing under such circumstances, which has, however, not been supported empirically so far. In this study, we validate a novel continuous psychophysics paradigm by replicating the effect of the speed and density of optic flow on variability in performance, and we investigate how these manipulations affect the temporal dynamics. To this end, we tested 30 participants in a continuous psychophysics paradigm administered in Virtual Reality. We immersed them in a simple virtual environment where they experienced four 90-second blocks of optic flow in which their linear heading direction (no simulated rotation) at any given moment was determined by a random walk. We asked them to continuously indicate with a joystick the direction in which they perceived themselves to be moving. In each of the four blocks they experienced a different combination of simulated self-motion speeds (SLOW and FAST) and density of optic flow (SPARSE and DENSE). Using a Cross-Correlogram Analysis, we determined that participants reacted faster and displayed lower variability in their performance in the FAST and DENSE conditions than in the SLOW and SPARSE conditions, respectively. Using a Kalman Filter-based analysis approach, we found a similar pattern, where the fitted perceptual noise parameters were higher for SLOW and SPARSE. While replicating previous results on variability, we show that more informative optic flow can speed up heading judgements, while at the same time validating continuous psychophysics as an efficient method for studying heading perception.
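The Kalman-filter-based analysis mentioned above can be illustrated with a minimal one-dimensional sketch (an assumption for illustration, not the study's fitted model): the heading follows a random walk, the joystick report is a noisy observation of it, and a 1D Kalman filter tracks the state.

```python
import random

# Minimal 1D Kalman filter tracking a random-walk state from noisy
# observations; q is the process (random-walk) variance, r the
# observation variance.

def kalman_track(observations, q=0.01, r=0.25):
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        p += q                   # predict: the random walk adds variance q
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with the noisy observation
        p *= (1 - k)
        estimates.append(x)
    return estimates

random.seed(0)
truth, obs, true = [], [], 0.0
for _ in range(200):
    true += random.gauss(0.0, 0.1)             # heading: random walk
    truth.append(true)
    obs.append(true + random.gauss(0.0, 0.5))  # noisy continuous report
est = kalman_track(obs)
# The filtered estimate tracks the true heading more closely than the
# raw noisy reports do.
```

In the study's framework, the fitted noise parameters (here q and r) are what differ between the SLOW/SPARSE and FAST/DENSE conditions.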
Distinct detection and discrimination sensitivities in visual processing of real versus unreal optic flow
We examined the intricate mechanisms underlying visual processing of complex motion stimuli by measuring the detection sensitivity to contraction and expansion patterns and the discrimination sensitivity to the location of the center of motion (CoM) in various real and unreal optic flow stimuli. We conducted two experiments (N = 20 each) and compared responses to both "real" optic flow stimuli containing information about self-movement in a three-dimensional scene and "unreal" optic flow stimuli lacking such information. We found that detection sensitivity to contraction surpassed that to expansion patterns for unreal optic flow stimuli, whereas this trend was reversed for real optic flow stimuli. Furthermore, while discrimination sensitivity to the CoM location was not affected by stimulus duration for unreal optic flow stimuli, it showed a significant improvement when stimulus duration increased from 100 to 400 ms for real optic flow stimuli. These findings provide compelling evidence that the visual system employs distinct processing approaches for real versus unreal optic flow even when they are perfectly matched for two-dimensional global features and local motion signals. These differences reveal influences of self-movement in natural environments, enabling the visual system to uniquely process stimuli with significant survival implications.
A linear perception-action mapping accounts for response range-dependent biases in heading estimation from optic flow
Accurate estimation of heading direction from optic flow is a crucial aspect of human spatial perception. Previous psychophysical studies have shown that humans are typically biased in their heading estimates, but the reported results are inconsistent. While some studies found that humans generally underestimate heading direction (center bias), others observed the opposite, an overestimation of heading direction (peripheral bias). We conducted three psychophysical experiments showing that these conflicting findings may not reflect inherent differences in heading perception but can be attributed to the different sizes of the response range that participants were allowed to utilize when reporting their estimates. Notably, we show that participants' heading estimates monotonically scale with the size of the response range, leading to underestimation for small and overestimation for large response ranges. Additionally, neither the speed profile of the optic flow pattern nor the response method (mouse vs. keyboard) significantly affected participants' estimates. Furthermore, we introduce a Bayesian heading estimation model that can quantitatively account for participants' heading reports. The model assumes efficient sensory encoding of heading direction according to a prior inferred from human heading discrimination data. In addition, the model assumes a response mapping that linearly scales the perceptual estimate with a scaling factor that monotonically depends on the size of the response range. This simple perception-action model accurately predicts participants' estimates both in terms of mean and variance across all experimental conditions. Our findings underscore that human heading perception follows efficient Bayesian inference; differences in participants' reported estimates can be parsimoniously explained as differences in mapping percept to probe response.
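The linear response mapping this abstract describes can be sketched as follows (the specific scaling function is a hypothetical stand-in, not the fitted model): the reported heading is the perceptual estimate scaled by a factor that grows monotonically with the size of the allowed response range.

```python
# Toy sketch of a linear perception-action mapping: the report is the
# percept times a range-dependent scale factor.

def reported_heading(percept_deg, response_range_deg):
    # Hypothetical monotone scaling: ranges narrower than 90 deg compress
    # the report (center bias), wider ranges expand it (peripheral bias).
    scale = response_range_deg / 90.0
    return scale * percept_deg

print(reported_heading(10.0, 45.0))   # 5.0  -> underestimation (center bias)
print(reported_heading(10.0, 180.0))  # 20.0 -> overestimation (peripheral bias)
```

The same percept thus yields opposite biases depending only on the response range, which is the reconciliation the paper proposes for the conflicting literature.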
Flexible computation of object motion and depth based on viewing geometry inferred from optic flow
We move our eyes and head to sample the visual environment. While these movements are essential for survival, they greatly complicate the analysis of retinal image motion. Our brain must account for the visual consequences of self-motion to perceive the 3D layout and motion of objects in a scene. We show that traditional models of visual compensation for eye movements fail when the eye both translates and rotates, and we propose a theory that computes both motion and depth in more natural viewing geometries. Consistent with our theoretical predictions, humans exhibit distinct perceptual biases when different viewing geometries are simulated by optic flow, and these biases occur without training or feedback. A neural network model trained to perform the same tasks suggests that viewing geometry modulates the joint tuning of neurons for retinal and eye velocity to mediate these adaptive computations. Our findings unify previously separate bodies of work by demonstrating that the brain adaptively perceives the dynamic 3D environment according to viewing geometry inferred from optic flow. People typically perceive the motion and depth of objects correctly even during walking or running, when visual inputs change markedly. The authors show that this accurate perception is achieved by inferring the observer’s viewing geometry from optic flow.
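The viewing-geometry problem above is conventionally formalized with the standard pinhole-camera optic-flow equations (a textbook formulation with one common sign convention, not the authors' model): retinal image velocity is the sum of a translational component that scales with inverse depth and a rotational component that is depth-independent.

```python
# Standard optic-flow decomposition at a normalized image point (x, y):
# image velocity = translational flow (depends on depth Z)
#                + rotational flow (independent of Z).

def retinal_flow(x, y, T, omega, Z):
    """T = (Tx, Ty, Tz): eye translation; omega = (wx, wy, wz): eye
    rotation; Z: depth of the viewed point."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    # Translational component, attenuated by depth:
    ut = (-Tx + x * Tz) / Z
    vt = (-Ty + y * Tz) / Z
    # Rotational component, independent of depth:
    ur = x * y * wx - (1 + x * x) * wy + y * wz
    vr = (1 + y * y) * wx - x * y * wy - x * wz
    return (ut + ur, vt + vr)

# Pure forward translation: flow expands away from the image center.
print(retinal_flow(0.1, 0.0, (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 2.0))
```

Because only the translational term carries depth, combined eye translation and rotation entangle motion and depth, which is why compensation models that assume pure rotation break down in the geometries the paper simulates.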
Ants integrate proprioception as well as visual context and efference copies to make robust predictions
Forward models are mechanisms enabling an agent to predict the sensory outcomes of its actions. They can be implemented through efference copies: copies of motor signals inhibiting the expected sensory stimulation, literally canceling the perceptual outcome of the predicted action. In insects, efference copies are known to modulate optic flow detection for flight control in flies. Here we investigate whether forward models account for the detection of optic flow in walking ants, and how the latter is integrated for locomotion control. We mounted Cataglyphis velox ants in a virtual reality setup and manipulated the relationship between the ants' movements and the optic flow perceived. Our results show that ants compute predictions of the optic flow expected according to their own movements. However, the prediction is not based solely on efference copies, but involves proprioceptive feedback and is fine-tuned by the panorama's visual structure. Mismatches between prediction and perception are computed for each eye, and error signals are integrated to adjust locomotion through the modulation of internal oscillators. Our work reveals that insects' forward models are non-trivial and compute predictions based on multimodal information. Vertebrates' brains produce predictions of the visual outcome expected from their own movements. Here the authors show that ants' brains also produce such predictions by combining multiple sources of information such as copies of their motor commands, proprioception and the environment's visual structure.