69 result(s) for "Geisler, Wilson S"
Optimal defocus estimation in individual natural images
Defocus blur is nearly always present in natural images: objects at only one distance can be perfectly focused. Images of objects at other distances are blurred by an amount depending on pupil diameter and lens properties. Despite the fact that defocus is of great behavioral, perceptual, and biological importance, it is unknown how biological systems estimate defocus. Given a set of natural scenes and the properties of the vision system, we show from first principles how to optimally estimate defocus at each location in any individual image. We show for the human visual system that high-precision, unbiased estimates are obtainable under natural viewing conditions for patches with detectable contrast. The high quality of the estimates is surprising given the heterogeneity of natural images. Additionally, we quantify the degree to which the sign ambiguity often attributed to defocus is resolved by monochromatic aberrations (other than defocus) and chromatic aberrations; chromatic aberrations fully resolve the sign ambiguity. Finally, we show that simple spatial and spatio-chromatic receptive fields extract the information optimally. The approach can be tailored to any environment-vision system pairing: natural or man-made, animal or machine. Thus, it provides a principled general framework for analyzing the psychophysics and neurophysiology of defocus estimation in species across the animal kingdom and for developing optimal image-based defocus and depth estimation algorithms for computational vision systems.
Local reliability weighting explains identification of partially masked objects in natural images
A fundamental natural visual task is the identification of specific target objects in the environments that surround us. It has long been known that some properties of the background have strong effects on target visibility. The most well-known properties are the luminance, contrast, and similarity of the background to the target. In previous studies, we found that these properties have highly lawful effects on detection in natural backgrounds. However, there is another important factor affecting detection in natural backgrounds that has received little or no attention in the masking literature, which has been concerned with detection in simpler backgrounds. Namely, in natural backgrounds the properties of the background often vary under the target, and hence some parts of the target are masked more than others. We began studying this factor, which we call the “partial masking factor,” by measuring detection thresholds in backgrounds of contrast-modulated white noise that was constructed so that the standard template-matching (TM) observer performs equally well whether or not the noise contrast modulates in the target region. If noise contrast is uniform in the target region, then this TM observer is the Bayesian optimal observer. However, when the noise contrast modulates, the Bayesian optimal observer weights the template at each pixel location by the estimated reliability at that location. We find that human performance for modulated noise backgrounds is predicted by this reliability-weighted TM (RTM) observer. More surprisingly, we find that human performance for natural backgrounds is also predicted by the RTM observer.
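The reliability-weighted template-matching (RTM) rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the pixelwise reliability is the inverse of the locally estimated noise variance, and that the decision variable is a simple weighted dot product.

```python
import numpy as np

def rtm_response(image, template, noise_var):
    """Reliability-weighted template matching (RTM): weight the template
    at each pixel by the estimated reliability, taken here as the
    inverse of the local noise variance (an assumption), then correlate."""
    weights = 1.0 / noise_var
    return np.sum(image * template * weights)

def tm_response(image, template):
    """Standard template matching (TM): uniform weighting."""
    return np.sum(image * template)

# toy demo: a uniform target in noise whose contrast modulates under it
rng = np.random.default_rng(0)
template = np.ones((8, 8))
noise_var = np.ones((8, 8))
noise_var[:, 4:] = 9.0                      # right half of the target is noisier
image = template + rng.normal(0.0, np.sqrt(noise_var))
r_rtm = rtm_response(image, template, noise_var)
r_tm = tm_response(image, template)
```

With uniform noise variance the RTM and TM responses coincide up to a scale factor; they diverge only where the background contrast modulates under the target, which is exactly the regime the experiment probes.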
Optimal decoding of correlated neural population responses in the primate visual cortex
Even the simplest environmental stimuli elicit responses in large populations of neurons in early sensory cortical areas. How these distributed responses are read out by subsequent processing stages to mediate behavior remains unknown. Here we used voltage-sensitive dye imaging to measure directly population responses in the primary visual cortex (V1) of monkeys performing a demanding visual detection task. We then evaluated the ability of different decoding rules to detect the target from the measured neural responses. We found that small visual targets elicit widespread responses in V1, and that response variability at distant sites is highly correlated. These correlations render most previously proposed decoding rules inefficient relative to one that uses spatially antagonistic center-surround summation. This optimal decoder consistently outperformed the monkey in the detection task, demonstrating the sensitivity of our techniques. Overall, our results suggest an unexpected role for inhibitory mechanisms in efficient decoding of neural population responses.
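One plausible instantiation of a spatially antagonistic center-surround decoder is a difference-of-Gaussians pooling of the population response map. The parameter values below are illustrative stand-ins, not the fitted values from the study.

```python
import numpy as np

def dog_weights(size, sigma_c, sigma_s, surround_gain):
    """Difference-of-Gaussians pooling: excitatory center minus
    inhibitory surround, each lobe normalized to unit mass."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2.0 * sigma_c ** 2))
    surround = np.exp(-r2 / (2.0 * sigma_s ** 2))
    return center / center.sum() - surround_gain * surround / surround.sum()

def decode(population_map, weights):
    """Collapse the population response map to one decision variable;
    detection then compares it to a criterion."""
    return np.sum(population_map * weights)

# illustrative parameters (not values from the paper)
w = dog_weights(size=21, sigma_c=2.0, sigma_s=6.0, surround_gain=0.8)
```

The inhibitory surround cancels the widespread, correlated component of the response, which is why this rule outperforms pooling rules with purely excitatory weights when distant-site variability is correlated.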
Neural correlates of perceptual similarity masking in primate V1
Visual detection is a fundamental natural task. Detection becomes more challenging as the similarity between the target and the background in which it is embedded increases, a phenomenon termed ‘similarity masking’. To test the hypothesis that V1 contributes to similarity masking, we used voltage sensitive dye imaging (VSDI) to measure V1 population responses while macaque monkeys performed a detection task under varying levels of target-background similarity. Paradoxically, we find that during an initial transient phase, V1 responses to the target are enhanced, rather than suppressed, by target-background similarity. This effect reverses in the second phase of the response, so that in this phase V1 signals are positively correlated with the behavioral effect of similarity. Finally, we show that a simple model with delayed divisive normalization can qualitatively account for our findings. Overall, our results support the hypothesis that a nonlinear gain control mechanism in V1 contributes to perceptual similarity masking.
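A toy version of a delayed divisive normalization model illustrates the two-phase dynamics described above: before the normalization signal arrives, similarity-driven input enhances the response; once it arrives, the same input divides the response down. The delay, semisaturation constant, and drive values below are arbitrary illustrations, not fitted model parameters.

```python
import numpy as np

def delayed_divisive_normalization(drive, tau_norm=3, sigma=0.1):
    """Divide the feedforward drive by a normalization signal that lags
    it by tau_norm time steps. Early (transient) responses escape
    suppression; later responses are divided down."""
    n_t = len(drive)
    norm = np.full(n_t, sigma)                   # semisaturation constant alone at first
    norm[tau_norm:] += drive[:n_t - tau_norm]    # delayed copy of the drive
    return drive / norm

# a step of similarity-driven input: enhanced at onset, suppressed later
drive = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 1.0])
resp = delayed_divisive_normalization(drive)
```

The response to the step is large during the initial transient and drops once the delayed normalization pool engages, qualitatively matching the sign reversal between the two phases of the V1 response.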
Independence of luminance and contrast in natural scenes and in the early visual system
The early visual system is endowed with adaptive mechanisms that rapidly adjust gain and integration time based on the local luminance (mean intensity) and contrast (standard deviation of intensity relative to the mean). Here we show that these mechanisms are matched to the statistics of the environment. First, we measured the joint distribution of luminance and contrast in patches selected from natural images and found that luminance and contrast were statistically independent of each other. This independence did not hold for artificial images with matched spectral characteristics. Second, we characterized the effects of the adaptive mechanisms in lateral geniculate nucleus (LGN), the direct recipient of retinal outputs. We found that luminance gain control had the same effect at all contrasts and that contrast gain control had the same effect at all mean luminances. Thus, the adaptive mechanisms for luminance and contrast operate independently, reflecting the very independence encountered in natural images.
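The patch statistics described here are straightforward to compute. The sketch below tiles an image into non-overlapping patches, computes each patch's luminance (mean) and RMS contrast (standard deviation relative to the mean), and checks the correlation between the two; a near-zero correlation is necessary, though not sufficient, for statistical independence. The lognormal test image is a stand-in for a calibrated natural-image dataset.

```python
import numpy as np

def patch_luminance_contrast(patch):
    """Local luminance = mean intensity; local RMS contrast = standard
    deviation of intensity relative to the mean."""
    luminance = patch.mean()
    contrast = patch.std() / luminance if luminance > 0 else 0.0
    return luminance, contrast

def joint_stats(image, patch_size):
    """Luminance and contrast of every non-overlapping patch, i.e. the
    raw samples behind the joint luminance-contrast distribution."""
    lums, cons = [], []
    h, w = image.shape
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            l, c = patch_luminance_contrast(image[i:i + patch_size, j:j + patch_size])
            lums.append(l)
            cons.append(c)
    return np.array(lums), np.array(cons)

# lognormal pixels stand in for calibrated natural-image data
rng = np.random.default_rng(1)
img = rng.lognormal(mean=0.0, sigma=0.5, size=(64, 64))
lum, con = joint_stats(img, patch_size=8)
corr = np.corrcoef(lum, con)[0, 1]   # near zero is necessary for independence
```

A full test of independence would compare the joint histogram of `lum` and `con` against the product of the marginals, as the paper does; the correlation coefficient is just the cheapest first check.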
Similar neural and perceptual masking effects of low-power optogenetic stimulation in primate V1
Can direct stimulation of primate V1 substitute for a visual stimulus and mimic its perceptual effect? To address this question, we developed an optical-genetic toolkit to ‘read’ neural population responses using widefield calcium imaging, while simultaneously using optogenetics to ‘write’ neural responses into V1 of behaving macaques. We focused on the phenomenon of visual masking, where detection of a dim target is significantly reduced by a co-localized medium-brightness mask (Cornsweet and Pinsker, 1965; Whittle and Swanston, 1974). Using our toolkit, we tested whether V1 optogenetic stimulation can recapitulate the perceptual masking effect of a visual mask. We find that, similar to a visual mask, low-power optostimulation can significantly reduce visual detection sensitivity, that a sublinear interaction between visual- and optogenetic-evoked V1 responses could account for this perceptual effect, and that these neural and behavioral effects are spatially selective. Our toolkit and results open the door for further exploration of perceptual substitutions by direct stimulation of sensory cortex.
Calcium imaging with genetically encoded indicators in behaving primates
Understanding the neural basis of behaviour requires studying brain activity in behaving subjects using complementary techniques that measure neural responses at multiple spatial scales, and developing computational tools for understanding the mapping between these measurements. Here we report the first results of widefield imaging of genetically encoded calcium indicator (GCaMP6f) signals from V1 of behaving macaques. This technique provides a robust readout of visual population responses at the columnar scale over multiple mm² and over several months. To determine the quantitative relation between the widefield GCaMP signals and the locally pooled spiking activity, we developed a computational model that sums the responses of V1 neurons characterized by prior single unit measurements. The measured tuning properties of the GCaMP signals to stimulus contrast, orientation and spatial position closely match the predictions of the model, suggesting that widefield GCaMP signals are linearly related to the summed local spiking activity.

An important question in brain research is how neurons and the circuits they form process information to produce behavior. To understand what happens in a human brain, it is necessary to study a brain of similar complexity, such as that of a primate. Examining how the neurons in a brain region called the visual cortex process information about what we see is especially informative. This is because animals can be taught to perform different visual tasks, and because the visual cortex is relatively easy to access. In principle, therefore, it should be possible to use modern genetic and imaging techniques to study the primate visual system, but, until now, that has not been the case. Like much of the brain, the visual cortex consists of different classes of neurons that can excite, inhibit or modulate the activity of neighboring neurons.
One way to study how these different classes of neurons interact with each other is to alter the animal’s DNA, such that only one cell type stands out during the experiment, allowing its role in the brain to be closely monitored. This technique has been used to study the interactions among neurons in the rodent brain, because rodent DNA is easy to alter. However, it is not easy to manipulate primate DNA. Seidemann et al. have, therefore, developed a new technique that can target a specific class of neurons, allowing the activity of just these cells to be distinguished from the rest. The method uses specially designed harmless viruses to produce foreign proteins in the excitatory neurons of the visual cortex in an adult macaque. The optical properties of the proteins change when the neuron they are in is active, allowing the activity of the excitatory neurons to be detected and tracked in awake animals while they perform a visual task. Previously, the activity of neurons in the primate visual cortex could only be measured using dyes that indiscriminately reported the activity of all the neurons present. Seidemann et al. found that, in addition to being more selective than the dye-based method, the new technique also more accurately depicted neuronal action potentials, which are the primary units of information in the brain. Seidemann et al. now plan to use a similar method to study the activity of the inhibitory neurons of the primate visual cortex. Further examination of both excitatory and inhibitory neurons at much higher magnification, using a different microscopy technique, will also reveal more subtle features of their responses during visual tasks.
Constrained sampling experiments reveal principles of detection in natural scenes
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
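The normalized matched-template detector can be sketched as a template correlation divided by a normalizing gain that grows with background contrast. The simple divisive form below is a simplified stand-in for the gain rule the paper derives from natural-background statistics, and all stimulus values are illustrative.

```python
import numpy as np

def normalized_template_response(stimulus, template, bg_mean, bg_std, sigma0=1e-6):
    """Matched-template response divided by a normalizing gain set by
    the background contrast (its standard deviation here)."""
    gain = bg_std + sigma0                  # sigma0 avoids division by zero
    return np.sum((stimulus - bg_mean) * template) / gain

template = np.ones(16) / 4.0                # unit-energy template
background = np.full(16, 10.0)
target = 2.0 * template                     # amplitude-2 target
r_absent = normalized_template_response(background, template, 10.0, 1.0)
r_present = normalized_template_response(background + target, template, 10.0, 1.0)
r_high_contrast = normalized_template_response(background + target, template, 10.0, 2.0)
```

Doubling the background contrast halves the response to a fixed target, which is the mechanism by which a divisive gain produces the approximately linear rise of detection thresholds along the contrast dimension.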
Human Wavelength Discrimination of Monochromatic Light Explained by Optimal Wavelength Decoding of Light of Unknown Intensity
We show that human ability to discriminate the wavelength of monochromatic light can be understood as maximum likelihood decoding of the cone absorptions, with a signal processing efficiency that is independent of the wavelength. This work is built on the framework of ideal observer analysis of visual discrimination used in many previous works. A distinctive aspect of our work is that we highlight a perceptual confound: observers can confuse a change in input light wavelength with a change in input intensity. Hence a simple ideal observer model which assumes that an observer has full knowledge of input intensity should over-estimate human ability in discriminating wavelengths of two inputs of unequal intensity. This confound also makes it difficult to consistently measure human ability in wavelength discrimination by asking observers to distinguish two input colors while matching their brightness. We argue that the best experimental method for reliable measurement of discrimination thresholds is that of Pokorny and Smith, in which observers only need to distinguish two inputs, regardless of whether they differ in hue or brightness. We mathematically formulate wavelength discrimination under this wavelength-intensity confound and show a good agreement between our theoretical prediction and the behavioral data. Our analysis explains why the discrimination threshold varies with the input wavelength, and shows how sensitively the threshold depends on the relative densities of the three types of cones in the retina (and in particular predicts discrimination in dichromats). Our mathematical formulation and solution can be applied to general problems of sensory discrimination when there is a perceptual confound from other sensory feature dimensions.
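The wavelength-intensity confound can be made concrete with a joint maximum-likelihood search over both unknowns, assuming Poisson cone absorptions. The Gaussian cone fundamentals and the grid ranges below are crude illustrative stand-ins, not the paper's model of the human fundamentals.

```python
import numpy as np

# Gaussian cone fundamentals: a crude stand-in for the real S, M, L
# spectral sensitivities (peak wavelengths and bandwidth are illustrative)
PEAKS = np.array([440.0, 535.0, 565.0])

def cone_sensitivities(lam):
    return np.exp(-((lam - PEAKS) ** 2) / (2.0 * 30.0 ** 2))

def log_likelihood(counts, lam, intensity):
    """Poisson log-likelihood of cone absorption counts given a
    monochromatic light of this wavelength and intensity."""
    rates = intensity * cone_sensitivities(lam) + 1e-12
    return np.sum(counts * np.log(rates) - rates)

def ml_decode(counts):
    """Joint maximum-likelihood grid search over wavelength AND
    intensity, since the observer knows neither (the confound)."""
    best_lam, best_inten, best_ll = None, None, -np.inf
    for lam in np.linspace(400.0, 700.0, 301):
        for inten in np.linspace(50.0, 2000.0, 40):
            ll = log_likelihood(counts, lam, inten)
            if ll > best_ll:
                best_lam, best_inten, best_ll = lam, inten, ll
    return best_lam, best_inten

# noise-free absorptions for a 550 nm light of intensity 1000
counts = np.round(1000.0 * cone_sensitivities(550.0))
lam_hat, inten_hat = ml_decode(counts)
```

An ideal observer that instead assumed the intensity were known could collapse the inner loop, which is exactly the simplification the abstract argues over-estimates human performance for inputs of unequal intensity.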
Optimal speed estimation in natural image movies predicts human performance
Accurate perception of motion depends critically on accurate estimation of retinal motion speed. Here we first analyse natural image movies to determine the optimal space-time receptive fields (RFs) for encoding local motion speed in a particular direction, given the constraints of the early visual system. Next, from the RF responses to natural stimuli, we determine the neural computations that are optimal for combining and decoding the responses into estimates of speed. The computations show how selective, invariant speed-tuned units might be constructed by the nervous system. Then, in a psychophysical experiment using matched stimuli, we show that human performance is nearly optimal. Indeed, a single efficiency parameter accurately predicts the detailed shapes of a large set of human psychometric functions. We conclude that many properties of speed-selective neurons and human speed discrimination performance are predicted by the optimal computations, and that natural stimulus variation affects optimal and human observers almost identically.

Accurate motion perception depends on accurate estimation of retinal motion speed. Here, from natural image movies, the authors derive the optimal computational rules for estimating speed, and show that these computations predict both human speed discrimination performance and the tuning of speed-selective neurons.