1,579 result(s) for "Form Perception - physiology"
An illusion predicted by V1 population activity implicates cortical topography in shape perception
Here the authors combine computational modeling, voltage-sensitive dye imaging (VSDI) in behaving monkeys, and behavioral measurements in humans, to investigate whether the large-scale topography of V1 population responses influences shape judgments. They find the judgments of human observers were systematically distorted as had been predicted based on the VSDI responses in monkey V1. Mammalian primary visual cortex (V1) is topographically organized such that the pattern of neural activation in V1 reflects the location and spatial extent of visual elements in the retinal image, but it is unclear whether this organization contributes to visual perception. We combined computational modeling, voltage-sensitive dye imaging (VSDI) in behaving monkeys and behavioral measurements in humans to investigate whether the large-scale topography of V1 population responses influences shape judgments. Specifically, we used a computational model to design visual stimuli that had the same physical shape, but were predicted to elicit variable V1 response spread. We confirmed these predictions with VSDI. Finally, we designed a behavioral task in which human observers judged the shapes of these stimuli and found that their judgments were systematically distorted by the spread of V1 activity. This illusion suggests that the topographic pattern of neural population responses in visual cortex contributes to visual perception.
Abnormal Contextual Modulation of Visual Contour Detection in Patients with Schizophrenia
Schizophrenia patients demonstrate perceptual deficits consistent with broad dysfunction in visual context processing. These include poor integration of segments forming visual contours, and reduced visual contrast effects (e.g. weaker orientation-dependent surround suppression, ODSS). Background image context can influence contour perception, as stimuli near the contour affect detection accuracy. Because of ODSS, this contextual modulation depends on the relative orientation between the contour and flanking elements, with parallel flankers impairing contour perception. However, in schizophrenia, the impact of abnormal ODSS during contour perception is not clear. It is also unknown whether deficient contour perception marks genetic liability for schizophrenia, or is strictly associated with clinical expression of this disorder. We examined contour detection in 25 adults with schizophrenia, 13 unaffected first-degree biological relatives of schizophrenia patients, and 28 healthy controls. Subjects performed a psychophysics experiment designed to quantify the effect of flanker orientation during contour detection. Overall, patients with schizophrenia showed poorer contour detection performance than relatives or controls. Parallel flankers suppressed and orthogonal flankers enhanced contour detection performance for all groups, but parallel suppression was relatively weaker for schizophrenia patients than healthy controls. Relatives of patients showed performance equivalent to controls. Computational modeling suggested that abnormal contextual modulation in schizophrenia may be explained by suppression that is more broadly tuned for orientation. Abnormal flanker suppression in schizophrenia is consistent with weaker ODSS and/or broader orientation tuning. This work provides the first evidence that such perceptual abnormalities may not be associated with a genetic liability for schizophrenia.
Visual selective attention is equally functional for individuals with low and high working memory capacity: Evidence from accuracy and eye movements
Selective attention and working memory capacity (WMC) are related constructs, but debate about the manner in which they are related remains active. One elegant explanation of variance in WMC is that the efficiency of filtering irrelevant information is the crucial determining factor, rather than differences in capacity per se. We examined this hypothesis by relating WMC (as measured by complex span tasks) to accuracy and eye movements during visual change detection tasks with different degrees of attentional filtering and allocation requirements. Our results did not indicate strong filtering differences between high- and low-WMC groups, and where differences were observed, they were counter to those predicted by the strongest attentional filtering hypothesis. Bayes factors indicated evidence favoring positive or null relationships between WMC and correct responses to unemphasized information, as well as between WMC and the time spent looking at unemphasized information. These findings are consistent with the hypothesis that individual differences in storage capacity, not only filtering efficiency, underlie individual differences in working memory.
Discerning nonrigid 3D shapes from motion cues
Many organisms and objects deform nonrigidly when moving, requiring perceivers to separate shape changes from object motions. Surprisingly, the abilities of observers to correctly infer nonrigid volumetric shapes from motion cues have not been measured, and structure from motion models predominantly use variants of rigidity assumptions. We show that observers are equally sensitive at discriminating cross-sections of flexing and rigid cylinders based on motion cues, when the cylinders are rotated simultaneously around the vertical and depth axes. A computational model based on motion perspective (i.e., assuming perceived depth is inversely proportional to local velocity) predicted the psychometric curves better than shape from motion factorization models using shape or trajectory basis functions. Asymmetric percepts of symmetric cylinders, arising because of asymmetric velocity profiles, provided additional evidence for the dominant role of relative velocity in shape perception. Finally, we show that inexperienced observers are generally incapable of using motion cues to detect inflation/deflation of rigid and flexing cylinders, but this handicap can be overcome with practice for both nonrigid and rigid shapes. The empirical and computational results of this study argue against the use of rigidity assumptions in extracting 3D shape from motion and for the primacy of motion deformations computed from motion shears.
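The motion-perspective heuristic named in this abstract — perceived depth inversely proportional to local image velocity — can be sketched in a few lines. This is an illustrative toy, not the authors' fitted model; the scaling constant `k` and the example velocities are assumptions.

```python
import numpy as np

def depth_from_motion_perspective(image_velocity, k=1.0):
    """Motion-perspective sketch: perceived depth is taken to be
    inversely proportional to local image speed. `k` is an arbitrary
    scale factor (an assumption here, not a parameter from the study)."""
    speed = np.abs(np.asarray(image_velocity, dtype=float))
    return k / np.maximum(speed, 1e-9)  # guard against division by zero

# Points translating faster in the image are assigned smaller depth,
# producing the asymmetric percepts described for asymmetric velocity profiles.
v = np.array([0.5, 1.0, 2.0])
d = depth_from_motion_perspective(v)
```

Because depth falls monotonically with speed, a symmetric cylinder with an asymmetric velocity profile yields an asymmetric perceived cross-section under this rule.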
First spikes in ensembles of human tactile afferents code complex spatial fingertip events
It is generally assumed that primary sensory neurons transmit information by their firing rates. However, during natural object manipulations, tactile information from the fingertips is used faster than can be readily explained by rate codes. Here we show that the relative timing of the first impulses elicited in individual units of ensembles of afferents reliably conveys information about the direction of fingertip force and the shape of the surface contacting the fingertip. The sequence in which different afferents initially discharge in response to mechanical fingertip events provides information about these events faster than the fastest possible rate code and fast enough to account for the use of tactile signals in natural manipulation.
Adjective–noun order as representational structure: Native-language grammar influences perception of similarity and recognition memory
This article describes two experiments linking native-language grammar rules with implications for perception of similarity and recognition memory. In prenominal languages (e.g., English), adjectives usually precede nouns, whereas in postnominal languages (e.g., Portuguese), nouns usually precede adjectives. We explored the influence of such rules upon similarity judgments about, and recognition of, objects with multiple category attributes (one nominal attribute and one adjectival attribute). The results supported the hypothesized primacy effect of native-language word order such that nouns generally carried more weight for Portuguese speakers than for English speakers. This pattern was observed for judgments of similarity (i.e., Portuguese speakers tended to judge objects that shared a noun-designated attribute as more similar than did English speakers), as well as for false alarms in recognition memory (i.e., Portuguese speakers tended to falsely recognize more objects if they possessed a familiar noun attribute, relative to English speakers). The implications of such linguistic effects for the cognition of similarity and memory are discussed.
Predictive mechanisms in the control of contour following
In haptic exploration, when running a fingertip along a surface, the control system may attempt to anticipate upcoming changes in curvature in order to maintain a consistent level of contact force. Such predictive mechanisms are well known in the visual system, but have yet to be studied in the somatosensory system. Thus, the present experiment was designed to reveal human capabilities for different types of haptic prediction. A robot arm with a large 3D workspace was attached to the index fingertip and was programmed to produce virtual surfaces with curvatures that varied within and across trials. With eyes closed, subjects moved the fingertip around elliptical hoops with flattened regions or Limaçon shapes, where the curvature varied continuously. Subjects anticipated the corner of the flattened region rather poorly, but for the Limaçon shapes, they varied finger speed with upcoming curvature according to the two-thirds power law. Furthermore, although the Limaçon shapes were randomly presented in various 3D orientations, modulation of contact force also indicated good anticipation of upcoming changes in curvature. The results demonstrate that it is difficult to haptically anticipate the spatial location of an abrupt change in curvature, but smooth changes in curvature may be facilitated by anticipatory predictions.
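The two-thirds power law referenced above relates movement speed to path curvature: tangential speed scales as curvature to the power -1/3 (equivalently, angular velocity scales as curvature to the power 2/3). A minimal sketch, where the velocity gain factor is a free parameter assumed for illustration:

```python
import numpy as np

def two_thirds_power_law_speed(curvature, gain=1.0):
    """Tangential speed predicted by the two-thirds power law:
    v = gain * C**(-1/3). `gain` is a free velocity-gain factor
    (an assumption for illustration, not a value from the study)."""
    c = np.asarray(curvature, dtype=float)
    return gain * np.power(c, -1.0 / 3.0)

# Speed drops in high-curvature regions and rises on flatter stretches,
# matching the finger-speed modulation observed for the Limacon shapes.
speeds = two_thirds_power_law_speed(np.array([0.125, 1.0, 8.0]))
```

With these curvatures the predicted speeds are 2.0, 1.0, and 0.5: an eightfold increase in curvature halves the predicted speed.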
Birds of a Feather Flock Together: Experience-Driven Formation of Visual Object Categories in Human Ventral Temporal Cortex
The present functional magnetic resonance imaging study provides direct evidence on visual object-category formation in the human brain. Although brain imaging has demonstrated object-category specific representations in the occipitotemporal cortex, the crucial question of how the brain acquires this knowledge has remained unresolved. We designed a stimulus set consisting of six highly similar bird types that can hardly be distinguished without training. All bird types were morphed with one another to create different exemplars of each category. After visual training, fMRI showed that responses in the right fusiform gyrus were larger for bird types for which a discrete category boundary was established than for untrained bird types. Importantly, compared with untrained bird types, right fusiform responses were smaller for visually similar birds to which subjects were exposed during training but for which no category boundary was learned. These data provide evidence for experience-induced shaping of occipitotemporal responses that are involved in category learning in the human brain.
The Influence of Shape Similarity and Shared Labels on Infants' Inductive Inferences about Nonobvious Object Properties
This study examined the influence of object labels and shape similarity on 16- to 21-month-old infants' inductive inferences. In three experiments, a total of 144 infants were presented with novel target objects with or without a nonobvious property, followed by test objects that varied in shape similarity to the target. When objects were not labeled, infants generalized the nonobvious property to test objects that were highly similar in shape (Experiment 1). When objects were labeled with novel nouns, infants relied both on shape similarity and shared labels to generalize properties (Experiment 2). Finally, when objects were labeled with familiar nouns, infants generalized the properties to those objects that shared the same label, regardless of shape similarity (Experiment 3). The results of these experiments delineate the role of perceptual similarity and conceptual information in guiding infants' inductive inferences.
Revealing the multidimensional mental representations of natural objects underlying human similarity judgements
Objects can be characterized according to a vast number of possible criteria (such as animacy, shape, colour and function), but some dimensions are more useful than others for making sense of the objects around us. To identify these core dimensions of object representations, we developed a data-driven computational model of similarity judgements for real-world images of 1,854 objects. The model captured most explainable variance in similarity judgements and produced 49 highly reproducible and meaningful object dimensions that reflect various conceptual and perceptual properties of those objects. These dimensions predicted external categorization behaviour and reflected typicality judgements of those categories. Furthermore, humans can accurately rate objects along these dimensions, highlighting their interpretability and opening up a way to generate similarity estimates from object dimensions alone. Collectively, these results demonstrate that human similarity judgements can be captured by a fairly low-dimensional, interpretable embedding that generalizes to external behaviour. Hebart et al. developed a computational model of similarity judgements for 1,854 natural objects. The model accurately predicted similarity and revealed 49 interpretable dimensions that reflect both perceptual and conceptual object properties.
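The idea of generating similarity estimates from object dimensions alone can be sketched with a toy embedding: each object is a non-negative vector over interpretable dimensions, and pairwise similarity is the dot product of those vectors. The three objects and three dimension labels below are hypothetical, not the 1,854 objects or 49 dimensions learned in the study.

```python
import numpy as np

def pairwise_similarity(embeddings):
    """Sketch in the spirit of the model described above: similarity of
    two objects is the dot product of their (non-negative) dimension
    vectors, so objects sharing strongly weighted dimensions score high."""
    E = np.asarray(embeddings, dtype=float)
    return E @ E.T

# Three toy objects on three hypothetical dimensions
# (e.g. "animacy", "roundness", "metallic") -- illustrative values only.
E = np.array([
    [0.9, 0.1, 0.0],   # dog
    [0.8, 0.2, 0.0],   # cat
    [0.0, 0.1, 0.9],   # spoon
])
S = pairwise_similarity(E)
```

Here `S[0, 1]` (dog-cat) exceeds `S[0, 2]` (dog-spoon) because the first two objects load on the same dimension, which is the sense in which such an embedding predicts categorization behaviour.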