1,052 result(s) for "Size Perception - physiology"
Oral somatosensatory acuity is related to particle size perception in chocolate
Texture affects liking or rejection of many foods for clinically relevant populations and the general public. Phenotypic differences in chemosensation are well documented and influence food choices, but oral touch perception is less understood. Here, we used chocolate as a model food to explore texture perception, specifically grittiness perception. In Experiment 1, the Just Noticeable Difference (JND) for particle size in melted chocolate was ~5 μm in a particle size range commonly found in commercial chocolates; as expected, the JND increased with particle size, with a Weber Fraction of ~0.17. In Experiment 2, individual differences in touch perception were explored: detection and discrimination thresholds for oral point pressure were determined with Von Frey Hairs. Discrimination thresholds varied across individuals, allowing us to separate participants into high and low sensitivity groups. Across all participants, two solid commercial chocolates (with particle sizes of 19 and 26 μm; i.e., just above the JND) were successfully discriminated in a forced-choice task. However, this was driven entirely by individuals with better oral acuity: 17 of 20 more acute individuals correctly identified the grittier chocolate versus 12 of 24 less acute individuals. This suggests phenotypic differences in oral somatosensation can influence texture perception of foods.
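The relationship between the reported JND and Weber fraction can be checked with simple arithmetic. The Python sketch below is illustrative only; the 26 μm reference size is an assumption taken from the commercial chocolates mentioned in the abstract, not a value the authors specify for the JND measurement.

```python
# Illustrative check of Weber's law using values reported in the abstract.
# Assumption: the ~26 um commercial chocolate is used as the reference size;
# the abstract does not state which reference the ~5 um JND applies to.

weber_fraction = 0.17     # reported Weber fraction (dimensionless)
reference_size_um = 26.0  # assumed reference particle size, in micrometres

# Weber's law: JND = k * I, i.e. the just-noticeable change scales with the reference.
predicted_jnd_um = weber_fraction * reference_size_um
print(f"Predicted JND at {reference_size_um} um: {predicted_jnd_um:.1f} um")
# ~4.4 um, consistent with the ~5 um JND reported for that particle-size range.
```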
The effect of viewing-only, reaching, and grasping on size perception in virtual reality
In virtual environments (VEs), distance perception is often inaccurate but can be improved through active engagement, such as walking. While prior research suggests that action planning and execution can enhance the perception of action-related features, the effects of specific actions on perception in VEs remain unclear. This study investigates how different interactions – viewing-only, reaching, and grasping – affect size perception in Virtual Reality (VR) and whether teleportation (Experiment 1) and smooth locomotion (Experiment 2) influence these effects. Participants approached a virtual object using either teleportation or smooth locomotion and interacted with the target with a virtual hand. They then estimated the target’s size before and after the approach by adjusting the size of a comparison object. Results revealed that size perception improved after interaction across all conditions in both experiments, with viewing-only leading to the most accurate estimations. This suggests that, unlike in real environments, additional manual interaction does not significantly enhance size perception in VR when only visual input is available. Additionally, teleportation was more effective than smooth locomotion for improving size estimations. These findings extend action-based perceptual theories to VR, showing that interaction type and approach method can influence size perception accuracy without tactile feedback. Further, by analysing gaze spatial distribution during the different interaction conditions, this study suggests that specific motor responses combined with movement approaches affect gaze behaviour, offering insights for applied VR settings that prioritize perceptual accuracy.
Object-based attention: A tutorial review
This tutorial provides a selective review of research on object-based deployment of attention. It focuses primarily on behavioral studies with human observers. The tutorial is divided into five sections. It starts with an introduction to object-based attention and a description of the three commonly used experimental paradigms in object-based attention research. These are followed by a review of a variety of manifestations of object effects and the factors that influence object segmentation. The final two sections are devoted to two key issues in object-based research: the mechanisms that give rise to the object effects and the role of space in object-based selection.
Haptic size perception is influenced by body and object orientation
Changes in body orientation from standing have been shown to impact our perception of visual size. This has been attributed to the vestibular system’s involvement in constructing a representation of the space around our body. In the current study we investigated how body posture influences haptic size perception. Blindfolded participants were tasked with estimating the felt length of a rod and then adjusting it back to its previously felt size (after it had been set to a random length). Participants could feel and adjust the rod in the same posture, standing or supine, or after a change in posture. If body orientation relative to gravity impacts size perception, we might expect changes in haptic size perception following body tilt. In support of this hypothesis, after changing between standing and supine postures there was a change in the rod’s haptically perceived length, but only when the orientation of the rod itself also changed with respect to gravity, not when its orientation was constant. This suggests that body posture influences not only visual but also haptic size perception, potentially due to the vestibular system contributing to the encoding of space with respect to gravity.
"Top-Down" Effects Where None Should Be Found: The El Greco Fallacy in Perception Research
A tidal wave of recent research purports to have discovered that higher-level states such as moods, action capabilities, and categorical knowledge can literally and directly affect how things look. Are these truly effects on perception, or might some instead reflect influences on judgment, memory, or response bias? Here, we exploited an infamous art-historical reasoning error (the so-called "El Greco fallacy") to demonstrate that multiple alleged top-down effects (including effects of morality on lightness perception and effects of action capabilities on spatial perception) cannot truly be effects on perception. We suggest that this error may also contaminate several other varieties of top-down effects and that this discovery has implications for debates over the continuity (or lack thereof) of perception and cognition.
Exaggerated groups: amplification in ensemble coding of temporal and spatial features
The human visual system represents summary statistical information (e.g. average) along many visual dimensions efficiently. While studies have indicated that approximately the square root of the number of items in a set are effectively integrated through this ensemble coding, how those samples are determined is still unknown. Here, we report that salient items are preferentially weighted over the other less salient items, by demonstrating that the perceived means of spatial (i.e. size) and temporal (i.e. flickering temporal frequency (TF)) features of the group of items are positively biased as the number of items in the group increases. This illusory ‘amplification effect’ was not the product of decision bias but of perceptual bias. Moreover, our visual search experiments with similar stimuli suggested that this amplification effect was due to attraction of visual attention to the salient items (i.e. large or high TF items). These results support the idea that summary statistical information is extracted from sets with an implicit preferential weighting towards salient items. Our study suggests that this saliency-based weighting may reflect a more optimal and efficient integration strategy for the extraction of spatio-temporal statistical information from the environment, and may thus be a basic principle of ensemble coding.
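The reported amplification effect amounts to a weighted rather than uniform average of the set. The Python sketch below is a minimal illustration of that idea; the item sizes and salience weights are arbitrary toy values, not the study's stimuli or fitted weights.

```python
import random

# Toy illustration of salience-weighted ensemble averaging.
# Assumption: sizes and weights are arbitrary values chosen for illustration.

def weighted_mean(values, weights):
    """Weighted average, normalised by the total weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

random.seed(0)
sizes = [random.uniform(1.0, 3.0) for _ in range(16)]  # mostly small items
sizes[0] = 6.0                                         # one salient, large item

uniform_mean = weighted_mean(sizes, [1.0] * len(sizes))            # unbiased average
salience = [3.0 if s > 5.0 else 1.0 for s in sizes]                # salient item over-weighted
biased_mean = weighted_mean(sizes, salience)

print(f"uniform mean: {uniform_mean:.2f}, salience-weighted mean: {biased_mean:.2f}")
# The salience-weighted mean is pulled toward the large item, mirroring the
# positive bias ('amplification') described in the abstract.
```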
Capture versus suppression of attention by salient singletons: Electrophysiological evidence for an automatic attend-to-me signal
There is considerable controversy about whether salient singletons capture attention in a bottom-up fashion, irrespective of top-down control settings. One possibility is that salient singletons always generate an attention capture signal, but this signal can be actively suppressed to avoid capture. In the present study, we investigated this issue by using event-related potential recordings, focusing on N2pc (N2-posterior-contralateral; a measure of attentional deployment) and Pd (distractor positivity; a measure of attentional suppression). Participants searched for a specific letter within one of two regions, and irrelevant color singletons were sometimes present. We found that the irrelevant singletons did not elicit N2pc but instead elicited Pd; this occurred equally within the attended and unattended regions. These findings suggest that salient singletons may automatically produce an attend-to-me signal, irrespective of top-down control settings, but this signal can be overridden by an active suppression process to prevent the actual capture of attention.
Topographic representations of object size and relationships with numerosity reveal generalized quantity processing in human parietal cortex
Humans and many animals analyze sensory information to estimate quantities that guide behavior and decisions. These quantities include numerosity (object number) and object size. Having recently demonstrated topographic maps of numerosity, we ask whether the brain also contains maps of object size. Using ultra-high-field (7T) functional MRI and population receptive field modeling, we describe tuned responses to visual object size in bilateral human posterior parietal cortex. Tuning follows linear Gaussian functions and shows surround suppression, and tuning width narrows with increasing preferred object size. Object size-tuned responses are organized in bilateral topographic maps, with similar cortical extents responding to large and small objects. These properties of object size tuning and map organization all differ from the numerosity representation, suggesting that object size and numerosity tuning result from distinct mechanisms. However, their maps largely overlap and object size preferences correlate with numerosity preferences, suggesting associated representations of these two quantities. Object size preferences here show no discernable relation to visual position preferences found in visuospatial receptive fields. As such, object size maps (much like numerosity maps) do not reflect sensory organ structure but instead emerge within the brain. We speculate that, as in sensory processing, optimization of cognitive processing using topographic maps may be a common organizing principle in association cortex. Interactions between object size and numerosity maps may associate cognitive representations of these related features, potentially allowing consideration of both quantities together when making decisions.
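The tuned responses described in this abstract (Gaussian tuning in linear size space with surround suppression) can be illustrated with a toy tuning curve. The difference-of-Gaussians form and all parameter values below are assumptions made for illustration, not the authors' exact population receptive field model.

```python
import math

# Toy object-size tuning curve with surround suppression.
# Assumptions: a difference-of-Gaussians shape and arbitrary parameters; the
# abstract reports linear Gaussian tuning with surround suppression but does
# not specify this exact parameterisation.

def size_tuning(size, preferred=3.0, width=1.0, surround_width=3.0, surround_gain=0.4):
    """Response to an object of a given size (arbitrary units)."""
    center = math.exp(-((size - preferred) ** 2) / (2 * width ** 2))
    surround = math.exp(-((size - preferred) ** 2) / (2 * surround_width ** 2))
    return center - surround_gain * surround  # suppression from the broader surround

for s in [1, 2, 3, 4, 5, 7]:
    print(f"size {s}: response {size_tuning(s):+.2f}")
# Responses peak at the preferred size and dip below zero for sizes far from
# it, the signature of surround suppression.
```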
Age-related changes in the susceptibility to visual illusions of size
As the global population ages, understanding the effect of aging on visual perception is of growing importance. This study investigates age-related changes in size perception across adulthood through the lens of three visual illusions: the Ponzo, Ebbinghaus, and Height-width illusions. Utilizing the Bayesian conceptualization of the aging brain, which posits increased reliance on prior knowledge with age, we explored potential differences in the susceptibility to visual illusions across different age groups in adults (ages 20–85 years). To this end, we used the BTPI (Ben-Gurion University Test for Perceptual Illusions), an online validated battery of visual illusions developed in our lab. The findings revealed distinct patterns of age-related changes for each of the illusions, challenging the idea of a generalized increase in reliance on prior knowledge with age. Specifically, we observed a systematic reduction in susceptibility to the Ebbinghaus illusion with age, while susceptibility to the Height-width illusion increased with age. As for the Ponzo illusion, there were no significant changes with age. These results underscore the complexity of age-related changes in visual perception and converge with previous findings to support the idea that different visual illusions of size are mediated by distinct perceptual mechanisms.
Variance aftereffect within and between sensory modalities for visual and auditory domains
We can efficiently grasp various features of the outside world using summary statistics. Among these statistics, variance is an index of information homogeneity or reliability. Previous research has shown that visual variance information in the context of spatial integration is encoded directly as a unique feature, and currently perceived variance can be distorted by that of the preceding stimuli. In this study, we focused on variance perception in temporal integration. We investigated whether any variance aftereffects occurred in visual size and auditory pitch. Furthermore, to examine the mechanism of cross-modal variance perception, we also investigated whether variance aftereffects occur between different modalities. Four experimental conditions (a combination of sensory modalities of adaptor and test: visual-to-visual, visual-to-auditory, auditory-to-auditory, and auditory-to-visual) were conducted. Participants observed a sequence of visual or auditory stimuli perturbed in size or pitch with certain variance and performed a variance classification task before and after the variance adaptation phase. We found that, for visual size, within-modality adaptation to small or large variance resulted in a variance aftereffect, indicating that variance judgments are biased in the direction away from that of the adapting stimulus. For auditory pitch, within-modality adaptation to small variance caused a variance aftereffect. For cross-modal combinations, adaptation to small variance in visual size resulted in a variance aftereffect. However, the effect was weak, and variance aftereffects did not occur in the other conditions. These findings indicate that the variance information of sequentially presented stimuli is encoded independently in the visual and auditory domains.