6,721 results for "Size perception"
Top 10 biggest
Discover fascinating facts and figures about the world's biggest everything! From the mammoth blue whale to the vast planet Jupiter, find out what makes it into the biggest top ten ever.
Exaggerated groups: amplification in ensemble coding of temporal and spatial features
The human visual system represents summary statistical information (e.g. average) along many visual dimensions efficiently. While studies have indicated that approximately the square root of the number of items in a set are effectively integrated through this ensemble coding, how those samples are determined is still unknown. Here, we report that salient items are preferentially weighted over the other less salient items, by demonstrating that the perceived means of spatial (i.e. size) and temporal (i.e. flickering temporal frequency (TF)) features of the group of items are positively biased as the number of items in the group increases. This illusory ‘amplification effect’ was not the product of decision bias but of perceptual bias. Moreover, our visual search experiments with similar stimuli suggested that this amplification effect was due to attraction of visual attention to the salient items (i.e. large or high TF items). These results support the idea that summary statistical information is extracted from sets with an implicit preferential weighting towards salient items. Our study suggests that this saliency-based weighting may reflect a more optimal and efficient integration strategy for the extraction of spatio-temporal statistical information from the environment, and may thus be a basic principle of ensemble coding.
Age-related changes in the susceptibility to visual illusions of size
As the global population ages, understanding the effect of aging on visual perception is of growing importance. This study investigates age-related changes in size perception across adulthood through the lens of three visual illusions: the Ponzo, Ebbinghaus, and Height-width illusions. Utilizing the Bayesian conceptualization of the aging brain, which posits increased reliance on prior knowledge with age, we explored potential differences in the susceptibility to visual illusions across different age groups in adults (ages 20–85 years). To this end, we used the BTPI (Ben-Gurion University Test for Perceptual Illusions), an online validated battery of visual illusions developed in our lab. The findings revealed distinct patterns of age-related changes for each of the illusions, challenging the idea of a generalized increase in reliance on prior knowledge with age. Specifically, we observed a systematic reduction in susceptibility to the Ebbinghaus illusion with age, while susceptibility to the Height-width illusion increased with age. As for the Ponzo illusion, there were no significant changes with age. These results underscore the complexity of age-related changes in visual perception and converge with previous findings to support the idea that different visual illusions of size are mediated by distinct perceptual mechanisms.
Are you small?
"Are YOU small? This tiny question allows readers to zoom in from an average-sized kid down to a single quark" -- Provided by publisher.
Haptic size perception is influenced by body and object orientation
Changes in body orientation from standing have been shown to impact our perception of visual size. This has been attributed to the vestibular system’s involvement in constructing a representation of the space around our body. In the current study we investigated how body posture influences haptic size perception. Blindfolded participants were tasked with estimating the felt length of a rod and then adjusting it back to its previously felt size (after it had been set to a random length). Participants could feel and adjust the rod in the same posture, standing or supine, or after a change in posture. If body orientation relative to gravity impacts size perception, we might expect changes in haptic size perception following body tilt. In support of this hypothesis, after changing between standing and supine postures there was a change in the rod’s haptically perceived length, but only when the orientation of the rod itself also changed with respect to gravity, not when its orientation was constant. This suggests that body posture influences not only visual but also haptic size perception, potentially due to the vestibular system contributing to the encoding of space with respect to gravity.
‘See what you feel’: The impact of visual scale distance in haptic-to-visual crossmodal matching
Two experiments were conducted to explore the impact of the distance of a visual scale employed in the crossmodal matching method dubbed See What You Feel (SWYF) used to study the Uznadze haptic aftereffect. Previous studies reported that SWYF leads to a general underestimation of out-of-sight handheld spheres, which seems to increase with visual scale distance. Experiment 1 tested the effect of visual scale distance in haptic-to-visual crossmodal matching. A 19-step visual scale, made of actual 3D spheres (diameters ranging from 2.0 to 5.6 cm), was set at one of three possible distances (30, 160, 290 cm); participants’ task was to find the matching visual spheres for four out-of-sight handheld test spheres (diameters 3.0, 3.8, 4.6, 5.0 cm). Results confirmed the underestimation effect and only partially confirmed the role of scale distance. Experiment 2 investigated the role of scale distance in a visual-to-visual matching task in which the same visual scale was employed, set at one of three distances (37, 160, 290 cm). Participants’ task was to find a match for the same four test stimuli. Results showed no statistical difference between matched and actual sphere sizes with distance 37 cm; underestimations were observed with the far distances, thus reflecting overestimations of scale sphere sizes. Results from both experiments allow us to conclude that the underestimation effect observed with SWYF is a general feature of haptic-to-visual crossmodal matching, and that the SWYF method is a valuable tool for measuring haptic size perception with handheld stimuli when the visual scale is set at a visually comfortable peripersonal distance.
Encoding contact size using static and dynamic electrotactile finger stimulation: natural decoding vs. trained cues
Electrotactile stimulation through matrix electrodes is a promising technology to restore high-resolution tactile feedback in extended reality applications. One of the fundamental tactile effects that should be simulated is the change in the size of the contact between the finger and a virtual object. The present study investigated how participants perceive an increase of the stimulation area when the index finger is stimulated using static or dynamic (moving) stimuli produced by activating 1 to 6 electrode pads. To assess the ability to interpret the stimulation from natural cues (natural decoding), without any prior training, the participants were instructed to draw the size of the stimulated area and identify the size difference when comparing two consecutive stimulations. To investigate whether other “non-natural” cues can improve the size estimation, the participants were asked to enumerate the number of active pads following a training protocol. The results demonstrated that participants could perceive the change in size without prior training (e.g., the estimated area correlated with the stimulated area, p < 0.001; a difference of two or more pads was recognized with > 80% success rate). However, natural decoding was also challenging, as the response area changed gradually and sometimes in complex patterns when increasing the number of active pads (e.g., four extra pads were needed for a statistically significant difference). Nevertheless, by training the participants to utilize additional cues, the limitations of natural perception could be compensated for. After the training, the mismatch between the activated and estimated number of pads was less than one pad regardless of the stimulus size. Finally, introducing movement of the stimulus substantially improved discrimination (e.g., 100% median success rate in recognizing a difference of one or more pads). The present study therefore provides insights into stimulation size perception, and practical guidelines on how to modulate pad activation to change the perceived size in static and dynamic scenarios.