20 results for "Perception of shadow and depth"
Depth perception between dots and the background face reduces trypophobic discomfort
Background: Studies have shown that viewing a cluster of dots evokes feelings of discomfort in viewers, and that the discomfort becomes especially strong when the dots are placed on background images of human skin. This phenomenon has been explained by the physical properties and the spatial and semantic relationships between the dots and the background. However, it was not known whether perceived spatial relationships, in addition to physical ones, contribute to the generation of discomfort.
Methods: We evoked illusory depth perception between black dots and a background face by drawing shadow-like gray dots around the black dots, while keeping the black dots at the same positions, and examined whether illusory depth perception could increase or decrease discomfort. In each trial, participants viewed one of the following types of facial images: (a) face only (face without dots), (b) a cluster of black dots on the face, (c) a cluster of gray dots on the face, or (d) a cluster of black dots and shadow-like gray dots on the face. After seeing each picture, they rated on a Likert scale how much discomfort they felt from viewing it and reported whether they perceived depth between the dots and the face.
Results: Participants felt discomfort toward all three types of faces with dots, that is, faces with black dots, gray dots, and both. Interestingly, however, participants felt less discomfort when both black and gray dots were presented on the face than when only black dots were presented. Participants perceived depth between the black dots and the face in 85% of the trials with black dots and shadow-like gray dots, and there was a significant correlation between discomfort and the frequency of depth perception. In the trials with black dots only and gray dots only, they perceived depth in only 18% and 27% of the trials, respectively, and the correlations between the frequency of depth perception and discomfort were not significant.
Conclusions: Our results suggest that the perceived spatial relationship (e.g., attached vs. separate), as well as the physical spatial relationship, contributes to the generation of discomfort.
Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements
Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues are limited, which may affect the performance and quality of reaching movements. We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which was realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in the number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance: only the screen group with rendered handhelds outperformed the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest using a head-mounted display.
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by the subjects' motor skills.
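The movement-quality metrics named in this abstract — hand path ratio and number of speed peaks — can be computed directly from a sampled hand trajectory. A minimal sketch, assuming a list of timestamped 3D position samples (the function name and signature are illustrative, not taken from the paper):

```python
import math

def path_metrics(positions, timestamps):
    """Hand path ratio and speed-peak count for a sampled 3D reach.

    positions: list of (x, y, z) samples; timestamps: matching times in seconds.
    """
    # Path length: sum of distances between consecutive samples.
    path_len = sum(
        math.dist(positions[i], positions[i + 1])
        for i in range(len(positions) - 1)
    )
    # Straight-line distance from the start to the final (target) position.
    straight = math.dist(positions[0], positions[-1])
    hand_path_ratio = path_len / straight  # 1.0 for a perfectly straight reach

    # Speed profile via finite differences between consecutive samples.
    speeds = [
        math.dist(positions[i], positions[i + 1]) / (timestamps[i + 1] - timestamps[i])
        for i in range(len(positions) - 1)
    ]
    # A speed peak is a local maximum of the speed profile; a single peak
    # indicates a smooth, natural bell-shaped reach.
    peaks = sum(
        1 for i in range(1, len(speeds) - 1)
        if speeds[i - 1] < speeds[i] >= speeds[i + 1]
    )
    return hand_path_ratio, peaks
```

A straight reach with one acceleration–deceleration cycle yields a hand path ratio of 1.0 and one speed peak, which is the "optimal minimum" the abstract refers to.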
Stereoscopic 3D geometric distortions analyzed from the viewer’s point of view
Stereoscopic 3D (S3D) geometric distortions can be introduced by mismatches among image capture, display, and viewing configurations. In previous S3D geometric models, geometric distortions have been analyzed from a third-person perspective based on the binocular depth cue (i.e., binocular disparity). A third-person perspective differs from what the viewer sees, since monocular depth cues (e.g., linear perspective, occlusion, and shadows) differ between perspectives. However, depth perception in a 3D space involves both monocular and binocular depth cues, so geometric distortions predicted solely from the binocular depth cue cannot describe what a viewer really perceives. In this paper, we combine geometric models and retinal disparity models to analyze geometric distortions from the viewer's perspective, where both monocular and binocular depth cues are considered. Results show that binocular and monocular depth cues conflict in a geometrically distorted S3D space. Moreover, user-initiated head translations away from the optimal viewing position of conventional S3D displays can also introduce geometric distortions that are inconsistent with our natural 3D viewing conditions. The inconsistency of depth cues in a dynamic scene may be a source of visually induced motion sickness.
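The binocular side of the geometry this abstract analyzes can be illustrated with the standard screen-disparity depth equation from stereoscopic display geometry (not necessarily the paper's exact model): for interocular separation e, viewing distance V, and on-screen disparity d, a fused point is perceived at distance Z = eV / (e − d) from the viewer, by similar triangles. A minimal sketch with illustrative parameter values:

```python
def perceived_depth(e_mm, view_dist_mm, disparity_mm):
    """Perceived distance of a fused point from the viewer.

    e_mm: interocular separation; view_dist_mm: eye-to-screen distance;
    disparity_mm: on-screen horizontal disparity (positive = uncrossed,
    point appears behind the screen; negative = crossed, in front).
    Similar triangles give Z = e * V / (e - d).
    """
    if disparity_mm >= e_mm:
        raise ValueError("disparity >= interocular separation: no finite fusion")
    return e_mm * view_dist_mm / (e_mm - disparity_mm)

# Illustrative setup: 63 mm interocular separation, 600 mm from the screen.
z_on_screen = perceived_depth(63.0, 600.0, 0.0)   # zero disparity -> screen plane
z_behind = perceived_depth(63.0, 600.0, 10.0)     # uncrossed -> behind the screen
z_in_front = perceived_depth(63.0, 600.0, -10.0)  # crossed -> in front of the screen
```

The nonlinearity of this mapping is one source of the capture/display/viewing mismatches the paper describes: scaling disparity or viewing distance does not scale perceived depth proportionally, so the reconstructed 3D space is warped.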
The role of pictorial cues and contrast for camouflage
Shadows that are produced across the surface of an object (self-shadows) are potentially an important source of information for visual systems. Animal patterns may exploit this principle for camouflage, using pictorial cues to produce false depth information that manipulates the viewer’s detection/recognition processes. However, pictorial cues could also facilitate camouflage by matching the contrast (e.g. due to shadows) of 3D backgrounds. Aside from studies of countershading (patterning that may conceal depth information), the role of self-shadows in camouflage patterns remains unclear. Here we investigated whether pictorial cues (self-shadows) increase the survival probability of moth-like prey presented to free-living wild bird predators relative to targets without these cues. We manipulated the presence of self-shadows by adjusting the illumination conditions to produce patterned targets under directional lighting (lit from above or from below; self-shadows present) or diffuse lighting (no self-shadows). We used non-patterned targets (uniform colour) as controls. We manipulated the direction of illumination because it has been linked with depth perception in birds; objects lit from above may appear convex while those lit from below can appear concave. As shadows influence contrast, which also determines detectability, we photographed the targets in situ over the observation period, allowing us to evaluate the effect of visual metrics on survival. We found some evidence that patterned targets without self-shadows had a lower probability of survival than patterned targets with self-shadows and targets with uniform colour. Surprisingly, none of the visual metrics explained variation in survival probability. However, predators increased their foraging efficiency over time, suggesting that predator learning may have overridden the benefits afforded by camouflaging coloration.
RDAH-Net: Bridging Relative Depth and Absolute Height for Monocular Height Estimation in Remote Sensing
Generating high-precision normalized digital surface models (nDSMs) from a single remote sensing image remains a challenging and ill-posed problem due to the absence of reliable geometric constraints. In this work, we show that monocular depth provides structurally stable cues of local geometry but lacks the global scale and vertical reference required for absolute height recovery. This intrinsic mismatch limits direct depth-to-height regression, particularly when transferring across heterogeneous terrains, land-cover compositions, and imaging conditions. Building on this idea, we propose the Relative Depth–Absolute Height Prediction Network (RDAH-Net), a framework that exploits relative depth as a geometry-aware prior while learning terrain-dependent height mappings from image appearance to absolute height. As the backbone, we employ a lightweight MobileNetV2 enhanced with a Convolutional Block Attention Module (CBAM), and further incorporate a cross-modal bidirectional attention fusion scheme with positional encoding to achieve a deep and effective fusion of image appearance and depth prior cues. Finally, a PixelShuffle-based upsampling strategy is used to sharpen prediction details and mitigate typical upsampling artifacts. Extensive experiments across diverse regions demonstrate that RDAH-Net achieves robust and generalizable height estimation, providing a practical alternative for large-scale mapping and rapid update scenarios.
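The PixelShuffle upsampling this abstract mentions (also called sub-pixel or depth-to-space upsampling) rearranges channels into spatial resolution instead of interpolating, which is why it tends to preserve sharp detail and avoid checkerboard artifacts. A minimal pure-Python sketch of the rearrangement itself for a single feature map — RDAH-Net would of course apply this inside a deep network, not on nested lists:

```python
def pixel_shuffle(x, r):
    """Depth-to-space rearrangement with upscale factor r.

    x: nested list of shape (C*r*r, H, W); returns shape (C, H*r, W*r).
    Each block of r*r input channels supplies the r-by-r sub-pixel grid
    of one output channel: out[c][h*r+i][w*r+j] = x[c*r*r + i*r + j][h][w].
    """
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r*r"
    c_out = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(c_out):
        for i in range(r):
            for j in range(r):
                ch = c * r * r + i * r + j  # source channel for sub-pixel (i, j)
                for y in range(h):
                    for x_ in range(w):
                        out[c][y * r + i][x_ * r + j] = x[ch][y][x_]
    return out
```

Because the network learns the r*r channels that feed each sub-pixel position, the upsampling weights are learned rather than fixed, unlike bilinear interpolation.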
Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow
A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity.
Virtual Shadow Drawing System Using Augmented Reality for Laparoscopic Surgery
Laparoscopic surgery holds great promise in medicine but remains challenging for surgeons because it is difficult to perceive depth while suturing. In addition to binocular parallax (three-dimensional vision), shadows are essential for depth perception. This paper presents an augmented reality system that draws virtual shadows to aid depth perception. On the visual display, the system generates shadows that mimic actual shadows by estimating shadow positions using image processing. The distance and angle between the forceps tip and the surface were estimated to evaluate the accuracy of the system. To validate the usefulness of this system in surgical applications, novices performed suturing tasks with and without the augmented reality system. The system's error and delay were sufficiently small, and the generated shadows were similar to actual shadows. Furthermore, suturing error decreased significantly when the augmented reality system was used. The shadow-drawing system developed in this study may help surgeons perceive depth during laparoscopic surgery.
Change blindness for cast shadows in natural scenes: Even informative shadow changes are missed
Previous work has shown that human observers discount or neglect cast shadows in natural and artificial scenes across a range of visual tasks. This is a reasonable strategy for a visual system designed to recognize objects under a range of lighting conditions, since cast shadows are not intrinsic properties of the scene—they look different (or disappear entirely) under different lighting conditions. However, cast shadows can convey useful information about the three-dimensional shapes of objects and their spatial relations. In this study, we investigated how well people detect changes to cast shadows, presented in natural scenes in a change blindness paradigm, and whether shadow changes that imply the movement or disappearance of an object are more easily noticed than shadow changes that imply a change in lighting. In Experiment 1, a critical object's shadow was removed, rotated to another direction, or shifted down to suggest that the object was floating. All of these shadow changes were noticed less often than changes to physical objects or surfaces in the scene, and there was no difference in the detection rates for the three types of changes. In Experiment 2, the shadows of visible or occluded objects were removed from the scenes. Although removing the cast shadow of an occluded object could be seen as an object deletion, both types of shadow changes were noticed less often than deletions of the visible, physical objects in the scene. These results show that even informative shadow changes are missed, suggesting that cast shadows are discounted fairly early in the processing of natural scenes.
Perception of object motion in three-dimensional space induced by cast shadows
Cast shadows can be salient depth cues in three-dimensional (3D) vision. Using a motion illusion in which a ball is perceived either to roll in depth along the bottom or to float in the front plane depending on the slope of the trajectory of its cast shadow, we investigated the cortical mechanisms underlying 3D vision based on cast shadows using fMRI techniques. When participants were presented with modified versions of the original illusion, in which the slope of the shadow trajectory (shadow slope) was changed in five steps from the same slope as the ball trajectory to horizontal, their perceived ball trajectory shifted gradually from rolling on the bottom to floating in the front plane as the shadow slope changed. This observation suggests that the perceived ball trajectory in this illusion is strongly affected by the motion of the cast shadow. In the fMRI study, we investigated cortical activity during observation of movies of the illusion. We found that the bilateral posterior-occipital sulcus (POS) and the right ventral precuneus showed activation related to the perception of the ball trajectory induced by the cast shadows in the illusion. Of these areas, the right POS may be involved in inferring the ball trajectory from the given spatial relation between the ball and the shadow. Our present results suggest that the posterior portion of the medial parietal cortex may be involved in 3D vision based on cast shadows.
Highlights:
• Motion perception of an object is strongly affected by the motion of its cast shadow.
• Activity of hMT+ correlates with the physical quantity of motion in the visual stimulus.
• Activity of the medial parietal cortex correlates with perceived object motion in 3D.
• The medial parietal cortex is involved in motion perception in 3D space by cast shadows.