351 results for "Stereoscopic vision"
An Artificial Visual System for Three Dimensional Motion Direction Detection
For mammals, enormous amounts of visual information are processed by neurons of the visual nervous system. Research on direction selectivity is of great significance, and local direction-selective ganglion neurons have been discovered. However, that research remains at the one-dimensional level and concentrates on single cells, so explaining the function and mechanism of overall motion direction detection remains challenging. In our previous papers, we proposed a motion direction detection mechanism at the two-dimensional level to address these problems. Those studies, however, did not take into account that the information in the left and right retinas differs, and so could not detect three-dimensional motion direction; further effort was required to develop a more realistic system in three dimensions. In this paper, we propose a new three-dimensional artificial visual system that extends the motion direction detection mechanism into three dimensions. We assume that a neuron can detect the local motion of a single-voxel object within three-dimensional space, and we take into consideration that the information of the left and right retinas differs. Based on this binocular disparity, a realistic motion direction mechanism for three dimensions is established: neurons receive signals from the primary visual cortex of each eye and respond to motion in specific directions. A series of local direction-selective ganglion neurons, each implemented by a logical AND operation, is arrayed on the retina, and the response of each local direction detection neuron is further integrated by the next neural layer to obtain the global motion direction. We carried out several computer simulations to demonstrate the validity of the mechanism, showing that it is capable of detecting the motion of complex three-dimensional objects, consistent with most known physiological experimental results.
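As a rough illustration of the AND-based local detection idea described above, the following sketch (our own, not the authors' code; the grid size, binary occupancy frames, and 26-direction neighbourhood are illustrative assumptions) fires a local detector for direction d when a voxel is occupied at time t and its neighbour in direction d is occupied at time t+1, then integrates detector responses to report a global direction:

```python
import itertools
import numpy as np

# All 26 unit directions in a 3x3x3 neighbourhood (illustrative direction set).
DIRECTIONS = [d for d in itertools.product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

def local_responses(frame_t, frame_t1):
    """AND each occupied voxel at time t with its d-shifted neighbour at t+1."""
    responses = {}
    for d in DIRECTIONS:
        shifted = np.roll(frame_t, shift=d, axis=(0, 1, 2))
        responses[d] = int(np.logical_and(shifted, frame_t1).sum())
    return responses

def global_direction(frame_t, frame_t1):
    """Integrate local detector outputs and report the dominant direction."""
    responses = local_responses(frame_t, frame_t1)
    return max(responses, key=responses.get)

# Toy example: a single voxel moving by (1, 0, 0) between two binary frames.
t0 = np.zeros((8, 8, 8), dtype=bool); t0[3, 4, 4] = True
t1 = np.zeros((8, 8, 8), dtype=bool); t1[4, 4, 4] = True
print(global_direction(t0, t1))  # -> (1, 0, 0)
```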
Virtual Reality as an Educational and Training Tool for Medicine
Until very recently, Virtual Reality seemed close at hand but still belonged to science fiction. Today, however, it is being integrated into many different areas of our lives, from videogames to industrial use cases, and it is starting to be used in medicine. There are two broad classifications of Virtual Reality. In the first, we visualize a three-dimensional world created entirely by computer, and we can tell that the world we are visualizing is not real, at least for the moment, although rendered images are improving very quickly. The second basically consists of a reflection of our reality: it is created using spherical or 360° images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed) but gain in realism. We could also mention a third classification that merges the previous two, in which virtual elements created by computer coexist with 360° images and videos. In this article we present two systems we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning, and training are discussed.
Electroencephalography recognition based on encephalic region and temporal sequence transformer
Stereoscopic vision is key to good motor control and accurate cognition, and its formation is closely related to brain control. Early methods of measuring stereoscopic vision rely on the subject's judgment, which might be influenced by inadvertent misjudgments. To solve this problem, we collected the electroencephalography (EEG) of subjects watching dynamic random-dot stereograms for stereogram recognition. To analyze stereogram EEG signals, this paper proposes a Transformer-based encephalic-region temporal sequence analysis network. Inspired by the concept of brain regions, the network uses an encephalic-region Transformer module to capture global spatial features within each brain region and across the whole set of brain regions. Based on the spatial features of electrodes in different brain regions, the global spatial dependence of all electrodes can then be obtained. Next, a temporal sequence Transformer module learns the global temporal EEG features. Finally, a spatial-temporal multi-scale convolution module extracts high-level fused spatial and temporal features for recognition. Simulation results on two public EEG datasets illustrate the excellent classification performance of the proposed model, which outperforms nine existing comparison models in EEG recognition.
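As a loose sketch of the two-stage attention idea (our own illustration under stated assumptions, not the paper's published architecture: the electrode count, window length, embedding size, and class count are invented, and the per-region attention stage is collapsed here into a single whole-brain attention over electrodes), a spatial Transformer encoder can attend across electrodes while a temporal one attends across time steps before a shared classification head:

```python
import torch
import torch.nn as nn

class RegionTemporalEEGNet(nn.Module):
    """Spatial attention over electrodes, then temporal attention over time."""
    def __init__(self, n_electrodes=62, n_timesteps=200, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_timesteps, d_model)   # per-electrode embedding
        self.spatial = nn.TransformerEncoder(          # attends across electrodes
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.temporal = nn.TransformerEncoder(         # attends across time steps
            nn.TransformerEncoderLayer(n_electrodes, nhead=2, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d_model + n_electrodes, n_classes)

    def forward(self, x):                              # x: (batch, electrodes, time)
        s = self.spatial(self.embed(x)).mean(dim=1)    # pooled spatial features
        t = self.temporal(x.transpose(1, 2)).mean(dim=1)  # pooled temporal features
        return self.head(torch.cat([s, t], dim=-1))    # class logits

logits = RegionTemporalEEGNet()(torch.randn(8, 62, 200))  # -> shape (8, 2)
```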
Vision-based collective motion: A locust-inspired reductionist model
Naturally occurring collective motion is a fascinating phenomenon in which swarming individuals aggregate and coordinate their motion. Many theoretical models of swarming assume idealized, perfect perceptual capabilities and ignore the underlying perception processes, particularly for agents relying on visual perception. Specifically, biological vision in many swarming animals, such as locusts, utilizes monocular, non-stereoscopic vision, which prevents perfect acquisition of distances and velocities. Moreover, swarming peers can visually occlude each other, further introducing estimation errors. In this study, we explore necessary conditions for the emergence of ordered collective motion under these restricted conditions, using non-stereoscopic, monocular vision. We present a model of vision-based collective motion for locust-like agents: elongated in shape, equipped with an omnidirectional visual sensor parallel to the horizontal plane, and lacking stereoscopic depth perception. The model addresses (i) the non-stereoscopic estimation of distance and velocity and (ii) the presence of occlusions in the visual field. We consider and compare three strategies that an agent may use to interpret partially occluded visual information, weighed against the computational complexity required for the visual perception processes. Computer-simulated experiments conducted in various geometrical environments (toroidal, corridor, and ring-shaped arenas) demonstrate that the models can reach an ordered or near-ordered state, while differing in the rate at which order is achieved. Moreover, the results are sensitive to the elongation of the agents. Experiments in geometrically constrained environments reveal differences between the models and elucidate possible tradeoffs in using them to control swarming agents. These findings suggest avenues for further study in biology and robotics.
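To make the perceptual limitation concrete, here is a small sketch (our illustration, not the paper's model; the typical body length is an invented value) of how a monocular agent that inverts angular size using an assumed peer length recovers a biased distance whenever the true length differs:

```python
import math

L_TYP = 0.05  # assumed typical locust body length (m) -- illustrative value

def angular_size(true_length, distance):
    """Visual angle subtended by a peer of a given length at a given distance."""
    return 2.0 * math.atan(true_length / (2.0 * distance))

def estimated_distance(theta):
    """Distance inferred by inverting the angular-size relation with L_TYP."""
    return L_TYP / (2.0 * math.tan(theta / 2.0))

theta = angular_size(true_length=0.06, distance=1.0)  # a larger-than-typical peer
print(estimated_distance(theta))  # ~0.83 m: the peer is judged closer than 1 m
```

Velocity estimates obtained by finite-differencing such distances over time inherit the same bias, which is one reason idealized swarm models overstate what monocular agents can perceive.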
A Review of Visual Estimation Research on Live Pig Weight
The weight of live pigs is directly related to their health, nutrition management, disease prevention and control, and the overall economic benefits to livestock enterprises. Direct weighing can induce stress responses in pigs, leading to decreased productivity. Therefore, modern livestock industries are increasingly turning to non-contact techniques for estimating pig weight, such as automated monitoring systems based on computer vision. These technologies provide continuous, real-time weight-monitoring data without disrupting the pigs' normal activities or causing stress, thereby enhancing breeding efficiency and management. This paper comprehensively analyzes two approaches to pig weight estimation, based on image data and on point cloud data. We first analyze the advantages and disadvantages of the two approaches and then discuss the main problems and challenges in the field of pig weight estimation technology. Finally, we outline the key research areas and likely directions of future development.
The role of low-frequency oscillations in three-dimensional perception with depth cues in virtual reality
Currently, vision-related neuroscience studies are undergoing a shift from simplified image stimuli toward more naturalistic stimuli. Virtual reality (VR), as an emerging technology for visual immersion, provides more depth cues for three-dimensional (3D) presentation than a two-dimensional (2D) image. It is still unclear whether the depth cues used to create 3D visual perception modulate specific cortical activation. Here, we constructed two visual stimuli, presented via stereoscopic vision in VR and via graphical projection as a 2D image, respectively, and used electroencephalography to examine neural oscillations and their functional connectivity during 3D perception. We find that neural oscillations specific to stereoscopic vision arise in the delta and theta bands, and that functional connectivity in these two bands increases in cortical areas related to the visual pathways. These findings indicate that low-frequency oscillations play an important role in 3D perception with depth cues.
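For readers unfamiliar with band-specific analysis, a minimal sketch of isolating the delta (1-4 Hz) and theta (4-8 Hz) bands the study highlights might look as follows (our illustration, not the study's pipeline; the sampling rate and filter order are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # sampling rate in Hz -- illustrative assumption

def band_power(signal, low, high, fs=FS):
    """Mean power of the signal within [low, high] Hz (4th-order Butterworth)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(filtered ** 2)

# Synthetic channel: a 6 Hz (theta) rhythm buried in broadband noise.
t = np.arange(0, 10, 1 / FS)
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + np.random.randn(t.size)

print("delta:", band_power(eeg, 1, 4))  # small: only noise falls in 1-4 Hz
print("theta:", band_power(eeg, 4, 8))  # large: dominated by the 6 Hz rhythm
```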
The Effectiveness of Teleglaucoma versus In-Patient Examination for Glaucoma Screening: A Systematic Review and Meta-Analysis
Glaucoma is the leading cause of irreversible visual impairment worldwide, affecting 60.5 million people in 2010, a figure expected to increase to approximately 79.6 million by 2020. Glaucoma screening is therefore important for detecting, diagnosing, and treating patients at earlier stages to prevent disease progression and vision loss. Teleglaucoma uses stereoscopic digital imaging to take ocular images, which are transmitted electronically to an ocular specialist. The purpose of this review is to synthesize the literature and evaluate teleglaucoma: its diagnostic accuracy, healthcare system benefits, and cost-effectiveness. A systematic search was conducted to locate published and unpublished studies, and studies that evaluated teleglaucoma as a screening tool for glaucoma were included. A meta-analysis was conducted to provide estimates of diagnostic accuracy, the diagnostic odds ratio, and the relative percentage of glaucoma cases detected. Improvements to healthcare service quality and cost data were also assessed. Of 11,237 studies reviewed, 45 were included. Our results indicated that teleglaucoma is more specific and less sensitive than in-person examination. The pooled estimate of sensitivity was 0.832 [95% CI 0.770, 0.881] and of specificity was 0.790 [95% CI 0.668, 0.876]. The odds of a positive screening test among glaucoma cases were 18.7 times the odds of a positive test among non-glaucoma cases (diagnostic odds ratio 18.7), as checked below. Additionally, the mean cost of teleglaucoma was $1098.67 US per case of glaucoma detected and $922.77 US per patient screened. Teleglaucoma can accurately discriminate between screening test results and detects more cases of glaucoma than in-person examination. Both patients and healthcare systems benefit from early detection, reductions in wait and travel times, increased specialist referral rates, and cost savings. Teleglaucoma is an effective screening tool for glaucoma, particularly for remote and underserved communities.
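As a quick arithmetic check using only the pooled estimates reported above, the diagnostic odds ratio of roughly 18.7 follows (within rounding) from the pooled sensitivity and specificity via DOR = [sens/(1-sens)] / [(1-spec)/spec]:

```python
# Pooled estimates as reported in the abstract.
sensitivity = 0.832
specificity = 0.790

# Diagnostic odds ratio: odds of a positive test in cases vs. non-cases.
dor = (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)
print(round(dor, 1))  # -> 18.6, consistent with the reported 18.7
```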
A multimodal virtual vision platform as a next-generation vision system for a surgical robot
Robot-assisted surgery platforms are utilized globally thanks to their stereoscopic vision systems and enhanced functional assistance. However, the need for ergonomic improvements for the surgeons who use them has grown: surgical robots cause issues with chronic fatigue owing to the fixed posture imposed by the conventional stereo viewer (SV) vision system. In this study, a head-mounted display was adopted to alleviate this inconvenience, and a virtual vision platform (VVP) is proposed. The VVP can present various critical data, including medical images, vital signs, and patient records, in a three-dimensional virtual reality space, so that users can access medical information simultaneously. The usability of the VVP was investigated through user evaluations by surgeons and novices, who executed given tasks and answered questionnaires. The performances of the SV and VVP were not significantly different; however, the craniovertebral angle with the VVP was 16.35° higher on average than with the SV. Survey results regarding the VVP were positive; participants indicated that the optimal number of displays was six, preferring a 2 × 3 array. Reflecting these tendencies, the VVP is a promising candidate for customization to medical use, opening a new prospect for next-generation surgical robots.
Laparoscopic Robotic Surgery: Current Perspective and Future Directions
Just as laparoscopic surgery provided a giant leap in safety and recovery for patients over open surgery methods, robotic-assisted surgery (RAS) is doing the same for laparoscopic surgery. The first laparoscopic-RAS systems to be commercialized were the Intuitive Surgical, Inc. (Sunnyvale, CA, USA) da Vinci and the Computer Motion Zeus. These systems were similar in many respects, which led to a patent dispute between the two companies. Before the dispute was settled in court, Intuitive Surgical bought Computer Motion and thus came to own critical patents for laparoscopic-RAS. Recently, the patents held by Intuitive Surgical have begun to expire, leading to many new laparoscopic-RAS systems being developed and entering the market. In this study, we review the newly commercialized and prototype laparoscopic-RAS systems, comparing the imaging and display technology, surgeon's console, and patient cart of the reviewed systems. We also briefly discuss the future directions of laparoscopic-RAS surgery. With new laparoscopic-RAS systems now commercially available, we should see RAS adopted more widely in surgical interventions and the costs of RAS procedures decrease in the near future.
On the development of a collaborative robotic system for industrial coating cells
To remain competitive in current industrial manufacturing markets, coating companies need to implement flexible production systems that can handle both mass customization and mass production workflows. Introducing robotic manipulators capable of accurately mimicking the motions executed by highly skilled technicians is an important factor in enabling coating companies to cope with high customization. However, there are limitations to using a fully automated system for coating applications, especially for customized products of large dimensions and complex geometry. This paper addresses the development of a collaborative coating cell that increases the flexibility and efficiency of coating processes. The robot trajectory is taught with an intuitive programming-by-demonstration system, in which an icosahedron marker with multicoloured LEDs is attached to the coating tool so that its trajectories can be tracked by a stereoscopic vision system. To avoid the construction of fixtures and allow the operator to freely place products within the coating work cell, a modular 3D perception system was developed, relying on principal component analysis for the initial point cloud alignment and on the iterative closest point algorithm for 6-DoF pose estimation (sketched below). Furthermore, to enable safe and intuitive human-robot collaboration, a non-intrusive zone-monitoring safety system tracks the position of the operator in the cell.
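As a rough sketch of the two-step alignment this abstract describes (our own illustration, not the cell's production code; the SVD-based Kabsch solve, fixed iteration count, and ignoring PCA axis-sign ambiguity are simplifying assumptions), PCA provides a coarse initial alignment and a point-to-point ICP loop refines the pose:

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_align(source, target):
    """Coarse alignment: rotate source's principal axes onto target's."""
    def axes(points):
        _, _, vt = np.linalg.svd(points - points.mean(axis=0))
        return vt.T  # columns are principal axes
    R = axes(target) @ axes(source).T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

def icp(source, target, iters=30):
    """Refine with point-to-point ICP; returns the aligned source cloud."""
    R, t = pca_align(source, target)
    src = source @ R.T + t
    tree = cKDTree(target)  # fast nearest-neighbour lookups in the target
    for _ in range(iters):
        matched = target[tree.query(src)[1]]  # closest target point per source point
        sc, mc = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (matched - mc))
        Ri = (U @ Vt).T                       # Kabsch rotation estimate
        if np.linalg.det(Ri) < 0:             # guard against a reflection
            Vt[-1] *= -1
            Ri = (U @ Vt).T
        src = src @ Ri.T + (mc - Ri @ sc)     # apply incremental 6-DoF update
    return src
```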