18 result(s) for "structured light stereo vision"
Compensation for Vanadium Oxide Temperature with Stereo Vision on Long-Wave Infrared Light Measurement
In this paper, automated optical inspection equipment and a thermal imager are used to effectively determine the position and temperature of a heat source or measured object. The high-resolution depth camera performs stereo-vision distance measurement, while the low-resolution thermal imager performs long-wave infrared measurement. Based on Planck's black-body radiation law and the Stefan–Boltzmann law, the binocular stereo calibration of the two cameras was calculated. To reduce the temperature measurement error at different distances, a compensator is proposed for the system, which is equipped with an Intel RealSense Depth Camera D435, to ensure that the measured temperature of the heat source is correct and accurate. The results clearly show that the actual measured temperature at each distance is proportional to the temperature of the thermal imager's vanadium oxide, while the actual measured temperature is inversely proportional to the distance of the test object. With the proposed compensation function, the compensated temperature at varying vanadium oxide temperatures can be obtained. The errors between the average temperature at each distance and the constant 39 °C temperature of the test object are all less than 0.1%.
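The distance-dependent compensation this abstract describes can be illustrated with a simple least-squares fit. The sketch below is an assumption for illustration only: the calibration readings are invented, and the linear form is one plausible shape for such a compensator, not the paper's actual function.

```python
import numpy as np

# Hypothetical calibration samples (NOT from the paper): at each standoff
# distance (m), the thermal imager reads a lower apparent temperature for a
# source held at a constant 39 degC.
distance = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # metres (assumed)
apparent = np.array([38.1, 36.9, 35.8, 34.6, 33.5])  # degC read from imager

# Fit an illustrative linear compensator T_true ~ a*T_apparent + b*d + c.
A = np.column_stack([apparent, distance, np.ones_like(distance)])
coeffs, *_ = np.linalg.lstsq(A, np.full(5, 39.0), rcond=None)

def compensate(t_apparent, d):
    """Return the distance-compensated temperature estimate (degC)."""
    return coeffs @ np.array([t_apparent, d, 1.0])
```

With such a fit, readings taken at any calibrated distance map back to approximately the true source temperature.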
Optical Sensors and Methods for Underwater 3D Reconstruction
This paper presents a survey of optical sensors and methods for 3D reconstruction in underwater environments. The techniques used to obtain range data are listed and explained, together with the sensor hardware that makes them possible. The literature has been reviewed, and a classification is proposed for the existing solutions. New developments, commercial solutions, and previous reviews on this topic have also been gathered and considered.
Structured Light-Based 3D Reconstruction System for Plants
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
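The recall and precision figures quoted for leaf detection follow the standard definitions, which a short sketch makes explicit (the counts below are hypothetical, not the paper's data):

```python
def detection_metrics(true_pos, false_pos, false_neg):
    """Standard precision/recall, as used to score detection tasks such as
    the leaf detection above (recall 0.97, precision 0.89 in the abstract)."""
    precision = true_pos / (true_pos + false_pos)  # fraction of detections that are real
    recall = true_pos / (true_pos + false_neg)     # fraction of real leaves found
    return precision, recall
```

For example, 8 correctly detected leaves with 2 spurious detections and no misses would give precision 0.8 and recall 1.0.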
Flexible Three-Dimensional Reconstruction via Structured-Light-Based Visual Positioning and Global Optimization
Three-dimensional (3D) reconstruction using a line structured light vision system commonly relies on motion-restraint devices, such as parallel guide-rail push-broom devices. In this study, we propose a visual positioning method that eliminates this motion constraint. An extended orthogonal iteration algorithm for visual positioning is proposed to obtain the precise position of the line structured light binocular camera system during movement. The algorithm uses the information acquired by the binocular camera and achieves better positioning accuracy than the traditional vision localization algorithm. Furthermore, a global optimization method is proposed to calculate the poses of the camera relative to the world coordinate system at each shooting position. This algorithm effectively reduces error accumulation and pose drift during visual positioning, and 3D information of the surface can be measured via the proposed free-moving line structured light vision system. The simulation and physical experiments performed herein validate the proposed method and demonstrate a significant improvement in reconstruction accuracy: at a test distance of 1.5 m, the root mean square error of the point cloud is within 0.5 mm.
Research on a Handheld 3D Laser Scanning System for Measuring Large-Sized Objects
A handheld 3D laser scanning system is proposed for measuring large-sized objects on site. The system consists mainly of two CCD cameras and a line laser projector: the two CCD cameras form a binocular stereo vision system that locates the scanner's position in the fixed workpiece coordinate system online, while the left CCD camera and the laser projector form a structured light system that captures the laser lines modulated by the workpiece features. The marked points and the laser line are both obtained in the left camera's coordinate system at each moment. To obtain the workpiece outline, the handheld scanner's position is evaluated online by matching the marked points obtained by the binocular stereo vision system with those in the workpiece coordinate system measured beforehand by a TRITOP system; the laser line capturing the workpiece's features at that moment is then transformed into the fixed workpiece coordinate system. Finally, the 3D information composed of the laser lines can be reconstructed in the workpiece coordinate system. To test the system's accuracy, a ball arm with two standard balls, placed on a glass plate with many marked points stuck on at random, was measured. The distance errors between the two balls are within ±0.05 mm, the radius errors of the two balls are all within ±0.04 mm, and the distance errors from the scatter points to the fitted sphere are distributed evenly within ±0.25 mm, without accumulated error. Measurement results for two typical workpieces show that the system can measure large-sized objects completely, with acceptable accuracy, while avoiding deficiencies such as occlusion and limited measuring range.
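The binocular locating step in systems like this one ultimately rests on standard rectified stereo triangulation; a minimal sketch of the depth-from-disparity relation follows (the focal length, baseline, and disparity values are placeholders, not this system's calibration):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified binocular triangulation: depth Z = f * B / d, where f is the
    focal length in pixels, B the baseline in metres, and d the disparity in
    pixels between the matched left/right image points."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return f_px * baseline_m / disparity_px
```

Larger disparities correspond to closer points, which is why accuracy figures are usually quoted at a stated working distance.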
Advancements in 3D field-crop phenotyping using point clouds: a comparative review of sensor technology, target traits, and challenges under controlled and field conditions
3D phenotyping refers to the quantitative characterization of a plant’s structural and morphological traits in three-dimensional space, allowing for a detailed analysis of plant architecture and growth patterns. In recent years, rapid advancements in non-destructive, high-throughput 3D imaging technologies have enabled the precise measurement of these traits. Initially focused on single-plant traits under controlled conditions, the field has now expanded towards robust applications in real-world field environments, enabling large-scale analyses of plant canopies and complex structures. This study focuses on the recent advancements in 3D crop phenotyping using point cloud technologies. It compares sensor technology and its application in controlled environments (Chamber-Crop Phenotyping, CCP) and field conditions (Field-Crop Phenotyping, FCP). Technologies such as multi-view stereo (MVS) reconstruction, LiDAR, and laser triangulation have enhanced plant phenomics by enabling high-throughput, non-destructive measurements of key traits such as canopy structure, leaf area, and stem diameter. This review highlights the strengths of the CCP, where environmental variables and flexibility are tightly controlled, facilitating precise trait measurement, and contrasts it with the challenges of the FCP, where unpredictable factors, such as occlusion, wind, light variability, and terrain complexity, complicate data acquisition. Various sensor platforms, including ground-based robotic systems and unmanned aerial vehicles (UAVs), have been discussed regarding their ability to overcome occlusion and limited sensor range in real-world conditions. The need to transition these technologies from laboratory environments to real-world agricultural applications is emphasized, highlighting their potential to improve crop management and plant breeding through accurate phenotypic trait extraction. Finally, current research gaps and future directions for integrating advanced sensor platforms and analytical techniques in both CCP and FCP settings are identified, emphasizing the need to enhance the scalability and robustness of 3D phenotyping for field applications.
Advances and Prospects of Vision-Based 3D Shape Measurement Methods
Vision-based three-dimensional (3D) shape measurement techniques have been widely applied over the past decades in numerous applications owing to their high precision, high efficiency, and non-contact nature. Recently, great advances in computing devices and artificial intelligence have facilitated the development of vision-based measurement technology. This paper focuses on state-of-the-art vision-based methods that can perform 3D shape measurement with high precision and high resolution. Specifically, the basic principles and typical techniques of triangulation-based measurement methods, along with their advantages and limitations, are elaborated, and the learning-based techniques used for 3D vision measurement are enumerated. Finally, advances in, and prospects for, further improvement of vision-based 3D shape measurement techniques are discussed.
libBICOS – An Open Source GPU-Accelerated Library implementing BInary COrrespondence Search for 3D Reconstruction
In this paper, we present an implementation and publish an open source library for binary correspondence search (BICOS), an efficient method for accurate 3D reconstruction from structured light stereo imagery. Starting from two stacks of stereo-rectified images of a scene illuminated by a statistical light pattern, the proposed method solves the problem of pixelwise correspondence search. Our GPU-accelerated implementation reduces the latency of disparity computation on 7 MP images to 20 milliseconds on modern hardware. Building on the algorithm introduced by Dietrich et al. (2019), we extend their approach by increasing the descriptor size and augmenting the postprocessing to broaden its applicability to other types of pattern projection. Lastly, we provide benchmarks and example reconstructions using a stereo camera setup combined with an off-the-shelf projector to validate the algorithm’s performance. While many state-of-the-art single-shot stereo implementations are included in common computer vision libraries, high-performance multi-shot methods are not broadly available. By publishing this method as a freely available library, with both CUDA and CPU implementations, we hope to help others quickly gain traction in this field. The source code, with build instructions and command-line tooling, is available at https://github.com/JMUWRobotics/libBICOS under the GNU LGPLv3.
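The multi-shot binary correspondence idea can be illustrated with a toy sketch: threshold each pixel's temporal intensity sequence (across the pattern stack) into a bit code, then match codes along each rectified row. This is an assumed simplification for illustration, not the libBICOS API or its descriptor design:

```python
import numpy as np

def binary_descriptors(stack):
    """Threshold each pixel's temporal intensity sequence against its own
    mean and pack the resulting bits into one integer code per pixel.
    stack: (n_patterns, H, W) images of a scene under varying illumination."""
    bits = stack > stack.mean(axis=0, keepdims=True)      # (n, H, W) booleans
    weights = (1 << np.arange(stack.shape[0]))[:, None, None]
    return (bits * weights).sum(axis=0)                   # (H, W) integer codes

def match_row(desc_l, desc_r, row):
    """Brute-force per-row correspondence: for each left pixel, find the
    right pixel with an identical code; -1 marks no unique match."""
    disp = np.full(desc_l.shape[1], -1)
    for x, code in enumerate(desc_l[row]):
        hits = np.flatnonzero(desc_r[row] == code)
        if hits.size == 1:                                # keep only unique hits
            disp[x] = x - hits[0]
    return disp
```

Real implementations replace the exact-equality match with Hamming-distance search and add the postprocessing the abstract mentions; the point here is only how a temporal stack turns the ambiguous per-pixel match into a near-unique code lookup.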
Spatiotemporal Matching Cost Function Based on Differential Evolutionary Algorithm for Random Speckle 3D Reconstruction
Random speckle structured light increases the texture information on an object's surface, so it is added to a binocular stereo vision system to resolve the matching ambiguity caused by surfaces with repetitive patterns or no texture. To improve reconstruction quality, much current research projects multiple speckle patterns and uses stereo matching methods based on spatiotemporal correlation. This paper presents a novel random speckle 3D reconstruction scheme in which multiple speckle patterns are used and a weighted-fusion-based spatiotemporal matching cost function (STMCF) is proposed to find the corresponding points in speckle stereo image pairs. Furthermore, a parameter optimization method based on a differential evolution (DE) algorithm is designed to automatically determine the values of all parameters in the STMCF. In this method, since there is no suitable training data with ground truth, we explore a training strategy in which a passive stereo vision dataset with ground truth is used as training data, and the learned parameter values are then applied to the stereo matching of speckle stereo image pairs. Various experimental results verify that our scheme achieves accurate, high-quality 3D reconstruction efficiently, and that the proposed STMCF outperforms the state-of-the-art spatiotemporal-correlation-based method in accuracy, computation time, and reconstruction quality.
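The weighted fusion of a spatial and a temporal matching cost can be sketched as follows. The SSD costs and the fixed weight `w` below are assumptions for illustration; they do not reproduce the paper's STMCF or its DE-tuned parameter values:

```python
import numpy as np

def spatiotemporal_cost(left_stack, right_stack, y, xl, xr, win=3, w=0.5):
    """Illustrative fused cost for candidate correspondence (xl, xr) on row y.
    left_stack/right_stack: (n_patterns, H, W) speckle image stacks.
    Spatial term: windowed SSD over all frames; temporal term: per-pixel SSD
    across the pattern sequence. Lower cost = better candidate match."""
    h = win // 2
    pl = left_stack[:, y - h:y + h + 1, xl - h:xl + h + 1]
    pr = right_stack[:, y - h:y + h + 1, xr - h:xr + h + 1]
    spatial = np.mean((pl - pr) ** 2)
    temporal = np.mean((left_stack[:, y, xl] - right_stack[:, y, xr]) ** 2)
    return w * spatial + (1.0 - w) * temporal
```

Evaluating this cost over all disparity candidates per pixel and taking the minimum yields the correspondence map; the paper's contribution is learning the fusion parameters with DE rather than fixing them by hand.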