Catalogue Search | MBRL
1,404 result(s) for "stereo vision"
Comprehensive Bird Preservation at Wind Farms
by Kaniecki, Damian; Gradolewski, Dawid; Jaworski, Adam
in Aircraft detection; Airports; algorithm
2021
Wind as a clean and renewable energy source has been used by humans for centuries. However, in recent years, with the increase in the number and size of wind turbines, their impact on avifauna has become worrisome. Researchers estimate that in the U.S. up to 500,000 birds die annually in collisions with wind turbines. This article proposes a system for mitigating bird mortality around wind farms. The solution is based on a stereo-vision system embedded in distributed computing and IoT paradigms. After a bird is detected in a defined zone, the decision-making system activates a collision avoidance routine composed of light and sound deterrents and a turbine stopping procedure. The development process applies a User-Driven Design approach along with component selection and heuristic adjustment. The proposal includes a bird detection method and a localization procedure; bird identification is carried out using artificial intelligence algorithms. Validation tests with a fixed-wing drone, verified by ornithologists' observations, confirmed that the system reliably detects a bird with a wingspan over 1.5 m from at least 300 m. Moreover, the system's ability to classify the detected bird into one of three wingspan categories (small, medium, and large) was confirmed.
Journal Article
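As a hedged illustration of the stereo geometry a system like this relies on, here is a minimal depth-from-disparity sketch for localizing a detection and estimating its wingspan. The focal length, baseline, and pixel measurements below are assumed values for illustration, not figures from the paper:

```python
# Hypothetical stereo triangulation for localizing a detected bird.
# All camera parameters here are assumed, not from the paper.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def wingspan_from_pixels(width_px: float, depth_m: float, focal_px: float) -> float:
    """Back-project an image-plane width to metres at the estimated depth."""
    return width_px * depth_m / focal_px

# Example: 4000 px focal length, 2 m baseline, 26.7 px disparity
z = depth_from_disparity(26.7, 4000.0, 2.0)      # roughly 300 m
span = wingspan_from_pixels(20.0, z, 4000.0)     # roughly 1.5 m
```

With these assumed numbers, a 20-pixel-wide target at a 26.7 px disparity works out to a large-category bird near the 300 m range the abstract cites.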
Research on Defect Detection Method of Fusion Reactor Vacuum Chamber Based on Photometric Stereo Vision
2024
This paper addresses image enhancement and 3D reconstruction techniques for dim scenes inside the vacuum chamber of a nuclear fusion reactor. First, an improved multi-scale Retinex low-light image enhancement algorithm with adaptive weights is designed. It recovers image detail that is invisible in low-light environments while maintaining clarity and contrast for easy observation. Second, to meet the practical needs of target plate defect detection and 3D reconstruction inside the vacuum chamber, a defect reconstruction algorithm based on photometric stereo vision is proposed. To optimize the position of the light source, a light-source illumination profile simulation system is designed that provides an optimized light array for crack detection inside vacuum chambers without extensive experimental testing. Finally, a robotic platform equipped with a binocular stereo-vision camera is constructed, and image enhancement and defect reconstruction experiments are performed separately. The results show that the method broadens the gray levels of low-illumination images and improves brightness and contrast. The maximum depth error is less than 24.0% and the maximum width error is less than 15.3%, achieving the goal of detecting and reconstructing defects inside the vacuum chamber.
Journal Article
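The multi-scale Retinex enhancement this abstract builds on has a well-known basic form: subtract the log of a Gaussian-estimated illumination from the log image at several scales. The sketch below is a minimal NumPy-only version with fixed (assumed) scales and equal weights, not the paper's adaptive-weight variant:

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur using numpy only ('same'-mode 1D convolutions)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    smooth = lambda row: np.convolve(row, kernel, mode="same")
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, img))

def multi_scale_retinex(img, sigmas=(5, 20, 60), weights=None):
    """MSR: weighted sum over scales of log(image) - log(estimated illumination)."""
    img = img.astype(np.float64) + 1.0                   # avoid log(0)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        out += w * (np.log(img) - np.log(_gaussian_blur(img, s) + 1e-6))
    # stretch the result back to an 8-bit display range
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255.0).astype(np.uint8)
```

A dark input run through `multi_scale_retinex` comes back contrast-stretched across the full 8-bit range, which is the "broadened gray level" effect the abstract reports.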
Stereo Vision-Based High Dynamic Range Imaging Using Differently-Exposed Image Pair
by Sung-Jea Ko; Won Jae Park; Seok Kang
in Cameras; Chemical technology; high dynamic range imaging
2017
In this paper, a high dynamic range (HDR) imaging method based on a stereo vision system is presented. The proposed method uses differently exposed low dynamic range (LDR) images captured from a stereo camera. The stereo LDR images are first converted to initial stereo HDR images using the inverse camera response function estimated from the LDR images. However, due to the limited dynamic range of the stereo LDR camera, radiance values in under- and over-exposed regions of the initial main-view (MV) HDR image can be lost. To restore these radiance values, the proposed stereo matching and hole-filling algorithms are applied to the stereo HDR images. Specifically, the auxiliary-view (AV) HDR image is warped using the estimated disparity between the initial stereo HDR images, and effective hole-filling is then applied to the warped AV HDR image. To reconstruct the final MV HDR image, the warped and hole-filled AV HDR image is fused with the initial MV HDR image using a weight map. The experimental results demonstrate, objectively and subjectively, that the proposed stereo HDR imaging method outperforms the conventional method.
Journal Article
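The final fusion step described above can be sketched as a per-pixel blend: keep the main-view radiance where its LDR pixel was well exposed, and fall back to the warped auxiliary-view radiance near clipping. The weighting scheme below (a bell curve peaking at mid-grey) is an assumed illustration, not the paper's weight map:

```python
import numpy as np

def fuse_hdr(mv_hdr, av_hdr_warped, mv_ldr, low=0.05, high=0.95):
    """Blend two HDR estimates: trust the main view (MV) where its LDR
    exposure was good, the warped auxiliary view (AV) where it clipped."""
    ldr = mv_ldr.astype(np.float64) / 255.0
    # weight near 1 for mid-tones, near 0 at the under/over-exposure limits
    w = np.clip((ldr - low) / (high - low), 0.0, 1.0)
    w = w * (1.0 - w) * 4.0          # bell shape: peaks at mid-grey
    return w * mv_hdr + (1.0 - w) * av_hdr_warped
```

On a fully over-exposed main-view pixel the weight collapses to zero, so the output takes the auxiliary-view radiance, which is exactly the hole-restoration behaviour the abstract describes.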
Sensor-Aided Calibration of Relative Extrinsic Parameters for Outdoor Stereo Vision Systems
by Zhang, Dongsheng; Han, Yongsheng; Yu, Qifeng
in 3D displacement measurement; Accuracy; Algorithms
2023
Calibration of stereo vision systems is a crucial step for precise 3D measurements. For the large fields of view (FOV) encountered outdoors, conventional methods based on precise calibration boards are unsuitable: the calibration process is time-consuming and the accuracy is not guaranteed. In this paper, we propose a calibration method for estimating the extrinsic parameters of a stereo vision system aided by an inclinometer and a range sensor. From the parameters given by the sensors, the initial rotation angles of the extrinsic parameters and the translation vector are pre-established by solving a set of linear equations. The metric scale of the translation vector is determined by the baseline length provided by the range sensor or GNSS signals. Finally, the optimal extrinsic parameters of the stereo vision system are obtained by nonlinear optimization with inverse depth parameterization. The most significant advantage of this method is that it extends stereo vision measurement to outdoor environments and achieves fast, accurate calibration. Both simulation and outdoor experiments verified the feasibility and correctness of the method, with a relative error of less than 0.3% in the outdoor large-FOV setting. This shows that the calibration method is a feasible solution for outdoor measurements with a large FOV and long working distance.
Journal Article
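Two of the steps above lend themselves to a short sketch: building a camera orientation from inclinometer pitch/roll (yaw is unobservable from an inclinometer alone), and fixing the metric scale of an up-to-scale translation with a measured baseline. The axis conventions and numbers here are assumed for illustration, not the paper's formulation:

```python
import numpy as np

def rotation_from_inclinometer(pitch_rad, roll_rad):
    """Orientation from an inclinometer: rotate about x (pitch), then y (roll).
    Yaw about gravity is not observable from tilt sensing alone."""
    cp, sp = np.cos(pitch_rad), np.sin(pitch_rad)
    cr, sr = np.cos(roll_rad), np.sin(roll_rad)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])
    return Ry @ Rx

def scale_translation(t_unit, baseline_m):
    """Fix the metric scale of an up-to-scale translation using the
    baseline length measured by a range sensor (or GNSS)."""
    t_unit = np.asarray(t_unit, dtype=float)
    return baseline_m * t_unit / np.linalg.norm(t_unit)
```

The scaling step is why the range sensor matters: stereo extrinsics recovered from image correspondences alone are only known up to scale.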
Compensation for Vanadium Oxide Temperature with Stereo Vision on Long-Wave Infrared Light Measurement
2022
In this paper, using automated optical inspection equipment and a thermal imager, the position and temperature of a heat source or measured object can be determined effectively. A high-resolution depth camera performs stereo-vision distance measurement, and a low-resolution thermal imager performs long-wave infrared temperature measurement. Based on Planck's black-body radiation law and the Stefan–Boltzmann law, the binocular stereo calibration of the two cameras was computed. To reduce the temperature measurement error at different distances, a compensator, implemented with an Intel RealSense Depth Camera D435, is proposed to ensure that the measured temperature of the heat source is correct and accurate. The results clearly show that the actual measured temperature at each distance is proportional to the temperature of the thermal imager's vanadium oxide, while inversely related to the distance to the test object. With the proposed compensation function, the compensated temperature at varying vanadium oxide temperatures can be obtained. The errors between the average temperature at each distance and the constant 39 °C temperature of the test object are all less than 0.1%.
Journal Article
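The compensation idea can be sketched with a simple calibration fit: measure a reference object at several distances, fit the distance-dependent temperature drop, and subtract it from new readings. The linear model and every number below are assumed for illustration and are not the paper's compensation function:

```python
import numpy as np

# Synthetic calibration data (assumed): distance (m) vs. measured
# temperature (deg C) for a test object held at a constant 39 deg C.
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
t_meas = np.array([38.6, 38.2, 37.8, 37.4, 37.0])

# Fit the measured drop as a straight line in distance.
slope, intercept = np.polyfit(d, t_meas, 1)   # slope is negative: farther reads cooler

def compensate(t_measured, distance_m):
    """Remove the fitted distance-dependent drop from a raw reading."""
    return t_measured - slope * distance_m
```

Applying `compensate` to the calibration readings recovers the constant reference temperature, mirroring the sub-0.1% residual error the abstract reports (on this synthetic data, exactly).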
SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks
2019
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods and there is no standard method for fusion of different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised learning mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. Experiments explored different fusion opportunities: stereo-monocular fusion, stereo-ToF fusion and stereo-stereo fusion. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (Kitti2015 dataset and Trimbot2020 Garden dataset).
Journal Article
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
by Hu, Shaopeng; Ishii, Idaku; Matsumoto, Yuji
in Cameras; catadioptric stereo; high-speed vision
2017
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can switch between hundreds of different views per second. By accelerating video capture, computation, and actuation to millisecond granularity for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. This enables a single active vision system to act as virtual left and right pan-tilt cameras that simultaneously shoot a pair of stereo images of the same object at arbitrary viewpoints, by switching the direction of the system's mirrors frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that switches between 500 views per second; it functions as a catadioptric active stereo with virtual left and right pan-tilt tracking cameras, each capturing 8-bit color 512 × 512 images at 250 fps, to mechanically track a fast-moving object with sufficient parallax for accurate 3D measurement. Several tracking experiments on objects moving in 3D space demonstrate the performance of our monocular stereo tracking system.
Journal Article
Fast and accurate vision-based stereo reconstruction and motion estimation for image-guided liver surgery
by Himidan, Sharifa; Ma, Burton; Wildes, Richard P.
in ablation; accurate vision-based stereo reconstruction; adaptive CTF matching approach
2018
Image-guided liver surgery aims to enhance the precision of resection and ablation by providing fast localisation of tumours and adjacent complex vasculature to improve oncologic outcome. This Letter presents a novel end-to-end solution for fast stereo reconstruction and motion estimation that demonstrates high accuracy with phantom and clinical data. The authors’ computationally efficient coarse-to-fine (CTF) stereo approach facilitates liver imaging by accounting for low texture regions, enabling precise three-dimensional (3D) boundary recovery through the use of adaptive windows and utilising a robust 3D motion estimator to reject spurious data. To the best of their knowledge, theirs is the only adaptive CTF matching approach to reconstruction and motion estimation that registers time series of reconstructions to a single key frame for registration to a volumetric computed tomography scan. The system is evaluated empirically in controlled laboratory experiments with a liver phantom and motorised stages for precise quantitative evaluation. Additional evaluation is provided through testing with patient data during liver resection.
Journal Article
Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)
by Liangzhi Li; Nanfeng Xiao; Hong Tang
in binocular stereo vision; Chemical technology; cyber physical social sensing (CPSS)
2017
Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few have focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics, but traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and fits these discrete points with a quintic polynomial, which outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient and can be applied well in the CPSS field.
Journal Article
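A quintic polynomial is the standard choice here because its six coefficients can satisfy position, velocity, and acceleration constraints at both endpoints, giving a trajectory that is smooth through acceleration. A minimal sketch of that fit (boundary values assumed for illustration):

```python
import numpy as np

def quintic_coeffs(t_f, p0, v0, a0, pf, vf, af):
    """Coefficients [c0..c5] of q(t) = c0 + c1*t + ... + c5*t^5 meeting
    position/velocity/acceleration constraints at t = 0 and t = t_f."""
    T = t_f
    A = np.array([
        [1, 0,  0,    0,      0,       0],       # q(0)   = p0
        [0, 1,  0,    0,      0,       0],       # q'(0)  = v0
        [0, 0,  2,    0,      0,       0],       # q''(0) = a0
        [1, T,  T**2, T**3,   T**4,    T**5],    # q(T)   = pf
        [0, 1,  2*T,  3*T**2, 4*T**3,  5*T**4],  # q'(T)  = vf
        [0, 0,  2,    6*T,    12*T**2, 20*T**3], # q''(T) = af
    ])
    return np.linalg.solve(A, np.array([p0, v0, a0, pf, vf, af], dtype=float))

# Rest-to-rest motion from 0 to 1 m in 2 s (assumed example values)
c = quintic_coeffs(2.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0)
```

Because velocity and acceleration are zero at both ends, the resulting path starts and stops without jerk spikes, which is the smoothness property the abstract targets.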
Evidential Sensor Fusion of Long-Wavelength Infrared Stereo Vision and 3D-LIDAR for Rangefinding in Fire Environments
by Starr, Joseph W.; Lattimer, B. Y.
in Attenuation; Cameras; Characterization and Evaluation of Materials
2017
A method of sensor fusion was developed to combine long-wavelength infrared (LWIR) stereo vision and a spinning LIDAR for improved rangefinding in smoke-obscured environments. This method allows rangefinding in clear and smoke conditions, relying on LIDAR's high accuracy in clear conditions and the perception ability of LWIR cameras in smoke. Sensor data were combined using evidential (Dempster–Shafer) theory in a 3D multi-resolution voxel domain with occupied and free space states. A heuristic method was produced for separating significantly attenuated and low-attenuation LIDAR returns using return intensity and distance. A sensor model was developed to apply free space state information from LIDAR high-attenuation returns, and further sensor models apply occupied and free space state information from LIDAR low-attenuation returns and from LWIR stereo vision points. The fusion method was evaluated in two fire environments: a room-hallway scenario with a range of clear to dense-smoke conditions, and a shipboard fire scenario. Room-hallway tests were evaluated by assessing performance against baseline rangefinding. For the occupied state, the fusion method and LIDAR typically agree within 5% to 10% in clear conditions, and the fusion method is typically 5% to 10% more accurate than LIDAR in smoke conditions, with LIDAR providing no data in the densest smoke. For the free space state, the fusion method outperformed LIDAR in smoke conditions by as much as 40% and was typically within 5% of LIDAR in clear conditions.
Journal Article
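The evidential combination at the heart of this method can be sketched with Dempster's rule over the simple two-state frame the abstract uses ({occupied, free}, with "unknown" standing for the full frame). The mass values below are assumed for illustration, not the paper's sensor models:

```python
# Dempster's rule of combination for focal elements {occ, free, unknown},
# where "unknown" is the whole frame (theta). Mass values are assumed.

def dempster_combine(m1, m2):
    """Combine two mass functions; conflicting mass (occ vs. free) is
    discarded and the remainder renormalized."""
    states = ("occ", "free", "unknown")
    combined = {s: 0.0 for s in states}
    conflict = 0.0
    for a in states:
        for b in states:
            prod = m1[a] * m2[b]
            if a == b:
                combined[a] += prod
            elif "unknown" in (a, b):        # theta intersected with X is X
                combined[a if b == "unknown" else b] += prod
            else:                            # occ intersect free is empty
                conflict += prod
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# e.g. LIDAR strongly indicates occupied; LWIR stereo weakly agrees
lidar = {"occ": 0.8, "free": 0.1, "unknown": 0.1}
lwir  = {"occ": 0.5, "free": 0.2, "unknown": 0.3}
fused = dempster_combine(lidar, lwir)
```

When both sensors lean toward "occupied", the fused occupied mass exceeds either input's, while residual uncertainty shrinks, which is the reinforcement behaviour that lets the voxel map stay useful as one sensor degrades in smoke.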