Catalogue Search | MBRL
Explore the vast range of titles available.
82 result(s) for "2D LiDAR"
A Survey of Low-Cost 3D Laser Scanning Technology
2021
By moving a commercial 2D LiDAR, 3D maps of the environment can be built from the data of the 2D LiDAR and its movements. Compared to a commercial 3D LiDAR, a moving 2D LiDAR is more economical. A series of problems must be solved for a moving 2D LiDAR to perform well, chief among them improving accuracy and real-time performance. To solve these problems, estimating the movements of the 2D LiDAR, and identifying and removing moving objects in the environment, are the issues to be studied. More specifically, this involves calibrating the installation error between the 2D LiDAR and the moving unit, estimating the movement of the moving unit, and identifying moving objects at low scanning frequencies. As actual applications are mostly dynamic, with a moving 2D LiDAR travelling among multiple moving objects, we believe that accurately constructing 3D maps in dynamic environments will be an important future research topic for a moving 2D LiDAR. Moreover, how to deal with moving objects in a dynamic environment via a moving 2D LiDAR has not been solved by previous research.
Journal Article
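The abstract above describes building 3D maps by combining each 2D scan with the estimated motion of the sensor. The projection step can be sketched minimally as follows, assuming the motion estimate is already available as one pose (position plus heading) per scan; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def scan_to_3d(ranges, angles, pose_xyz, yaw):
    """Project one 2D LiDAR scan into the world frame.

    ranges, angles: polar scan in the sensor plane.
    pose_xyz: (x, y, z) of the sensor at scan time.
    yaw: sensor heading (rad) about the world z-axis.
    """
    # Polar -> Cartesian in the sensor frame (scan plane is z = 0).
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    pts = np.stack([xs, ys, np.zeros_like(xs)], axis=1)

    # Rotate by the estimated heading, then translate by the pose.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ R.T + np.asarray(pose_xyz)

# Accumulate scans taken at successive poses into one 3D cloud.
cloud = np.vstack([
    scan_to_3d(np.array([1.0, 2.0]), np.array([0.0, np.pi / 2]),
               (0.0, 0.0, h), 0.0)
    for h in (0.0, 0.1, 0.2)   # sensor raised 10 cm between scans
])
```

The calibration and motion-estimation problems the survey discusses are exactly about how accurate `pose_xyz` and `yaw` can be made in practice.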
An Obstacle-Finding Approach for Autonomous Mobile Robots Using 2D LiDAR Data
by Tkachenko, Roman; Hladun, Yaroslav; Mochurad, Lesia
in 2D LiDAR image clustering; 2D LiDAR sensor; Algorithms
2023
Obstacle detection is crucial for the navigation of autonomous mobile robots: obstacles must be detected as accurately as possible, and their position relative to the robot must be found. Autonomous mobile robots designed for indoor navigation use several special sensors for various tasks, one of which is localizing the robot in space. In most cases, a LiDAR sensor is employed to solve this problem. The data from this sensor are also critical, as they directly encode the distance of objects and obstacles surrounding the robot, so LiDAR data can be used for detection. This article is devoted to developing an obstacle detection algorithm based on 2D LiDAR sensor data. We propose a parallelization method to speed up this algorithm when processing big data. The result is an algorithm that finds obstacles and objects with high accuracy and speed: it receives a set of points from the sensor together with data about the robot's movements, and outputs a set of line segments, where each group of such segments describes an object. Accuracy was assessed with two proposed metrics, and both averages are high: 86% and 91% for the first and second metrics, respectively. The proposed method is flexible enough to be optimized for a specific configuration of the LiDAR sensor. Four hyperparameters are found experimentally for a given sensor configuration to maximize the correspondence between real and found objects. The proposed algorithm has been carefully tested on simulated and actual data. The authors also investigated the relationship between the selected hyperparameters' values and the algorithm's efficiency. Potential applications, limitations, and opportunities for future research are discussed.
Journal Article
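A simple way to picture the "set of points in, set of line segments out" pipeline from this abstract: split the ordered scan wherever consecutive points jump apart, then summarise each group by its endpoints. This is a sketch of the general idea, not the authors' algorithm; the `gap` and `min_pts` values stand in for the paper's tuned hyperparameters:

```python
import numpy as np

def scan_to_segments(points, gap=0.5, min_pts=2):
    """Split an ordered 2D scan into line segments.

    A new segment starts wherever the distance between consecutive
    points exceeds `gap`; each surviving group is summarised by its
    two endpoints. Isolated returns (fewer than `min_pts` points)
    are discarded as noise.
    """
    points = np.asarray(points, dtype=float)
    # Distances between consecutive scan points.
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    # Indices where the scan "jumps" to a different object.
    breaks = np.flatnonzero(d > gap) + 1
    segments = []
    for grp in np.split(points, breaks):
        if len(grp) >= min_pts:               # drop isolated returns
            segments.append((tuple(grp[0]), tuple(grp[-1])))
    return segments

# Two walls separated by a doorway, plus one stray return.
pts = [(0, 0), (0.1, 0), (0.2, 0),        # wall 1
       (2.0, 0), (2.1, 0),                # wall 2
       (5.0, 5.0)]                        # noise
segs = scan_to_segments(pts)
```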
Low-Cost Calibration of Matching Error between Lidar and Motor for a Rotating 2D Lidar
2021
For a rotating 2D lidar, inaccurate matching between the 2D lidar and the motor is an important error source in the 3D point cloud, where the error appears in both shape and attitude. Existing methods need to measure the angular position of the motor shaft in real time to synchronize the 2D lidar data and the motor shaft angle. However, the sensor used for this measurement is usually expensive, which increases the cost. Therefore, we propose a low-cost method to calibrate the matching error between the 2D lidar and the motor without using an angular sensor. First, the sequence between the motor and the 2D lidar is optimized to eliminate the shape error of the 3D point cloud. Next, we eliminate the attitude error, with its uncertainty, of the 3D point cloud by installing a triangular plate on the prototype. Finally, the Levenberg–Marquardt method is used to calibrate the installation error of the triangular plate. Experiments verified that the accuracy of our method can meet the requirements of 3D mapping for indoor autonomous mobile robots. While our prototype uses a Hokuyo UST-10LX 2D lidar with an accuracy of ±40 mm, we can limit the mapping error to within ±50 mm when the distance is no more than 2.2996 m for a 1 s scan (mode 1), and to within ±50 mm at a measuring range of 10 m for a 16 s scan (mode 7). Our method reduces the cost while ensuring accuracy, which can make a rotating 2D lidar cheaper.
Journal Article
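The abstract's final calibration step fits an installation error with the Levenberg–Marquardt method against a known reference object. A toy one-parameter version of that idea, assuming SciPy is available: a single mounting-angle error is recovered by requiring that scan points of a flat plate, once corrected, lie on y = 0. This simplified model is illustrative only and is much reduced from the paper's triangular-plate setup:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, scan_xy):
    """Residuals of scan points against a horizontal reference plate.

    theta: candidate mounting-angle error (rad). If theta is correct,
    rotating the scan back by -theta puts every plate point on y = 0.
    """
    c, s = np.cos(-theta[0]), np.sin(-theta[0])
    y_corr = s * scan_xy[:, 0] + c * scan_xy[:, 1]
    return y_corr   # distance of each corrected point from the plate

# Synthetic scan of a flat plate seen through a 3-degree mounting error.
true_err = np.deg2rad(3.0)
xs = np.linspace(0.5, 2.0, 30)
c, s = np.cos(true_err), np.sin(true_err)
scan = np.stack([c * xs, s * xs], axis=1)

# Levenberg-Marquardt fit of the mounting-angle error.
fit = least_squares(residuals, x0=[0.0], args=(scan,), method='lm')
est = fit.x[0]
```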
Mobile Robot Self-Localization with 2D Push-Broom LIDAR in a 2D Map
2020
This paper proposes mobile robot self-localization based on an onboard 2D push-broom (tilted-down) LIDAR using a reference 2D map previously obtained with a 2D horizontal LIDAR. The hypothesis of this paper is that a 2D reference map created with a 2D horizontal LIDAR mounted on a mobile robot or another mobile device can be used by a second mobile robot to estimate its location with the same 2D LIDAR tilted down. The motivation for tilting down a 2D LIDAR is the direct detection of holes or small objects on the ground that remain undetected by a fixed horizontal 2D LIDAR. The experimental evaluation of this hypothesis has demonstrated that self-localization with a 2D push-broom LIDAR is possible by detecting and deleting the ground and ceiling points from the scan data and projecting the remaining scan points onto the horizontal plane of the 2D reference map before applying a 2D self-localization algorithm. Therefore, an onboard 2D push-broom LIDAR offers self-localization and accurate ground supervision without requiring an additional motorized device to change the tilt of the LIDAR in order to combine these two characteristics in a mobile robot.
Journal Article
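The preprocessing described in this abstract (delete ground and ceiling points, then project what remains onto the horizontal map plane) can be sketched in a few lines. The cut heights and the assumption that the scan is already expressed in the robot's world frame are illustrative choices, not values from the paper:

```python
import numpy as np

def pushbroom_to_2d(points_3d, floor_z=0.05, ceiling_z=2.4):
    """Keep only wall/obstacle returns and flatten them to the map plane.

    points_3d: Nx3 scan already expressed in the robot's world frame.
    floor_z / ceiling_z: assumed cut heights for ground and ceiling.
    """
    pts = np.asarray(points_3d, dtype=float)
    keep = (pts[:, 2] > floor_z) & (pts[:, 2] < ceiling_z)
    return pts[keep, :2]          # (x, y) projection for 2D matching

scan = np.array([
    [1.0, 0.0, 0.01],   # ground return  -> removed
    [1.5, 0.2, 1.00],   # wall return    -> kept
    [1.6, 0.3, 2.50],   # ceiling return -> removed
    [2.0, -0.1, 0.80],  # obstacle       -> kept
])
flat = pushbroom_to_2d(scan)
```

The resulting (x, y) points are what a standard 2D scan-matching self-localization algorithm would then compare against the horizontal reference map.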
LiDAR-based SLAM for robotic mapping: state of the art and new frontiers
2024
Purpose: In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.
Design/methodology/approach: This paper focused on the research state of LiDAR-based SLAM for robotic mapping as well as a literature survey from the perspective of various LiDAR types and configurations.
Findings: This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.
Originality/value: To the best of the authors' knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.
Journal Article
The method of reflection-based marker detection and identification to ensure accurate AGV docking
2025
Accurate localization is essential for Automated Guided Vehicles (AGVs) to ensure reliable motion planning and precise execution of docking tasks. A key challenge lies in robust environmental perception for industrial applications. This paper introduces a novel reflection-based marker detection and identification method that relies solely on two-dimensional Light Detection and Ranging (2D LiDAR) technology. The proposed docking method and 2D marker design enable the AGV to accurately estimate the marker's distance and orientation, reliably identify it, and determine the docking point. Experimental validation on a heavy industrial AGV demonstrated that the docking method achieves an accuracy of up to 1 cm in position and below 0.05 degrees in yaw orientation. As a result, the AGV achieved docking precision at an assembly station with a standard deviation below 2 cm in the X and Y axes and below 1.8 degrees in yaw orientation.
Journal Article
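Reflection-based marker detection of the kind this abstract describes typically exploits the fact that retroreflective material returns much higher intensity than ordinary surfaces. A minimal sketch of that principle, not the paper's method: threshold the intensity channel, accept a contiguous run of bright beams, and take its centroid as the marker's range and bearing. The threshold and minimum beam count are invented illustrative values:

```python
import numpy as np

def find_marker(ranges, angles, intensities, thresh=0.8, min_beams=3):
    """Locate a retroreflective marker in one 2D LiDAR scan.

    Beams whose (normalised) intensity exceeds `thresh` are treated
    as reflector hits; a contiguous run of at least `min_beams` such
    beams is accepted, and its centroid gives range and bearing.
    """
    hits = np.flatnonzero(np.asarray(intensities) > thresh)
    if len(hits) < min_beams or np.any(np.diff(hits) != 1):
        return None                       # no single clean reflector
    xs = ranges[hits] * np.cos(angles[hits])
    ys = ranges[hits] * np.sin(angles[hits])
    cx, cy = xs.mean(), ys.mean()
    return np.hypot(cx, cy), np.arctan2(cy, cx)   # (range, bearing)

# Simulated scan: a reflector 3 m dead ahead, 5 degrees wide.
angles = np.deg2rad(np.arange(-10.0, 11.0, 1.0))
ranges = np.full_like(angles, 3.0)
intens = np.where(np.abs(angles) < np.deg2rad(2.5), 0.95, 0.2)
res = find_marker(ranges, angles, intens)
```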
A Smart Cane Based on 2D LiDAR and RGB-D Camera Sensor-Realizing Navigation and Obstacle Recognition
2024
In this paper, an intelligent blind guide system based on 2D LiDAR and RGB-D camera sensing is proposed, with the system mounted on a smart cane. The intelligent guide system relies on 2D LiDAR, an RGB-D camera, an IMU, GPS, a Jetson Nano B01, an STM32, and other hardware. The main advantage of the proposed intelligent guide system is that the distance between the smart cane and obstacles can be measured by 2D LiDAR using the Cartographer algorithm, thus achieving simultaneous localization and mapping (SLAM). At the same time, through an improved YOLOv5 algorithm, pedestrians, vehicles, pedestrian crosswalks, traffic lights, warning posts, stone piers, tactile paving, and other objects in front of the visually impaired can be quickly and effectively identified. Laser SLAM and improved YOLOv5 obstacle identification tests were carried out inside a teaching building on the campus of Hainan Normal University and at a pedestrian crossing on Longkun South Road in Haikou City, Hainan Province. The results show that the developed intelligent guide system can drive the omnidirectional wheels at the bottom of the smart cane and provide it with a self-leading guide function, like a "guide dog", effectively guiding the visually impaired around obstacles to their predetermined destination and quickly identifying the obstacles along the way. The mapping and positioning accuracy of the system's laser SLAM is 1 m ± 7 cm, and its laser SLAM speed is 25~31 FPS, enabling short-distance obstacle avoidance and navigation in both indoor and outdoor environments. The improved YOLOv5 can identify 86 types of objects. The recognition rates for pedestrian crosswalks and vehicles are 84.6% and 71.8%, respectively; the overall recognition rate for the 86 object types is 61.2%, and the obstacle recognition speed of the intelligent guide system is 25–26 FPS.
Journal Article
LiDAR-Based System and Optical VHR Data for Building Detection and Mapping
by Meoli, Giuseppe; Zarro, Chiara; Focareta, Mariano
in 2d lidars; 3d lidars; analysis and classification
2020
The aim of this paper is to highlight how the use of the Light Detection and Ranging (LiDAR) technique can greatly enhance the performance and reliability of many monitoring systems applied to Earth Observation (EO) and Environmental Monitoring. A short presentation of LiDAR systems, underlining their peculiarities, is first given. References to some review papers are highlighted, as they can be regarded as useful guidelines for researchers interested in using LiDARs. Two case studies are then presented and discussed, based on the use of 2D and 3D LiDAR data. Some considerations are made on the performance achieved through the use of LiDAR data combined with data from other sources. The case studies show how LiDAR-based systems, combined with optical Very High Resolution (VHR) data, succeed in improving the analysis and monitoring of specific areas of interest, specifically how LiDAR data help in exploring the external environment and extracting building features from urban areas. Moreover, the discussed case studies demonstrate that the use of LiDAR data, even with a low density of points, allows the development of an automatic procedure for accurate building feature extraction through object-oriented classification techniques, thereby underlining the importance that even simple LiDAR-based systems play in EO and Environmental Monitoring.
Journal Article
Self-adaptive learning particle swarm optimization-based path planning of mobile robot using 2D Lidar environment
2024
The loading and unloading operations of smart logistics robots depend largely on their perception system. However, there is a paucity of studies evaluating Lidar maps and their SLAM algorithms in complex-environment navigation systems. In the proposed work, the Lidar information is fine-tuned using a binary occupancy grid approach, and an Improved Self-Adaptive Learning Particle Swarm Optimization (ISALPSO) algorithm is implemented for path prediction. The approach makes use of 2D Lidar mapping to determine the most efficient route for a mobile robot in logistics applications. The Hector SLAM method is used on the Robot Operating System (ROS) platform to implement real-time mobile robot localization and map building, and the map is subsequently transformed into a binary occupancy grid. To show the path navigation results of the proposed methodologies, a navigation model was created in a MATLAB 2D virtual environment using 2D Lidar mapping point data. The ISALPSO algorithm adapts its parameters (inertia weight, acceleration coefficients, learning coefficients, mutation factor, and swarm size) based on the performance of the generated path. Compared with five other PSO variants, the ISALPSO algorithm yields a considerably shorter path, a quick convergence rate, and less computation time for the distance between the loading and unloading locations, based on simulation results generated and validated in a 2D Lidar environment. The efficiency and effectiveness of path planning for mobile robots in logistics applications are further validated using Quanser hardware interfaced with a 2D Lidar and operated in environment 3 using the proposed algorithm to produce an optimal path.
Journal Article
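To make the PSO-based path-planning idea concrete, here is a plain (non-adaptive) PSO that places one intermediate waypoint between a start and a goal while avoiding a circular obstacle; the ISALPSO variant in the paper would additionally adapt the coefficients online. All coordinates, coefficients, and the penalty weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 0.0])
OBST, R = np.array([5.0, 0.0]), 1.5          # circular obstacle

def cost(wp):
    """Path length through one waypoint, plus an obstacle penalty."""
    length = np.linalg.norm(wp - START) + np.linalg.norm(GOAL - wp)
    penalty = max(0.0, R - np.linalg.norm(wp - OBST)) * 100.0
    return length + penalty

# Plain PSO with fixed coefficients.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(-2, 12, size=(n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

best_len = cost(gbest)   # best path skirts the obstacle boundary
```

The best waypoint settles near the obstacle boundary, trading a slightly longer path for a collision-free one; the self-adaptive variant tunes `w`, `c1`, `c2`, and the swarm size from the quality of paths found so far.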
Research on Distance Transform and Neural Network Lidar Information Sampling Classification-Based Semantic Segmentation of 2D Indoor Room Maps
2021
Semantic segmentation of room maps is an essential issue in mobile robots' execution of tasks. In this work, a new approach is proposed to obtain the semantic labels of 2D lidar room maps by combining distance-transform watershed-based pre-segmentation with a carefully designed neural network classification of sampled lidar information. In order to label room maps with high efficiency, high precision and high speed, we have designed a low-power, high-performance method that can be deployed on low-computing-power Raspberry Pi devices. In the training stage, a lidar is simulated to collect the lidar detection line maps of each point in a manually labelled map, and these line maps and the corresponding labels are used to train the designed neural network. In the testing stage, the new map is first pre-segmented into simple cells with the distance-transform watershed method, and the lidar detection line maps are then classified with the trained neural network. Optimized areas of sparse sampling points are proposed, using the distance transform generated in the pre-segmentation process, to prevent sampling points selected in boundary regions from influencing the results of semantic labeling. A prototype mobile robot was developed to verify the proposed method, and its feasibility, validity, robustness and high efficiency were verified by a series of tests. The proposed method achieved high recall and precision scores: the mean recall is 0.965 and the mean precision is 0.943.
Journal Article