Catalogue Search | MBRL
Explore the vast range of titles available.
4,284 result(s) for "LiDAR sensor"
A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors
by Michael H. Köhler, Martin Jakobi, Lukas Haas
in Accuracy, advanced driver-assistance system, Analysis
2023
In this work, we introduce a novel approach to model the effect of rain and fog on the performance of light detection and ranging (LiDAR) sensors for the simulation-based testing of LiDAR systems. The proposed methodology simulates the rain and fog effect through rigorous application of Mie scattering theory, in the time domain for transient analyses and at the point cloud level for spatial analyses. The time domain analysis permits us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error (d_error) of the virtual LiDAR sensor due to rain and fog droplets are evaluated at the point cloud level. The mean absolute percentage error (MAPE) quantifies the agreement between simulation and real measurement results at the time domain and point cloud levels for rain and fog droplets. The simulation and real measurements match well at both levels when the simulated and real rain distributions are the same. Both the real and virtual LiDAR sensor performance degrades more under the influence of fog droplets than in rain.
Journal Article
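The rain and fog modelling described in the abstract above rests on droplet-induced extinction of the laser pulse. As a rough, illustrative sketch (not the paper's method), the two-way Beer-Lambert law relates an extinction coefficient to received-power and SNR loss; the coefficients below are placeholders, since real values follow from Mie theory and the droplet size distribution:

```python
import math

def received_power_ratio(distance_m, alpha_per_m):
    """Two-way Beer-Lambert attenuation: the pulse crosses the medium
    to the target and back, so the optical path is 2 * distance."""
    return math.exp(-2.0 * alpha_per_m * distance_m)

def snr_db(clear_snr_db, distance_m, alpha_per_m):
    """SNR remaining after droplet extinction alone (noise held fixed)."""
    loss_db = -10.0 * math.log10(received_power_ratio(distance_m, alpha_per_m))
    return clear_snr_db - loss_db

# Placeholder extinction coefficients (1/m); real values follow from
# Mie scattering theory and the droplet size distribution.
ALPHA_RAIN = 0.002  # assumed: moderate rain
ALPHA_FOG = 0.02    # assumed: fog, an order of magnitude stronger

for alpha, label in [(ALPHA_RAIN, "rain"), (ALPHA_FOG, "fog")]:
    print(f"{label}: {snr_db(30.0, 100.0, alpha):.1f} dB SNR at 100 m")
```

Consistent with the abstract's finding, the assumed fog coefficient degrades SNR far more than the rain coefficient at equal range.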
Performance Evaluation of MEMS-Based Automotive LiDAR Sensor and Its Simulation Model as per ASTM E3125-17 Standard
by Ludwig Kastner, Michael H. Köhler, Martin Jakobi
in advanced driver-assistance system, ASTM E3125-17 standard, micro-electro-mechanical systems, automotive LiDAR sensor, open simulation interface, functional mock-up interface, functional mock-up unit, point-to-point distance tests, user-selected tests, proving ground, PointPillars
2023
Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). The standard defines specifications and static test procedures to evaluate the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we have assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in the standard. The static tests were performed in a laboratory environment, and a subset was repeated at the proving ground under natural environmental conditions to determine the performance of the real LiDAR sensor. Real scenarios and environmental conditions were also replicated in the virtual environment of a commercial software tool to verify the working performance of the LiDAR model. The evaluation results show that the LiDAR sensor and its simulation model under analysis pass all the tests specified in the ASTM E3125-17 standard. The standard helps to determine whether sensor measurement errors stem from internal or external influences. We have also shown that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms, which is why the standard can be beneficial for validating real and virtual automotive LiDAR sensors, at least in the early stages of development. Furthermore, the simulation and real measurements show good agreement at the point cloud and object recognition levels.
Journal Article
A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data
2016
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on processing the information generated by a Velodyne HDL-64E LiDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in each cube. The work presents an exhaustive analysis of the performance of three machine learning algorithms: k-Nearest Neighbours (kNN), Naïve Bayes classifier (NBC), and Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, an accuracy of 96.2%, and a specificity of 96.8%.
Journal Article
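The sensitivity, accuracy, and specificity figures quoted in the abstract above follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to be roughly consistent with the reported test set (not the paper's raw data):

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity (recall), specificity, and overall accuracy from
    confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical counts, roughly consistent with the reported test set
# (16 pedestrians, 469 non-pedestrian samples); not the paper's data.
m = classification_metrics(tp=13, tn=454, fp=15, fn=3)
print({k: round(v, 3) for k, v in m.items()})
```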
Decentralized fault-tolerant control of multi-mobile robot system addressing LiDAR sensor faults
by Elshalakani, Mohamed; Hammad, Sherif A.; Maged, Shady A.
in 639/166, 639/166/987, 639/166/988
2024
The control of multi-robot formations is a crucial aspect of various applications, such as transport, surveillance, and environmental monitoring. Maintaining robots in a specific formation pose or performing a cooperative task is a significant challenge when a fault occurs in any of the robots. This work presents a Decentralized Fault-Tolerant Control (DFTC) scheme that addresses LiDAR sensor faults within a system of multiple differential wheeled mobile robots. The robots change the formation shape according to the number of available robots within the formation. Graph theory is used to represent the multi-robot formation and communication. Each mobile robot is equipped with three sensors: a wheel encoder, an Inertial Measurement Unit (IMU), and a LiDAR sensor. Sensor fault detection and isolation (FDI) is implemented at two levels. The pose estimates obtained from the wheel encoder and IMU are fused using an extended Kalman filter (EKF), and the fused estimate is utilized at the local level of LiDAR sensor FDI. At the system level, the FDI of the LiDAR sensor involves computing a residual by comparing the pose estimate with those from LiDAR sensors mounted on other mobile robots within the formation. The presented FTC scheme is simulated in Simulink multi-robot environments.
Journal Article
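The system-level residual test described in the abstract above compares a robot's pose estimate against estimates obtained via the LiDAR sensors of its peers. A minimal sketch, with an illustrative threshold and a simple majority vote (the paper's exact residual logic may differ):

```python
import math

def lidar_fault_detected(local_pose, peer_estimates, threshold_m=0.5):
    """System-level FDI sketch: flag this robot's LiDAR as faulty when
    its pose estimate disagrees with a majority of the pose estimates
    computed from the LiDARs of peer robots. The 0.5 m threshold and
    the majority-vote rule are illustrative choices, not the paper's."""
    disagreements = 0
    for px, py in peer_estimates:
        residual = math.hypot(local_pose[0] - px, local_pose[1] - py)
        if residual > threshold_m:
            disagreements += 1
    return disagreements > len(peer_estimates) / 2
```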
Tunnel Deformation Inspection via Global Spatial Axis Extraction from 3D Raw Point Cloud
2020
Global inspection of large-scale tunnels is a fundamental yet challenging task to ensure the structural stability of tunnels and driving safety. Advanced LiDAR scanners, which sample tunnels into 3D point clouds, are making their debut in Tunnel Deformation Inspection (TDI). However, the acquired raw point clouds inevitably possess noticeable occlusions, missing areas, and noise/outliers. Considering the tunnel as a geometrical sweeping feature, we propose an effective tunnel deformation inspection algorithm that extracts the global spatial axis from the poor-quality raw point cloud. Essentially, we convert tunnel axis extraction into an iterative fitting optimization problem. Specifically, given the scanned raw point cloud of a tunnel, the initial design axis is sampled to generate a series of normal planes within the corresponding Frenet frame, followed by intersecting those planes with the tunnel point cloud to yield a sequence of cross sections. By fitting cross sections with circles, the fitted circle centers are approximated with a B-Spline curve, which is taken as the updated axis. The procedure of "circle fitting and B-Spline approximation" repeats iteratively until convergence, that is, until the distance of each fitted circle center to the current axis is smaller than a given threshold. By this means, the spatial axis of the tunnel can be accurately obtained. Subsequently, according to the practical mechanism of tunnel deformation, we design a segmentation approach to partition cross sections into meaningful pieces, based on which various inspection parameters regarding tunnel deformation can be automatically computed. A variety of practical experiments have demonstrated the feasibility and effectiveness of our inspection method.
Journal Article
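The per-cross-section circle fitting step described in the abstract above can be sketched with an algebraic (Kåsa-style) least-squares fit; this is a generic formulation, not necessarily the one used in the paper:

```python
import math

def fit_circle(points):
    """Kåsa-style algebraic least-squares circle fit on 2D points.
    Returns (cx, cy, r). Sketch of the per-cross-section fitting step;
    the paper's exact formulation may differ."""
    n = len(points)
    xm = sum(p[0] for p in points) / n
    ym = sum(p[1] for p in points) / n
    suu = svv = suv = suuu = svvv = suvv = svuu = 0.0
    for x, y in points:
        u, v = x - xm, y - ym
        suu += u * u
        svv += v * v
        suv += u * v
        suuu += u ** 3
        svvv += v ** 3
        suvv += u * v * v
        svuu += v * u * u
    # Solve the 2x2 normal equations for the centroid offset (Cramer's rule).
    det = suu * svv - suv * suv
    rhs1 = 0.5 * (suuu + suvv)
    rhs2 = 0.5 * (svvv + svuu)
    uc = (rhs1 * svv - rhs2 * suv) / det
    vc = (rhs2 * suu - rhs1 * suv) / det
    r = math.sqrt(uc * uc + vc * vc + (suu + svv) / n)
    return xm + uc, ym + vc, r
```

Running this fit on each cross section and approximating the resulting centers with a B-Spline would give one iteration of the axis update described in the abstract.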
Individual Tree Canopy Parameters Estimation Using UAV-Based Photogrammetric and LiDAR Point Clouds in an Urban Park
2021
Estimation of urban tree canopy parameters plays a crucial role in urban forest management. Unmanned aerial vehicles (UAVs) have been widely used for many applications, particularly forestry mapping. UAV-derived images, captured by an onboard camera, provide a means to produce 3D point clouds using photogrammetric mapping. Similarly, light detection and ranging (LiDAR) sensors mounted on small UAVs can also provide very dense 3D point clouds. While point clouds derived from both photogrammetry and LiDAR allow the accurate estimation of critical tree canopy parameters, a comparison of the two techniques has so far been missing. Because point clouds derived from these sources vary according to differences in data collection and processing, a detailed comparison in terms of accuracy and completeness, in relation to tree canopy parameters, is necessary. In this research, point clouds produced by UAV photogrammetry and UAV LiDAR over an urban park, along with the estimated tree canopy parameters, are compared, and results are presented. The results show that the UAV photogrammetry and LiDAR point clouds are highly correlated, with an R² of 99.54%, and the estimated tree canopy parameters are correlated with an R² higher than 95%.
Journal Article
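The R² figures in the abstract above are coefficients of determination between the two point-cloud-derived estimates. A minimal sketch of the computation via the squared Pearson correlation (a generic formula, not tied to the paper's software):

```python
def r_squared(a, b):
    """Squared Pearson correlation between two equal-length samples,
    e.g. canopy parameters estimated from photogrammetric vs. LiDAR
    point clouds. Generic formula; the paper's tooling is not specified."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return (cov * cov) / (var_a * var_b)
```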
Evaluation of Different LiDAR Technologies for the Documentation of Forgotten Cultural Heritage under Forest Environments
by Piras, Marco; Di Pietra, Vincenzo; Maté-González, Miguel Ángel
in airborne LiDAR sensor, Civil engineering, Cultural heritage
2022
In the present work, three LiDAR technologies (the Faro Focus 3D X130 terrestrial laser scanner, TLS; the Kaarta Stencil 2-16 mobile mapping system, MMS; and the DJI Zenmuse L1 airborne LiDAR sensor, ALS) have been tested and compared in order to assess their performance in surveying built heritage in vegetated areas. Each of the mentioned devices has its limits of usability, and different methods to capture and generate 3D point clouds need to be applied. In addition, it was necessary to apply a methodology to position all the point clouds in the same reference system. While the TLS scans and the MMS data were geo-referenced using a set of vertical markers and spheres measured by a GNSS receiver in RTK mode, the ALS model was geo-referenced by the GNSS receiver integrated in the unmanned aerial system (UAS), which presents different characteristics and accuracies. The resulting point clouds have been analyzed and compared, focusing attention on the number of points acquired by the different systems, the density, and the nearest neighbor distance.
Journal Article
Unmanned aerial vehicle (UAV) paired with LiDAR sensor to detect bodies on surface under vegetation cover: Preliminary test
2025
The use of unmanned aerial vehicles (UAVs) has become increasingly accessible, enabling their deployment in a diverse range of operational contexts. UAVs have been tested as part of search and rescue missions. Following the successful use of UAVs reported in the wilderness medicine literature, we questioned their ability to be used in a forensic context to search for missing persons or human remains, especially under canopy cover. Subsequently, various sensors were repurposed from their original applications to address forensic concerns. This preliminary study aimed to evaluate the efficacy of airborne Light Detection and Ranging (LiDAR) sensors in detecting a concealed human body on the surface within a densely vegetated search area. Two LiDAR sensors were tested with several modalities. A dendrometric method was used to estimate the tree density of the search area, and the Normalized Difference Vegetation Index (NDVI) was used to refine the assessment of canopy cover density. The results showed that airborne LiDAR sensors can capture body signatures in areas with dense vegetation. The ground point density reached 0.26% in a highly vegetated area. The study highlighted the importance of refining data processing techniques, including point cloud selection and the implementation of true positive/false positive analysis, to improve detection accuracy. Furthermore, the potential integration of complementary sensors, such as thermographic and multispectral sensors, was discussed, which may enhance the detection of thermal anomalies and chemical markers associated with decomposition.
• The airborne LiDAR makes it possible to detect a body signature on the surface under canopy.
• A LiDAR supporting 5 echoes is more efficient than a 3-echo LiDAR in locating a body signature.
• The RGB mapping camera included in the Zenmuse L2 LiDAR makes it possible to detect clothing colors under vegetation cover.
• Despite the canopy, the L1 LiDAR sensor's ground point percentage was 0.11% and the L2 LiDAR sensor's was 0.26%.
Journal Article
Real-Time 3D Object Detection and Classification in Autonomous Driving Environment Using 3D LiDAR and Camera Sensors
by Tamilarasi, K.; Arikumar, K. S.; Gadekallu, Thippa Reddy
in Accuracy, Artificial neural networks, Automobiles
2022
The rapid development of Autonomous Vehicles (AVs) increases the requirement for the accurate prediction of objects in the vicinity to guarantee safer journeys. For effectively predicting objects, sensors such as Three-Dimensional Light Detection and Ranging (3D LiDAR) and cameras can be used. The 3D LiDAR sensor captures the 3D shape of the object and produces point cloud data that describes its geometrical structure. LiDAR-only detectors may be subject to false detections, or even non-detections, for objects located at large distances. The camera sensor captures RGB images with sufficient attributes to distinctly identify objects, and the high-resolution images it produces benefit the precise classification of the objects. However, hindrances such as the absence of depth information in the images, unstructured point clouds, and cross-modality differences degrade environmental perception. To this end, this paper proposes an object detection mechanism that fuses the data received from the camera sensor and the 3D LiDAR sensor (OD-C3DL). The 3D LiDAR sensor obtains point cloud properties of the object such as distance, position, and geometric shape. OD-C3DL employs Convolutional Neural Networks (CNNs) to further process the point clouds obtained from the 3D LiDAR sensor and the camera images to recognize objects effectively. The LiDAR point cloud is enhanced and fused with the image space over the Regions of Interest (ROI) for easy recognition of the objects. The evaluation results show that OD-C3DL can handle an average of 89 objects per frame in real time with a recall rate of 94% and reduced extraction time. The average processing time is 65 ms, which makes the OD-C3DL model highly suitable for AV perception. Furthermore, OD-C3DL's mean accuracy for identifying automobiles and pedestrians at a moderate degree of difficulty, 79.13% and 88.76% respectively, is higher than that of previous models.
Journal Article
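The ROI fusion described in the abstract above requires projecting LiDAR points into the camera image so that points falling inside a detected 2D region can be associated with an object. A minimal pinhole-projection sketch with illustrative intrinsics (the paper's calibration and fusion pipeline are more involved):

```python
def project_to_image(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point (x right, y down,
    z forward) onto the image plane; returns None behind the camera."""
    x, y, z = point
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

def points_in_roi(points, roi, fx=700.0, fy=700.0, cx=640.0, cy=360.0):
    """Keep the LiDAR points whose projections fall inside a 2D ROI
    (x_min, y_min, x_max, y_max) detected in the camera image.
    The intrinsics are illustrative defaults, not a real calibration."""
    x0, y0, x1, y1 = roi
    kept = []
    for p in points:
        uv = project_to_image(p, fx, fy, cx, cy)
        if uv is not None and x0 <= uv[0] <= x1 and y0 <= uv[1] <= y1:
            kept.append(p)
    return kept
```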
Mapping System Autonomous Electric Vehicle Based on Lidar-Sensor Using Hector SLAM Algorithm
by SUPRAPTO Bhakti Yudho, DWIJAYANTI Suci, WIJAYA Patrick Kesuma
in autonomous electric vehicle, hector slam algorithm, lidar sensor
2024
Autonomous electric vehicles (EVs) need to recognize the surrounding environment through mapping. Mapping provides directions for driving in new locations and uncharted areas. However, few studies have discussed the mapping of unknown outdoor areas using light detection and ranging (LiDAR) with simultaneous localization and mapping (SLAM). LiDAR can reduce the limitations of GPS, which cannot track the current location and covers only a limited area. Hence, this study used the Hector SLAM algorithm, which builds maps from the data generated by LiDAR sensors. The study was conducted at Universitas Sriwijaya using two routes, one on the Palembang campus and one on the Inderalaya campus. A comparison was made with the map on Google Maps to determine the accuracy of the algorithm. The Palembang campus route was divided into four points, A-B-C-D; route A-B exhibited the highest accuracy, 85.7%. In contrast, the Inderalaya campus route was established by adding routes with buildings closer to the road, with marker points allocated along the route A-B-C-D-E; route C-E exhibited the highest accuracy, 83.6%. Overall, this study shows that the Hector SLAM algorithm and LiDAR can be used to map the unknown environment of autonomous EVs.
Journal Article