Catalogue Search | MBRL
Explore the vast range of titles available.
45 result(s) for "Jakobi, Martin"
A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors
by Michael H. Köhler, Martin Jakobi, Lukas Haas
in Accuracy, advanced driver-assistance system, Analysis
2023
In this work, we introduce a novel approach to modeling the effect of rain and fog on light detection and ranging (LiDAR) sensor performance for the simulation-based testing of LiDAR systems. The proposed methodology simulates the rain and fog effect through rigorous application of Mie scattering theory, in the time domain for transient analyses and at the point cloud level for spatial analyses. The time domain analysis allows us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) degradation caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error d_error of the virtual LiDAR sensor under rain and fog are evaluated at the point cloud level. The mean absolute percentage error (MAPE) is used to quantify the agreement between simulation and real measurement results, in the time domain and at the point cloud level, for both rain and fog droplets. The simulation and real measurements match well at both levels when the simulated and real rain distributions are the same. Both the real and the virtual LiDAR sensor performance degrade more under fog droplets than under rain.
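As an aside, the MAPE metric this abstract uses to compare simulated and measured signals can be sketched as follows; the function name and sample values are illustrative, not taken from the paper:

```python
import numpy as np

def mape(measured: np.ndarray, simulated: np.ndarray) -> float:
    """Mean absolute percentage error between measured and simulated values.

    Assumes the measured values are nonzero; returns a percentage.
    """
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.mean(np.abs((measured - simulated) / measured)) * 100.0)

# Example: comparing a measured and a simulated return-amplitude trace
real = np.array([1.00, 0.80, 0.65, 0.50])
sim = np.array([0.98, 0.82, 0.63, 0.52])
print(f"MAPE: {mape(real, sim):.2f}%")  # → MAPE: 2.89%
```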
Journal Article
Development of High-Fidelity Automotive LiDAR Sensor Model with Standardized Interfaces
by
Köhler, Michael H.
,
Cichy, Yannik
,
Fink, Maximilian
in
advanced driver-assistance systems
,
Automobile industry
,
automotive LiDAR sensor
2022
This work introduces a process for developing a tool-independent, high-fidelity, ray-tracing-based light detection and ranging (LiDAR) model. The virtual LiDAR sensor includes accurate modeling of the scan pattern and the complete signal processing toolchain of a LiDAR sensor. It was developed as a functional mock-up unit (FMU) using the standardized open simulation interface (OSI) 3.0.2 and functional mock-up interface (FMI) 2.0, and subsequently integrated into two commercial virtual environment frameworks to demonstrate its exchangeability. Furthermore, the accuracy of the LiDAR sensor model is validated by comparing simulation and real measurement data in the time domain and at the point cloud level. The validation results show that the mean absolute percentage error (MAPE) between the simulated and measured time domain signal amplitudes is 1.7%. In addition, the MAPE of the number of points N_points and the mean intensity I_mean received from the virtual and real targets are 8.5% and 9.3%, respectively. To the authors' knowledge, these are the smallest errors reported to date for N_points and I_mean. Moreover, the distance error d_error is below the range accuracy of the actual LiDAR sensor, which is 2 cm for this use case. The proving ground measurement results are also compared with a state-of-the-art LiDAR model provided by commercial software and with the proposed LiDAR model to gauge the presented model's fidelity. The results show that the complete signal processing chain and the imperfections of real LiDAR sensors must be considered in the virtual LiDAR to obtain simulation results close to the actual sensor. Such imperfections include optical losses, inherent detector effects, effects generated by the electrical amplification, and noise produced by sunlight.
Journal Article
Velocity Estimation from LiDAR Sensors Motion Distortion Effect
by
Koch, Alexander W.
,
Jakobi, Martin
,
Zeh, Thomas
in
advanced driver assistance systems
,
Building automation
,
deep learning
2023
Many modern automated vehicle sensor systems use light detection and ranging (LiDAR) sensors. The prevailing technology is scanning LiDAR, in which a collimated laser beam illuminates objects sequentially, point by point, to capture 3D range data. In current systems, the point clouds from LiDAR sensors are mainly used for object detection. To estimate the velocity of an object of interest (OoI) in the point cloud, object tracking or sensor data fusion is needed. Scanning LiDAR sensors exhibit the motion distortion effect, which occurs when objects move relative to the sensor. Often, this effect is filtered out using sensor data fusion so that an undistorted point cloud can be used for object detection. In this study, we developed a method using an artificial neural network to estimate an object's velocity and direction of motion in the sensor's field of view (FoV) from the motion distortion effect alone, without any sensor data fusion. The network was trained and evaluated on a synthetic dataset featuring the motion distortion effect. With the method presented in this paper, the velocity and direction of an OoI that moves independently of the sensor can be estimated from a single point cloud using only one sensor. The method achieves a root mean squared error (RMSE) of 0.1187 m s⁻¹ and a two-sigma confidence interval of [−0.0008 m s⁻¹, 0.0017 m s⁻¹] for the axis-wise estimation of an object's relative velocity, and an RMSE of 0.0815 m s⁻¹ and a two-sigma confidence interval of [0.0138 m s⁻¹, 0.0170 m s⁻¹] for the estimation of the resultant velocity. The extracted velocity information (4D LiDAR) is available for motion prediction and object tracking and can yield more reliable velocity data through added redundancy in sensor data fusion.
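The motion distortion effect the abstract builds on can be illustrated with a toy model; the function name and the linear per-point timing assumption are our own, not the paper's simulation:

```python
import numpy as np

def distort_points(points: np.ndarray, velocity: np.ndarray,
                   scan_duration: float) -> np.ndarray:
    """Apply a simple motion distortion model to an object's points.

    A scanning LiDAR captures points sequentially: a point sampled at time
    t_i sees the object displaced by velocity * t_i. Points are assumed to
    be ordered by acquisition time over one scan (illustrative model only).
    """
    n = len(points)
    times = np.linspace(0.0, scan_duration, n)          # per-point timestamps
    return points + times[:, None] * velocity[None, :]  # shift each point

# A 1 m wide object front sampled left-to-right during a 0.1 s scan while
# moving at 5 m/s along x: the captured edge appears stretched by 0.5 m.
pts = np.column_stack([np.linspace(0.0, 1.0, 11), np.full(11, 20.0)])
distorted = distort_points(pts, np.array([5.0, 0.0]), 0.1)
print(distorted[-1, 0] - distorted[0, 0])  # apparent width → 1.5
```

A network trained on such distorted clouds can invert this mapping to recover the velocity, which is the core idea of the paper.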
Journal Article
LiDAR Sensor Parameter Augmentation and Data-Driven Influence Analysis on Deep-Learning-Based People Detection
2025
Light detection and ranging (LiDAR) sensor technology for people detection offers a significant advantage in data protection. However, to design these systems cost- and energy-efficiently, the relationship between the measurement data and the final object detection output of deep neural networks (DNNs) has to be elaborated. Therefore, this paper presents augmentation methods to analyze the influence of the distance, resolution, noise, and shading parameters of a LiDAR sensor in real point clouds for people detection. Furthermore, their influence on object detection using DNNs was investigated. A significant reduction in the quality requirements for the point clouds was possible for the measurement setup, with only minor degradation at the object list level. The DNNs PointVoxel Region-based Convolutional Neural Network (PV-RCNN) and Sparsely Embedded Convolutional Detection (SECOND) both show a reduction in object detection of less than 5% with the resolution reduced by a factor of up to 32, the distance increased by a factor of 4, and Gaussian noise of up to μ = 0 and σ = 0.07. In addition, both networks require an unshaded height of approximately 0.5 m from a detected person's head downwards to ensure good detection performance without special training for these cases. The results obtained, such as the shadowing information, were transferred to a software program that determines the minimum number of sensors and their orientation from the sensor mounting height, the sensor parameters, and the ground area under consideration, both at the point cloud level and at the object detection level.
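Two of the augmentations named in this abstract, Gaussian noise and resolution reduction, can be sketched in a simple form; the function names and the keep-every-nth-point downsampling scheme are assumptions for illustration, not the paper's exact methods:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(points: np.ndarray, mu: float = 0.0,
                       sigma: float = 0.07) -> np.ndarray:
    """Perturb each coordinate with Gaussian noise (mu, sigma in metres)."""
    return points + rng.normal(mu, sigma, size=points.shape)

def reduce_resolution(points: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th point to emulate a coarser sensor resolution."""
    return points[::factor]

cloud = rng.uniform(-10.0, 10.0, size=(4096, 3))
noisy = add_gaussian_noise(cloud)       # sigma = 0.07 as in the study
sparse = reduce_resolution(cloud, 32)   # resolution reduced by a factor of 32
print(len(sparse))  # → 128
```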
Journal Article
Performance Evaluation of MEMS-Based Automotive LiDAR Sensor and Its Simulation Model as per ASTM E3125-17 Standard
by Ludwig Kastner, Michael H. Köhler, Martin Jakobi
in micro-electro-mechanical systems, automotive LiDAR sensor, ASTM E3125-17 standard, advanced driver-assistance system, open simulation interface, functional mock-up interface, functional mock-up unit, point-to-point distance tests, user-selected tests, proving ground, PointPillars
2023
Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). This standard defines the specifications and static test procedures for evaluating the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in this standard. The static tests were performed in a laboratory environment, and a subset of them was repeated at the proving ground under natural environmental conditions to determine the performance of the real LiDAR sensor. The real scenarios and environmental conditions were also replicated in the virtual environment of a commercial software package to verify the LiDAR model's working performance. The evaluation results show that the LiDAR sensor and its simulation model pass all the tests specified in the ASTM E3125-17 standard. The standard helps to establish whether sensor measurement errors stem from internal or external influences. We have also shown that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms. This is why the standard can be beneficial for validating real and virtual automotive LiDAR sensors, at least in the early stages of development. Furthermore, the simulation and real measurements show good agreement at the point cloud and object recognition levels.
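A point-to-point distance test of the kind the standard describes reduces to comparing a measured inter-target distance against a reference; the centroid-based formulation and all values below are a hedged sketch, not the standard's exact procedure:

```python
import numpy as np

def point_to_point_distance(cluster_a: np.ndarray,
                            cluster_b: np.ndarray) -> float:
    """Distance between the centroids of two target point clusters (metres)."""
    return float(np.linalg.norm(cluster_a.mean(axis=0) - cluster_b.mean(axis=0)))

# Two measured target clusters versus a 5.000 m reference distance
a = np.array([[0.00, 0.01, 0.0], [0.00, -0.01, 0.0]])
b = np.array([[5.01, 0.01, 0.0], [5.01, -0.01, 0.0]])
d_measured = point_to_point_distance(a, b)
d_error = d_measured - 5.0
print(f"d_error = {d_error * 100:.1f} cm")  # → d_error = 1.0 cm
```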
Journal Article
Optical Setup for Error Compensation in a Laser Triangulation System
by
Köhler, Michael H.
,
Batarilo, Lorena
,
Akgül, Markus
in
absolute distance measurement (ADM)
,
Accuracy
,
Algorithms
2020
Absolute distance measurement is a field of research with a large variety of applications. Laser triangulation is a well-tested and mature technique that uses geometric relations to calculate the absolute distance to an object. Its advantages include a simple and cost-effective setup combined with high achievable accuracy and resolution at short distances. A main problem of the technology is that even small changes in the optomechanical setup, e.g., due to thermal expansion, lead to significant measurement errors. Therefore, in this work, we introduce an optical setup containing only a beam splitter and a mirror, which splits the laser into a measurement beam and a reference beam. The reference beam can then be used to compensate for different error sources, such as laser beam dithering or shifts of the measurement setup caused by the thermal expansion of its components. The effectiveness of this setup is demonstrated by extensive simulations and measurements. The compensation setup improves the deviation in static measurements by up to 75%, and the measurement uncertainty at a distance of 1 m can be reduced to 85 μm. Consequently, this compensation setup can improve the accuracy of classical laser triangulation devices and make them more robust against changing environmental conditions.
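The geometric relation behind laser triangulation, and the reference-beam idea, can be sketched with the textbook idealization z = b·f/x (perpendicular setup, thin lens); all names and numbers below are illustrative and the paper's actual optics differ in detail:

```python
def triangulation_distance(baseline_m: float, focal_length_m: float,
                           spot_offset_m: float) -> float:
    """Idealized laser triangulation: object distance z = b * f / x,
    where x is the laser spot's offset on the detector."""
    return baseline_m * focal_length_m / spot_offset_m

# b = 50 mm baseline, f = 25 mm lens, spot 1.25 mm from the optical axis
z = triangulation_distance(0.050, 0.025, 0.00125)
print(f"{z:.2f} m")  # → 1.00 m

# Reference-beam compensation (sketch): drift that is common to both beams,
# e.g. from thermal expansion, is observed on the reference beam and
# subtracted from the measurement-beam spot position before evaluating z.
x_meas, x_ref_drift = 0.00135, 0.00010
z_comp = triangulation_distance(0.050, 0.025, x_meas - x_ref_drift)
```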
Journal Article
Dimensionality reduction in hyperspectral imaging using standard deviation-based band selection for efficient classification
2025
Hyperspectral imaging generates vast amounts of data containing spatial and spectral information. Dimensionality reduction methods can reduce the data size while preserving essential spectral features and are grouped into feature extraction and band selection methods. This study demonstrates the efficiency of the standard deviation as a band selection approach, combined with a straightforward convolutional neural network, for classifying organ tissues with high spectral similarity. To evaluate the classification performance, the method was applied to eleven groups of different organ samples, each consisting of 100 datasets. The standard deviation proves an effective criterion for dimensionality reduction: it maintains the characteristic spectral features while decreasing the data size by up to 97.3%, achieving a classification accuracy of 97.21% compared with 99.30% without any processing. Even in comparison with mutual information– and Shannon entropy–based band selection methods, the standard deviation exhibited superior stability and efficiency while maintaining equally high classification accuracy. The results highlight the potential of dimensionality reduction for hyperspectral imaging classification tasks that require large datasets and fast processing without sacrificing accuracy.
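Standard-deviation-based band selection can be sketched as ranking bands by their spatial variability and keeping the top scorers; the assumption that higher-variance bands are retained, and all names below, are ours and may differ from the paper's exact criterion:

```python
import numpy as np

def select_bands_by_std(cube: np.ndarray, n_bands: int) -> np.ndarray:
    """Pick the n_bands spectral bands with the highest spatial standard
    deviation from a (height, width, bands) hyperspectral cube."""
    stds = cube.reshape(-1, cube.shape[-1]).std(axis=0)  # std per band
    return np.sort(np.argsort(stds)[-n_bands:])          # kept band indices

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, size=(32, 32, 200))
cube[..., 10] *= 5.0   # make two bands clearly more variable than the rest
cube[..., 42] *= 5.0
print(select_bands_by_std(cube, 2))  # → [10 42]
```

The reduced cube `cube[..., selected]` then feeds the downstream classifier, which is where the reported data-size savings come from.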
Journal Article
MEMS-Scanner Testbench for High Field of View LiDAR Applications
2021
LiDAR sensors are a key technology for enabling safe autonomous cars. For highway applications, such systems must have a long range, and the covered field of view (FoV) of >45° must be scanned with a resolution finer than 0.1°. These specifications can be met by modern MEMS scanners, which are chosen for their robustness and scalability. For the automotive market, these sensors, and especially the scanners within them, must be tested to the highest standards. We propose a novel measurement setup for characterizing and validating such scanners based on a position-sensitive detector (PSD), imaging a deflected laser beam from a diffuser screen onto the PSD. A so-called ray trace shifting technique (RTST) is used to minimize manual calibration effort, reduce external mounting errors, and enable dynamic one-shot measurements of the scanner's steering angle over large FoVs. This paper describes the overall setup and the calibration method, which follows a standard camera calibration. We further demonstrate the setup's capabilities by validating it with a statically set rotation stage and a dynamically oscillating MEMS scanner. The setup was found to be capable of measuring LiDAR MEMS scanners with a maximum FoV of 47° dynamically, with an uncertainty of less than 1%.
Journal Article
Fiber Bragg Sensors Embedded in Cast Aluminum Parts: Axial Strain and Temperature Response
by
Roths, Johannes
,
Koch, Alexander W.
,
Lindner, Markus
in
Additive manufacturing
,
Aluminum
,
casting
2021
In this study, the response of fiber Bragg gratings (FBGs) embedded in cast aluminum parts under thermal and mechanical load was investigated. Several types of FBGs in different types of fibers were used in order to verify general applicability. To monitor temperature-induced strain, a regenerated FBG (RFBG) embedded in a cast part was placed in a climatic chamber and heated to 120 °C over several cycles. The results show good agreement with a theoretical model consisting of a shrink-fit model and temperature-dependent material parameters. Several cast parts with different types of FBGs were machined into tensile test specimens, and tensile tests were carried out. A cyclic test procedure was chosen, which allowed us to distinguish between the elastic and plastic deformation of the specimen. An analytical model describing the elastic part of the tensile test was introduced and showed good agreement with the measurements. The embedded FBGs, integrated during the casting process, showed no hysteresis, a reproducible sensor response, and highly reliable operation under all mechanical and thermal load conditions, which is essential for creating metallic smart structures and packaged fiber-optic sensors for harsh environments.
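The strain and temperature response of an FBG follows the textbook relation Δλ/λ = (1 − p_e)·ε + (α + ξ)·ΔT; the coefficient values below are typical silica-fiber figures chosen for illustration, not the calibration constants of this paper:

```python
def fbg_shift_nm(wavelength_nm: float, strain: float, delta_t: float,
                 p_e: float = 0.22, alpha: float = 0.55e-6,
                 xi: float = 8.6e-6) -> float:
    """Bragg wavelength shift for a given axial strain and temperature change.

    p_e: effective photo-elastic coefficient; alpha: thermal expansion
    coefficient of silica; xi: thermo-optic coefficient (illustrative values).
    """
    return wavelength_nm * ((1.0 - p_e) * strain + (alpha + xi) * delta_t)

# 100 µε of axial strain plus a 100 K temperature rise at 1550 nm
shift = fbg_shift_nm(1550.0, 100e-6, 100.0)
print(f"{shift:.3f} nm")  # → 1.539 nm
```

With these values the strain sensitivity comes out near the commonly quoted ~1.2 pm/µε at 1550 nm; an embedded grating additionally sees the shrink-fit strain the abstract's model accounts for.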
Journal Article
Online 3D Displacement Measurement Using Speckle Interferometer with a Single Illumination-Detection Path
2018
Measurement systems for online, nondestructive, full-field three-dimensional (3D) displacement based on single-shot and multiplexing techniques are attracting growing interest, especially throughout the manufacturing industries. This paper proposes an accurate and easy-to-implement method based on an electronic speckle pattern interferometer (ESPI) with a single illumination-detection path to realize online nondestructive full-field 3D displacement measurement. The simple and compact optical system generates three different sensitivity vectors to enable the evaluation of the three orthogonal displacement components. By applying the spatial carrier phase-shifting technique, the desired information can be obtained in real time. The theoretical analysis and the measurement results prove the feasibility of this ESPI system and quantify its relative measurement error.
Journal Article