Catalogue Search | MBRL
2,064 result(s) for "Driver assistance systems"
Autonomous driving and advanced driver-assistance systems (ADAS) : applications, development, legal issues, and testing
"Autonomous Driving and Advanced Driver Assistance Systems (ADAS) outlines the latest research relating to autonomous cars and advanced driver-assistance systems, including the development, testing and verification for real-time situations of sensor fusion, sensor placement, control algorithms, computer vision, and more. With an infinite number of real-time possibilities that need to be addressed, the methods and examples included make this book a valuable source of information for academic and industrial researchers, automotive companies and suppliers" -- Provided by publisher.
Performance Evaluation of MEMS-Based Automotive LiDAR Sensor and Its Simulation Model as per ASTM E3125-17 Standard
by Ludwig Kastner, Michael H. Köhler, Martin Jakobi
in advanced driver-assistance system; micro-electro-mechanical systems; automotive LiDAR sensor; ASTM E3125-17 standard; open simulation interface; functional mock-up interface; functional mock-up unit; point-to-point distance tests; user-selected tests; proving ground; PointPillars
2023
Measurement performance evaluation of real and virtual automotive light detection and ranging (LiDAR) sensors is an active area of research. However, no commonly accepted automotive standards, metrics, or criteria exist to evaluate their measurement performance. ASTM International released the ASTM E3125-17 standard for the operational performance evaluation of 3D imaging systems commonly referred to as terrestrial laser scanners (TLS). This standard defines the specifications and static test procedures to evaluate the 3D imaging and point-to-point distance measurement performance of TLS. In this work, we have assessed the 3D imaging and point-to-point distance estimation performance of a commercial micro-electro-mechanical system (MEMS)-based automotive LiDAR sensor and its simulation model according to the test procedures defined in this standard. The static tests were performed in a laboratory environment. In addition, a subset of static tests was also performed at the proving ground in natural environmental conditions to determine the 3D imaging and point-to-point distance measurement performance of the real LiDAR sensor. Real scenarios and environmental conditions were also replicated in the virtual environment of a commercial software tool to verify the working performance of the LiDAR model. The evaluation results show that the LiDAR sensor and its simulation model under analysis pass all the tests specified in the ASTM E3125-17 standard. The standard also helps to determine whether sensor measurement errors are due to internal or external influences. We have also shown that the 3D imaging and point-to-point distance estimation performance of LiDAR sensors significantly impacts the working performance of object recognition algorithms. For this reason, the standard can be beneficial for validating real and virtual automotive LiDAR sensors, at least in the early stages of development. Furthermore, the simulation and real measurements show good agreement on the point cloud and object recognition levels.
Journal Article
Prediction of Driver’s Intention of Lane Change by Augmenting Sensor Information Using Machine Learning Techniques
by Bong, Jae-Hwan; Park, Jooyoung; Park, Shinsuk
in Accuracy; advanced driver assistance system (ADAS); artificial neural network (ANN)
2017
Driver assistance systems have become a major safety feature of modern passenger vehicles. The advanced driver assistance system (ADAS) is one of the active safety systems that improve vehicle control performance and, thus, the safety of the driver and the passengers. To use the ADAS for lane change control, rapid and correct detection of the driver's intention is essential. This study proposes a novel preprocessing algorithm for the ADAS to improve the accuracy of classifying the driver's intention for lane change by augmenting basic measurements from conventional on-board sensors. The information on the vehicle states and the road surface condition is augmented using artificial neural network (ANN) models, and the augmented information is fed to a support vector machine (SVM) to detect the driver's intention with high accuracy. The feasibility of the developed algorithm was tested through driving simulator experiments. The results show that the classification accuracy for the driver's intention can be improved by providing an SVM model with sufficient driving information augmented by ANN models of vehicle dynamics.
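The two-stage scheme the abstract describes (ANN-based feature augmentation feeding an SVM classifier) can be sketched as follows. This is an illustrative toy, not the authors' code: the sensor channels, the hidden "slip" state, and all numeric values are hypothetical stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical raw on-board measurements: [speed, steering angle, yaw rate]
X_raw = rng.normal(size=(200, 3))
# Hypothetical hidden vehicle state (e.g. lateral slip) the ANN learns to estimate
slip = 0.5 * X_raw[:, 1] + 0.2 * X_raw[:, 2]
# Hypothetical label: lane-change intention (1) vs. lane keeping (0)
y = (X_raw[:, 1] + slip > 0).astype(int)

# Stage 1: an ANN model augments the raw measurements with an estimated state
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_raw, slip)
X_aug = np.column_stack([X_raw, ann.predict(X_raw)])

# Stage 2: an SVM classifies driver intention from the augmented feature vector
svm = SVC(kernel="rbf").fit(X_aug, y)
print(svm.score(X_aug, y))
```

The point of the sketch is the data flow: the classifier never sees the raw sensors alone, only the ANN-augmented feature vector.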
Journal Article
A Methodology to Model the Rain and Fog Effect on the Performance of Automotive LiDAR Sensors
by Michael H. Köhler, Martin Jakobi, Lukas Haas
in Accuracy; advanced driver-assistance system; Analysis
2023
In this work, we introduce a novel approach to model the effect of rain and fog on the performance of light detection and ranging (LiDAR) sensors for simulation-based testing of LiDAR systems. The proposed methodology simulates the rain and fog effect through rigorous application of Mie scattering theory, in the time domain for transient analyses and at the point cloud level for spatial analyses. The time domain analysis permits us to benchmark the virtual LiDAR signal attenuation and signal-to-noise ratio (SNR) caused by rain and fog droplets. In addition, the detection rate (DR), false detection rate (FDR), and distance error of the virtual LiDAR sensor due to rain and fog droplets are evaluated at the point cloud level. The mean absolute percentage error (MAPE) is used to quantify the agreement between the simulation and real measurement results at the time domain and point cloud levels for the rain and fog droplets. The results of the simulation and real measurements match well at both levels when the simulated and real rain distributions are the same. Both the real and virtual LiDAR sensor performance degrades more under the influence of fog droplets than under rain.
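The MAPE metric used above to compare real and simulated readings is simple enough to sketch directly; the sample amplitudes below are made-up illustrative values, not measurements from the paper.

```python
def mape(measured, simulated):
    """Mean absolute percentage error between real and simulated readings."""
    if len(measured) != len(simulated):
        raise ValueError("series must have equal length")
    return 100.0 / len(measured) * sum(
        abs(m - s) / abs(m) for m, s in zip(measured, simulated)
    )

# Hypothetical LiDAR echo amplitudes: real sensor vs. simulation model
real = [0.80, 0.65, 0.50]
sim = [0.78, 0.66, 0.47]
print(round(mape(real, sim), 2))  # → 3.35
```

A low MAPE (a few percent) indicates the simulated signal tracks the real one closely; the metric is scale-free, so it applies equally to transient amplitudes and point-cloud distances.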
Journal Article
Camera calibration for the surround-view system: a benchmark and dataset
by Qin, Leidong; Lin, Chunyu; Huang, Shujuan
in Advanced driver assistance systems; Algorithms; Annotations
2024
The surround-view system (SVS) is widely used in advanced driver assistance systems (ADAS). An SVS uses four fish-eye lenses to monitor real-time scenes around the vehicle. However, accurate intrinsic and extrinsic parameter estimation is required for the proper functioning of the system. At present, intrinsic calibration can be pipelined using checkerboard algorithms, while extrinsic calibration is still immature. Therefore, we propose a dedicated calibration pipeline to estimate the extrinsic parameters robustly. The scheme takes a driving sequence from the four cameras as input. It first uses lane lines to roughly estimate each camera pose. Considering the differing environmental conditions seen by each camera, we separately select between two strategies to accurately estimate the extrinsic parameters. For the front and rear cameras, we propose a method that mutually iterates line detection and pose estimation. For the two side cameras, we iteratively adjust the camera pose and position by minimizing the texture and edge error between the ground projections of adjacent cameras. After estimating the extrinsic parameters, the surround-view image can be synthesized by homography-based transformation. The proposed pipeline robustly estimates the extrinsic parameters of all four SVS cameras in real driving environments. In addition, to evaluate the proposed scheme, we build a surround-view fish-eye dataset containing 40 videos with 32,000 frames, acquired from different real traffic scenarios. All the frames in each video are manually labeled with lane annotations and ground-truth extrinsic parameters. This surround-view dataset can also be used by other researchers to evaluate their own methods. The dataset will be available soon.
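The homography-based synthesis step mentioned above can be illustrated with a minimal point-warping helper. This is a sketch, not the authors' pipeline; the matrix and points are arbitrary example values.

```python
import numpy as np

def warp_points(H, pts):
    """Project Nx2 pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale

# A pure-translation homography shifts every point by (5, -2)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[10.0, 20.0], [30.0, 40.0]])
print(warp_points(H, pts))  # → [[15. 18.] [35. 38.]]
```

In an SVS, each camera's estimated extrinsics yield such a ground-plane homography, and warping all four images into the common ground frame produces the stitched bird's-eye view.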
Journal Article
Identifying and managing data quality requirements: a design science study in the field of automated driving
by Knauss, Eric; Pradhan, Shameer Kumar; Heyn, Hans-Martin
in Advanced driver assistance systems; Autonomous vehicles; Data
2024
Good data quality is crucial for any data-driven system’s effective and safe operation. For critical safety systems, the significance of data quality is even higher since incorrect or low-quality data may cause fatal faults. However, there are challenges in identifying and managing data quality. In particular, there is no accepted process to define and continuously test data quality concerning what is necessary for operating the system. This lack is problematic because even safety-critical systems become increasingly dependent on data. Here, we propose a Candidate Framework for Data Quality Assessment and Maintenance (CaFDaQAM) to systematically manage data quality and related requirements based on design science research. The framework is constructed based on an advanced driver assistance system (ADAS) case study. The study is based on empirical data from a literature review, focus groups, and design workshops. The proposed framework consists of four components: a Data Quality Workflow, a List of Data Quality Challenges, a List of Data Quality Attributes, and Solution Candidates. Together, the components act as tools for data quality assessment and maintenance. The candidate framework and its components were validated in a focus group.
Journal Article
Towards applying image retrieval approach for finding semantic locations in autonomous vehicles
by Unar, Salahuddin; Liu, Pengbo; Wang, Yafei
in Advanced driver assistance systems; Autonomous vehicles; Color
2024
Today's world is indisputably digital, and its technologies are advancing more rapidly than ever. Recent scientific and engineering progress in autonomous vehicles (AVs) and advanced driver assistance systems (ADAS) suggests that autonomous vehicles will become fully functional without human intervention in the near future. Current ADAS methods are good at realizing different modes of AV operation; however, they still fall short in handling uncertain situations, such as deciding the specific location to stop. To overcome this, we propose a novel image retrieval approach for finding semantic locations using robust features and color information. First, the proposed method offers different image categories, and the driver selects a query image of the semantic location. Second, the method extracts the image's salient features and color information using the proposed technique. Third, the method computes the similarity between the query image and the dataset images. Finally, if the similarity exceeds a threshold, the method asks the driver for appropriate actions (e.g., slow down or stop). Experimental results on three benchmark datasets show the efficiency and accuracy of the proposed method for finding semantic locations in autonomous vehicles.
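The query/dataset similarity step can be sketched with one common choice of color feature, a normalized per-channel histogram compared by histogram intersection. This is an assumed, generic formulation for illustration; the paper's actual features and similarity measure may differ, and the random images stand in for real road scenes.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel color histogram, normalized to sum to 1."""
    hist = np.concatenate(
        [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    ).astype(float)
    return hist / hist.sum()

def similarity(a, b):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(a, b).sum())

rng = np.random.default_rng(1)
query = rng.integers(0, 256, size=(32, 32, 3))  # stand-in for the query image
match = query.copy()                            # an identical dataset image
other = rng.integers(0, 256, size=(32, 32, 3))  # an unrelated dataset image

print(round(similarity(color_histogram(query), color_histogram(match)), 6))
print(similarity(color_histogram(query), color_histogram(other)))
```

Retrieval then reduces to ranking dataset images by this score and triggering the driver prompt once the best score clears a chosen threshold.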
Journal Article
Automated Lane Centering: An Off-the-Shelf Computer Vision Product vs. Infrastructure-Based Chip-Enabled Raised Pavement Markers
by Kadav, Parth; Wang, Chieh (Ross); Meyer, Richard T.
in ADAS; advanced driver assistance systems (ADAS)
2024
Safe autonomous vehicle (AV) operations depend on an accurate perception of the driving environment, which necessitates the use of a variety of sensors. Computational algorithms must then process all of this sensor data, which typically results in a high on-vehicle computational load. For example, existing lane markings are designed for human drivers, can fade over time, and can be contradictory in construction zones, which requires specialized sensing and computational processing in an AV. But this standard process can be avoided if the lane information is simply transmitted directly to the AV. High-definition maps and roadside units (RSUs) can be used for direct data transmission to the AV, but can be prohibitively expensive to establish and maintain. Additionally, to ensure robust and safe AV operations, more redundancy is beneficial, and a cost-effective, passive solution is essential to address this need. In this research, we propose a new infrastructure information source (IIS): chip-enabled raised pavement markers (CERPMs), which provide environmental data to the AV while also decreasing the AV compute load and the associated increase in vehicle energy use. CERPMs are installed in place of the traditional, ubiquitous raised pavement markers along road lane lines and transmit geospatial information, along with the speed limit, directly to nearby vehicles using the long range wide area network (LoRaWAN) protocol. This information is then compared to the Mobileye commercial off-the-shelf system, which uses computer vision processing of lane markings. Our perception subsystem processes the raw data from both CERPMs and Mobileye to generate the viable path required for a lane centering (LC) application. To evaluate the detection performance of both systems, we consider three test routes with varying conditions. Our results show that the Mobileye system failed to detect lane markings when the road curvature exceeded ±0.016 m⁻¹; for the steep-curvature test scenario, it could detect lane markings on both sides of the road for just 6.7% of the given test route. The CERPMs, on the other hand, transmit the programmed geospatial information to the perception subsystem on the vehicle to generate the reference trajectory required for vehicle control, and they successfully generated this trajectory in all test scenarios. Moreover, the CERPMs can be detected up to 340 m from the vehicle's position. Our overall conclusion is that CERPM technology is viable and has the potential to address the operational robustness and energy efficiency concerns plaguing the current generation of AVs.
Journal Article
Temporal and Fine-Grained Pedestrian Action Recognition on Driving Recorder Database
by Satoh, Yutaka; Kataoka, Hirokatsu; Aoki, Yoshimitsu
in advanced driver-assistance systems (ADAS); driving recorder; fine-grained pedestrian action recognition
2018
The paper presents the emerging issue of fine-grained pedestrian action recognition, which enables advanced pre-crash safety by estimating a pedestrian's intention in advance. Fine-grained pedestrian actions include visually slight differences (e.g., walking straight versus crossing), which are difficult to distinguish from each other. It is believed that fine-grained action recognition enables pedestrian intention estimation for helpful advanced driver-assistance systems (ADAS). The following difficulties must be addressed to achieve fine-grained and accurate pedestrian action recognition: (i) to analyze the fine-grained motion of a pedestrian appearing in a vehicle-mounted driving recorder, a method to describe subtle changes of motion characteristics occurring in a short time is necessary; (ii) even when the background moves greatly due to the driving of the vehicle, subtle changes in the pedestrian's motion must still be detected; (iii) the collection of large-scale fine-grained actions is very difficult, so a relatively small database should be the focus. We investigate how to learn an effective recognition model with only a small-scale database, thoroughly evaluating several configurations to explore an effective approach to fine-grained pedestrian action recognition without a large-scale database. Moreover, two different datasets have been collected in order to raise the issue. Finally, our proposal attained 91.01% on the National Traffic Science and Environment Laboratory database (NTSEL) and 53.23% on the near-miss driving recorder database (NDRDB), improving on the baseline two-stream fusion ConvNets by +8.28% and +6.53%, respectively.
Journal Article