Asset Details
Dynamic Occupancy Grid Map with Semantic Information Using Deep Learning-Based BEVFusion Method with Camera and LiDAR Fusion
by Ahn, Kyungjae; Jang, Harin; Jeon, Soo; Kim, Taehyun; Kang, Yeonsik
in: autonomous vehicles / Cameras / Deep learning / Driverless cars / Image processing / Methods / Neural networks / occupancy grid map / Optical radar / particle filters / Remote sensing / Robotics / semantic grid map / Semantics / sensor fusion / Sensors / Velocity
2024
Journal Article
Overview
In the field of robotics and autonomous driving, dynamic occupancy grid maps (DOGMs) are typically used to represent the position and velocity information of objects. Although three-dimensional light detection and ranging (LiDAR) sensor-based DOGMs have been actively researched, they have limitations, as they cannot classify types of objects. Therefore, in this study, a deep learning-based camera–LiDAR sensor fusion technique is employed as input to DOGMs. Consequently, not only the position and velocity information of objects but also their class information can be updated, expanding the application areas of DOGMs. Moreover, unclassified LiDAR point measurements contribute to the formation of a map of the surrounding environment, improving the reliability of perception by registering objects that were not classified by deep learning. To achieve this, we developed update rules on the basis of the Dempster–Shafer evidence theory, incorporating class information and the uncertainty of objects occupying grid cells. Furthermore, we analyzed the accuracy of the velocity estimation using two update models. One assigns the occupancy probability only to the edges of the oriented bounding box, whereas the other assigns the occupancy probability to the entire area of the box. The performance of the developed perception technique is evaluated using the public nuScenes dataset. The developed DOGM with object class information will help autonomous vehicles to navigate in complex urban driving environments by providing them with rich information, such as the class and velocity of nearby obstacles.
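The update rules described above combine evidence about a grid cell using the Dempster–Shafer theory. As a minimal illustration of that combination step (not the authors' implementation; the frame of discernment, mass values, and function names below are assumptions for the sketch), Dempster's rule fuses a cell's prior belief with a new camera–LiDAR detection while redistributing conflicting mass:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule, normalizing out the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible hypotheses reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses produce conflict mass
            conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Illustrative frame of discernment for one grid cell:
# occupied-by-vehicle (V), occupied-by-pedestrian (P), or free (F).
V, P, F = frozenset("V"), frozenset("P"), frozenset("F")
theta = V | P | F  # total ignorance (the whole frame)

# Prior cell belief: mostly unknown, weak evidence of a vehicle.
m_cell = {V: 0.1, theta: 0.9}
# New fused detection: strong evidence the cell holds a vehicle.
m_meas = {V: 0.7, theta: 0.3}

m_post = dempster_combine(m_cell, m_meas)
print(round(m_post[V], 3))   # belief in "vehicle" after the update
```

Here the vehicle mass rises from 0.1 to 0.73 after one consistent measurement, while the remaining mass stays on the ignorance set; in the paper's setting each cell would carry such masses per semantic class and be updated every scan.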
Publisher
MDPI AG