83 result(s) for "Automated vehicles Data processing."
Lidar IMU fusion navigation system for AGVs in smart factories
Automated Guided Vehicles (AGVs) are vital to smart factories, enabling autonomous and efficient material transport. However, precise navigation is challenging because LiDAR provides high-dimensional, dynamic spatial data, while Inertial Measurement Unit (IMU) signals are often intermittent, leading to inconsistencies and navigation drift. This work proposes the Screened Inertial Data Fusion Method (SIDFM), a novel framework that systematically screens LiDAR data using a minimal differential function and fuses it with IMU intervals through linear regression learning. The SIDFM approach ensures that only consistent LiDAR points are integrated with IMU data, reducing mismatches and improving motion estimation. SIDFM was validated using a benchmark AGV dataset and compared against baseline LiDAR-IMU fusion methods under varying acceleration conditions. Results show that SIDFM reduces navigation errors by 12.09% at low acceleration and 11.43% at high acceleration while also significantly decreasing positioning errors. These improvements enhance the stability, precision, and safety of AGVs in dynamic manufacturing environments. The findings establish SIDFM as an effective and practical solution for robust AGV navigation, with potential applications in smart factories, warehouses, and autonomous mobility systems that demand both efficiency and reliability.
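The SIDFM implementation is not public; the following is a minimal sketch of the screen-then-fuse idea the abstract describes, assuming a simple differential threshold for the LiDAR screening step and an ordinary least-squares fit for the "linear regression learning" step. All names, thresholds, and data are illustrative, not the authors' method.

```python
# Hypothetical sketch of a screen-then-fuse pipeline in the spirit of SIDFM.

def screen_lidar(ranges, max_diff=1.5):
    """Keep only LiDAR samples whose jump from the previously accepted
    sample stays below a threshold (a minimal differential test)."""
    kept = [ranges[0]]
    for r in ranges[1:]:
        if abs(r - kept[-1]) <= max_diff:
            kept.append(r)
    return kept

def fit_linear(xs, ys):
    """Ordinary least-squares line y = a*x + b, standing in for the
    regression that maps IMU intervals onto screened LiDAR motion."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

imu_t = [0.0, 1.0, 2.0]                      # IMU interval timestamps
lidar = screen_lidar([0.0, 1.1, 9.0, 2.1])   # 9.0 is rejected as inconsistent
a, b = fit_linear(imu_t, lidar)              # fuse the consistent points only
```

Only the screened (mutually consistent) LiDAR points enter the regression, which is the mechanism the abstract credits for reducing mismatches between the two sensor streams.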
A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driver towards future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, when looking at technological solutions for obstacle detection in extreme weather conditions (rain, snow, fog), and in some specific situations in urban areas (shadows, reflections, potholes, insufficient illumination), although already quite advanced, the current developments appear to be not sophisticated enough to guarantee 100% precision and accuracy, hence further valiant effort is needed.
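The review's suggestion of combining technologies can be illustrated with a toy confidence-weighted fusion of per-sensor detections. The sensors, weights, and decision threshold below are illustrative assumptions, not anything prescribed by the paper:

```python
# Toy example: fuse per-sensor obstacle confidences with fixed weights.
# Weights and the 0.5 decision threshold are illustrative only.

WEIGHTS = {"lidar": 0.4, "radar": 0.3, "camera": 0.2, "ultrasonic": 0.1}

def fuse_detections(confidences):
    """Return True if the weighted sum of per-sensor confidences
    (each in [0, 1]) crosses the decision threshold."""
    score = sum(WEIGHTS[s] * c for s, c in confidences.items())
    return score >= 0.5

# In fog, the camera confidence collapses but radar still sees the obstacle:
foggy = {"lidar": 0.6, "radar": 0.9, "camera": 0.1, "ultrasonic": 0.2}
obstacle_present = fuse_detections(foggy)
```

The point mirrors the review's conclusion: no single modality is reliable in all conditions, so a combined score degrades more gracefully than any one sensor alone.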
Quantitative mapping and predictive modeling of Mn nodules' distribution from hydroacoustic and optical AUV data linked by random forests machine learning
In this study, high-resolution bathymetric multibeam and optical image data, both obtained within the Belgian manganese (Mn) nodule mining license area by the autonomous underwater vehicle (AUV) Abyss, were combined in order to create a predictive random forests (RF) machine learning model. AUV bathymetry reveals small-scale terrain variations, allowing slope estimations and calculation of bathymetric derivatives such as slope, curvature, and ruggedness. Optical AUV imagery provides quantitative information regarding the distribution (number and median size) of Mn nodules. Within the area considered in this study, Mn nodules show a heterogeneous and spatially clustered pattern, and their number per square meter is negatively correlated with their median size. A prediction of the number of Mn nodules was achieved by combining information derived from the acoustic and optical data using a RF model. This model was tuned by examining the influence of the training set size, the number of growing trees (ntree), and the number of predictor variables to be randomly selected at each node (mtry) on the RF prediction accuracy. The use of larger training data sets with higher ntree and mtry values increases the accuracy. To estimate the Mn-nodule abundance, these predictions were linked to ground-truth data acquired by box coring. Linking optical and hydroacoustic data revealed a nonlinear relationship between the Mn-nodule distribution and topographic characteristics. This highlights the importance of a detailed terrain reconstruction for a predictive modeling of Mn-nodule abundance. In addition, this study underlines the necessity of a sufficient spatial distribution of the optical data to provide reliable modeling input for the RF.
Airborne Drones for Water Quality Mapping in Inland, Transitional and Coastal Waters—MapEO Water Data Processing and Validation
Using airborne drones to monitor water quality in inland, transitional or coastal surface waters is an emerging research field. Airborne drones can fly under clouds at preferred times, capturing data at cm resolution, filling a significant gap between existing in situ, airborne and satellite remote sensing capabilities. Suitable drones and lightweight cameras are readily available on the market, whereas deriving water quality products from the captured image is not straightforward; vignetting effects, georeferencing, the dynamic nature and high light absorption efficiency of water, sun glint and sky glint effects require careful data processing. This paper presents the data processing workflow behind MapEO water, an end-to-end cloud-based solution that deals with the complexities of observing water surfaces and retrieves water-leaving reflectance and water quality products like turbidity and chlorophyll-a (Chl-a) concentration. MapEO water supports common camera types and performs a geometric and radiometric correction and subsequent conversion to reflectance and water quality products. This study shows validation results of water-leaving reflectance, turbidity and Chl-a maps derived using DJI Phantom 4 pro and MicaSense cameras for several lakes across Europe. Coefficients of determination values of 0.71 and 0.93 are obtained for turbidity and Chl-a, respectively. We conclude that airborne drone data has major potential to be embedded in operational monitoring programmes and can form useful links between satellite and in situ observations.
Analyzing Factors Influencing Situation Awareness in Autonomous Vehicles—A Survey
Autonomous driving of higher automation levels asks for optimal execution of critical maneuvers in all environments. A crucial prerequisite for such optimal decision-making instances is accurate situation awareness of automated and connected vehicles. For this, vehicles rely on the sensory data captured from onboard sensors and information collected through V2X communication. The classical onboard sensors exhibit different capabilities and hence a heterogeneous set of sensors is required to create better situation awareness. Fusion of the sensory data from such a set of heterogeneous sensors poses critical challenges when it comes to creating an accurate environment context for effective decision-making in AVs. Hence this exclusive survey analyses the influence of mandatory factors like data pre-processing, preferably data fusion, along with situation awareness toward effective decision-making in the AVs. A wide range of recent and related articles is analyzed from various perspectives to identify the major hiccups, which can be further addressed to focus on the goals of higher automation levels. A section of the solution sketch is provided that directs the readers to the potential research directions for achieving accurate contextual awareness. To the best of our knowledge, this survey is uniquely positioned for its scope, taxonomy, and future directions.
Visualization system to identify structurally vulnerable links in OHT railway network in semiconductor FAB using betweenness centrality
In semiconductor fabrication (FAB), wafers are placed into carriers known as Front Opening Unified Pods (FOUPs), transported by the Overhead Hoist Transport (OHT). The OHT, a type of Automated Guided Vehicle (AGV), moves along a fixed railway network in the FAB. The routes of OHTs on the railway network are typically determined by a Single Source Shortest Path (SSSP) algorithm such as Dijkstra’s. However, the presence of hundreds of operating OHTs often leads to path interruptions, causing congestion or deadlocks that ultimately diminish the overall productivity of the FAB. This research focused on identifying structurally vulnerable links within the OHT railway network in semiconductor FAB and developing a visualization system for enhanced on-site decision-making. We employed betweenness centrality as a quantitative index to evaluate the structural vulnerability of the OHT railway network. Also, to accommodate the unique hierarchical node-port structure of this network, we modified the traditional Brandes algorithm, a widely-used method for calculating betweenness centrality. Our modification of the Brandes algorithm integrated node-port characteristics without increasing computation time while incorporating parallelization to reduce computation time further and improve usability. Ultimately, we developed an end-to-end web-based visualization system that enables users to perform betweenness centrality calculations on specific OHT railway layouts using our algorithm and view the results through a web interface. We validated our approach by comparing our results with historically vulnerable links provided by Samsung Electronics. The study had two main outcomes: the development of a new betweenness centrality calculation algorithm considering the node-port structure and the creation of a visualization system. The study demonstrated that the node-port structure betweenness centrality effectively identified vulnerable links in the OHT railway network. Presenting these findings through a visualization system greatly enhanced their practical applicability and relevance.
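The paper's node-port modification is not reproduced here, but the classical Brandes algorithm it builds on can be sketched for an unweighted directed graph. Links with high centrality carry many shortest paths, which is exactly why they flag as structurally vulnerable:

```python
from collections import deque

def brandes_betweenness(adj):
    """Brandes' algorithm for betweenness centrality on an unweighted
    directed graph given as {node: [neighbours]}. The node-port variant
    in the paper extends this classical form."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: single-source shortest paths via BFS, counting paths.
        sigma = {v: 0 for v in adj}; sigma[s] = 1      # path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}                   # shortest-path DAG
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# On a chain a -> b -> c -> d, the interior nodes b and c score highest:
bc = brandes_betweenness({"a": ["b"], "b": ["c"], "c": ["d"], "d": []})
```

Running every OHT route request through such a DAG accumulation is what lets Brandes compute all-pairs centrality in O(VE) for unweighted graphs rather than summing over explicit path enumerations.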
A method of vehicle-infrastructure cooperative perception based vehicle state information fusion using improved kalman filter
For the purpose of overcoming the technical bottlenecks and limitations of autonomous vehicles on information perception, and improving the sensing range and performance of vehicle driving environment and traffic information, a framework of vehicle-infrastructure cooperative perception for the Cooperative Automated Driving System is proposed in this paper. Taking the vehicle state information as an example, this paper also introduces a calculation method of data fusion for vehicle-infrastructure cooperative perception. Besides, considering that the intelligent roadside equipment may experience short-term sensing failures, the proposed method improves the traditional Kalman Filter to output position information even when the roadside fails. Compared with the vehicle-only perception, the simulation experiments verified that the proposed method could improve the average positioning accuracy under the normal condition and the intelligent roadside failure by 18% and 19%, respectively. The proposed framework provides a solution for coordinating and fusing perception intelligence and functions between connected automated vehicles, intelligent infrastructure and intelligent control system. The proposed improved Kalman Filter method provides flexible strategies for practical application.
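The paper's improved filter is not public; a minimal 1-D constant-velocity Kalman filter can still illustrate the "output position even when the roadside fails" idea by coasting on the prediction step whenever the roadside measurement drops out. Noise values, timestep, and data below are illustrative assumptions:

```python
# Minimal 1-D constant-velocity Kalman filter; `None` marks a roadside
# sensing failure, during which the filter coasts on its prediction.
# q (process noise) and r (measurement noise) are illustrative values.

def kalman_track(measurements, dt=0.1, q=0.01, r=0.5):
    x, v = 0.0, 0.0                    # state: position, velocity
    p = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
    out = []
    for z in measurements:
        # Predict: x' = x + v*dt, P' = F P F^T + Q
        x = x + v * dt
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        if z is not None:              # roadside measurement available: update
            s = p[0][0] + r
            k0, k1 = p[0][0] / s, p[1][0] / s
            y = z - x                  # innovation
            x, v = x + k0 * y, v + k1 * y
            p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                 [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        # if z is None, the prediction above stands in for the lost sensor
        out.append(x)
    return out

positions = kalman_track([0.0, 0.1, None, 0.3])  # 3rd sample: roadside failure
```

A position estimate is emitted at every step, including the failed one, which is the behaviour the abstract credits to the improved filter.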
Scenario-Mining for Level 4 Automated Vehicle Safety Assessment from Real Accident Situations in Urban Areas Using a Natural Language Process
As the research and development activities of automated vehicles have been active in recent years, developing test scenarios and methods has become necessary to evaluate and ensure their safety. Based on the current context, this study developed an automated vehicle test scenario derivation methodology using traffic accident data and a natural language processing technique. The natural language processing technique-based test scenario mining methodology generated 16 functional test scenarios for urban arterials and 38 scenarios for intersections in urban areas. The proposed methodology was validated by determining the number of traffic accident records that can be explained by the resulting test scenarios. That is, the validity of the resulting test scenarios is measured by the matching rate between the test scenarios and the traffic accident records. The resulting functional scenarios generated by the proposed methodology account for 43.69% and 27.63% of the actual traffic accidents for urban arterial and intersection scenarios, respectively.
Fundamentals of Connected and Automated Vehicles
The automotive industry is transforming to a greater degree than at any time since Henry Ford introduced mass production of the automobile with the Model T in 1913. Advances in computing, data processing, and artificial intelligence (deep learning in particular) are driving the development of new levels of automation that will impact all aspects of our lives including our vehicles. What are Connected and Automated Vehicles (CAVs)? What are the underlying technologies that need to mature and converge for them to be widely deployed? Fundamentals of Connected and Automated Vehicles is written to answer these questions, educating the reader with the information required to make informed predictions of how and when CAVs will impact their lives. Topics covered include: History of Connected and Automated Vehicles, Localization, Connectivity, Sensor and Actuator Hardware, Computer Vision, Sensor Fusion, Path Planning and Motion Control, Verification and Validation, and Outlook for future of CAVs.