6,854 result(s) for "Autonomous navigation"
Autonomous UAV Navigation with Adaptive Control Based on Deep Reinforcement Learning
Unmanned aerial vehicle (UAV) navigation plays a crucial role in a UAV's ability to perform autonomous missions in complex environments. Most existing reinforcement learning methods for the UAV navigation problem fix the flight altitude and velocity, which greatly reduces the difficulty of the algorithm; however, methods without adaptive control are unsuitable for low-altitude environments with complex situations and generally suffer from weak obstacle avoidance, and the few UAV navigation studies with adaptive flight likewise have only weak obstacle avoidance capabilities. To address UAV navigation in low-altitude environments, we formulate autonomous UAV navigation in 3D environments with adaptive control as a Markov decision process and propose a deep reinforcement learning algorithm. To overcome weak obstacle avoidance, we propose a guide attention method that shifts the UAV's decision focus between the navigation task and the obstacle avoidance task according to changes in the obstacles. We also propose a novel velocity-constrained loss function and add it to the original actor loss to improve the UAV's velocity control capability. Simulation results demonstrate that our algorithm outperforms several state-of-the-art deep reinforcement learning algorithms on UAV navigation tasks in a 3D environment and shows outstanding effectiveness, with the average reward increasing by 9.35, the success rate of navigation tasks increasing by 14%, and the collision rate decreasing by 14%.
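The velocity-constrained loss term mentioned in this abstract could take many forms; a minimal sketch in plain Python, assuming a quadratic penalty on commanded speeds outside an allowed band (the penalty form, the bounds, and the weight `lam` are illustrative assumptions, not details from the paper):

```python
def velocity_constraint_penalty(v_cmd, v_min, v_max):
    """Mean quadratic penalty on commanded speeds outside [v_min, v_max].
    Hypothetical form: the abstract only states that a velocity-constrained
    term is added to the actor loss, not its exact expression."""
    total = 0.0
    for v in v_cmd:
        below = max(v_min - v, 0.0)   # shortfall below the lower speed bound
        above = max(v - v_max, 0.0)   # excess above the upper speed bound
        total += below ** 2 + above ** 2
    return total / len(v_cmd)

def actor_loss(policy_loss, v_cmd, v_min=0.5, v_max=5.0, lam=0.1):
    """Actor loss = original policy-gradient loss + lam * velocity penalty."""
    return policy_loss + lam * velocity_constraint_penalty(v_cmd, v_min, v_max)
```

Speeds inside the band contribute nothing, so the penalty only activates when the policy commands velocities the platform cannot safely fly.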
Mapless autonomous navigation for UGV in cluttered off-road environment with the guidance of wayshowers using deep reinforcement learning
Navigating an unmanned ground vehicle (UGV) through off-road environments is critical for tasks such as exploration and rescue. Unlike scenarios that allow offline global planning from prior knowledge, the dynamic nature of these tasks makes online navigation essential. Although deep reinforcement learning (DRL) is promising for mapless autonomous navigation thanks to its end-to-end advantages, existing approaches often rely solely on goal positions, neglecting the complex distribution of obstacles along the path and leading to inefficient interactions with the environment during training. To address this challenge, a deep reinforcement learning framework is proposed for autonomous navigation guided by wayshowers. First, a new metric based on multilevel analysis is developed to generate elevation maps, aiding the identification of optimal wayshowers. After integrating wayshower information with the other inputs, a multi-head attention (MHA) module is incorporated into the DRL network, including a length attention mechanism that enhances focus on recent historical observation sequences to promote model convergence. Furthermore, the reward function is reshaped to provide dense reward signals, resolving the sparse-reward problem inherent in goal-driven methods. To validate the proposed approach, experiments are conducted on several off-road maps in both the Carla and Gazebo simulators. The results demonstrate the superiority of the method not only in simple environments but also in more challenging scenarios.
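Dense reward reshaping of the kind this abstract describes is often implemented as potential-based shaping; a sketch assuming the potential is the negative distance to the next wayshower (the paper's actual reward function is not given here):

```python
def shaped_reward(base_reward, dist_prev, dist_curr, gamma=0.99):
    """Potential-based dense shaping toward the next wayshower waypoint.

    With potential phi(s) = -distance(s), the shaping term is
    gamma * phi(s') - phi(s) = dist_prev - gamma * dist_curr,
    so the agent earns a dense signal for closing distance each step.
    Illustrative form only; the paper's exact reward may differ.
    """
    return base_reward + (dist_prev - gamma * dist_curr)
```

Potential-based shaping is a common choice because it densifies the signal without changing the set of optimal policies.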
Methodology for tunnel inspection using drone with autonomous navigation
A methodology is presented for the digital reconstruction of an underground tunnel geometry using UAV photogrammetry, first tested in a computer simulation environment using the MRS UAV System, the Robot Operating System (ROS), and the Gazebo open-source robotics simulator. An algorithm for UAV navigation inside the tunnel is proposed, aiming to maintain a centered position relative to the walls, ceiling, and floor. The proposed navigation algorithm can be used in a tunnel environment with few obstacles and overhanging structures, characterized by a practically constant cross-section along its entire length, which renders localization algorithms that rely on LiDAR scanning and point clouds impractical. These characteristics are common to rail and road transport tunnels. The methodology is tested in a computer simulation, and photogrammetry results showed that it is possible to digitally reconstruct the reference underground tunnel and faithfully reproduce details in texture, shape, and color. Experiments were then carried out in sections of a highway tunnel under construction to apply the same methodology, reconstructing the three-dimensional geometry through photogrammetry from images captured by cameras onboard a drone navigating autonomously with the same algorithm. The quantitative evaluation revealed a cross-sectional area difference of 0.21% between the designed area (70.72 m²) and the area obtained from the photogrammetric model (70.57 m²), confirming high precision. Qualitatively, the model effectively represented textures and colors, validating the methodology for real-world applications.
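A centering behavior of the kind this navigation algorithm describes can be sketched as a proportional law on the left/right and floor/ceiling range differences (the gain and sign conventions are illustrative assumptions, not the paper's controller):

```python
def centering_command(d_left, d_right, d_floor, d_ceiling, k=0.5):
    """Lateral and vertical velocity commands driving a UAV toward the
    tunnel centerline: a positive range difference steers the vehicle
    toward the farther surface. Gains and signs are illustrative."""
    vy = k * (d_left - d_right)     # > 0: left wall is farther, move left
    vz = k * (d_ceiling - d_floor)  # > 0: ceiling is farther, climb
    return vy, vz
```

At the centerline all range differences vanish and the commanded correction is zero.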
Game-Engine-Based 3D Simulation of Mobile Robot and its Application to Autonomous Navigation in Physical Environments
An online-usable digital twin system integrated into a control system is proposed and developed in this study to control an autonomous mobile robot operating on a complex three-dimensional (3D) terrain that includes slopes and uneven surfaces. The system is equipped with validated functions that are necessary for virtual or physical autonomous navigation. In this development, (1) 3D virtual environment, 3D virtual robot model, and 3D virtual sensor models are constructed to accurately reproduce the motion of a mobile robot on a computer. Additionally, (2) by connecting these virtual and physical models through an interface, we integrate them as a digital twin and implement an autonomous navigation control system that is synchronized with a motion simulation by switching between a physical and virtual robot (through a physics engine). This allows the same control system to be applied to both the virtual and physical models. The virtual model is created using the Unity 3D game engine, which integrates environmental terrain data and robot models to enable 3D physical simulations including road slopes and unevenness. Additionally, a path-setting method using Bézier curves and a path-following algorithm are implemented in the simulation system. Autonomous navigation of the mobile robot through the digital twin system is achieved by combining the functions above in a manner that allows online use during operation. Finally, autonomous navigation tests are conducted in a physical environment to confirm the effectiveness of the developed system.
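The path-setting step above uses Bézier curves; a cubic Bézier evaluator in the standard Bernstein form gives a minimal sketch of how such a path is sampled for the path-following algorithm (the control-point layout is up to the user; nothing here is specific to the paper's implementation):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].
    Points are tuples of coordinates; works for 2D or 3D alike."""
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Sampling `t` over a fine grid yields the sequence of waypoints a path follower tracks; the curve interpolates `p0` and `p3` while `p1` and `p2` shape its tangents.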
Behavior-based Autonomous Navigation and Formation Control of Mobile Robots in Unknown Cluttered Dynamic Environments with Dynamic Target Tracking
While different species in nature have safely solved the problem of navigation in a dynamic environment, it remains a challenging task for researchers around the world. The paper addresses the problem of autonomous navigation in an unknown dynamic environment for a single and a group of three-wheeled omnidirectional mobile robots (TWOMRs). The robot has to track a dynamic target while avoiding dynamic obstacles and dynamic walls in an unknown and very dense environment. It adopts a behavior-based controller consisting of four behaviors: “target tracking”, “obstacle avoidance”, “dynamic wall following”, and “avoid robots”. The paper also considers the problem of kinematic saturation. In addition, it introduces a strategy for predicting the velocity of dynamic obstacles from two successive ultrasonic sensor measurements, calculating the obstacle velocity expressed in the sensor frame. Furthermore, the paper proposes a strategy for dealing with dynamic walls even when they have U-like or V-like shapes. The approach also handles formation control of a group of robots based on the leader-follower structure and behavior-based control, where the robots must gather and maintain a given formation while navigating toward the target and avoiding obstacles and walls in a dynamic environment. The effectiveness of the proposed approaches is demonstrated via simulation.
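The velocity-prediction idea, two successive ultrasonic readings differenced in the sensor frame, can be sketched as follows; the fixed-beam-angle model and the simple finite difference are assumptions for illustration, not the paper's derivation:

```python
import math

def obstacle_velocity(r1, r2, beam_angle, dt):
    """Estimate obstacle velocity in the sensor frame from two successive
    ultrasonic ranges r1, r2 measured dt seconds apart along a beam at
    angle beam_angle (radians). Finite-difference sketch only."""
    x1, y1 = r1 * math.cos(beam_angle), r1 * math.sin(beam_angle)
    x2, y2 = r2 * math.cos(beam_angle), r2 * math.sin(beam_angle)
    return (x2 - x1) / dt, (y2 - y1) / dt
```

A shrinking range along the beam yields a negative radial velocity component, i.e. an obstacle closing on the sensor.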
A Range-Based Algorithm for Autonomous Navigation of an Aerial Drone to Approach and Follow a Herd of Cattle
This paper proposes an algorithm that allows an autonomous aerial drone to approach and follow a stationary or moving herd of cattle using only range measurements. The algorithm is insensitive to the complexity of the herd's movement and to measurement noise. Once it has reached the herd, the aerial drone can follow it to a desired destination. The primary motivation for developing this algorithm is to use simple, inexpensive, and robust sensing, hence range sensors. The algorithm does not depend on the accuracy of the range measurements but rather on their rate of change. The proposed method is based on sliding mode control, which provides robustness. Mathematical analysis, simulations, and experimental results with a real aerial drone demonstrate the effectiveness of the proposed method.
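A sliding-mode approach law driven by the rate of change of range might look like the following sketch; the sliding surface and gains are hypothetical, chosen only to illustrate acting on range rate rather than on absolute range:

```python
def sliding_mode_speed(r, r_dot, r_des, k=0.5, u_max=2.0):
    """Bang-bang sliding-mode speed command: drive the sliding surface
    s = r_dot + k * (r - r_des) to zero so the range to the herd
    converges to r_des. Surface and gains are illustrative only."""
    s = r_dot + k * (r - r_des)
    if s > 0:
        return u_max    # too far or closing too slowly: speed up toward herd
    if s < 0:
        return -u_max   # overshooting: back off
    return 0.0
```

The switching structure is what makes sliding mode robust: the command depends only on the sign of the surface, not on an accurate range model.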
Image Acquisition of Critical Bridge Components Using Vision-guided Autonomous Vehicle
This research proposes a vision-guided autonomous navigation framework for unmanned vehicles performing image acquisition for bridge inspection. The framework integrates visual SLAM with RGB-D image input and semantic segmentation to detect and localize critical structural components such as columns. The detected components are converted into a parametric map used to generate navigation goals for image collection. The approach is first validated in a synthetic bridge inspection environment using an unmanned ground vehicle. The feasibility of the framework is further studied through laboratory-scale prototyping and validation using a TurtleBot3 equipped with a Jetson TX2 onboard computer. In the simulation environment, the framework achieves autonomous navigation to up to 6 columns and image acquisition with a 90% success rate for 3 columns. Performance evaluation in the real-world environment shows that the hardware-software prototype can navigate to and collect image data of up to 2 columns, with more than a 60% success rate navigating to the first column. The results indicate significant potential for autonomous navigation and image acquisition with limited onboard computational resources, contributing to more efficient and reliable bridge management.
Research on Multi-AGV Management System of Autonomous Navigation AGVs for Manufacturing Environment
This paper addresses intelligent AGVs capable of autonomous positioning, navigation, and movement, and builds a multi-AGV management system for manufacturing environments with relatively complete functionality and good interaction between users and AGVs. To meet these requirements, the development process highlights the functional design of the AGV management system and optimizes its data-processing capabilities. The interface and database were built with Qt and MySQL. Tests conducted after importing the test database showed that the system operated smoothly and processed data correctly and in a timely manner.
Model-based and machine learning-based high-level controller for autonomous vehicle navigation: lane centering and obstacles avoidance
Researchers have long been attempting to make cars drive autonomously. Environment perception together with safe guidance and control is an important task and one of the major challenges in developing such systems. Geometric or physics-based models, machine-learning-based models, and hybrids of the two are the three types of navigation methods used to solve this problem. The hybrid approach exploits the learning capability of machine learning models and the safety of geometric models to better perform the navigation task. This paper presents a hybrid autonomous navigation methodology that combines the learning capability of machine learning with the safety of the dynamic window approach, a geometric method. Using a single camera and a 2D lidar sensor, the method acts as a high-level controller that finds optimal vehicle velocities, which are then applied by a low-level controller. The final algorithm is validated in the CARLA simulator environment, where the system proved capable of guiding the vehicle to achieve lane keeping and obstacle avoidance.
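The dynamic window approach mentioned above scores candidate velocity pairs against heading, clearance, and speed objectives and picks the best admissible one; a minimal sketch with illustrative weights (not the paper's tuning):

```python
import math

def dwa_score(heading_err, clearance, speed, alpha=0.8, beta=0.1, gamma=0.1):
    """Classic dynamic-window objective: favor velocities whose predicted
    trajectory points at the goal (small heading error), keeps distance
    from obstacles (clearance), and moves fast (speed)."""
    return alpha * (math.pi - abs(heading_err)) + beta * clearance + gamma * speed

def select_velocity(candidates):
    """candidates: iterable of (v, w, heading_err, clearance, speed) tuples,
    one per sampled (linear, angular) velocity pair in the dynamic window.
    Returns the (v, w) pair with the highest score."""
    best = max(candidates, key=lambda c: dwa_score(c[2], c[3], c[4]))
    return best[0], best[1]
```

In a full implementation the heading error, clearance, and achievable speed would be computed by forward-simulating each candidate over a short horizon; here they are passed in precomputed to keep the sketch self-contained.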
Development of an Autonomous Robotics Platform for Road Marks Painting Using Laser Simulator and Sensor Fusion Technique
The design and experimental validation of an autonomous robotic platform for road-mark painting are presented in this paper as the first autonomous system of its kind. The system comprises two main sub-systems: an autonomous mobile robot navigation system used to recognize roads and estimate the positions of road marks, and an automatic road-mark painting system attached to the mobile robot platform to control the spraying of paint on the road surface. The experimental results show that the proposed system can perform autonomous road-mark painting with an accuracy of ±10 cm.