Search Results

71 result(s) for "underwater SLAM"
Underwater SLAM Meets Deep Learning: Challenges, Multi-Sensor Integration, and Future Directions
The underwater domain presents unique challenges and opportunities for scientific exploration, resource extraction, and environmental monitoring. Autonomous underwater vehicles (AUVs) rely on simultaneous localization and mapping (SLAM) for real-time navigation and mapping in these complex environments. However, traditional SLAM techniques face significant obstacles, including poor visibility, dynamic lighting conditions, sensor noise, and water-induced distortions, all of which degrade the accuracy and robustness of underwater navigation systems. Recent advances in deep learning (DL) have introduced powerful solutions to overcome these challenges. DL techniques enhance underwater SLAM by improving feature extraction, image denoising, distortion correction, and sensor fusion. This survey provides a comprehensive analysis of the latest developments in DL-enhanced SLAM for underwater applications, categorizing approaches based on their methodologies, sensor dependencies, and integration with deep learning models. We critically evaluate the benefits and limitations of existing techniques, highlighting key innovations and unresolved challenges. In addition, we introduce a novel classification framework for underwater SLAM based on its integration with underwater wireless sensor networks (UWSNs). UWSNs offer a collaborative framework that enhances localization, mapping, and real-time data sharing among AUVs by leveraging acoustic communication and distributed sensing. Our proposed taxonomy provides new insights into how communication-aware SLAM methodologies can improve navigation accuracy and operational efficiency in underwater environments. Furthermore, we discuss emerging research trends, including the use of transformer-based architectures, multi-modal sensor fusion, lightweight neural networks for real-time deployment, and self-supervised learning techniques. By identifying gaps in current research and outlining potential directions for future work, this survey serves as a valuable reference for researchers and engineers striving to develop robust and adaptive underwater SLAM solutions. Our findings aim to inspire further advancements in autonomous underwater exploration, supporting critical applications in marine science, deep-sea resource management, and environmental conservation.
RU-SLAM: A Robust Deep-Learning Visual Simultaneous Localization and Mapping (SLAM) System for Weakly Textured Underwater Environments
Accurate and robust simultaneous localization and mapping (SLAM) systems are crucial for autonomous underwater vehicles (AUVs) to perform missions in unknown environments. However, directly applying deep learning-based SLAM methods to underwater environments poses challenges due to weak textures, image degradation, and the inability to accurately annotate keypoints. In this paper, a robust deep-learning visual SLAM system is proposed. First, a feature generator named UWNet is designed to address the weak-texture and image-degradation problems and to extract more accurate keypoint features and descriptors. Further, knowledge distillation based on an improved underwater imaging physical model is introduced to train the network in a self-supervised manner. Finally, UWNet is integrated into ORB-SLAM3 to replace the traditional feature extractor. The extracted local and global features are used in the feature-tracking and loop-closure detection modules, respectively. Experimental results on public datasets and self-collected pool datasets verify that the proposed system maintains high accuracy and robustness in complex scenarios.
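To make the self-supervised distillation idea concrete, here is a minimal sketch of a simplified underwater image-formation model that could synthesize degraded training views; the attenuation coefficients, backscatter values, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def degrade_underwater(image, depth, beta=(0.8, 0.4, 0.1), backlight=(0.1, 0.3, 0.4)):
    """Simplified underwater image-formation model (assumed parameters):
    I = J * t + B * (1 - t), with per-channel transmission t = exp(-beta * d).
    Synthesizing degraded views this way lets a student feature network be
    supervised by keypoints a teacher finds on the clean image."""
    image = image.astype(np.float32) / 255.0          # clean image J, HxWx3 in [0, 1]
    t = np.exp(-np.asarray(beta) * depth[..., None])  # per-channel transmission map
    B = np.asarray(backlight, dtype=np.float32)       # ambient backscattered light
    degraded = image * t + B * (1.0 - t)
    return (degraded * 255.0).clip(0, 255).astype(np.uint8)
```

A student detector trained on such degraded views can then match teacher keypoints from the corresponding clean image, with no manual keypoint annotation.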
A Visual–Inertial Pressure Fusion-Based Underwater Simultaneous Localization and Mapping System
Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU and an optical sensor. The method tightly couples the depth and IMU measurements into the visual SLAM algorithm and establishes a multi-sensor fusion SLAM model. Depth constraints are introduced into initialization, scale fine-tuning, tracking and mapping to constrain the sensor's position along the z-axis and to improve the accuracy of pose estimation and map-scale estimation. Tests on seven underwater multi-sensor sequences from the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error by up to 41.2%, the trajectory error by up to 41.2%, and the root-mean-square error by up to 41.6%.
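As a rough illustration of how a pressure-derived depth reading can constrain the z-axis during optimization, consider a simple whitened residual; the function, noise value, and numbers below are assumptions for the sketch, not the paper's formulation.

```python
import numpy as np

def depth_residual(pose_t, pressure_depth, sigma=0.05):
    """Tightly coupled depth factor (illustrative): the pressure sensor pins
    the camera's z coordinate, so the residual is the gap between the pose
    translation's z component and the barometric depth, whitened by sigma."""
    return (pose_t[2] - pressure_depth) / sigma

# Example: a pose whose z drifted to 10.30 m while the pressure sensor reads 10.00 m
r = depth_residual(np.array([1.2, -0.4, 10.30]), 10.00)
print(r)  # ~6 sigma: the optimizer pulls z back toward the measured depth
```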
Real-time GAN-based image enhancement for robust underwater monocular SLAM
Underwater monocular visual simultaneous localization and mapping (SLAM) plays a vital role in underwater computer vision and robotic perception. Unlike in autonomous driving or aerial environments, performing robust and accurate underwater monocular SLAM is challenging due to the complex aquatic environment and the critically degraded quality of the collected images. Underwater images' poor visibility, low contrast, and color distortion result in ineffective and insufficient feature matching, leading to poor performance or outright failure of existing SLAM algorithms. To address this issue, we propose introducing a generative adversarial network (GAN) to perform effective underwater image enhancement before conducting SLAM. Given SLAM's inherent real-time requirement, we compress the GAN via knowledge distillation to reduce inference cost while preserving high-fidelity underwater image enhancement and real-time inference. The real-time enhancement acts as an image pre-processing step for building a robust and accurate underwater monocular SLAM system and significantly improves underwater SLAM performance. The proposed method is a generic framework that can be extended to various SLAM systems, yielding performance gains at various scales.
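A hedged sketch of the distillation step for GAN compression might look as follows; the `student`/`teacher` networks, the L1 objective, and the `alpha` weighting are assumptions standing in for the paper's actual losses.

```python
import torch
import torch.nn as nn

def distillation_step(student, teacher, raw_batch, optimizer, alpha=0.8):
    """One knowledge-distillation step for GAN compression: the small student
    mimics the large frozen teacher's enhanced output (L1 loss). A task loss
    against ground truth could be blended in via (1 - alpha) when available."""
    teacher.eval()
    with torch.no_grad():
        target = teacher(raw_batch)   # high-fidelity enhancement, frozen
    pred = student(raw_batch)         # fast approximation for real-time use
    loss = alpha * nn.functional.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```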
An Improved Underwater Visual SLAM through Image Enhancement and Sonar Fusion
To enhance the performance of visual SLAM in underwater environments, this paper presents an enhanced front-end method based on visual feature enhancement. The method comprises three modules that optimize and improve visual feature matching from different perspectives. First, to address insufficient underwater illumination and the uneven distribution of artificial light sources, a brightness-consistency recovery method is proposed; it employs an adaptive histogram equalization algorithm to balance image brightness. Second, a denoising method for underwater suspended particulates is introduced to filter noise from the images. After this image-level processing, a combined underwater acousto-optic feature-association method is proposed, which associates acoustic features from sonar with visual features, thereby providing distance information for the visual features. Finally, using the AFRL dataset, the improved system incorporating the proposed enhancement methods is evaluated against the OKVIS framework. The system achieves better trajectory-estimation accuracy than OKVIS and demonstrates robustness in underwater environments.
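The brightness-balancing module might resemble the following CLAHE-based sketch (OpenCV's `cv2.createCLAHE`); the choice of the LAB lightness channel and the parameter values are assumptions, not the paper's exact settings.

```python
import cv2

def balance_brightness(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Brightness-consistency step (sketch): apply CLAHE to the lightness
    channel only, so colors are preserved while uneven artificial lighting
    is flattened across the frame."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```

Tile-based (adaptive) equalization is what handles the unevenness: a single global histogram stretch would brighten the dark periphery and blow out the spotlight center at the same time.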
YOLO-NeRFSLAM: underwater object detection for the visual NeRF-SLAM
Accurate and reliable dense mapping is crucial for understanding and utilizing the marine environment in applications such as ecological monitoring, archaeological exploration, and autonomous underwater navigation. However, the underwater environment is highly dynamic: fish and floating debris frequently appear in the field of view, easily disturbing traditional SLAM during localization and mapping. In addition, common depth sensors and deep-learning-based depth estimation techniques tend to be impractical or significantly less accurate underwater, failing to meet the demands of dense reconstruction. This paper proposes a new underwater SLAM framework that combines neural radiance fields (NeRF) with a dynamic masking module to address these issues. Through a Marine Motion Fusion (MMF) strategy, which leverages YOLO to detect known marine organisms and integrates optical flow for pixel-level motion analysis, we screen out dynamic objects, maintaining stable camera pose estimation and pixel-level dense reconstruction even without relying on depth data. Further, to cope with severe light attenuation and the dynamic nature of underwater scenes, we introduce specialized loss functions, enabling reconstruction of underwater environments with realistic appearance and geometric detail even under high-turbidity conditions. Experimental results show that our method significantly reduces localization drift caused by moving entities, improves dense-mapping accuracy, and achieves favorable runtime efficiency on multiple real underwater video datasets, demonstrating its potential in dynamic underwater settings.
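A minimal sketch of the detection-plus-flow masking idea, assuming YOLO boxes arrive as `(x1, y1, x2, y2)` tuples and using OpenCV's Farneback dense flow; the threshold and interfaces are illustrative, not the paper's MMF implementation.

```python
import cv2
import numpy as np

def dynamic_mask(prev_gray, gray, boxes, flow_thresh=2.0):
    """Mark pixels as dynamic if they fall inside a detector box (e.g., a fish
    class from YOLO) OR move faster than flow_thresh pixels/frame according to
    dense optical flow. `boxes` is a list of (x1, y1, x2, y2) detections."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion speed
    mask = magnitude > flow_thresh
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True
    return mask  # True = exclude from tracking and NeRF supervision
```

Combining the two cues covers both failure modes: boxes catch slow-moving but known organisms, while flow catches fast debris the detector has never seen.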
A Switching Framework for Robust Underwater Pose Estimation: Integrating IPNet with Vision-based SLAM
Autonomous underwater vehicles (AUVs) carry out a wide range of tasks in underwater environments. Maintaining continuous, high-precision pose estimation of the AUV over extended durations is critical for mission success and operational safety. To address the limitations of vision-based simultaneous localization and mapping (SLAM) in underwater environments, particularly trajectory-tracking failures caused by the scarcity of visual information, this paper proposes a novel pose-estimator switching framework. The framework enhances the robustness of AUV pose estimation through two key components. First, we designed a visual monitor that continuously assesses the working state of the SLAM system. Next, we developed a neural-network-based pose estimator, called the IPNet estimator, driven by Inertial Measurement Unit (IMU) data combined with pressure measurements. When visual information becomes unavailable, the system seamlessly switches to this alternative estimator, ensuring uninterrupted pose estimation for the AUV. Finally, experiments conducted on a publicly available underwater dataset validate the feasibility and effectiveness of the proposed framework.
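The switching logic might be sketched as follows; `slam.track`, `ipnet.predict`, and the inlier threshold are hypothetical interfaces invented for illustration, not the paper's API.

```python
def estimate_pose(slam, ipnet, frame, imu_window, pressure, min_inliers=30):
    """Switching sketch: trust the visual SLAM pose while tracking quality is
    healthy; otherwise fall back to the learned inertial/pressure estimator.
    All interfaces here are assumed for illustration."""
    pose, n_inliers = slam.track(frame)
    if pose is not None and n_inliers >= min_inliers:
        return pose, "visual"
    # Visual information is unreliable: dead-reckon with the IMU + pressure net
    return ipnet.predict(imu_window, pressure), "inertial"
```

The monitor's job is exactly the health check in the `if`: some proxy for visual quality (inlier count, feature density, reprojection error) gates which estimator the downstream controller sees.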
Graph Matching for Underwater Simultaneous Localization and Mapping Using Multibeam Sonar Imaging
This paper addresses the challenges of underwater Simultaneous Localization and Mapping (SLAM) using multibeam sonar imaging. The widely used Iterative Closest Point (ICP) algorithm often falls into local optima due to non-convexity and the lack of features for correct registration. To overcome this, we propose a novel registration algorithm based on Gaussian clustering and graph matching with maximal cliques. The proposed approach enhances feature-matching accuracy and robustness in complex underwater environments. Inertial measurements and velocity estimates are also fused for global state estimation. Comprehensive tests in simulated and real-world underwater environments demonstrate that the proposed registration method effectively addresses the ICP algorithm's tendency to fall into local optima while exhibiting excellent inter-frame registration performance and robustness.
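A hedged sketch of the maximal-clique consistency check, using `networkx`; the distance-compatibility test and tolerance are assumptions about how such a correspondence graph could be built, not the paper's exact formulation.

```python
import networkx as nx
import numpy as np

def clique_registration_inliers(src_pts, dst_pts, pairs, tol=0.5):
    """Graph-matching sketch: candidate correspondences `pairs` (i, j) are
    nodes; two correspondences are compatible if they preserve pairwise
    distance within `tol` (rigid motions preserve distances). The largest
    clique is the biggest mutually consistent correspondence set, found
    globally rather than by ICP's local iteration."""
    g = nx.Graph()
    g.add_nodes_from(range(len(pairs)))
    for a in range(len(pairs)):
        for b in range(a + 1, len(pairs)):
            d_src = np.linalg.norm(src_pts[pairs[a][0]] - src_pts[pairs[b][0]])
            d_dst = np.linalg.norm(dst_pts[pairs[a][1]] - dst_pts[pairs[b][1]])
            if abs(d_src - d_dst) < tol:
                g.add_edge(a, b)
    best = max(nx.find_cliques(g), key=len)   # maximal cliques; keep largest
    return [pairs[k] for k in best]
```

In the paper's pipeline, the correspondence candidates would come from Gaussian-cluster centroids (e.g., `sklearn.mixture.GaussianMixture` means over the sonar returns) rather than raw points, which keeps the quadratic graph construction tractable.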
A Joint Graph-Based Approach for Simultaneous Underwater Localization and Mapping for AUV Navigation Fusing Bathymetric and Magnetic-Beacon-Observation Data
Accurate positioning is the necessary basis for autonomous underwater vehicles (AUVs) to navigate safely during underwater tasks such as port-environment monitoring, target search, and seabed exploration. The position estimates of underwater navigation systems usually suffer from error accumulation, which makes it difficult for AUVs to perform long-term, accurate underwater tasks. Underwater simultaneous localization and mapping (SLAM) approaches based on multibeam bathymetric data have attracted much attention for their ability to obtain error-bounded position estimates. Two problems limit the use of multibeam bathymetric SLAM in many scenarios: first, loop closures occur only where the AUV's path intersects itself; second, data association is prone to failure in areas with gentle topographic changes. To overcome these problems, a joint graph-based underwater SLAM approach that fuses bathymetric and magnetic-beacon measurements is proposed in this paper. In the front end, a robust dual-stage bathymetric data-association method first detects loop closures in the multibeam bathymetric data. Then, a magnetic-beacon-detection method using Euler deconvolution and optimization algorithms localizes the magnetic beacons from a sequence of magnetic measurements along the path. The loop closures obtained from both bathymetric and magnetic-beacon observations are fused to build a joint factor graph. In the back end, a diagnosis method is introduced to identify potentially false factors in the graph, improving the robustness of the joint SLAM system to outliers in the measurement data. Experiments on field bathymetric datasets test the performance of the proposed approach. Compared with classic bathymetric SLAM algorithms, the proposed algorithm improves data-association accuracy by 50%, and the average positioning error after optimization converges to less than 10 m.
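To illustrate how heterogeneous observations enter one joint factor graph, here is a toy 2D pose graph using GTSAM as an assumed stand-in back end; the factor values and noise models are invented, and the paper's diagnosis step for rejecting false factors is not shown.

```python
import gtsam
import numpy as np

# Toy joint pose graph: drifting odometry corrected by a bathymetric loop
# closure and an absolute magnetic-beacon fix (all values illustrative).
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.5, 0.5, 0.05]))
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.02]))
beacon_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
for k in range(3):  # dead-reckoned odometry, 10 m east per step
    graph.add(gtsam.BetweenFactorPose2(k, k + 1, gtsam.Pose2(10, 0, 0), odom_noise))
# Bathymetric loop closure: pose 3 re-observes terrain matched against pose 0
graph.add(gtsam.BetweenFactorPose2(0, 3, gtsam.Pose2(30, 0, 0), loop_noise))
# Magnetic-beacon fix: an absolute position observation at pose 2
graph.add(gtsam.PriorFactorPose2(2, gtsam.Pose2(20, 0, 0), beacon_noise))

initial = gtsam.Values()
for k in range(4):
    initial.insert(k, gtsam.Pose2(10.0 * k, 0.7 * k, 0.0))  # drifted guess
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))  # pulled back toward the loop-consistent pose
```

The beacon factor is what decouples correction from path self-intersection: it supplies an absolute fix even on a one-way transect where bathymetric loop closures never fire.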
3D Reconstruction of Unstable Underwater Environment with SfM Using SLAM
The underwater environment offers substantial opportunities for research such as marine archaeology, coral-reef monitoring, and shipwreck surveying. Structure from Motion (SfM), a major step in photogrammetry, has been widely used in this field. High-quality 3D reconstruction requires images with a clear visual environment and known image orientations. However, underwater images suffer from various visual disturbances, and GPS/INS, commonly used on the ground, is not available. Finding more feature points or using more images for SfM can address these problems, but at high computational cost. An alternative is to provide the known orientations of the images. To do so, the method presented in this study uses visual SLAM, which localizes the vehicle system while mapping its surroundings. The experiment aims to verify whether SLAM improves the quality of underwater 3D reconstruction and the computational efficiency of SfM. We examine two AQUALOC datasets, reporting the number of point-cloud points, SfM processing time, the ratio of matched images to total images, and mean reprojection errors. The results show that SLAM-determined orientations improved the quality of the 3D reconstruction and the computational efficiency of SfM, yielding denser point clouds and reduced processing time.
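A minimal sketch of the payoff: with SLAM-supplied poses, SfM can triangulate matches directly instead of re-estimating camera geometry. The function below assumes 3x4 world-to-camera `[R|t]` matrices from the SLAM trajectory and uses OpenCV's `cv2.triangulatePoints`; it is an illustration of the idea, not the study's pipeline.

```python
import cv2
import numpy as np

def triangulate_with_known_poses(K, pose1, pose2, pts1, pts2):
    """When SLAM supplies each image's orientation and position, SfM can skip
    expensive pose recovery and triangulate matches directly. `pose1`/`pose2`
    are 3x4 [R|t] world-to-camera matrices; `pts1`/`pts2` are Nx2 matched
    pixel coordinates; K is the 3x3 camera intrinsics matrix."""
    P1 = K @ pose1                      # projection matrix for image 1
    P2 = K @ pose2                      # projection matrix for image 2
    pts4d = cv2.triangulatePoints(P1, P2,
                                  pts1.T.astype(float), pts2.T.astype(float))
    return (pts4d[:3] / pts4d[3]).T     # Nx3 Euclidean points
```

This is also where the reported speedup plausibly comes from: bundle adjustment shrinks to refining structure (and optionally poses) from a good initialization rather than solving camera geometry from scratch.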