822 result(s) for "underwater image enhancement"
Underwater image enhancement by maximum-likelihood based adaptive color correction and robust scattering removal
Underwater images often exhibit severe color deviations and degraded visibility, which limits many practical applications in ocean engineering. Although extensive research has been conducted on underwater image enhancement, little of it demonstrates strong robustness and generalization across diverse real-world underwater scenes. In this paper, we propose an adaptive color correction algorithm based on maximum-likelihood estimation of Gaussian parameters, which effectively removes color casts from a variety of underwater images. We also propose a novel algorithm for accurate background light estimation that uses a weighted combination of gradient maps in HSV color space and absolute intensity differences, circumventing the influence of white or bright regions that challenges existing physical-model-based methods. To enhance the contrast of the resulting images, a piecewise affine transform is applied to the transmission map estimated via the background light differential. Finally, with the estimated background light and transmission map, the scene radiance is recovered by inverting the image formation model. Extensive experiments show that our results are characterized by a natural appearance and genuine color, and that our method achieves performance competitive with state-of-the-art methods on objective evaluation metrics, further validating the robustness and generalization ability of our enhancement model.
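The color-correction idea in this abstract can be illustrated with a toy sketch: for a Gaussian model, the maximum-likelihood parameter estimates are simply each channel's sample mean and standard deviation, so renormalizing every channel toward a common target removes the dominant cast. This is a minimal NumPy illustration of the general principle, not the paper's adaptive algorithm; `target_mean` and `target_std` are arbitrary illustrative choices.

```python
import numpy as np

def ml_color_correct(img, target_mean=0.5, target_std=0.15):
    """Per-channel color-cast removal via Gaussian ML parameter estimates.

    For a Gaussian model the ML estimates are just the sample mean and
    standard deviation of each channel; mapping every channel to a shared
    target mean/spread suppresses the dominant color cast.
    (Illustrative sketch only; the paper's method is more involved.)
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        mu = img[..., c].mean()           # ML estimate of the channel mean
        sigma = img[..., c].std() + 1e-8  # ML estimate of the channel std
        out[..., c] = (img[..., c] - mu) / sigma * target_std + target_mean
    return np.clip(out, 0.0, 1.0)
```

Applied to an image with a strong blue cast, this brings all three channel means to the same target value.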
Underwater vision enhancement technologies: a comprehensive review, challenges, and recent trends
Cameras are integrated into various underwater vision systems for underwater object detection and marine biological monitoring. However, underwater images captured by cameras rarely achieve the desired visual quality, which limits their further use. Over the past few decades, various underwater vision enhancement technologies have been proposed to improve the visual quality of underwater images; these are the focus of this paper. Specifically, we review the theory of underwater image degradation and underwater image formation models. This review also summarizes various underwater vision enhancement technologies and catalogues the existing underwater image datasets. Further, we conduct extensive and systematic experiments to explore the limitations and strengths of various underwater vision enhancement methods. Finally, recent trends and challenges in underwater vision enhancement are discussed. We hope this paper can serve as a reference for future study and promote the development of this research field.
Underwater image enhancement: a comprehensive review, recent trends, challenges and applications
The mysteries of deep-sea ecosystems can be unlocked to reveal new sources for developing medical drugs, food and energy resources, and renewable energy products. Research in underwater image processing has increased significantly in the last decade, primarily due to human dependence on the valuable resources existing underwater. Effective exploration of the underwater environment requires excellent methods for underwater image enhancement. The work presented in this article surveys underwater image enhancement algorithms. It presents an overview of various underwater image enhancement techniques and their broad classifications, and briefly discusses the methods under each classification. Underwater datasets required for experiments are summarized from the available literature. Attention is also drawn to the evaluation metrics required for quantitative assessment of underwater images and to recent areas of application in the domain.
Underwater Image Restoration via Contrastive Learning and a Real-World Dataset
Underwater image restoration is of significant importance in unveiling the underwater world. Numerous techniques and algorithms have been developed in recent decades. However, due to fundamental difficulties associated with imaging/sensing, lighting, and refractive geometric distortions in capturing clear underwater images, no comprehensive evaluations of underwater image restoration have been conducted. To address this gap, we constructed a large-scale real underwater image dataset, dubbed the Heron Island Coral Reef Dataset (HICRD), for benchmarking existing methods and supporting the development of new deep-learning-based methods. We employed an accurate water parameter (the diffuse attenuation coefficient) to generate the reference images. The unpaired training set contains 2000 reference restored images and 6003 original underwater images. Furthermore, we present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework. Our method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images. Extensive experiments with comparisons to recent approaches further demonstrate the superiority of our method. Our code and dataset are both publicly available.
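The mutual-information objective mentioned in this abstract is typically realized with an InfoNCE-style contrastive loss, which pulls a patch embedding toward its positive counterpart (the same location in the restored image) and away from negatives. Below is a minimal NumPy sketch of that generic loss, not the paper's implementation; the temperature `tau` and the embedding inputs are illustrative assumptions.

```python
import numpy as np

def info_nce(query, pos, negs, tau=0.07):
    """InfoNCE contrastive loss for one query embedding.

    A low loss means the query is much closer (in cosine similarity)
    to its positive than to any negative. Sketch of the generic
    mutual-information-maximizing loss only.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive similarity first, then all negatives; scale by temperature.
    logits = np.array([cos(query, pos)] + [cos(query, n) for n in negs]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # cross-entropy, positive at index 0
```

Matching query/positive pairs drive the loss toward zero, while a query that matches a negative instead is heavily penalized.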
Underwater image restoration and enhancement: a comprehensive review of recent trends, challenges, and applications
In recent years, underwater exploration for deep-sea resource utilization and development has attracted considerable interest. In an underwater environment, captured images and videos undergo several quality degradations resulting from light absorption and scattering, low contrast, color deviation, blurred details, and nonuniform illumination. Therefore, the restoration and enhancement of degraded images and videos are critical. Numerous techniques from image processing, pattern recognition, and computer vision have been proposed for image restoration and enhancement, but many challenges remain. This survey aims to go beyond previous reviews by cataloguing their shortcomings and gaps and giving researchers many ideas for future work. It compares the most prominent approaches in underwater image processing and analysis, gives an overview of the underwater environment with a broad classification into enhancement and restoration techniques, and introduces the main causes of underwater image degradation along with the underwater image model. Existing underwater image analysis techniques, methods, datasets, and evaluation metrics are presented in detail. Furthermore, the existing limitations are analyzed and classified into image-related and environment-related categories. In addition, performance is validated on images from the UIEB dataset through qualitative, quantitative, and computational-time assessment. Areas in which underwater images have recently been applied are briefly discussed. Finally, recommendations for future research are provided and conclusions are presented.
Multiple attentional path aggregation network for marine object detection
Marine target detection is a challenging task because degraded underwater images make targets unclear. Furthermore, marine targets are small and tend to cluster together, so popular object detection methods perform poorly on them. This paper therefore proposes a novel multiple attentional path aggregation network, named APAN, to improve marine object detection. First, we design a path aggregation structure that brings features from the backbone network into a bottom-up path augmentation. Each feature map is enhanced by the lower layer through the bottom-up downsampling pathway and incorporates features from the top-down upsampling layers. Specifically, the last layer fuses the feature map from the backbone network, which strengthens the semantic features and improves feature extraction. Then, a multi-attention module combining coordinate competing attention and spatial supplement attention is applied to the proposed path aggregation network, further improving the accuracy of multiple marine object detection. Finally, a double-transmission underwater image enhancement algorithm is proposed to enhance the underwater image datasets. Experiments show that our method achieves 79.6% mAP on the underwater image datasets, 79.03% mAP on the enhanced underwater image datasets, and 81.5% mAP on the PASCAL VOC datasets. In addition, we apply the method to an underwater robot, where it performs well compared with popular object detection methods. The source code is publicly available at https://github.com/yhf2022/APAN.
UIR-Net: A Simple and Effective Baseline for Underwater Image Restoration and Enhancement
Because of the unique physical and chemical properties of water, directly obtaining high-quality underwater images is not easy. Hence, restoration and enhancement are indispensable steps in underwater image processing and have become research hotspots. Nevertheless, existing image-processing methods generally have high complexity and are difficult to deploy on underwater platforms with limited computing resources. To tackle this issue, this paper proposes a simple and effective baseline named UIR-Net that can restore and enhance underwater images simultaneously. The network extracts a channel residual prior from the image to be recovered and combines it with a gradient strategy that reduces parameters and training time, making the operation more lightweight. This method improves color performance while maintaining the style and spatial texture of the content. Through experiments on three datasets (MSRB, MSIRB and UIEBD-Snow), we confirm that UIR-Net can recover clear underwater images from originals containing large particle impurities and ocean light spots. Compared to other state-of-the-art methods, UIR-Net recovers underwater images at similar or higher quality with a significantly lower number of parameters, which is valuable in real-world applications.
Underwater SLAM Meets Deep Learning: Challenges, Multi-Sensor Integration, and Future Directions
The underwater domain presents unique challenges and opportunities for scientific exploration, resource extraction, and environmental monitoring. Autonomous underwater vehicles (AUVs) rely on simultaneous localization and mapping (SLAM) for real-time navigation and mapping in these complex environments. However, traditional SLAM techniques face significant obstacles, including poor visibility, dynamic lighting conditions, sensor noise, and water-induced distortions, all of which degrade the accuracy and robustness of underwater navigation systems. Recent advances in deep learning (DL) have introduced powerful solutions to overcome these challenges. DL techniques enhance underwater SLAM by improving feature extraction, image denoising, distortion correction, and sensor fusion. This survey provides a comprehensive analysis of the latest developments in DL-enhanced SLAM for underwater applications, categorizing approaches based on their methodologies, sensor dependencies, and integration with deep learning models. We critically evaluate the benefits and limitations of existing techniques, highlighting key innovations and unresolved challenges. In addition, we introduce a novel classification framework for underwater SLAM based on its integration with underwater wireless sensor networks (UWSNs). UWSNs offer a collaborative framework that enhances localization, mapping, and real-time data sharing among AUVs by leveraging acoustic communication and distributed sensing. Our proposed taxonomy provides new insights into how communication-aware SLAM methodologies can improve navigation accuracy and operational efficiency in underwater environments. Furthermore, we discuss emerging research trends, including the use of transformer-based architectures, multi-modal sensor fusion, lightweight neural networks for real-time deployment, and self-supervised learning techniques. 
By identifying gaps in current research and outlining potential directions for future work, this survey serves as a valuable reference for researchers and engineers striving to develop robust and adaptive underwater SLAM solutions. Our findings aim to inspire further advancements in autonomous underwater exploration, supporting critical applications in marine science, deep-sea resource management, and environmental conservation.
Understanding the Influence of Image Enhancement on Underwater Object Detection: A Quantitative and Qualitative Study
Underwater image enhancement is often perceived as detrimental to object detection. We propose a novel analysis of the interactions between enhancement and detection, elaborating on the potential of enhancement to improve detection. In particular, we evaluate object detection performance for each individual image rather than across the entire set, allowing a direct before-and-after comparison for every image. This approach enables queries that identify which enhanced images outperform or underperform their originals. To accomplish this, we first produce enhanced versions of the original images using recent image enhancement models. Each enhanced set is then divided into two groups: (1) images that outperform or match the performance of the original images, and (2) images that underperform. Subsequently, we create mixed original-enhanced sets by replacing underperforming enhanced images with their corresponding originals. Next, we conduct a detailed analysis by evaluating all generated groups for quality and detection-performance attributes. Finally, we perform an overlap analysis between the enhanced sets to identify cases where the enhanced images of different enhancement algorithms unanimously outperform, equally perform, or underperform the original images. Our analysis reveals that, when evaluated individually, most enhanced images achieve equal or superior performance compared to their original counterparts. The proposed per-image evaluation uncovers variations in detection performance that are not apparent in whole-set evaluation: only a small percentage of enhanced images are responsible for the overall negative impact on detection. We also find that over-enhancement may deteriorate object detection performance.
Lastly, we note that enhanced images reveal hidden objects that were not annotated due to the low visibility of the original images.
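The per-image protocol this abstract describes can be sketched as a simple grouping rule, with hypothetical per-image detection scores (e.g. per-image AP) standing in for the real metrics; this illustrates only the grouping logic, not the authors' code.

```python
def build_mixed_set(scores_orig, scores_enh):
    """Split enhanced images into outperforming vs. underperforming groups
    by per-image detection score, then build a mixed set that keeps an
    enhanced image only when it matches or beats its original.

    Illustrative sketch; `scores_orig`/`scores_enh` are hypothetical
    per-image metrics aligned by index.
    """
    outperform, underperform, mixed = [], [], []
    for i, (s_orig, s_enh) in enumerate(zip(scores_orig, scores_enh)):
        if s_enh >= s_orig:
            outperform.append(i)             # group (1): equal or better
            mixed.append(("enhanced", i))
        else:
            underperform.append(i)           # group (2): worse after enhancement
            mixed.append(("original", i))    # fall back to the original image
    return outperform, underperform, mixed
```

The mixed set is what the abstract uses to show that replacing the few underperforming enhanced images removes the apparent whole-set penalty.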
BG-YOLO: A Bidirectional-Guided Method for Underwater Object Detection
Degraded underwater images decrease the accuracy of underwater object detection. Existing research uses image enhancement methods to improve the visual quality of images, but enhancement is not always beneficial for underwater object detection and can seriously degrade detector performance. To alleviate this problem, we propose a bidirectional-guided method for underwater object detection, referred to as BG-YOLO. The network is organized into an image enhancement branch and an object detection branch arranged in parallel. The image enhancement branch consists of a cascade of an image enhancement subnet and an object detection subnet; the object detection branch consists of a detection subnet only. A feature-guided module connects the shallow convolution layers of the two branches. When training the image enhancement branch, the object detection subnet in that branch guides the image enhancement subnet to be optimized in the direction most conducive to the detection task. The shallow feature map of the trained image enhancement branch is fed to the feature-guided module, which constrains the optimization of the object detection branch through a consistency loss and prompts it to learn more detailed information about the objects, enhancing detection performance. At inference time, only the object detection branch is retained, so no additional computational cost is introduced. Extensive experiments demonstrate that the proposed method significantly improves the detection performance of the YOLOv5s object detection network (mAP increased by up to 2.9%) while maintaining the same inference speed as YOLOv5s (132 fps).
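The feature-guided consistency constraint described in this abstract can be sketched generically as an L2 penalty between the two branches' shallow feature maps, with the enhancement branch's features treated as a fixed target. This NumPy toy illustrates only the idea; BG-YOLO's actual loss formulation and layer choices may differ.

```python
import numpy as np

def feature_consistency_loss(feat_det, feat_enh):
    """Mean-squared-error consistency between the detection branch's shallow
    feature map and the (frozen) enhancement branch's shallow feature map.

    In a real training loop gradients would flow only into the detection
    branch; here both inputs are plain arrays. Sketch of the generic
    feature-guided idea, not BG-YOLO's exact loss.
    """
    feat_det = np.asarray(feat_det, dtype=np.float64)
    feat_enh = np.asarray(feat_enh, dtype=np.float64)
    return float(np.mean((feat_det - feat_enh) ** 2))
```

Identical feature maps give zero loss; the further the detection branch's features drift from the guidance, the larger the penalty.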