Search Results

8 results for "artificial reef detection dataset"
Artificial Reef Detection Method for Multibeam Sonar Imagery Based on Convolutional Neural Networks
Artificial reef detection in multibeam sonar images is an important measure for the monitoring and assessment of biological resources in marine ranching. To detect artificial reefs in multibeam sonar images accurately, this paper proposes an artificial reef detection framework for multibeam sonar images based on convolutional neural networks (CNNs). First, a large-scale multibeam sonar artificial reef detection dataset, FIO-AR, was established and made public to promote the development of artificial reef detection in multibeam sonar imagery. Then, a CNN-based detection framework was designed to detect the various artificial reefs in multibeam sonar images. Using the FIO-AR dataset, the proposed method is compared with several state-of-the-art artificial reef detection methods. The experimental results show that the proposed method achieves an F1-score of 86.86% and an intersection-over-union (IoU) of 76.74%, outperforming the state-of-the-art methods it is compared against.
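As a point of reference for the metrics quoted above, the following minimal sketch shows one common way to compute an F1-score and intersection-over-union from a predicted and an annotated binary reef mask; it is illustrative only, not the authors' evaluation code, and the random masks stand in for real sonar annotations.

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, truth: np.ndarray):
    """F1-score and intersection-over-union for two binary masks (True = reef pixel)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return f1, iou

# Placeholder masks; in practice these would be the predicted and annotated reef masks.
rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
truth = rng.random((256, 256)) > 0.5
print(f1_and_iou(pred, truth))
```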
SCSFish2025: a large dataset from South China Sea for coral reef fish identification
Coral reefs are among the most biodiverse ecosystems on Earth and are extremely important for marine ecosystems. However, coral reefs are rapidly degrading globally, and for this reason in-situ online monitoring systems are being used to monitor coral reef ecosystems in real time. At the same time, artificial intelligence technology, particularly deep learning, is playing an increasingly important role in the study of coral reef ecology, especially in the automatic detection and identification of coral reef fish. However, deep learning is essentially a data-driven technique that relies on high-quality datasets for training, while existing fish identification datasets suffer from low resolution and inaccurate labeling, which limits the application of deep learning to coral reef fish identification. To better apply deep learning to real-time automatic detection and identification of coral reef fish from data collected by the in-situ online monitoring system, this paper proposes SCSFish2025, a high-resolution, species-rich, and well-labeled coral reef fish dataset, the first publicly available coral reef fish dataset from the waters of China's Nansha Islands. SCSFish2025 contains 11,956 high-resolution underwater surveillance images and over 120,000 bounding boxes covering 30 fish species, manually labeled by experienced fish identification experts, with sub-category labels for blurring, occlusion, and altered pose. Furthermore, the paper establishes a benchmark for the dataset by analyzing the detection performance of deep learning object detection techniques on it, using four state-of-the-art or typical object detection models as baselines. The best baseline model, RT-DETRv2, achieves an mAP@50 of 0.9960 on five-fold cross-validation of the training set and 0.7486 on the independent test set. The release of this dataset will help promote the development of AI technology for the automatic detection and identification of coral reef fish and provide strong support for the study of marine biodiversity and ecosystems. The project code and dataset are available at https://github.com/FudanZhengSYSU/SCSFish2025.
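The benchmark above reports results under five-fold cross-validation of the training set. The sketch below illustrates how such a split is typically produced with scikit-learn; the file names are hypothetical placeholders, and the detector training itself (e.g. an RT-DETR-style model) is only indicated in a comment.

```python
from sklearn.model_selection import KFold

# Hypothetical image identifiers standing in for the 11,956 dataset images.
image_ids = [f"img_{i:05d}.jpg" for i in range(11956)]

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(image_ids)):
    train_ids = [image_ids[i] for i in train_idx]
    val_ids = [image_ids[i] for i in val_idx]
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} validation images")
    # A detector (e.g. an RT-DETR-style model) would be trained on train_ids
    # and scored with mAP@50 on val_ids at this point.
```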
YOLO-AR: An Improved Artificial Reef Segmentation Algorithm Based on YOLOv11
Artificial reefs serve as a crucial measure for preventing habitat degradation, enhancing primary productivity in marine areas, and restoring and increasing fishery resources, making them an essential component of marine ranching development. Accurate identification and detection of artificial reefs are vital for ecological conservation and fishery resource management. To achieve precise segmentation of artificial reefs in multibeam sonar images, this study proposes an improved YOLOv11-based model, YOLO-AR. Specifically, the DCCA (Dynamic Convolution Coordinate Attention) module is introduced into the backbone network to reduce the model's sensitivity to complex seafloor environments. Additionally, a small-object detection layer is added to the neck network, along with the ultra-lightweight dynamic upsampling operator DySample (Dynamic Sampling), which enhances the model's ability to segment small artificial reefs. Furthermore, some standard convolution layers in the backbone are replaced with ADown (Advanced Downsampling) to reduce the model's complexity. Experimental results demonstrate that YOLO-AR achieves an mAP@0.5 of 0.912, an intersection-over-union (IoU) of 0.832, and an F1-score of 0.908, with 2.67 million parameters and a model size of 5.58 MB. Compared with other advanced segmentation models, YOLO-AR maintains a more lightweight structure while delivering superior segmentation performance. In real-world multibeam sonar images, YOLO-AR accurately segments artificial reefs, making it highly effective for practical applications.
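The DCCA, DySample, and ADown modules are specific to the paper and are not reproduced here; as a rough illustration of the underlying idea of coordinate attention (encoding position by pooling along height and width separately), a generic PyTorch sketch follows. It is an assumption-laden stand-in, not the YOLO-AR implementation.

```python
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    """Generic coordinate-attention block; the paper's DCCA adds dynamic convolution
    on top of this general idea."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Pool along width and along height separately to keep positional cues.
        x_h = x.mean(dim=3, keepdim=True)                        # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                    # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w

feat = torch.randn(2, 64, 40, 40)   # dummy sonar feature map
print(CoordAttention(64)(feat).shape)
```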
An Improved Machine Learning-Based Method for Unsupervised Characterisation for Coral Reef Monitoring in Earth Observation Time-Series Data
This study presents an innovative approach to automated coral reef monitoring using satellite imagery, addressing challenges in image quality assessment and correction. The method employs Principal Component Analysis (PCA) coupled with clustering for efficient image selection and quality evaluation, followed by a machine learning-based cloud removal technique using an XGBoost model trained to detect land and cloudy pixels over water. The workflow incorporates depth correction using Lyzenga’s algorithm and superpixel analysis, culminating in an unsupervised classification of reef areas using KMeans. Results demonstrate the effectiveness of this approach in producing consistent, interpretable classifications of reef ecosystems across different imaging conditions. This study highlights the potential for scalable, autonomous monitoring of coral reefs, offering valuable insights for conservation efforts and climate change impact assessment in shallow marine environments.
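A minimal sketch of two of the steps named above, depth correction with a Lyzenga-style depth-invariant index followed by unsupervised K-means classification, is given below. The band arrays are synthetic placeholders, the bands are assumed to be deep-water corrected and strictly positive, and the PCA-based image selection, XGBoost cloud masking, and superpixel analysis from the abstract are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def lyzenga_depth_invariant_index(band_i, band_j):
    """Depth-invariant index for two water-penetrating bands (Lyzenga-style).

    band_i, band_j: deep-water-corrected, strictly positive radiance arrays of equal shape.
    """
    xi, xj = np.log(band_i), np.log(band_j)
    a = (xi.var() - xj.var()) / (2.0 * np.cov(xi.ravel(), xj.ravel())[0, 1])
    k_ratio = a + np.sqrt(a * a + 1.0)       # ratio of attenuation coefficients
    return xi - k_ratio * xj

# Hypothetical reflectance bands (blue, green) over a shallow reef scene.
rng = np.random.default_rng(1)
blue = rng.uniform(0.01, 0.2, size=(200, 200))
green = rng.uniform(0.01, 0.2, size=(200, 200))

dii = lyzenga_depth_invariant_index(blue, green)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(dii.reshape(-1, 1))
reef_map = labels.reshape(dii.shape)          # unsupervised class per pixel
print(np.bincount(labels))
```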
Estimation of Artificial Reef Pose Based on Deep Learning
Artificial reefs are man-made structures submerged in the ocean, and the design of these structures plays a crucial role in determining their effectiveness. Precisely measuring the configuration of artificial reefs is vital for creating suitable habitats for marine organisms. This study presents a novel approach for automated detection of artificial reefs by recognizing their key features and key points. Two enhanced models, YOLOv8n-PoseRFSA and YOLOv8n-PoseMSA, are introduced based on the YOLOv8n-Pose architecture. The YOLOv8n-PoseRFSA model exhibits a 2.3% increase in accuracy in pinpointing target key points compared to the baseline YOLOv8n-Pose model, showcasing notable enhancements in recall rate, mean average precision (mAP), and other evaluation metrics. In response to the demand for swift identification in mobile fishing scenarios, the YOLOv8n-PoseMSA model is proposed, leveraging MobileNetV3 to replace the backbone network structure. This model reduces the computational burden to 33% of the original model while largely preserving recognition accuracy, with only a minimal drop. The methodology outlined in this research enables real-time monitoring of artificial reef deployments, allowing for the precise quantification of their structural characteristics and thereby significantly enhancing monitoring efficiency and convenience. By better assessing the layout of artificial reefs and their ecological impact, this approach offers valuable data support for the future planning and implementation of reef projects.
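For readers who want to experiment with keypoint estimation on reef imagery, the sketch below uses the stock Ultralytics YOLOv8 pose API; the paper's YOLOv8n-PoseRFSA and YOLOv8n-PoseMSA variants are not assumed to be available, and "reef_pose.pt" and the image path are hypothetical placeholders for a checkpoint trained on reef key points.

```python
from ultralytics import YOLO

# Hypothetical checkpoint trained on artificial-reef key points (not the paper's models).
model = YOLO("reef_pose.pt")
results = model.predict("reef_deployment.jpg", conf=0.5)

for r in results:
    # r.keypoints.xy holds one (num_keypoints, 2) tensor of pixel coordinates per detection.
    for box, kpts in zip(r.boxes.xyxy, r.keypoints.xy):
        print("reef bbox:", box.tolist())
        print("key points:", kpts.tolist())
```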
GeoAI for Marine Ecosystem Monitoring: A Complete Workflow to Generate Maps from AI Model Predictions
Mapping and monitoring marine ecosystems pose several challenges for data collection and processing: water depth, restricted access to locations, instrumentation costs, and weather constraints for sampling, among others. Nowadays, Artificial Intelligence (AI) and Geographic Information System (GIS) open-source software can be combined into new kinds of workflows to annotate and predict objects directly on georeferenced raster data (e.g., orthomosaics). Here, we describe and share the code of a generic method to train a deep learning model with spatial annotations and use it to generate model predictions directly as spatial features. This workflow has been tested and validated in three use cases related to marine ecosystem monitoring at different geographic scales: (i) segmentation of corals on orthomosaics made of underwater images to automate coral reef habitat mapping, (ii) detection and classification of fishing vessels on remote sensing satellite imagery to estimate a proxy of fishing effort, and (iii) segmentation of marine species and habitats on underwater images with a simple geolocation. Models have been successfully trained, and the model predictions are displayed as maps in all three use cases.
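The step of turning model predictions into spatial features can be sketched with standard open-source GIS tooling, for example by vectorizing a predicted class raster into georeferenced polygons; the file names below are hypothetical, and this is not the workflow's published code.

```python
import rasterio
from rasterio import features
import geopandas as gpd
from shapely.geometry import shape

# Hypothetical model output: a raster of predicted class ids aligned with the orthomosaic.
with rasterio.open("coral_prediction.tif") as src:
    pred = src.read(1).astype("int32")
    transform, crs = src.transform, src.crs

# Vectorize contiguous regions of each predicted class (0 is treated as background).
records = [
    {"geometry": shape(geom), "class_id": int(value)}
    for geom, value in features.shapes(pred, mask=pred > 0, transform=transform)
]
gdf = gpd.GeoDataFrame(records, geometry="geometry", crs=crs)
gdf.to_file("coral_prediction.gpkg", driver="GPKG")   # ready to open in a GIS
```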
Towards Low-Cost Classification for Novel Fine-Grained Datasets
Fine-grained categorization is an essential field in classification, a subfield of object recognition that aims to differentiate subordinate classes. Fine-grained image classification concentrates on distinguishing between similar, hard-to-differentiate types or species, for example flowers, birds, or specific animals such as dogs or cats, or on identifying airplane makes or models. An important step towards fine-grained classification is the acquisition of datasets and baselines; hence, we propose a holistic system and two novel datasets, covering reef fish and butterflies, for fine-grained classification. The butterflies and fish can be imaged at various locations in the image plane, causing variations due to translation, rotation, and deformation in multiple directions, and scales can differ depending on the position of the image acquisition device. We evaluate traditional algorithms based on quantized rotation- and scale-invariant local image features, as well as convolutional neural networks (CNNs) whose pre-trained models are used to extract features. The comprehensive evaluation shows that the CNN features calculated using the pre-trained models outperform the other image representations. The proposed system can prove instrumental for various purposes, such as education, conservation, and scientific research. The code, models, and datasets are publicly available.
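A minimal sketch of the pre-trained-CNN feature extraction that the abstract compares against hand-crafted features might look as follows; the backbone choice (ResNet-50), the image paths, and the shallow classifier mentioned in the closing comment are assumptions, not the paper's exact setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained backbone used as a fixed feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()      # drop the classification head, keep 2048-d features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract(path: str) -> torch.Tensor:
    """Return a 2048-d feature vector for one image file (hypothetical path)."""
    with torch.no_grad():
        return backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

# Features stacked over a training split would then feed a shallow classifier,
# e.g. sklearn.linear_model.LogisticRegression, for the fine-grained labels.
```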
Semiautomated Mapping of Benthic Habitats and Seagrass Species Using a Convolutional Neural Network Framework in Shallow Water Environments
Benthic habitats are structurally complex and ecologically diverse ecosystems that are severely vulnerable to human stressors. Consequently, marine habitats must be mapped and monitored to provide the information necessary to understand ecological processes and guide management actions. In this study, we propose a semiautomated framework for the detection and mapping of benthic habitats and seagrass species using convolutional neural networks (CNNs). Benthic habitat field data from a geo-located towed camera and high-resolution satellite images were integrated to evaluate the proposed framework. Features extracted from pre-trained CNNs and a "bag of features" (BOF) algorithm were used for benthic habitat and seagrass species detection. Furthermore, the correctly detected images were used as ground truth samples for training and validating CNNs with simple architectures. These CNNs were evaluated for their accuracy in benthic habitat and seagrass species mapping using high-resolution satellite images. Two study areas, Shiraho and Fukido (located on Ishigaki Island, Japan), were used to evaluate the proposed model: seven benthic habitats were classified in the Shiraho area, and four seagrass species were mapped in Fukido cove. Analysis showed that the overall accuracy of benthic habitat detection in Shiraho and seagrass species detection in Fukido was 91.5% (7 classes) and 90.4% (4 species), respectively, while the overall accuracy of benthic habitat and seagrass mapping in Shiraho and Fukido was 89.9% and 91.2%, respectively.
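One way to picture the mapping stage, applying a trained patch classifier across a satellite scene to produce a habitat class per patch, is sketched below; the patch size, band count, and toy model are assumptions rather than the authors' configuration.

```python
import numpy as np
import torch

PATCH = 32  # assumed patch size in pixels

def map_scene(scene: np.ndarray, model: torch.nn.Module) -> np.ndarray:
    """scene: (bands, H, W) float array; returns an (H // PATCH, W // PATCH) class map."""
    model.eval()
    _, h, w = scene.shape
    out = np.zeros((h // PATCH, w // PATCH), dtype=np.int64)
    with torch.no_grad():
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = scene[:, i * PATCH:(i + 1) * PATCH, j * PATCH:(j + 1) * PATCH]
                logits = model(torch.from_numpy(patch).float().unsqueeze(0))
                out[i, j] = int(logits.argmax(dim=1))
    return out

# Example with a dummy 3-band scene and an untrained toy classifier (7 habitat classes):
scene = np.random.rand(3, 256, 256).astype(np.float32)
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * PATCH * PATCH, 7))
print(map_scene(scene, toy_model).shape)   # -> (8, 8)
```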