Catalogue Search | MBRL
Search Results Heading
Explore the vast range of titles available.
1,261 result(s) for "ship detection"
SAR Ship Detection Dataset (SSDD): Official Release and Comprehensive Data Analysis
2021
The SAR Ship Detection Dataset (SSDD) is the first open dataset widely used to research state-of-the-art deep learning (DL) technology for ship detection in Synthetic Aperture Radar (SAR) imagery. According to our investigation, up to 46.59% of the 161 public reports surveyed select SSDD to study DL-based SAR ship detection. This reveals the popularity and great influence of SSDD in the SAR remote sensing community. Nevertheless, the coarse annotations and ambiguous usage standards of its initial version hinder fair methodological comparisons and effective academic exchange. Additionally, its single-function horizontal-vertical rectangle bounding box (BBox) labels can no longer satisfy the current research needs of the rotatable bounding box (RBox) task and the pixel-level polygon segmentation task. Therefore, to address these two dilemmas, in this review, advocated by the publisher of SSDD, we make an official release of SSDD based on its initial version. The official release covers three variants: (1) a bounding box SSDD (BBox-SSDD), (2) a rotatable bounding box SSDD (RBox-SSDD), and (3) a polygon segmentation SSDD (PSeg-SSDD). We relabel the ships in SSDD more carefully and finely, and then explicitly formulate strict usage standards, e.g., (1) the training-test division, (2) the inshore-offshore protocol, (3) a reasonable definition of ship size, (4) the determination of densely distributed small ship samples, and (5) the determination of ship samples densely berthed in parallel at ports. These usage standards are formulated objectively based on the usage differences among the existing 75 (161 × 46.59%) public reports. They will be beneficial for fair method comparisons and effective academic exchange in the future. Most notably, we conduct a comprehensive data analysis on BBox-SSDD, RBox-SSDD, and PSeg-SSDD.
Our analysis can provide valuable guidance for future scholars elaborately designing DL-based SAR ship detectors with higher accuracy and stronger robustness when using SSDD.
Journal Article
Improved YOLOv3 Based on Attention Mechanism for Fast and Accurate Ship Detection in Optical Remote Sensing Images
by Deng, Dexiang; Shi, Wenxuan; Chen, Liqiong
in computer vision; data collection; dilated attention module
2021
Ship detection is an important but challenging task in the field of computer vision, partly due to the minuscule size of ship objects in optical remote sensing images and interference from cloud occlusion and strong waves. Most current ship detection methods focus on boosting detection accuracy while neglecting detection speed. However, increasing detection speed is also indispensable, as it enables timely ocean rescue and maritime surveillance. To solve these problems, we propose an improved YOLOv3 (ImYOLOv3) based on an attention mechanism, aiming to achieve the best trade-off between detection accuracy and speed. First, to realize high-efficiency ship detection, we adopt the off-the-shelf YOLOv3 as our basic detection framework due to its fast speed. Second, to boost the performance of the original YOLOv3 on small ships, we design a novel and lightweight dilated attention module (DAM) to extract discriminative features for ship targets, which can be easily embedded into the basic YOLOv3. The integrated attention mechanism helps our model learn to suppress irrelevant regions while highlighting salient features useful for the ship detection task. Furthermore, we introduce a multi-class ship dataset (MSD) and explicitly define supervised subclasses according to the scales and moving states of ships. Extensive experiments verify the effectiveness and robustness of ImYOLOv3 and show that our method can accurately detect ships of different scales against different backgrounds at real-time speed.
Journal Article
A Novel Detector Based on Convolution Neural Networks for Multiscale SAR Ship Detection in Complex Background
by Liu, Yijing; Dai, Wenxin; Mao, Yuqing
in complex background; convolutional neural network (CNN); multiscale and small ship detection
2020
Convolutional neural network (CNN)-based detectors have shown great performance on ship detection in synthetic aperture radar (SAR) images. However, the performance of current models remains unsatisfactory for detecting multiscale and small-size ships against complex backgrounds. To address this problem, we propose a novel CNN-based SAR ship detector, which consists of three subnetworks: the Fusion Feature Extractor Network (FFEN), a Region Proposal Network (RPN), and the Refine Detection Network (RDN). Instead of using a single feature map, we fuse feature maps in bottom-up and top-down ways and generate proposals from each fused feature map in the FFEN. We further merge features generated by the region-of-interest (RoI) pooling layer in the RDN. Based on this feature representation strategy, the constructed CNN framework significantly enhances location and semantic information for multiscale ships, in particular for small ships. In addition, a residual block is introduced to increase the network depth, through which the detection precision is further improved. The public SAR ship dataset (SSDD) and China Gaofen-3 satellite SAR images are used to validate the proposed method. Our method shows excellent performance in detecting multiscale and small-size ships compared with several competitive models and exhibits high potential for practical application.
Journal Article
Deep Learning-Based Hierarchical Ship Detection and Classification in Bad Weather Conditions
by Becerikli, Yaşar; İzala, Yahya
in Artificial Intelligence; Atmospheric aerosols; Classification
2025
This study focuses on the detection and classification of ships in satellite images under adverse weather conditions (rain, snow, and fog). To mitigate the negative impacts of weather conditions, the Two Stage Knowledge Learning and Multi-stage Progressive Refinement Network models were applied separately, and their effects on ship detection were compared. It was observed that reducing the impact of bad weather resulted in an approximate 8% increase in the mAP value during the ship detection phase. Extremely small ships, appearing tiny due to the satellite’s viewing distance, were successfully identified. The utilization of the Feature Pyramid Network for positioning small ships, combined with YOLOv8’s center point approach to address overlapping situations, seems crucial. To prevent the misclassification of very small ships as land masses or small islets, a new dataset was created. This dataset was used for training an enhanced variational autoencoder for eliminating false negative samples. This dataset also facilitated the elimination of potential land masses that could be erroneously identified as ships. In this study, the Detection, Localization, Recognition, and Identification phases were designed for independent optimization. The proposed model incorporates the Pyramid Residual Attention Inception Blocks architecture for the detection, classification, and identification phases, while YOLOv8 is employed for the positioning phase. The F1 score values achieved independently for the detection, localization, recognition and identification phases were found to be 94.3%, 84.0%, 74.1%, 88.2%, and 63.9%, respectively. Moreover, the overall F1 score of the model was determined to be 96.0%, 85.4%, 65.0%, 63.0%, and 55.0%.
Journal Article
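The phase-wise F1 scores reported in the abstract above combine precision and recall into one number. As a quick reference, here is a minimal sketch of how F1 is derived from true-positive, false-positive, and false-negative counts (the raw counts themselves are not given in the abstract; the values below are illustrative only):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from detection counts.

    tp: correctly detected ships; fp: false alarms; fn: missed ships.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Illustrative example: 9 correct detections, 1 false alarm, 1 miss.
p, r, f = precision_recall_f1(9, 1, 1)
```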
H-YOLO: A Single-Shot Ship Detection Approach Based on Region of Interest Preselected Network
2020
Ship detection from high-resolution optical satellite images is still an important task that deserves optimal solutions. This paper introduces a novel approach for high-resolution images based on the preselection of a region of interest (RoI). The preselection network first identifies and extracts regions of interest from input images. To efficiently match ship candidates, the principle of our approach is to distinguish suspected areas from the images based on hue, saturation, value (HSV) differences between ships and the background. The approach is evaluated on a large ship dataset consisting of Google Earth images and the HRSC2016 dataset. The experiment shows that the H-YOLO network, which uses the same weight training from a set of remote sensing images, has a 19.01% higher recognition rate and a 16.19% higher accuracy than applying the you only look once (YOLO) network alone. After image preprocessing, the value of the intersection over union (IoU) is also greatly improved.
Journal Article
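The H-YOLO abstract above reports improved intersection over union (IoU) after preprocessing. IoU is the standard overlap metric between a predicted and a ground-truth box; a minimal sketch is shown below (the `(x1, y1, x2, y2)` corner format is an assumption for illustration, not something the paper specifies):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle corners.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```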
Enhancing YOLO-Based SAR Ship Detection with Attention Mechanisms
by Rocha, Ranyeri do Lago; Figueiredo, Felipe A. P. de
in Accuracy; Aircraft detection; Architecture
2025
This study enhances Synthetic Aperture Radar (SAR) ship detection by integrating attention mechanisms, namely Bi-Level Routing Attention (BRA), the Swin Transformer, and a Convolutional Block Attention Module (CBAM), into state-of-the-art YOLO architectures (YOLOv11 and v12). Addressing challenges like small ship sizes and complex maritime backgrounds in SAR imagery, we systematically evaluate the impact of adding and replacing attention layers at strategic positions within the models. Experiments reveal that replacing the original attention layer at position 4 (C3k2 module) with the CBAM in YOLOv12 achieves optimal performance, attaining an mAP@0.5 of 98.0% on the SAR Ship Dataset (SSD), surpassing the baseline YOLOv12 (97.8%) and prior works. The optimized CBAM-enhanced YOLOv12 also reduces computational costs (5.9 GFLOPS vs. 6.5 GFLOPS in the baseline). Cross-dataset validation on the SAR Ship Detection Dataset (SSDD) confirms consistent improvements, underscoring the efficacy of targeted attention-layer replacement for SAR-specific challenges. Additionally, tests on the SADD and MSAR datasets demonstrate that this optimization generalizes beyond ship detection, yielding gains in aircraft detection and multi-class SAR object recognition. This work establishes a robust framework for efficient, high-precision maritime surveillance using deep learning.
Journal Article
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery
by Xing, Xiangwei; Ji, Kefeng; Zhou, Shilin
in adaptive ship detection; Confidence intervals; confidence probability
2016
With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need for ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery that can handle a wide range of sensors, imaging modes, and resolutions. The scheme comprises two main stages: ship candidate detection and ship discrimination. First, we propose an adaptive land-masking method using ship size and pixel size. Second, taking into account the imaging mode, incidence angle, and polarization channel of the SAR imagery, we implement adaptive ship candidate detection by applying different strategies to SAR images of different resolutions. Finally, aiming at different types of typical false alarms, we propose a comprehensive ship discrimination method based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the proposed adaptive scheme detects ship targets in a fast, efficient, and robust way.
Journal Article
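The candidate-detection stage described above flags bright pixels against the sea clutter using confidence-level-based thresholds. As a crude, global stand-in for that idea (the paper's actual method adapts per imaging mode and resolution; the `k` factor and scene-wide statistics here are illustrative assumptions), one might threshold intensities at a few standard deviations above the mean:

```python
import statistics

def detect_bright_pixels(intensities, k=3.0):
    """Return indices of pixels brighter than mean + k * stdev of the scene.

    A toy global threshold: real adaptive detectors estimate clutter
    statistics locally and pick k from the desired confidence probability.
    """
    mu = statistics.fmean(intensities)
    sigma = statistics.pstdev(intensities)
    threshold = mu + k * sigma
    return [i for i, v in enumerate(intensities) if v > threshold]

# A bright return (100) against uniform clutter (99 pixels of value 10)
# stands out well above the threshold.
candidates = detect_bright_pixels([10] * 99 + [100])
```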
R-CNN-Based Ship Detection from High Resolution Remote Sensing Imagery
by Zhang, Shaoming; Wang, Jianmei; Xu, Kunyuan
in Algorithms; Artificial neural networks; Deep learning
2019
Offshore and inland river ship detection has been studied on both synthetic aperture radar (SAR) and optical remote sensing imagery. However, classic ship detection methods based on SAR images can cause a high false-alarm ratio and are influenced by the sea surface model, especially on inland rivers and in offshore areas, while classic detection methods based on optical images do not perform well on small and gathering ships. This paper adopts the idea of deep networks and presents a fast region-based convolutional neural network (R-CNN) method to detect ships from high-resolution remote sensing imagery. First, we choose GaoFen-2 optical remote sensing images with a resolution of 1 m and preprocess them with a support vector machine (SVM) to divide the large detection area into small regions of interest (ROI) that may contain ships. Then, we apply ship detection algorithms based on a region-based convolutional neural network (R-CNN) to the ROI images. To improve the detection of small and gathering ships, we adopt an effective target detection framework, Faster-RCNN, and improve the structure of its original convolutional neural network (CNN), VGG16, by using multiresolution convolutional features and performing ROI pooling on a larger feature map in the region proposal network (RPN). Finally, we compare the most effective classic ship detection method, the deformable part model (DPM), two other widely used target detection frameworks, the single shot multibox detector (SSD) and YOLOv2, the original VGG16-based Faster-RCNN, and our improved Faster-RCNN. Experimental results show that our improved Faster-RCNN achieves higher recall and accuracy for small and gathering ships, providing a very effective method for offshore and inland river ship detection based on high-resolution remote sensing imagery.
Journal Article
Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks
by Guo, Zhi; Yan, Menglong; Sun, Xian
in convolution neural network; high-level semantic; multiscale detection networks
2018
Ship detection has long played a significant role in the field of remote sensing, but it remains challenging. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of dense object detection, and the redundancy of the detection region. To solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN), which can effectively detect ships in different scenes, including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems caused by the narrow width of ships. Compared with previous multiscale detectors such as the Feature Pyramid Network (FPN), DFPN builds high-level semantic feature maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, for the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we propose multiscale region-of-interest (ROI) Align to maintain the completeness of the semantic and spatial information. Experiments on remote sensing images from Google Earth show that our detection method based on the R-DFPN representation achieves state-of-the-art performance.
Journal Article
CRTransSar: A Visual Transformer Based on Contextual Joint Representation Learning for SAR Ship Detection
by Yao, Baidong; Wu, Bocai; Xiang, Haibing
in Algorithms; Artificial neural networks; data collection
2022
Synthetic aperture radar (SAR) image target detection is widely used in military, civilian, and other fields. However, existing detection methods have low accuracy due to the strong scattering of SAR image targets, unclear edge contour information, multiple scales, strong sparseness, background interference, and other characteristics. In response, for SAR target detection tasks, this paper combines the global contextual information perception of transformers with the local feature representation capabilities of convolutional neural networks (CNNs) to propose a visual transformer framework based on contextual joint-representation learning, referred to as CRTransSar. First, this paper introduces the latest Swin Transformer as the basic architecture. Next, it incorporates the CNN's local information capture and presents the design of a backbone, called CRbackbone, based on contextual joint-representation learning, to extract richer contextual feature information while strengthening SAR target feature attributes. Furthermore, a new cross-resolution attention-enhancement neck, called CAENeck, is designed to enhance the feature representation of multiscale SAR targets. The mAP of our method on the SSDD dataset attains 97.0%, reaching state-of-the-art levels. In addition, based on the HISEA-1 commercial SAR satellite, which has been launched into orbit and in whose development our research group participated, we release a larger-scale SAR multiclass target detection dataset, called SMCDD, which verifies the effectiveness of our method.
Journal Article
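The transformer backbones referenced in the CRTransSar abstract above rest on scaled dot-product attention, which mixes value vectors according to query-key similarity. A pure-Python sketch of that core operation is shown below (toy dimensions and plain full attention, not the paper's Swin-style windowed variant):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of row vectors (lists of floats).
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# A query aligned with the first key draws most of its output
# from the first value vector.
result = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]],
                   [[1.0, 0.0], [0.0, 1.0]])
```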