Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
240 result(s) for "ship object detection"
Research on the Coordinate Attention Mechanism Fuse in a YOLOv5 Deep Learning Detector for the SAR Ship Detection Task
by Liu, Yingchun; Xie, Fang; Lin, Baojun
in Algorithms; Artificial satellites in remote sensing; coordinate attention mechanism
2022
The real-time performance of ship detection is an important index in the marine remote sensing detection task. Because the computing resources on a satellite are limited by the solar array size and by radiation-resistant electronic components, information extraction tasks are usually carried out after the image has been transmitted to the ground. In recent years, however, one-stage target detectors such as the You Only Look Once Version 5 (YOLOv5) deep learning framework have shown powerful performance while remaining lightweight, offering an implementation scheme for on-orbit inference that shortens the delay in ship detection. Optimizing such lightweight models therefore has important research significance for onboard SAR image processing. In this paper, we studied the fusion of two lightweight components: the Coordinate Attention (CA) mechanism module and the YOLOv5 detector. We propose a novel lightweight end-to-end object detection framework, YOLO Coordinate Attention SAR Ship (YOLO-CASS), which fuses a CA module at a suitable position in the backbone for the SAR ship target detection task. Experimental results on the SSDD synthetic aperture radar (SAR) remote sensing imagery indicate that our method shows significant gains in both efficiency and performance and has the potential to be developed into onboard processing on a SAR satellite platform. The techniques we explored provide a solution for improving the performance of lightweight deep learning-based object detection frameworks.
Journal Article
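For readers who want a concrete picture of the coordinate attention module that the abstract above fuses into the YOLOv5 backbone, here is a minimal PyTorch sketch of a CA block. The reduction ratio, activation choices, and insertion point are assumptions for illustration, not the authors' exact YOLO-CASS configuration.

```python
# Minimal sketch of a coordinate attention (CA) block of the kind the abstract
# fuses into the YOLOv5 backbone. Reduction ratio and activations are assumed.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        # Pool along each spatial axis separately to keep positional information.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)          # encode both directions together
        y = self.act(self.bn1(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)             # back to (B, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))     # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w))     # attention along width
        return x * a_h * a_w


if __name__ == "__main__":
    feat = torch.randn(1, 128, 64, 64)            # a hypothetical backbone feature map
    print(CoordinateAttention(128)(feat).shape)   # torch.Size([1, 128, 64, 64])
```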
LHSDNet: A Lightweight and High-Accuracy SAR Ship Object Detection Algorithm
by Wang, Yue; Wu, Hao; Ji, Penghui
in Accuracy; Algorithms; Artificial satellites in remote sensing
2024
At present, the majority of deep learning-based ship object detection algorithms concentrate predominantly on improving recognition accuracy while overlooking algorithmic complexity. These complex algorithms demand significant computational resources, making them unsuitable for deployment on resource-constrained edge devices, such as airborne and spaceborne platforms, and thereby limiting their practicality. To alleviate this problem, a lightweight and high-accuracy synthetic aperture radar (SAR) ship image detection network (LHSDNet) is proposed. First, GhostHGNetV2 was utilized as the feature extraction network, with GhostConv reducing the network's computational load. Next, a lightweight feature fusion network was designed to combine shallow and deep features through lightweight convolutions, effectively preserving more information while minimizing computational requirements. Lastly, the feature extraction module was integrated through parameter sharing, and the detection head was made lightweight to further save computing resources. Our experimental results demonstrate that the proposed LHSDNet model increases mAP50 by 0.7% in comparison to the baseline model, while reducing parameter count, computational demand, and model file size by 48.33%, 51.85%, and 41.26%, respectively. LHSDNet achieves a balance between precision and computing resources, rendering it more appropriate for edge device deployment.
Journal Article
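The GhostConv operation named in the LHSDNet abstract can be sketched as a thin primary convolution followed by a cheap depthwise convolution that generates the remaining "ghost" channels. The 50/50 channel split, kernel sizes, and activations below follow common GhostNet-style defaults and are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn as nn


class GhostConv(nn.Module):
    """Half the output channels come from a standard convolution, the other
    half from a cheap depthwise convolution applied to that result."""
    def __init__(self, c_in: int, c_out: int, k: int = 1, s: int = 1):
        super().__init__()
        c_mid = c_out // 2  # assumes an even output width
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_mid, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_mid, c_mid, 5, 1, 2, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 80, 80])
```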
An Anchor-Free Method Based on Adaptive Feature Encoding and Gaussian-Guided Sampling Optimization for Ship Detection in SAR Imagery
2022
Recently, deep-learning methods have yielded rapid progress for object detection in synthetic aperture radar (SAR) imagery. Detecting ships in SAR imagery nevertheless remains a great challenge because of ships' small size and easily confused detail features. This article proposes a novel anchor-free detection method composed of two modules to deal with these problems. First, to address the lack of detailed information on small ships, we propose an adaptive feature-encoding module (AFE), which gradually fuses deep semantic features into shallow layers and adaptively learns the spatial fusion weights. It can thereby effectively enhance the external semantics and improve the representation of small targets. Next, to address the foreground-background imbalance, a Gaussian-guided detection head (GDH) is introduced following the idea of soft sampling: it exploits a Gaussian prior to assign different weights to detected bounding boxes at different locations during training optimization. Moreover, the proposed Gauss-ness can down-weight the predicted scores of bounding boxes far from the object center. Finally, the effect of the detector composed of the two modules is verified on two SAR ship datasets. The results demonstrate that our method can effectively improve the detection performance for small ships in these datasets.
Journal Article
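The Gaussian-guided idea described above, down-weighting predictions far from the object centre, can be illustrated with a small weighting function. The parameterisation below (sigma proportional to box width and height, a hypothetical sigma_scale) is an assumption for illustration, not the paper's exact Gauss-ness definition.

```python
import torch


def gaussian_centerness(cx, cy, w, h, xs, ys, sigma_scale: float = 0.3):
    """Weight each sampling location by a Gaussian centred on the box centre,
    so predictions far from the centre contribute less during training."""
    sigma_x, sigma_y = sigma_scale * w, sigma_scale * h
    return torch.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2)
                       + ((ys - cy) ** 2) / (2 * sigma_y ** 2)))


if __name__ == "__main__":
    ys, xs = torch.meshgrid(torch.arange(0, 64.0), torch.arange(0, 64.0), indexing="ij")
    w_map = gaussian_centerness(cx=32.0, cy=32.0, w=20.0, h=10.0, xs=xs, ys=ys)
    print(w_map.max().item(), w_map[0, 0].item())  # ~1.0 at the centre, ~0 far away
```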
Stepwise Attention-Guided Multiscale Fusion Network for Lightweight and High-Accurate SAR Ship Detection
2024
Many exceptional deep learning networks have demonstrated remarkable proficiency in general object detection tasks. However, detecting ships in synthetic aperture radar (SAR) imagery is harder because of the complex and varied nature of these scenes. Moreover, sophisticated large-scale models require substantial computational resources and hardware expense. To address these issues, a new framework called the stepwise attention-guided multiscale feature fusion network (SAFN) is proposed. Specifically, we introduce a stepwise attention mechanism designed to selectively emphasize relevant information and filter out irrelevant details of objects in a step-by-step manner. First, a novel LGA-FasterNet is proposed, which combines the lightweight FasterNet backbone with lightweight global attention (LGA) to achieve expressive feature extraction while reducing the model's parameters. To effectively mitigate the impact of scale and complex background variations, a deformable attention bidirectional fusion network (DA-BFNet) is proposed, which introduces a novel deformable location attention (DLA) block and a novel deformable recognition attention (DRA) block, strategically integrated through bidirectional connections to achieve enhanced feature fusion. Finally, we substantiate the robustness of the new framework through extensive testing on the publicly accessible SAR datasets HRSID and SSDD. The experimental outcomes demonstrate the competitive performance of our approach, showing a significant improvement in ship detection accuracy over some state-of-the-art methods.
Journal Article
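The abstract above describes fusing shallow and deep features through bidirectional connections under attention guidance. The authors' DLA and DRA blocks are not reproduced here; the sketch below shows only a generic weighted bidirectional fusion node of the kind such networks build on, with all shapes and weights chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedBiFusion(nn.Module):
    """Fuse a shallow (high-resolution) and a deep (low-resolution) map with
    learned, normalised weights: one node of a bidirectional fusion network."""
    def __init__(self, channels: int):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        w = torch.relu(self.w)
        w = w / (w.sum() + 1e-4)  # normalised fusion weights
        return self.conv(w[0] * shallow + w[1] * deep_up)


if __name__ == "__main__":
    shallow = torch.randn(1, 256, 80, 80)  # high-resolution map
    deep = torch.randn(1, 256, 40, 40)     # low-resolution map
    print(WeightedBiFusion(256)(shallow, deep).shape)  # torch.Size([1, 256, 80, 80])
```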
Improved Ship Object Detection in Low-Illumination Environments Using RetinaMFANet
2022
Video-based ship object detection has long been a popular research issue in the water transportation industry. However, in low-illumination environments such as at night or in fog, the water environment contains a complex variety of light sources, video surveillance images are often accompanied by noise, and detail information about objects in the images is degraded. These problems cause high rates of false detection and missed detection when performing ship object detection in low-illumination environments. This paper therefore takes the detection of ship objects in low-illumination night-time environments as its research object. The technical difficulties faced by object detection algorithms in low-illumination environments are analyzed, and a dataset of ship images is constructed by collecting images of ships in low-illumination conditions in the Nanjing section of the Yangtze River in China. Given the outstanding performance of the RetinaNet model in general object detection, a new multiscale feature fusion network structure is proposed for the feature extraction module on the same architecture, so that more potential feature information can be extracted from low-illumination images. In line with the feature detection network, the regression and classification networks for anchor boxes are improved by means of an attention mechanism that guides the network in detecting object features. The augmentation of multiple random images and the prior bounding boxes used in training are also designed and optimized. Finally, experimental validation shows that the optimized detection model improves ship detection accuracy by 3.7% with a limited decrease in FPS (frames per second) and performs better in application.
Journal Article
Diffusion model for multi-scale ship object detection and recognition in remote sensing images
2025
Ship object detection and recognition in remote sensing images (RSIs) is a challenging task due to the multi-scale and complex background characteristics of ship objects, and convolution-based methods cannot currently solve these problems adequately. This paper first applies the diffusion model to ship object detection and recognition in RSIs, proposing a new diffusion model for multi-scale ship object detection and recognition in remote sensing images (MSDiffDet). Secondly, to reduce the loss of multi-scale information during feature extraction, the paper proposes the Channel Fusion FPN (CF-FPN) based on FPN and constructs the Large-Scale Feature Enhancement Module (LSFEM), which further strengthens the algorithm's ability to extract large-scale ship object features and improves detection accuracy for ship objects in RSIs. Finally, MobileNetV2 is pruned and reconstructed to obtain Sparse MobileNetV2, which is used as the backbone of the image encoder and enhances detection accuracy while reducing the algorithm's overall parameter count. The experimental results demonstrate that the MSDiffDet algorithm is effective in detecting and recognizing four types of remote sensing ship objects: aircraft carriers, warships, commercial ships, and submarines. The mAP@0.5 reaches a notable 89.8%, an improvement of 5.8% over the DiffusionDet algorithm, indicating the potential of the MSDiffDet algorithm for applications in remote sensing ship object detection and recognition.
Journal Article
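One step the MSDiffDet abstract mentions is pruning MobileNetV2 into a sparser backbone. A generic way to obtain a sparse MobileNetV2 with standard PyTorch utilities is sketched below; the L1 criterion and pruning amount are assumptions for illustration, not the authors' prune-and-reconstruct procedure.

```python
import torch
import torchvision
from torch.nn.utils import prune


def sparsify_mobilenet_v2(amount: float = 0.5):
    """Apply L1 unstructured pruning to every convolution of MobileNetV2,
    a generic route to a sparser backbone (not the paper's exact method)."""
    model = torchvision.models.mobilenet_v2(weights=None)
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the zeroed weights permanent
    return model


if __name__ == "__main__":
    backbone = sparsify_mobilenet_v2(amount=0.5)
    zeros = sum((m.weight == 0).sum().item() for m in backbone.modules()
                if isinstance(m, torch.nn.Conv2d))
    print(f"zeroed convolution weights: {zeros}")
```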
Multi-Object Detection for Inland Ship Situation Awareness Based on Few-Shot Learning
2023
With the rapid development of artificial intelligence and unmanned surface vehicle (USV) technology, object detection and tracking have wide applications in marine monitoring and intelligent ships. However, object detection and tracking on small sample datasets often face challenges due to insufficient sample data. In this paper, we propose a ship detection and tracking model that achieves high accuracy from a few supervised training samples within a few-shot learning framework. A transfer learning strategy is designed that innovatively uses an open dataset of highway vehicles to improve object detection accuracy for inland ships. The Shuffle Attention mechanism and smaller anchor boxes are introduced into the object detection network to improve detection accuracy for different targets in different scenes. Compared with existing methods, the proposed method is characterized by fast training and high accuracy on small datasets, achieving 84.9% (mAP@0.5) with only 585 training images.
Journal Article
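The transfer learning strategy described above (pre-train on a large source dataset, adapt to a small ship dataset) can be illustrated with a generic fine-tuning setup. The sketch uses torchvision's Faster R-CNN as a stand-in for the authors' YOLO-based detector, so the model choice, class count, and freezing policy are assumptions for illustration only.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_finetune_model(num_classes: int):
    """Start from detector weights pre-trained on a large source dataset and
    re-initialise only the classification head for the small ship dataset."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    # Freeze the backbone so the few labelled ship images only adapt the head.
    for p in model.backbone.parameters():
        p.requires_grad = False
    return model


if __name__ == "__main__":
    model = build_finetune_model(num_classes=2)  # background + ship
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")
```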
ECAP-YOLO: Efficient Channel Attention Pyramid YOLO for Small Object Detection in Aerial Image
2021
Detection of small targets in aerial images remains a difficult problem due to low resolution and background-like targets. With the recent development of object detection technology, efficient and high-performance detectors have emerged; among them, the YOLO series is a representative method that is light and performs well. In this paper, we propose a method to improve small-target detection in aerial images by modifying YOLOv5. First, the backbone was modified by applying the efficient channel attention module, and a channel attention pyramid method was proposed; we call the resulting detector efficient channel attention pyramid YOLO (ECAP-YOLO). Second, to optimize the detection of small objects, we eliminated the module for detecting large objects and added a detection layer for finding smaller objects, reducing the computing power required for small-target detection and improving the detection rate. Finally, we use transposed convolution instead of upsampling. Compared with the original YOLOv5, the proposed method improves mAP by 6.9% on the VEDAI dataset, 5.4% when detecting small cars in the xView dataset, 2.7% when detecting the small vehicle and small ship classes of the DOTA dataset, and approximately 2.4% when finding small cars in the Arirang dataset.
Journal Article
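The efficient channel attention (ECA) module that ECAP-YOLO builds on weights each channel using a 1-D convolution over globally pooled channel statistics, avoiding dimensionality reduction. A minimal sketch follows; the kernel size is the usual ECA default, assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn


class EfficientChannelAttention(nn.Module):
    """ECA: per-channel attention from a 1-D convolution over globally pooled
    channel statistics, with no channel dimensionality reduction."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3))                  # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1))           # 1-D conv across channels -> (B, 1, C)
        w = torch.sigmoid(y).view(b, c, 1, 1)   # per-channel weights
        return x * w


if __name__ == "__main__":
    x = torch.randn(2, 256, 40, 40)
    print(EfficientChannelAttention()(x).shape)  # torch.Size([2, 256, 40, 40])
```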
Contextual Region-Based Convolutional Neural Network with Multilayer Fusion for SAR Ship Detection
by Ji, Kefeng; Lin, Zhao; Kang, Miao
in Artificial neural networks; context information; convolutional neural network (CNN)
2017
Synthetic aperture radar (SAR) ship detection has been playing an increasingly essential role in marine monitoring in recent years. The lack of detailed information about ships in wide-swath SAR imagery makes it difficult for traditional methods to explore effective features for ship discrimination. Being capable of feature representation, deep neural networks have achieved dramatic progress in object detection recently. However, most of them suffer from missed detections of small-sized targets, which means that few can be employed directly in SAR ship detection tasks. This paper presents an elaborately designed deep hierarchical network, namely a contextual region-based convolutional neural network with multilayer fusion, for SAR ship detection, composed of a region proposal network (RPN) with high network resolution and an object detection network with contextual features. Instead of using low-resolution feature maps from a single layer for proposal generation in an RPN, the proposed method employs an intermediate layer combined with a downscaled shallow layer and an up-sampled deep layer to produce region proposals. In the object detection network, the region proposals are projected onto multiple layers with region of interest (ROI) pooling to extract the corresponding ROI features and the contextual features around the ROI. After normalization and rescaling, they are concatenated into an integrated feature vector for the final outputs. The proposed framework fuses deep semantic and shallow high-resolution features, improving detection performance for small-sized ships, while the additional contextual features provide complementary information for classification and help to rule out false alarms. Experiments based on the Sentinel-1 dataset, which contains twenty-seven SAR images with 7986 labeled ships, verify that the proposed method achieves excellent performance in SAR ship detection.
Journal Article
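The multilayer fusion with contextual features described above amounts to pooling each proposal, plus an enlarged context box, from several feature maps and concatenating the results into one vector. The sketch below illustrates that idea with torchvision's roi_align; the context scale, output size, and strides are assumptions, not the paper's exact configuration.

```python
import torch
from torchvision.ops import roi_align


def multilayer_context_roi(features, boxes, strides, out_size=7, context_scale=1.5):
    """Pool each proposal from several feature maps, pool an enlarged 'context'
    box as well, and concatenate everything into one feature vector.
    `boxes` is (N, 5): (batch_idx, x1, y1, x2, y2) in image coordinates."""
    cx = (boxes[:, 1] + boxes[:, 3]) / 2
    cy = (boxes[:, 2] + boxes[:, 4]) / 2
    w = (boxes[:, 3] - boxes[:, 1]) * context_scale
    h = (boxes[:, 4] - boxes[:, 2]) * context_scale
    ctx = torch.stack([boxes[:, 0], cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)
    pooled = []
    for feat, s in zip(features, strides):
        pooled.append(roi_align(feat, boxes, out_size, spatial_scale=1.0 / s))
        pooled.append(roi_align(feat, ctx, out_size, spatial_scale=1.0 / s))
    return torch.cat(pooled, dim=1).flatten(1)


if __name__ == "__main__":
    f1 = torch.randn(1, 256, 100, 100)  # hypothetical stride-8 map
    f2 = torch.randn(1, 256, 50, 50)    # hypothetical stride-16 map
    rois = torch.tensor([[0.0, 120.0, 160.0, 200.0, 240.0]])
    v = multilayer_context_roi([f1, f2], rois, strides=[8, 16])
    print(v.shape)  # torch.Size([1, 50176]) = 2 maps x 2 boxes x 256 x 7 x 7
```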
YOLO-Lite: An Efficient Lightweight Network for SAR Ship Detection
by Liu, Gang; Bai, Yanwen; Ren, Xiaozhen
in Accuracy; Artificial intelligence; automatic detection
2023
Automatic ship detection in SAR images plays an essential role in both military and civilian fields. However, most existing deep learning detection methods introduce complex models and heavy computation while improving detection accuracy, which hinders real-time ship detection applications. To solve this problem, an efficient lightweight network, YOLO-Lite, is proposed for SAR ship detection in this paper. First, a lightweight feature enhancement backbone (LFEBNet) is designed to reduce the amount of computation. Additionally, a channel and position enhancement attention (CPEA) module is constructed and embedded into the backbone network to locate targets more accurately by capturing positional information. Second, an enhanced spatial pyramid pooling (EnSPP) module is customized to strengthen the expressive ability of features and address the loss of position information for small SAR ships in high-level features. Third, we construct an effective multi-scale feature fusion network (MFFNet) with two feature fusion channels to obtain feature maps with richer position and semantic information. Furthermore, a novel confidence loss function is proposed to effectively improve SAR ship detection accuracy. Extensive experiments on the SSDD and SAR ship datasets verify the effectiveness of our YOLO-Lite, which not only accurately detects SAR ships against different backgrounds but also realizes a lightweight architecture with low computational cost.
Journal Article
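The abstract does not specify the internals of the enhanced spatial pyramid pooling (EnSPP) module, so the sketch below shows only a standard SPP block of the kind it extends: the same feature map max-pooled at several kernel sizes and fused back to the original width. Kernel sizes and the fusion convolution are assumptions for illustration.

```python
import torch
import torch.nn as nn


class SpatialPyramidPooling(nn.Module):
    """Pool the same feature map at several kernel sizes and concatenate the
    results, widening the receptive field at low computational cost."""
    def __init__(self, channels: int, kernels=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernels)
        self.fuse = nn.Conv2d(channels * (len(kernels) + 1), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([x] + [p(x) for p in self.pools], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 512, 20, 20)
    print(SpatialPyramidPooling(512)(x).shape)  # torch.Size([1, 512, 20, 20])
```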