711 result(s) for "small object detection"
Improved Object Detection Method Utilizing YOLOv7-Tiny for Unmanned Aerial Vehicle Photographic Imagery
In unmanned aerial vehicle photographs, object detection algorithms encounter challenges in enhancing both speed and accuracy for objects of different sizes, primarily due to complex backgrounds and small objects. This study introduces the PDWT-YOLO algorithm, based on the YOLOv7-tiny model, to improve the effectiveness of object detection across all object sizes. The proposed method enhances the detection of small objects by incorporating a dedicated small-object detection layer, while reducing the conflict between classification and regression tasks by replacing the YOLOv7-tiny model’s detection head (IDetect) with a decoupled head. Moreover, network convergence is accelerated and regression accuracy improved by replacing the Complete Intersection over Union (CIoU) loss with a Wise Intersection over Union (WIoU) loss that incorporates a focusing mechanism. To assess the proposed model’s effectiveness, it was trained and tested on the VisDrone-2019 dataset, which comprises images captured by various drones across diverse scenarios, weather conditions, and lighting conditions. The experiments show that mAP@0.5:0.95 and mAP@0.5 increased by 5% and 6.7%, respectively, at an acceptable running speed compared with the original YOLOv7-tiny model. Furthermore, the method also shows improvements on other datasets, confirming that PDWT-YOLO is effective for multiscale object detection.
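The abstract mentions swapping the CIoU loss for a WIoU loss with a focusing mechanism but does not give the formulation. As an illustration only (a minimal sketch of a WIoU-v1-style box loss for corner-format boxes, not the exact mechanism used in PDWT-YOLO), the idea can be expressed in PyTorch as follows:

```python
import torch

def wiou_v1_loss(pred, target, eps=1e-7):
    """Sketch of a WIoU-v1-style bounding-box loss for boxes in (x1, y1, x2, y2) format.

    L = exp(d^2 / (Wg^2 + Hg^2)) * (1 - IoU), where d is the distance between box
    centres and (Wg, Hg) is the size of the smallest enclosing box. The normaliser
    is detached from the gradient, as in the published WIoU-v1 definition.
    """
    # Intersection area
    ix1 = torch.max(pred[..., 0], target[..., 0])
    iy1 = torch.max(pred[..., 1], target[..., 1])
    ix2 = torch.min(pred[..., 2], target[..., 2])
    iy2 = torch.min(pred[..., 3], target[..., 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Squared centre distance and smallest enclosing box
    cxp = (pred[..., 0] + pred[..., 2]) / 2
    cyp = (pred[..., 1] + pred[..., 3]) / 2
    cxt = (target[..., 0] + target[..., 2]) / 2
    cyt = (target[..., 1] + target[..., 3]) / 2
    d2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2

    wg = torch.max(pred[..., 2], target[..., 2]) - torch.min(pred[..., 0], target[..., 0])
    hg = torch.max(pred[..., 3], target[..., 3]) - torch.min(pred[..., 1], target[..., 1])
    r_wiou = torch.exp(d2 / (wg ** 2 + hg ** 2 + eps).detach())

    return r_wiou * (1.0 - iou)
```

In practice such a term simply replaces the CIoU term in the box-regression branch of the training loss; everything else in the detector stays unchanged.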
UAV-YOLOv8: A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Photography Scenarios
Unmanned aerial vehicle (UAV) object detection plays a crucial role in civil, commercial, and military domains. However, the high proportion of small objects in UAV images and the limited platform resources lead to low accuracy in most existing detection models embedded in UAVs, and it is difficult to strike a good balance between detection performance and resource consumption. To alleviate these problems, we optimize YOLOv8 and propose an object detection model tailored to UAV aerial photography scenarios, called UAV-YOLOv8. Firstly, Wise-IoU (WIoU) v3 is used as the bounding box regression loss; its wise gradient allocation strategy makes the model focus more on common-quality samples, thus improving its localization ability. Secondly, an attention mechanism called BiFormer is introduced to optimize the backbone network, which improves the model’s attention to critical information. Finally, we design a feature processing module named Focal FasterNet block (FFNB) and propose two new detection scales based on this module, which allows shallow and deep features to be fully integrated. The proposed multiscale feature fusion network substantially increases the detection performance of the model and reduces the missed detection rate for small objects. The experimental results show that our model has fewer parameters than the baseline model and a mean detection accuracy 7.7% higher than the baseline. Compared with other mainstream models, the overall performance of our model is much better. The proposed method effectively improves the ability to detect small objects. There is still room to improve detection of small, feature-poor objects (such as bicycle-type vehicles), which we will address in subsequent research.
An improved Yolov5 real-time detection method for small objects captured by UAV
Object detection algorithms are mainly designed for general scenarios; when the same algorithms are applied to drone-captured scenes, their detection performance is significantly reduced. Our research found that small objects are the main reason for this degradation. To verify this finding, we chose the YOLOv5 model and propose four methods to improve its small-object detection precision. At the same time, because the model needs to be small, fast, low-cost, and easy to deploy in practical applications, we fully considered the impact of each of these four methods on detection speed when designing them. The model integrating all the improved methods not only greatly improves detection precision but also limits the loss of detection speed. Finally, on VisDrone-2020, the mAP of our model increases from 12.7% to 37.66%, and the detection speed reaches 55 FPS, outperforming the earlier state of the art in detection speed and advancing object detection on drone platforms.
Research Challenges, Recent Advances, and Popular Datasets in Deep Learning-Based Underwater Marine Object Detection: A Review
Underwater marine object detection, one of the most fundamental techniques in the marine science and engineering community, has shown tremendous potential for exploring the oceans in recent years. It has been widely applied in practical applications such as monitoring of underwater ecosystems, exploration of natural resources, and management of commercial fisheries. However, due to the complexity of the underwater environment, the characteristics of marine objects, and the limitations imposed by exploration equipment, detection performance in terms of speed, accuracy, and robustness can be dramatically degraded when conventional approaches are used. Deep learning has had a significant impact on a variety of applications, including marine engineering. In this context, we offer a review of deep learning-based underwater marine object detection techniques. Underwater object detection can be performed with different sensors, such as acoustic sonar or optical cameras; in this paper, we focus on vision-based object detection because of its significant advantages. To facilitate a thorough understanding of the subject, we organize the research challenges of vision-based underwater object detection into four categories: image quality degradation, small object detection, poor generalization, and real-time detection. We review recent advances in underwater marine object detection and highlight the advantages and disadvantages of existing solutions for each challenge. In addition, we provide a detailed critical examination of the most extensively used datasets, present comparative studies with previous reviews, notably those leveraging artificial intelligence, and discuss future trends related to this hot topic.
RSOD: Real-time small object detection algorithm in UAV-based traffic monitoring
The prevailing applications of Unmanned Aerial Vehicles (UAVs) in transportation systems promote the development of object detection methods that collect real-time traffic information through UAVs. However, due to the small size and high density of objects from the aerial perspective, most existing algorithms struggle to accurately process and extract informative features from the traffic images collected by UAVs. To address these challenges, this paper proposes a new real-time small object detection (RSOD) algorithm based on YOLOv3, which improves small object detection accuracy by (i) using feature maps of a shallower layer containing more fine-grained information for location prediction; (ii) fusing local and global features of shallow and deep feature maps in the Feature Pyramid Network (FPN) to enhance the ability to extract more representative features; (iii) assigning weights to output features of the FPN and fusing them adaptively; and (iv) improving the excitation layer in the Squeeze-and-Excitation attention mechanism to adjust the feature responses of each channel more precisely. Experimental results show that, with an input size of 608 × 608 × 3, the precision of the proposed RSOD algorithm measured by mAP@0.5 is 43.3% and 52.7% on the Visdrone-DET2018 and UAVDT datasets, which is 3.4% and 5.1% higher than that of YOLOv3, respectively.
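Point (iii) of the abstract, adaptive weighted fusion of FPN outputs, can be illustrated with a small PyTorch module. This is a generic sketch of learnable, normalised fusion weights (the class name and the assumption that all levels share the same channel count are mine), not the actual RSOD fusion block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightedFusion(nn.Module):
    """Illustrative adaptive fusion of several FPN outputs with learnable weights.

    Each feature map is resized to a common resolution, scaled by a learnable
    non-negative weight, and the weights are normalised to sum to one before
    the weighted sum is refined by a 3x3 convolution.
    """
    def __init__(self, num_inputs: int, channels: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one scalar per level
        self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.eps = eps

    def forward(self, features):
        # Resize every level to the spatial size of the first (finest) level.
        size = features[0].shape[-2:]
        resized = [F.interpolate(f, size=size, mode="nearest") for f in features]

        w = F.relu(self.weights)          # keep the fusion weights non-negative
        w = w / (w.sum() + self.eps)      # normalise so they sum to one
        fused = sum(wi * fi for wi, fi in zip(w, resized))
        return self.proj(fused)
```

Because the weights are learned jointly with the detector, the network can decide how much each pyramid level should contribute to the fused feature used for small-object prediction.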
Metal Defect Detection Based on Yolov5
Metal surface defect detection has long been a challenge in industry. Current metal surface defect algorithms target only a few types of defects and fail to perform well on defects of different scales. In this paper, a large number of metal surface defects are studied based on the GC10-DET dataset. An improved YOLOv5 detection network is designed for defects of various scales, especially small-scale objects, using a specific data enhancement method for regularization and an effective loss function to address the data imbalance caused by small-scale object defects. Finally, comparative experiments on the GC10-DET dataset demonstrate the accuracy improvements of the proposed method.
YOLO-Fine: One-Stage Detector of Small Objects Under Various Backgrounds in Remote Sensing Images
Object detection from aerial and satellite remote sensing images has been an active research topic over the past decade. Thanks to the increase in computational resources and data availability, deep learning-based object detection methods have achieved numerous successes in computer vision, and more recently in remote sensing. However, the ability of current detectors to deal with (very) small objects remains limited. In particular, the fast detection of small objects in a large observed scene is still an open question. In this work, we address this challenge and introduce an enhanced one-stage deep learning-based detection model, called You Only Look Once (YOLO)-fine, which is based on the structure of YOLOv3. Our detector is designed to detect small objects with high accuracy and high speed, allowing real-time applications within operational contexts. We also investigate its robustness to the appearance of new backgrounds in the validation set, thus tackling the domain adaptation issue that is critical in remote sensing. Experimental studies conducted on both aerial and satellite benchmark datasets show significant improvements of YOLO-fine compared to other state-of-the-art object detectors.
Drone-DETR: Efficient Small Object Detection for Remote Sensing Image Using Enhanced RT-DETR Model
Performing low-latency, high-precision object detection on unmanned aerial vehicles (UAVs) equipped with vision sensors holds significant importance. However, the current limitations of embedded UAV devices present challenges in balancing accuracy and speed, particularly in the analysis of high-precision remote sensing images. This challenge is particularly pronounced in scenarios involving numerous small objects, intricate backgrounds, and occluded overlaps. To address these issues, we introduce the Drone-DETR model, which is based on RT-DETR. To overcome the difficulties associated with detecting small objects and reducing redundant computations arising from complex backgrounds in ultra-wide-angle images, we propose the Effective Small Object Detection Network (ESDNet). This network preserves detailed information about small objects, reduces redundant computations, and adopts a lightweight architecture. Furthermore, we introduce the Enhanced Dual-Path Feature Fusion Attention Module (EDF-FAM) within the neck network. This module is specifically designed to enhance the network’s ability to handle multi-scale objects. We employ a dynamic competitive learning strategy to enhance the model’s capability to efficiently fuse multi-scale features. Additionally, we incorporate the P2 shallow feature layer from the ESDNet into the neck network to enhance the model’s ability to fuse small-object features, thereby enhancing the accuracy of small object detection. Experimental results indicate that the Drone-DETR model achieves an mAP50 of 53.9% with only 28.7 million parameters on the VisDrone2019 dataset, representing an 8.1% enhancement over RT-DETR-R18.
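Several of the listed methods (Drone-DETR above, and the YOLO variants elsewhere in these results) add a shallow, high-resolution P2 feature to the neck so that a small-object head can attach to it. The module below is only a generic sketch of that idea (the class name and channel arguments are hypothetical), not the actual Drone-DETR or ESDNet neck:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P2NeckBranch(nn.Module):
    """Sketch of folding a shallow P2 feature into an FPN-style neck.

    The deeper P3 feature is upsampled to P2 resolution, concatenated with P2,
    and reduced by a 3x3 convolution, producing an extra high-resolution output
    that a small-object detection head can be attached to.
    """
    def __init__(self, p2_channels: int, p3_channels: int, out_channels: int):
        super().__init__()
        self.reduce = nn.Conv2d(p2_channels + p3_channels, out_channels,
                                kernel_size=3, padding=1)

    def forward(self, p2: torch.Tensor, p3: torch.Tensor) -> torch.Tensor:
        # Bring the deeper feature up to P2 resolution, then fuse and compress.
        p3_up = F.interpolate(p3, size=p2.shape[-2:], mode="nearest")
        return self.reduce(torch.cat([p2, p3_up], dim=1))
```

The extra resolution preserves the few pixels that small objects occupy, at the cost of more computation on the largest feature map, which is why these papers pair it with lightweight blocks.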
A review of small object detection based on deep learning
Small object detection is widely used in a variety of fields such as automatic driving, UAV-based object detection, and aerial image detection. However, small objects carry limited information, making them difficult for detectors to detect. In recent years, the development of deep learning has significantly improved the performance of small object detection. This paper provides a comprehensive review to help further the development of small object detection. We summarize the challenges related to small object detection and analyze solutions to these challenges in existing works, including integrating features at different layers, enriching the available information, balancing the number of positive and negative samples for small objects, and generating sufficient small object instances. We discuss related methods developed in three application areas: automatic driving, UAV search and rescue, and aerial image detection. In addition, we thoroughly analyze the performance of typical small object detection methods on popular datasets. Finally, based on this comprehensive review, we point out possible research directions for future studies.
A Small Object Detection Algorithm for Traffic Signs Based on Improved YOLOv7
Traffic sign detection is a crucial task in computer vision, with wide-ranging applications in intelligent transportation systems, autonomous driving, and traffic safety. However, due to the complexity and variability of traffic environments and the small size of traffic signs, detecting small traffic signs in real-world scenes remains a challenging problem. To improve the recognition of road traffic signs, this paper proposes a small object detection algorithm for traffic signs based on an improved YOLOv7. First, a small object detection layer was added in the neck region to augment the detection capability for small traffic sign targets. Simultaneously, ACmix, a module integrating self-attention and convolution, was applied to the newly added small object detection layer, enabling the capture of additional feature information through its convolutional and self-attention channels. Furthermore, the feature extraction capability of the convolution modules was enhanced by replacing the regular convolution modules in the neck layer with omni-dimensional dynamic convolution (ODConv). To further improve the accuracy of small object detection, the normalized Gaussian Wasserstein distance (NWD) metric was introduced to mitigate the sensitivity to minor positional deviations of small objects. Experimental results on the challenging public dataset TT100K demonstrate that the SANO-YOLOv7 algorithm achieves 88.7% mAP@0.5, outperforming the baseline YOLOv7 by 5.3%.
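The NWD metric mentioned above is easy to sketch: each box is modelled as a 2D Gaussian, and the 2-Wasserstein distance between two such Gaussians has a closed form that reduces to a Euclidean distance on (cx, cy, w/2, h/2). The snippet below is a minimal sketch of that similarity, assuming centre-format boxes; the constant c is dataset dependent, and the default value here is only illustrative:

```python
import torch

def normalized_wasserstein_distance(pred, target, c: float = 12.8, eps: float = 1e-7):
    """Sketch of the NWD similarity for boxes given as (cx, cy, w, h).

    Each box is treated as a 2D Gaussian with mean (cx, cy) and covariance
    diag(w^2/4, h^2/4); the 2-Wasserstein distance between two such Gaussians
    is the Euclidean distance between their (cx, cy, w/2, h/2) vectors.
    The similarity is exp(-W2 / c), where c is a dataset-dependent constant.
    """
    pa = torch.stack([pred[..., 0], pred[..., 1],
                      pred[..., 2] / 2, pred[..., 3] / 2], dim=-1)
    pb = torch.stack([target[..., 0], target[..., 1],
                      target[..., 2] / 2, target[..., 3] / 2], dim=-1)
    w2_sq = ((pa - pb) ** 2).sum(dim=-1)              # squared 2-Wasserstein distance
    return torch.exp(-torch.sqrt(w2_sq + eps) / c)    # similarity in (0, 1]
```

Unlike IoU, this similarity stays smooth when boxes barely overlap or are shifted by a pixel or two, which is exactly the regime that matters for tiny traffic signs.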