Search Results

3,182 results for "UAV imagery"
A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications
In recent years, UAV remote sensing has gradually attracted the attention of scientific researchers and industry due to its broad application prospects, and it has been widely used in agriculture, forestry, mining, and other industries. UAVs can be flexibly equipped with various sensors, such as optical, infrared, and LiDAR, making them an essential remote sensing observation platform. With UAV remote sensing, researchers can obtain large volumes of high-resolution images, with centimeter- or even millimeter-level ground resolution per pixel. The purpose of this paper is to survey the current applications of UAV remote sensing, together with the aircraft platforms, data types, and data processing methods used in each application category, and to examine the advantages and limitations of current UAV remote sensing practice as well as promising directions that still lack applications. By reviewing the papers published in this field in recent years, we found that current UAV remote sensing applications can be classified into four categories by application field: (1) precision agriculture, including crop disease observation, crop yield estimation, and crop environmental observation; (2) forestry remote sensing, including forest disease identification and forest disaster observation; (3) remote sensing of power systems; and (4) artificial facilities and the natural environment. We also found that, in recently published papers, image data (RGB, multispectral, hyperspectral) are mainly processed with neural network methods; that multispectral data are the most studied data type in crop disease monitoring; and that LiDAR data still lack an end-to-end neural network processing method. Based on this review of UAV platforms, sensors, and data processing methods, and on the development trajectory and current implementation limitations of particular application fields, some predictions are made about possible future development directions.
Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images
Weeds are a crucial threat to agriculture, and to preserve crop productivity, spreading agrochemicals is a common practice with a potential negative impact on the environment. Methods that can support the intelligent application of agrochemicals are needed, and weed identification and mapping are therefore a critical step in performing site-specific weed management. Unmanned aerial vehicle (UAV) data streams are considered the best for weed detection due to the high resolution and flexibility of data acquisition and the spatially explicit dimensions of the imagery. However, given unstructured crop conditions and the high biological variation of weeds, generating accurate weed recognition and detection models remains a difficult challenge. Two critical barriers to tackling this challenge are (1) the lack of case-specific, large, and comprehensive weed UAV image datasets for the crop of interest, and (2) defining the most appropriate computer vision (CV) weed detection models to assess the operationality of detection approaches under real-case conditions. Deep Learning (DL) algorithms, appropriately trained to deal with the real-case complexity of UAV data in agriculture, can provide valid alternatives to standard CV approaches for accurate weed recognition. In this framework, this paper first introduces a new weed and crop dataset named Chicory Plant (CP) and then tests state-of-the-art DL algorithms for object detection. A total of 12,113 bounding box annotations were generated to identify weed targets (Mercurialis annua) in more than 3000 RGB images of chicory plantations, collected using a UAV system at various stages of crop and weed growth. Deep weed object detection was conducted by testing the most recent You Only Look Once version 7 (YOLOv7) on both the CP dataset and a publicly available dataset (Lincoln beet (LB)), for which a previous version of YOLO had been used to map weeds and crops. The YOLOv7 results on the CP dataset were encouraging, outperforming the other YOLO variants with values of 56.6%, 62.1%, and 61.3% for mAP@0.5, recall, and precision, respectively. Furthermore, the YOLOv7 model applied to the LB dataset surpassed the previously published results, increasing the mAP@0.5 scores from 51% to 61% for the total mAP, from 67.5% to 74.1% for weeds, and from 34.6% to 48% for sugar beets. This study illustrates the potential of the YOLOv7 model for weed detection but underscores the fundamental need for large-scale annotated weed datasets to develop and evaluate models under real-case field circumstances.
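The mAP@0.5 figures quoted in this and several other abstracts rest on the standard IoU matching rule: a predicted box counts as a true positive only if its intersection-over-union with a ground-truth box reaches 0.5. A minimal, self-contained Python sketch of that criterion, with illustrative boxes rather than anything from the paper:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is a true positive at mAP@0.5 only when IoU >= 0.5.
pred, gt = [10, 10, 50, 50], [12, 8, 55, 48]
print(iou(pred, gt), iou(pred, gt) >= 0.5)  # ~0.77, True
```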
Lightweight Object Detection Algorithm for UAV Aerial Imagery
Addressing the low detection precision and excessive parameter counts caused by the high resolution, significant scale variations, and complex backgrounds of UAV aerial imagery, this paper introduces MFP-YOLO, a lightweight detection algorithm based on YOLOv5s. First, a multipath inverse residual module is designed and an attention mechanism is incorporated to manage the significant scale variations and the abundant interference from complex backgrounds. Then, parallel deconvolutional spatial pyramid pooling is employed to extract scale-specific information, enhancing multi-scale target detection. Furthermore, the Focal-EIoU loss function is utilized to increase the model's focus on high-quality samples, improving training stability and detection accuracy. Finally, a lightweight decoupled head replaces the original detection head, accelerating network convergence and enhancing detection precision. Experimental results demonstrate that MFP-YOLO improves mAP@0.5 on the VisDrone 2019 validation and test sets by 12.9% and 8.0%, respectively, compared to the original YOLOv5s, while reducing the model's parameter count and weight size by 79.2% and 73.7%, respectively, indicating that MFP-YOLO outperforms other mainstream algorithms in UAV aerial imagery detection tasks.
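The abstract names Focal-EIoU without reproducing it. The sketch below follows the published Focal-EIoU formulation (Zhang et al., 2021), in which the EIoU loss (IoU, center-distance, width, and height penalty terms, each normalized by the smallest enclosing box) is weighted by IoU^gamma so that high-quality samples dominate training. Treat it as an assumption about the form used, not as MFP-YOLO's own implementation:

```python
import torch

def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Sketch of Focal-EIoU following Zhang et al. (2021); boxes are
    (N, 4) tensors in (x1, y1, x2, y2) format. An assumption about the
    form used, not MFP-YOLO's own code."""
    # Plain IoU term
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box normalizes the distance and aspect penalties
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    center_dist = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
                   + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4
    w_dist = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    h_dist = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2

    eiou = (1 - iou + center_dist / (cw ** 2 + ch ** 2 + eps)
            + w_dist / (cw ** 2 + eps) + h_dist / (ch ** 2 + eps))
    # Focal weighting: IoU**gamma up-weights high-IoU (high-quality) samples
    return (iou.detach() ** gamma * eiou).mean()
```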
Biomass prediction and shoot growth characterization of single-staked yam plants using UAV imagery
This study presents an unmanned aerial vehicle (UAV)-based approach for estimating shoot biomass and characterizing growth patterns in single-staked white Guinea yams (Dioscorea rotundata). Multi-angle aerial images from nadir and oblique views were used to extract vegetation- and height-related indices that served as predictors in machine learning models. Support vector regression using combined-view imagery provided the highest prediction accuracy (R² = 0.79) and remained robust across growth stages, years, fertilizer treatments, and genotypes. Notably, the combined-view configuration outperformed single-view imaging, demonstrating the advantage of capturing complementary canopy-structure information in complex staked-vine canopies. Time-series biomass estimates enabled the fitting of genotype-specific Richards growth curves using Bayesian inference. Significant genotypic variations were observed in parameters associated with maximum biomass and early growth rate, whereas phenology-related parameters showed comparatively minimal differences. These parameter differences may reflect variation in canopy architecture and growth allocation among genotypes. Overall, this integrated workflow provides a scalable tool for nondestructive monitoring of yam growth dynamics and for summarizing biomass trajectories with interpretable parameters, supporting breeding efforts aimed at improving yam productivity and yield stability across diverse cultivation conditions.
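For readers unfamiliar with the Richards curve mentioned above, the sketch below fits one common parameterization of it to a hypothetical biomass time series with ordinary least squares. The study itself fits genotype-specific curves with Bayesian inference, so this is only a shape illustration; the data and starting values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t_m, nu):
    """One common Richards parameterization: A = asymptotic biomass,
    k = growth rate, t_m = inflection timing, nu = shape parameter."""
    return A * (1.0 + nu * np.exp(-k * (t - t_m))) ** (-1.0 / nu)

# Hypothetical UAV-estimated shoot biomass (g/plant) over days after planting
t = np.array([20, 35, 50, 65, 80, 95, 110, 125], dtype=float)
w = np.array([5, 18, 60, 140, 230, 290, 315, 320], dtype=float)

params, _ = curve_fit(richards, t, w, p0=[330.0, 0.08, 60.0, 1.0],
                      bounds=([0, 0, 0, 0.01], [1000, 1, 200, 10]))
print(dict(zip(["A", "k", "t_m", "nu"], params.round(3))))
```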
YOLOv5 with ConvMixer Prediction Heads for Precise Object Detection in Drone Imagery
The potency of object detection techniques using Unmanned Aerial Vehicles (UAVs) is unprecedented due to their mobility, which has stimulated the use of UAVs with object detection functionality in numerous crucial real-life applications; more efficient and accurate object detection techniques are accordingly being researched and developed for UAV applications. However, object detection from UAVs presents challenges that are not common to general object detection. First, as UAVs fly at varying altitudes, the objects they image vary vastly in size, making the task more challenging. Second, due to the motion of the UAVs, the captured images may be blurred. To deal with these challenges, we present a You Only Look Once v5 (YOLOv5)-like architecture with ConvMixers in its prediction heads and an additional prediction head to deal with minutely small objects. The proposed architecture has been trained and tested on the VisDrone 2021 dataset, and the acquired results are comparable with existing state-of-the-art methods.
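The abstract does not detail how the ConvMixers are wired into the prediction heads; the block below is the standard ConvMixer unit (Trockman & Kolter, 2022) that such a head would build on: a residual depthwise convolution for spatial mixing followed by a pointwise convolution for channel mixing. A minimal PyTorch sketch, with the feature-map size an assumption for illustration:

```python
import torch
import torch.nn as nn

class ConvMixerBlock(nn.Module):
    """Standard ConvMixer block: depthwise conv mixes spatial locations,
    pointwise conv mixes channels. How the paper places these inside the
    YOLOv5 heads is not specified in the abstract."""
    def __init__(self, dim: int, kernel_size: int = 9):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )
        self.channel = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.spatial(x)   # residual over the spatial mixing step
        return self.channel(x)

feat = torch.randn(1, 256, 40, 40)       # a feature map entering a head
print(ConvMixerBlock(256)(feat).shape)   # torch.Size([1, 256, 40, 40])
```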
Real-Time Vehicle Detection from UAV Aerial Images Based on Improved YOLOv5
Aerial vehicle detection has significant applications in aerial surveillance and traffic control. The pictures captured by UAVs are characterized by many tiny objects and by vehicles obscuring each other, which significantly increases the detection challenge, and missed and false detections are a widespread problem in research on detecting vehicles in aerial images. We therefore customize a model based on YOLOv5, named YOLOv5-VTO, to better suit vehicle detection in aerial images. Firstly, we add one additional prediction head to detect smaller-scale objects. Furthermore, to keep the original features involved in the training process, we introduce a Bidirectional Feature Pyramid Network (BiFPN) to fuse feature information from various scales. Lastly, Soft-NMS (soft non-maximum suppression) is employed as the prediction box filtering method, alleviating the missed detections caused by closely aligned vehicles. Experimental findings on the self-made dataset built for this research indicate that, compared with YOLOv5s, the mAP@0.5 and mAP@0.5:0.95 of YOLOv5-VTO increase by 3.7% and 4.7%, respectively, and accuracy and recall are also improved.
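Soft-NMS itself is a published, well-defined component: instead of deleting every box whose overlap with a selected box exceeds a threshold, it decays the overlapping box's score, which keeps closely aligned vehicles alive. Below is a compact sketch of the Gaussian variant (Bodla et al., 2017); the exact variant and parameters used in the paper are not stated in the abstract:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay the score of each box that overlaps the
    currently selected box by exp(-IoU^2 / sigma) instead of removing it.
    Boxes are (x1, y1, x2, y2); returns indices of surviving boxes."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    idxs, keep = list(range(len(boxes))), []
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        if scores[best] < score_thresh:
            break  # everything remaining has decayed below the threshold
        idxs.remove(best)
        keep.append(best)
        bx = boxes[best]
        for i in idxs:
            x1, y1 = np.maximum(bx[:2], boxes[i][:2])
            x2, y2 = np.minimum(bx[2:], boxes[i][2:])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area_b = (bx[2] - bx[0]) * (bx[3] - bx[1])
            area_i = (boxes[i][2] - boxes[i][0]) * (boxes[i][3] - boxes[i][1])
            iou = inter / (area_b + area_i - inter)
            scores[i] *= np.exp(-(iou ** 2) / sigma)  # decay, don't delete
    return keep

# Two heavily overlapping "vehicles": hard NMS at IoU 0.5 would drop the
# second box; Soft-NMS keeps it with a reduced score.
print(soft_nms([[0, 0, 10, 10], [1, 0, 11, 10]], [0.9, 0.8]))  # [0, 1]
```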
Deep Learning Approach for Car Detection in UAV Imagery
This paper presents an automatic solution to the problem of detecting and counting cars in unmanned aerial vehicle (UAV) images. This is a challenging task given the very high spatial resolution of UAV images (on the order of a few centimetres) and the extremely high level of detail, which require suitable automatic analysis methods. Our proposed method begins by segmenting the input image into small homogeneous regions, which are used as candidate locations for car detection. Next, a window is extracted around each region, and deep learning is used to mine highly descriptive features from these windows. We use a deep convolutional neural network (CNN), pre-trained on large auxiliary data, as a feature extraction tool, combined with a linear support vector machine (SVM) classifier to classify regions into "car" and "no-car" classes. The final step is devoted to a fine-tuning procedure that performs morphological dilation to smooth the detected regions and fill any holes. In addition, small isolated regions are analysed further using a few sliding rectangular windows to locate cars more accurately and remove false positives. To evaluate the method, experiments were conducted on a challenging set of real UAV images acquired over an urban area. The experimental results show that the proposed method outperforms state-of-the-art methods in terms of both accuracy and computational time.
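The core pipeline of a frozen pre-trained CNN feeding a linear SVM is simple to reproduce in outline. In the sketch below the ResNet-18 backbone, input size, and toy data are assumptions for illustration; the abstract does not name the network used:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

# Pre-trained CNN as a frozen feature extractor (backbone is an assumption)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def describe(windows: torch.Tensor) -> torch.Tensor:
    """Map a batch of candidate windows (N, 3, 224, 224) to 512-d features."""
    return extractor(windows).flatten(1)

# Hypothetical labelled windows: 1 = car, 0 = no-car (random stand-ins)
train_windows = torch.randn(8, 3, 224, 224)
train_labels = [1, 0, 1, 1, 0, 0, 1, 0]

svm = LinearSVC()  # linear classifier on top of the CNN features
svm.fit(describe(train_windows).numpy(), train_labels)
print(svm.predict(describe(torch.randn(2, 3, 224, 224)).numpy()))
```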
UAV-DETR: An Enhanced RT-DETR Architecture for Efficient Small Object Detection in UAV Imagery
To mitigate the technical challenges associated with small-object detection, feature degradation, and spatial-contextual misalignment in UAV-acquired imagery, this paper proposes UAV-DETR, an enhanced Transformer-based object detection model designed for aerial scenarios. Specifically, UAV imagery often suffers from feature degradation due to low resolution and complex backgrounds and from semantic-spatial misalignment caused by dynamic shooting conditions. This work addresses these challenges by enhancing feature perception, semantic representation, and spatial alignment. Architecturally extending the RT-DETR framework, UAV-DETR incorporates three novel modules: the Channel-Aware Sensing Module (CAS), the Scale-Optimized Enhancement Pyramid Module (SOEP), and the newly designed Context-Spatial Alignment Module (CSAM), which integrates the functionalities of contextual and spatial calibration. These components collaboratively strengthen multi-scale feature extraction, semantic representation, and spatial-contextual alignment. The CAS module refines the backbone to improve multi-scale feature perception, while SOEP enhances semantic richness in shallow layers through lightweight channel-weighted fusion. CSAM further optimizes the hybrid encoder by simultaneously correcting contextual inconsistencies and spatial misalignments during feature fusion, enabling more precise cross-scale integration. Comprehensive comparisons with mainstream detectors, including Faster R-CNN and YOLOv5, demonstrate that UAV-DETR achieves superior small-object detection performance in complex aerial scenarios. The performance is thoroughly evaluated in terms of mAP@0.5, parameter count, and computational complexity (GFLOPs). Experiments on the VisDrone2019 dataset benchmark demonstrate that UAV-DETR achieves an mAP@0.5 of 51.6%, surpassing RT-DETR by 3.5% while reducing the number of model parameters from 19.8 million to 16.8 million.
Comparison of Classical Methods and Mask R-CNN for Automatic Tree Detection and Mapping Using UAV Imagery
Detecting and mapping individual trees accurately and automatically from remote sensing images is of great significance for precision forest management. Many algorithms, including classical methods and deep learning techniques, have been developed and applied for tree crown detection from remote sensing images. However, few studies have evaluated the accuracy of different individual tree detection (ITD) algorithms and their data and processing requirements. This study explored the accuracy of ITD using the local maxima (LM) algorithm, marker-controlled watershed segmentation (MCWS), and Mask Region-based Convolutional Neural Networks (Mask R-CNN) in a young plantation forest with different test images. Manually delineated tree crowns from UAV imagery were used for accuracy assessment of the three methods, followed by an evaluation of their data, processing, and application requirements. Overall, Mask R-CNN made the best use of the information in multi-band input images for detecting individual trees. The Mask R-CNN model with a multi-band combination produced higher accuracy than the model with a single-band image, and the RGB band combination achieved the highest ITD accuracy (F1 score = 94.68%). Moreover, the Mask R-CNN models with multi-band images provided higher ITD accuracies than the LM and MCWS algorithms. The LM and MCWS algorithms also achieved promising ITD accuracies when the canopy height model (CHM) was used as the test image (F1 score = 87.86% for LM, 85.92% for MCWS). The LM and MCWS algorithms are easy to use and have lower computational requirements, but they cannot identify tree species and are limited by algorithm parameters, which must be adjusted for each classification. Deep learning, with its end-to-end learning approach, is very efficient and capable of deriving information from multi-layer images, but it requires an additional training set with a large number of accurate samples as well as substantial computing resources. This study provides valuable information for forestry practitioners selecting an optimal approach for detecting individual trees.
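As a reference point for the LM algorithm compared above: it detects candidate tree tops as pixels that are the maxima of their local neighbourhood on a CHM. A minimal sketch, where the window size and height threshold are illustrative stand-ins for the study's calibrated parameters:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima_treetops(chm: np.ndarray, window: int = 5, min_height: float = 2.0):
    """Classical LM detection on a canopy height model: a pixel is a
    treetop candidate if it equals the maximum of its window x window
    neighbourhood and exceeds a height threshold."""
    neighbourhood_max = maximum_filter(chm, size=window)
    peaks = (chm == neighbourhood_max) & (chm >= min_height)
    return np.argwhere(peaks)  # (row, col) of candidate tree tops

# Tiny synthetic CHM with two "crowns" for demonstration
chm = np.zeros((20, 20))
chm[5, 5], chm[14, 12] = 8.0, 6.5
print(local_maxima_treetops(chm))  # [[ 5  5] [14 12]]
```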
Assessment of Vegetation Indices Derived by UAV Imagery for Durum Wheat Phenotyping under a Water Limited and Heat Stressed Mediterranean Environment
There is growing interest in using Spectral Vegetation Indices (SVIs) derived from Unmanned Aerial Vehicle (UAV) imagery as a fast and cost-efficient tool for plant phenotyping. The development of such tools is of paramount importance for continued progress in plant breeding, especially in the Mediterranean basin, where climate change is expected to further increase yield uncertainty. In the present study, the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR), and Green Normalized Difference Vegetation Index (GNDVI) were calculated from UAV imagery for two consecutive years in a set of twenty durum wheat varieties grown under a water-limited and heat-stressed environment. Statistically significant differences between genotypes were observed for the SVIs. GNDVI explained more variability than NDVI and SR when recorded at booting. GNDVI was significantly correlated with grain yield when recorded at booting and anthesis in the first and second year, respectively, while NDVI was correlated with grain yield when recorded at booting, but only in the first year. These results suggest that GNDVI has better discriminating efficiency and can be a better predictor of yield when recorded at early reproductive stages. The predictive ability of the SVIs was affected by plant phenology: correlations of grain yield with SVIs were stronger when the correlations of SVIs with heading were weaker or not significant. NDVI values recorded at the experimental site were significantly correlated with the grain yield of the same set of genotypes grown in other environments, with both positive and negative correlations observed, indicating that environmental conditions during grain filling can affect the sign of the correlations. These findings highlight the potential of SVIs derived from UAV imagery for durum wheat phenotyping under low-yielding Mediterranean conditions.
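The three indices follow directly from their standard definitions: NDVI = (NIR - R)/(NIR + R), SR = NIR/R, and GNDVI = (NIR - G)/(NIR + G). A small sketch computing all three from reflectance bands, with illustrative values rather than the study's data:

```python
import numpy as np

def vegetation_indices(nir, red, green, eps=1e-9):
    """NDVI, SR, and GNDVI from their standard definitions; inputs are
    reflectance arrays (pixels or per-plot means) for each band."""
    ndvi = (nir - red) / (nir + red + eps)
    sr = nir / (red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    return ndvi, sr, gndvi

# Illustrative per-plot mean reflectances (not the study's data)
nir, red, green = np.array([0.45]), np.array([0.08]), np.array([0.10])
ndvi, sr, gndvi = vegetation_indices(nir, red, green)
print(float(ndvi), float(sr), float(gndvi))  # ~0.70, ~5.6, ~0.64
```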