612 results for "RGB imagery"
Three-Dimensional Modeling of Weed Plants Using Low-Cost Photogrammetry
Sensing advances in plant phenotyping are of vital importance in basic and applied plant research. Plant phenotyping enables the modeling of complex shapes, which is useful, for example, in decision-making for agronomic management. In this sense, the development of 3D processing algorithms for plant modeling is expanding rapidly with the emergence of new sensors and techniques designed to characterize plants morphologically. However, there are still technical aspects to be improved, such as the accurate reconstruction of fine details. This study adapted low-cost techniques, Structure from Motion (SfM) and MultiView Stereo (MVS), to create 3D models of plants of three weed species with contrasting shapes and plant structures. Plant reconstruction was performed by applying SfM algorithms to an input set of digital images acquired sequentially along a track that was concentric and equidistant with respect to the plant axis, at three different angles from perpendicular to top view, which guaranteed the overlap between images needed to obtain high-precision 3D models. With this information, a dense point cloud was created using MVS, from which a 3D polygon mesh representing each plant's shape and geometry was generated. These 3D models were validated against ground-truth values (e.g., plant height, leaf area (LA), and plant dry biomass) using regression methods. The results showed, in general, good consistency in the correlation equations between the values estimated from the models and the actual values measured on the weed plants. Indeed, 3D modeling using SfM algorithms proved to be a valuable methodology for weed phenotyping, since it accurately estimated the actual values of plant height and LA. Additionally, image processing using the SfM method was relatively fast. Consequently, our results indicate the potential of this low-budget system for plant reconstruction at high detail, which may be usable in several scenarios, including outdoor conditions. Future research should address remaining issues, such as the time-cost trade-off and the level of detail required by different applications.
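Traits like the ones validated above can be read directly off a reconstructed point cloud. Below is a minimal Python sketch, assuming a plant-only cloud (ground already removed) and using a synthetic stand-in array; the percentile rule and hull-based footprint are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Stand-in for an SfM/MVS plant point cloud with the ground plane removed
rng = np.random.default_rng(0)
points = rng.random((5000, 3)) * np.array([0.3, 0.3, 0.5])  # x, y, z in metres

# Plant height: robust z-range (percentiles damp stray reconstruction noise)
height = np.percentile(points[:, 2], 99) - np.percentile(points[:, 2], 1)

# Projected canopy area: 2D convex hull of the x-y footprint
# (for 2-D input, ConvexHull.volume holds the enclosed area)
area = ConvexHull(points[:, :2]).volume

print(f"plant height: {height:.3f} m, projected canopy area: {area:.4f} m^2")
```

Per-plot values computed this way would then be regressed against ground-truth height and leaf area, as the study does.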
Combining Color Indices and Textures of UAV-Based Digital Imagery for Rice LAI Estimation
Leaf area index (LAI) is a fundamental indicator of plant growth status in agronomic and environmental studies. Due to rapid advances in unmanned aerial vehicle (UAV) and sensor technologies, UAV-based remote sensing is emerging as a promising solution for monitoring crop LAI with great flexibility and applicability. This study aimed to determine the feasibility of combining color and texture information derived from UAV-based digital images for estimating the LAI of rice (Oryza sativa L.). Rice field trials were conducted at two sites using different nitrogen application rates, varieties, and transplanting methods from 2016 to 2017. Digital images were collected using a consumer-grade UAV after sampling at the key growth stages of tillering, stem elongation, panicle initiation, and booting. Vegetation color indices (CIs) and grey-level co-occurrence matrix-based textures were extracted from mosaicked UAV ortho-images for each plot. To form indices from two different textures, normalized difference texture indices (NDTIs) were calculated from pairs of randomly selected textures. The relationships between rice LAI and each calculated index were then compared using simple linear regression. Multivariate regression models with different input sets were further used to test the potential of combining CIs with various textures for rice LAI estimation. The results revealed that the visible atmospherically resistant index (VARI) based on the three visible bands and the NDTI based on the mean textures derived from the red and green bands were the best for LAI retrieval in the CI and NDTI groups, respectively. Independent accuracy assessment showed that random forest (RF) exhibited the best predictive performance when combining CI and texture inputs (R2 = 0.84, RMSE = 0.87, MAE = 0.69). This study introduces a promising solution for combining color indices and textures from UAV-based digital imagery for rice LAI estimation. Future studies are needed to find the best operation mode, suitable ground resolution, and optimal predictive methods for practical applications.
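For concreteness, here is a small Python sketch of the two index families named in the abstract, under their usual definitions (VARI = (G - R)/(G + R - B); NDTI = (T1 - T2)/(T1 + T2)); the random band crops, the GLCM distance/angle choice, and the epsilon guard are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix

rng = np.random.default_rng(0)
red, green, blue = (rng.integers(0, 256, (64, 64), dtype=np.uint8)
                    for _ in range(3))       # stand-in band crops for one plot

def vari(r, g, b):
    """Visible atmospherically resistant index: (G - R) / (G + R - B)."""
    r, g, b = (x.astype(float) for x in (r, g, b))
    return (g - r) / (g + r - b + 1e-9)

def glcm_mean(band):
    """GLCM 'mean' texture: sum of i * p(i, j) over the normalised matrix."""
    p = graycomatrix(band, distances=[1], angles=[0], levels=256,
                     symmetric=True, normed=True)[:, :, 0, 0]
    return float((np.arange(256)[:, None] * p).sum())

# NDTI from the red- and green-band mean textures, as in the abstract
t_r, t_g = glcm_mean(red), glcm_mean(green)
ndti = (t_r - t_g) / (t_r + t_g)
print(f"plot-mean VARI: {vari(red, green, blue).mean():.3f}, NDTI: {ndti:.3f}")
```

Per-plot features of this kind are what a multivariate model such as random forest would consume for LAI prediction.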
A Systematic Review of the Factors Influencing the Estimation of Vegetation Aboveground Biomass Using Unmanned Aerial Systems
Interest in the use of unmanned aerial systems (UAS) to estimate the aboveground biomass (AGB) of vegetation in agricultural and non-agricultural settings is growing rapidly but there is no standardized methodology for planning, collecting and analyzing UAS data for this purpose. We synthesized 46 studies from the peer-reviewed literature to provide the first-ever review on the subject. Our analysis showed that spectral and structural data from UAS imagery can accurately estimate vegetation biomass in a variety of settings, especially when both data types are combined. Vegetation-height metrics are useful for trees, while metrics of variation in structure or volume are better for non-woody vegetation. Multispectral indices using NIR and red-edge wavelengths normally have strong relationships with AGB but RGB-based indices often outperform them in models. Including measures of image texture can improve model accuracy for vegetation with heterogeneous canopies. Vegetation growth structure and phenological stage strongly influence model accuracy and the selection of useful metrics and should be considered carefully. Additional factors related to the study environment, data collection and analytical approach also impact biomass estimation and need to be considered throughout the workflow. Our review shows that UASs provide a capable tool for fine-scale, spatially explicit estimations of vegetation AGB and are an ideal complement to existing ground- and satellite-based approaches. We recommend future studies aimed at emerging UAS technologies and at evaluating the effect of vegetation type and growth stages on AGB estimation.
Spectral Reconstruction from RGB Imagery: A Potential Option for Infinite Spectral Data?
Spectral imaging has revolutionised various fields by capturing detailed spatial and spectral information. However, its high cost and complexity limit the acquisition of the large amounts of data needed to generalise processes and methods, thus limiting widespread adoption. To overcome this issue, a body of literature investigates how to reconstruct spectral information from RGB images, with recent methods reaching fairly low reconstruction errors. This article looks beyond reconstruction metrics to examine how information is modified by RGB-to-spectral reconstruction, with a focus on assessing the accuracy of the reconstruction process and its ability to replicate full spectral information. In addition, we conduct a colorimetric relighting analysis based on the reconstructed spectra. We investigate the information representation by principal component analysis and demonstrate that, while the reconstruction error of the state-of-the-art method is low, the nature of the reconstructed information is different. While the reconstructed spectra perform very well in colour-imaging applications such as handling changes of illumination, the distribution of the information difference between measured and estimated spectra suggests that caution should be exercised before generalising the use of this approach.
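The abstract's PCA argument can be illustrated in a few lines: if reconstruction preserved the nature of the spectral information, the explained-variance structure of the measured and reconstructed sets should coincide. This sketch uses synthetic stand-in spectra, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
measured = rng.random((500, 31))                  # 500 spectra, 31 bands
reconstructed = measured + 0.02 * rng.standard_normal(measured.shape)

# If reconstruction preserved the nature of the information, the cumulative
# explained-variance curves of the two sets should closely coincide.
for name, spectra in (("measured", measured), ("reconstructed", reconstructed)):
    evr = PCA(n_components=5).fit(spectra).explained_variance_ratio_
    print(name, np.cumsum(evr).round(4))
```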
Evaluation of RGB and Multispectral Unmanned Aerial Vehicle (UAV) Imagery for High-Throughput Phenotyping and Yield Prediction in Barley Breeding
With advances in plant genomics, plant phenotyping has become a new bottleneck in plant breeding, and the need for reliable high-throughput plant phenotyping techniques has emerged. In the face of future climatic challenges, it does not seem appropriate to continue to select solely for grain yield and a few agronomically important traits. Therefore, new sensor-based high-throughput phenotyping is increasingly used in plant breeding research, with the potential to provide non-destructive, objective, and continuous plant characterization that reveals the formation of the final grain yield and provides insights into the physiology of the plant during the growth phase. In this context, we present a comparison of two sensor systems, Red-Green-Blue (RGB) and multispectral cameras, attached to unmanned aerial vehicles (UAV), and investigate their suitability for yield prediction using different modelling approaches in a segregating barley introgression population in three environments, with weekly data collection during the entire vegetation period. In addition to vegetation indices, morphological traits such as canopy height, vegetation cover, and growth dynamics traits were used for yield prediction. Repeatability analyses and genotype association studies of sensor-based traits were compared with reference values from ground-based phenotyping to test the use of conventional and new traits for barley breeding. The relative height estimation of the canopy by UAV achieved high precision (up to r = 0.93) and repeatability (up to R2 = 0.98). In addition, we found a large overlap of detected significant genotypes between the reference heights and sensor-based heights. The yield prediction accuracy of both sensor systems was at the same level and reached a maximum prediction accuracy of r2 = 0.82, with a continuous increase in precision throughout the entire vegetation period. Due to the lower costs and the consumer-friendly handling of image acquisition and processing, RGB imagery appears the more suitable option for yield prediction in this study.
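One of the morphological traits mentioned, canopy height, is commonly derived in such UAV studies as the difference between a digital surface model (DSM) and a terrain model (DTM). A minimal sketch with synthetic rasters, assuming this standard canopy-height-model approach rather than the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
dtm = rng.random((100, 100))                    # bare-ground elevation (m)
dsm = dtm + rng.uniform(0.2, 0.9, (100, 100))   # surface = ground + canopy
chm = dsm - dtm                                 # canopy height model

# Per-plot height: a high quantile damps isolated noisy pixels
print(f"estimated canopy height: {np.quantile(chm, 0.99):.2f} m")
```

Per-plot heights obtained this way can then be correlated against ground measurements (e.g., with np.corrcoef), which is the kind of comparison behind the r = 0.93 reported above.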
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Crop density is a key agronomic trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method, but it is tedious and time-consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. RGB images taken with a high-resolution camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments covering contrasting conditions, with sowing densities ranging from 100 to 600 seeds⋅m⁻². Results demonstrate that density is estimated accurately, with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages.
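The pipeline described above (green-pixel classification, connected components, per-object features, neural network) can be sketched as follows; the Excess Green threshold, the feature set, and the network size are illustrative assumptions, not the paper's calibrated choices.

```python
import numpy as np
from scipy import ndimage

def excess_green(rgb):                 # rgb: float H x W x 3 array in [0, 1]
    return 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

def object_features(rgb, threshold=0.1):
    mask = excess_green(rgb) > threshold         # green-pixel classification
    labels, n = ndimage.label(mask)              # connected components (objects)
    feats = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        feats.append([ys.size,                   # object area in pixels
                      xs.max() - xs.min() + 1,   # bounding-box width
                      ys.max() - ys.min() + 1])  # bounding-box height
    return np.array(feats)

demo = np.zeros((20, 20, 3))
demo[5:10, 5:12, 1] = 1.0                        # one synthetic green object
print(object_features(demo))                     # [[35  7  5]]

# With per-object plant counts y from manual annotation, a small network,
# e.g. sklearn's MLPRegressor(hidden_layer_sizes=(16,)), maps features to counts.
```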
Automated crop plant counting from very high-resolution aerial imagery
Knowing before harvest how many plants have emerged and how they are growing is key to optimizing labour and the efficient use of resources. Unmanned aerial vehicles (UAV) are a useful tool for fast and cost-efficient data acquisition. However, imagery needs to be converted into operational spatial products that crop producers can use to gain insight into the spatial distribution of plants in the field. In this research, an automated method for counting plants from very high-resolution UAV imagery is addressed. The proposed method uses machine vision (the Excess Green Index and Otsu's method) and transfer learning using convolutional neural networks to identify and count plants. The integrated methods were implemented to count 10-week-old spinach plants in an experimental field with a surface area of 3.2 ha. Validation data of plant counts were available for 1/8 of the surface area. The results showed that the proposed methodology can count plants with an accuracy of 95% at a spatial resolution of 8 mm/pixel in an area up to 172 m². Moreover, when the spatial resolution decreases by 50%, the maximum additional counting error is 0.7%. Finally, a total of 170,000 plants in an area of 3.5 ha was computed with an error of 42.5%. The study shows that it is feasible to count individual plants using UAV-based off-the-shelf products and that machine vision/learning algorithms make it possible to translate image data into practical information for non-experts.
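A minimal sketch of the machine-vision stage named in the abstract: an Excess Green Index image thresholded with Otsu's method, with connected components counted as plant candidates. The input handling and toy demo are illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def count_plant_candidates(rgb):       # rgb: float H x W x 3 array in [0, 1]
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]   # Excess Green Index
    mask = exg > threshold_otsu(exg)   # Otsu's method picks the cut-off
    _, n_objects = ndimage.label(mask) # one component ~ one plant candidate
    return n_objects

demo = np.zeros((30, 30, 3))
demo[2:6, 2:6, 1] = 1.0                # two synthetic "plants"
demo[12:16, 12:16, 1] = 1.0
print(count_plant_candidates(demo))    # 2
```

In the study, transfer learning with convolutional neural networks complements this machine-vision step to identify and count plants.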
Monitoring Wheat Leaf Rust and Stripe Rust in Winter Wheat Using High-Resolution UAV-Based Red-Green-Blue Imagery
During the past decade, imagery data acquired from unmanned aerial vehicles (UAVs), thanks to their high spatial, spectral, and temporal resolutions, have attracted increasing attention for discriminating healthy from diseased plants and monitoring the progress of such plant diseases in fields. Despite the well-documented use of UAV-based hyperspectral remote sensing for discriminating healthy and diseased plant areas, employing red-green-blue (RGB) imagery for a similar purpose has yet to be fully investigated. This study aims to evaluate UAV-based RGB imagery for discriminating healthy plants from those infected by stripe and wheat leaf rusts in winter wheat (Triticum aestivum L.), with a focus on implementing an expert system to assist growers in improved disease management. RGB images were acquired at four representative wheat-producing sites in the Grand Duchy of Luxembourg. Diseased leaf areas were determined based on the digital numbers (DNs) of the green and red spectral bands for wheat stripe rust (WSR), and the combination of DNs of the green, red, and blue spectral bands for wheat leaf rust (WLR). WSR and WLR caused alterations in the typical reflectance spectra of wheat plants between the green and red spectral channels. Overall, good agreement between UAV-based estimates and observations was found for canopy cover, WSR, and WLR severities, with statistically significant correlations (p-value (Kendall) < 0.0001). Correlation coefficients were 0.92, 0.96, and 0.86 for WSR severity, WLR severity, and canopy cover, respectively. While the estimation of canopy cover was most often less accurate (correlation coefficients < 0.20), WSR- and WLR-infected leaf areas were identified satisfactorily using the RGB imagery-derived indices during the critical period (i.e., stem elongation and booting stages) for efficacious fungicide application, and disease severities were also quantified accurately over the same period. Using such a UAV-based RGB imagery method for monitoring fungal foliar diseases throughout the cropping season can help to identify any new disease outbreak and efficaciously control its spread.
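The band-based discrimination described (green and red DNs for WSR) could look like the following hypothetical sketch; the ratio rule and threshold are invented for illustration and would have to be calibrated against visually scored plots, as they are not the paper's decision rules.

```python
import numpy as np

def stripe_rust_mask(red_dn, green_dn, threshold=1.05):
    """Flag pixels whose red DN rises relative to green, as yellowing rust
    pustules would cause; `threshold` is a made-up placeholder that would
    be calibrated against visually scored plots."""
    ratio = red_dn.astype(float) / (green_dn.astype(float) + 1e-9)
    return ratio > threshold

def severity(disease_mask, canopy_mask):
    """Disease severity as the diseased fraction of the canopy area."""
    return disease_mask[canopy_mask].mean()
```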
An open source workflow for weed mapping in native grassland using unmanned aerial vehicle: using Rumex obtusifolius as a case study
Weed control is one of the biggest challenges in organic farms and nature reserve areas where mass spraying is prohibited. Recent advancements in remote sensing and airborne technologies provide a fast and efficient means to support environmental monitoring and management, allowing early detection of invasive species. However, to perform weed classification, current studies have mostly relied on object-based image analysis (OBIA) and proprietary software that require substantial human input. This paper proposes an open-source workflow for automated weed mapping using a commercially available unmanned aerial vehicle (UAV). The UAV was flown at a low altitude between 10 m and 20 m and collected true-colour RGB imagery over a weed-infested nature reserve. The aim of this study is to develop a repeatable and robust system for early weed detection, with minimal human intervention, for the classification of Rumex obtusifolius (R. obtusifolius). Preliminary results of the proposed workflow achieved an overall accuracy of 92.1% with an F1 score of 78.7%. The approach also demonstrated the capability to map R. obtusifolius in datasets collected at various flight altitudes, camera settings, and light conditions. This shows the potential for a semi- or fully automated early weed detection system in grasslands using UAV imagery.
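The two figures quoted (overall accuracy 92.1%, F1 score 78.7%) are standard classification metrics; a minimal sketch with stand-in labels:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

reference = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = R. obtusifolius
predicted = np.array([0, 0, 1, 0, 0, 1, 1, 0, 1, 0])  # classifier output

print(f"overall accuracy: {accuracy_score(reference, predicted):.1%}")
print(f"F1 score:         {f1_score(reference, predicted):.1%}")
```

F1 balances precision and recall on the (rarer) weed class, which is why it can sit well below overall accuracy when the background class dominates.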
Integrating UAV-Based RGB Imagery with Semi-Supervised Learning for Tree Species Identification in Heterogeneous Forests
The integration of unmanned aerial vehicle (UAV) remote sensing and deep learning has emerged as a highly effective strategy for inventorying forest resources. However, the spatiotemporal variability of forest environments and the scarcity of annotated data hinder the performance of conventional supervised deep-learning models. To overcome these challenges, this study developed Efficient Tree (ET), a semi-supervised tree detector designed for forest scenes. ET employs an enhanced YOLO model (YOLO-Tree) as its base detector and incorporates a teacher–student semi-supervised learning (SSL) framework based on pseudo-labeling, effectively leveraging abundant unlabeled data to bolster model robustness. The results revealed that SSL significantly improved outcomes in scenarios with sparse labeled data, specifically when the annotation proportion was below 50%. Additionally, employing overlapping cropping as a data augmentation strategy mitigated instability during semi-supervised training under conditions of limited sample size. Notably, introducing unlabeled data from external sites enhanced the accuracy and cross-site generalization of models trained on diverse datasets, achieving F1, mAP50, and mAP50-95 scores of 0.979, 0.992, and 0.871, respectively. In conclusion, this study highlights the potential of combining UAV-based RGB imagery with SSL to advance tree species identification in heterogeneous forests.
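The pseudo-labeling idea behind ET's teacher–student framework can be illustrated with scikit-learn's generic self-training wrapper; this is a sketch of the SSL principle on stand-in tabular features, not the paper's YOLO-based detector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 8))                 # stand-in image-patch features
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # stand-in species labels
y[rng.random(200) > 0.3] = -1            # drop ~70% of labels (-1 = unlabeled)

# The wrapped model is retrained on its own confident predictions,
# which act as teacher-issued pseudo-labels for the unlabeled pool.
model = SelfTrainingClassifier(
    RandomForestClassifier(n_estimators=100), threshold=0.9)
model.fit(X, y)
print(f"{(y == -1).sum()} unlabeled samples used during self-training")
```

The confidence threshold plays the teacher's role: only predictions above it are promoted to pseudo-labels for the next training round.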