29 result(s) for "Chen, Riqiang"
Apple Tree Branch Information Extraction from Terrestrial Laser Scanning and Backpack-LiDAR
The branches of fruit trees provide support for the growth of leaves, buds, flowers, fruits, and other organs. The number and length of branches guarantee the normal growth, flowering, and fruiting of fruit trees and are thus important indicators of tree growth and yield. However, due to their low height and the high number of branches, the precise management of fruit trees lacks a theoretical basis and data support. In this paper, we introduce a method for extracting topological and structural information on fruit tree branches based on LiDAR (Light Detection and Ranging) point clouds and prove its feasibility for the study of fruit tree branches. The results show that based on Terrestrial Laser Scanning (TLS), the relative errors of branch length and number are 7.43% and 12% for first-order branches, and 16.75% and 9.67% for second-order branches. For all branches combined, the relative errors reach 15.34% and 2.89%. We also evaluated the potential of backpack-LiDAR by comparing field measurements and quantitative structural model (QSM) evaluations of 10 sample trees. This comparison shows that, apart from the first-order branch information, the information on other branch orders is underestimated to varying degrees. The root mean square errors (RMSE) of the length and number of the first-order branches were 3.91 m and 1.30, and the normalized root mean square errors (NRMSE) were 14.62% and 11.96%, respectively. Our work represents the first automated classification of fruit tree branches, which can be used in support of precise fruit tree pruning, quantitative forecast of yield, evaluation of fruit tree growth, and the modern management of orchards.
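The RMSE and NRMSE figures quoted above follow the standard definitions; a minimal numpy sketch (the branch measurements below are invented for illustration, not the study's data):

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error between field measurements and estimates."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

def nrmse(observed, predicted):
    """RMSE normalized by the mean observation, expressed in percent."""
    return 100.0 * rmse(observed, predicted) / float(np.mean(observed))

# Invented first-order branch lengths (m): field tape measure vs. QSM estimate.
measured = [12.1, 9.8, 15.3, 11.0]
modelled = [11.5, 10.2, 14.1, 12.0]
length_rmse = rmse(measured, modelled)
length_nrmse = nrmse(measured, modelled)
```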
Estimation of Soybean Yield by Combining Maturity Group Information and Unmanned Aerial Vehicle Multi-Sensor Data Using Machine Learning
Accurate and rapid estimation of the crop yield is essential to precision agriculture. Critical to crop improvement, yield is a primary index for selecting excellent genotypes in crop breeding. Recently developed unmanned aerial vehicle (UAV) platforms and advanced algorithms can provide powerful tools for plant breeders. Genotype category information such as the maturity group information (M) can significantly influence soybean yield estimation using remote sensing data. The objective of this study was to improve soybean yield prediction by combining M with UAV-based multi-sensor data using machine learning methods. We investigated three types of maturity groups (Early, Median and Late) of soybean, and collected the UAV-based hyperspectral and red–green–blue (RGB) images at three key growth stages. Vegetation indices (VI) and texture features (Te) were extracted and combined with M to predict yield using partial least squares regression (PLSR), Gaussian process regression (GPR), random forest regression (RFR) and kernel ridge regression (KRR). The results showed that (1) the method of combining M with remote sensing data could significantly improve the estimation performance for soybean yield; (2) the combination of three variables (VI, Te and M) gave the best estimation accuracy, and the flowering stage was the optimal single time point for yield estimation (R2 = 0.689, RMSE = 408.099 kg/hm2), while using multiple growth stages produced the best estimation performance (R2 = 0.700, RMSE = 400.946 kg/hm2); and (3) comparing the models constructed by different algorithms for different growth stages showed that the models built by GPR performed best. Overall, the results of this study provide insights into soybean yield estimation based on UAV remote sensing data and maturity information.
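Of the four regressors named in the abstract, kernel ridge regression is the simplest to sketch from scratch. Below is a minimal numpy version showing how maturity-group information (M) can be appended to VI and texture features as a one-hot block; all feature values, yields, and hyperparameters are illustrative, not the study's:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, alpha=1e-3, gamma=0.5):
    """Kernel ridge regression: solve (K + alpha*I) w = y for the dual weights."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + alpha * np.eye(len(X)), y)

def krr_predict(X_train, w, X_new, gamma=0.5):
    """Predict by weighting kernel similarities to the training rows."""
    return rbf_kernel(X_new, X_train, gamma) @ w

# Toy design matrix: two VI columns, one texture column, and a one-hot
# maturity-group block (Early, Median, Late). Values are made up.
X = np.array([[0.71, 0.42, 0.8, 1, 0, 0],
              [0.65, 0.38, 0.7, 0, 1, 0],
              [0.80, 0.50, 0.9, 0, 0, 1],
              [0.60, 0.35, 0.6, 1, 0, 0]])
y = np.array([3200.0, 2900.0, 3500.0, 2700.0])  # yield, kg/hm2 (illustrative)
w = krr_fit(X, y)
pred = krr_predict(X, w, X)
```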
Maize Ear Height and Ear–Plant Height Ratio Estimation with LiDAR Data and Vertical Leaf Area Profile
Ear height (EH) and ear–plant height ratio (ER) are important agronomic traits in maize that directly affect nutrient utilization efficiency and lodging resistance and ultimately relate to maize yield. However, challenges in executing large-scale EH and ER measurements severely limit maize breeding programs. In this paper, we propose a novel, simple method for field monitoring of EH and ER based on the relationship between ear position and the vertical leaf area profile. The vertical leaf area profile was estimated from Terrestrial Laser Scanner (TLS) and Drone Laser Scanner (DLS) data by applying the voxel-based point cloud method. The method was validated using two years of data collected from 128 field plots. The main factors affecting the accuracy were investigated, including the LiDAR platform, voxel size, and point cloud density. EH estimation using TLS data yielded R2 = 0.59 and RMSE = 16.90 cm for 2019, and R2 = 0.39 and RMSE = 18.40 cm for 2021. In contrast, EH estimation using DLS data had R2 = 0.54 and RMSE = 18.00 cm for 2019, and R2 = 0.46 and RMSE = 26.50 cm for 2021, when the planting density was 67,500 plants/ha and below. The ER estimated using 2019 TLS data had R2 = 0.45 and RMSE = 0.06. In summary, this paper proposes a simple method for measuring maize EH and ER in the field; the results also offer insights into the structure-related traits of maize cultivars, further aiding selection in molecular breeding.
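The voxel-based point cloud method reduces to snapping points to a grid and tallying occupied cells per height layer. A crude numpy sketch of that idea; the densest-layer rule used for ear position below is a stand-in assumption for illustration, not the paper's actual EH estimator:

```python
import numpy as np

def vertical_profile(points, voxel=0.1):
    """Occupied-voxel count per height layer of an (N, 3) point cloud.

    Points are snapped to a `voxel`-sized grid; for each vertical layer,
    the number of unique occupied (x, y) cells is tallied.
    """
    idx = np.floor(points / voxel).astype(int)
    profile = {}
    for layer in np.unique(idx[:, 2]):
        cells = np.unique(idx[idx[:, 2] == layer][:, :2], axis=0)
        profile[int(layer)] = len(cells)
    return profile

def ear_height_from_profile(profile, voxel=0.1):
    """Hypothetical rule: take the centre of the densest layer as a
    proxy for ear position (a simplification, not the paper's method)."""
    layer = max(profile, key=profile.get)
    return (layer + 0.5) * voxel
```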
Automatic Rice Early-Season Mapping Based on Simple Non-Iterative Clustering and Multi-Source Remote Sensing Images
Timely and accurate rice spatial distribution maps play a vital role in food security and social stability. Early-season rice mapping is of great significance for yield estimation, crop insurance, and national food policymaking. Taking Tongjiang City in Heilongjiang Province, an area with strong spatial heterogeneity, as the study area, a hierarchical K-Means binary automatic rice classification method based on phenological feature optimization (PFO-HKMAR) is proposed, using the Google Earth Engine platform and Sentinel-1/2 and Landsat 7/8 data. First, a SAR backscattering intensity time series is reconstructed and used to construct and optimize polarization characteristics. A new SAR index named VH-sum is built, which is defined as the summation of VH backscattering intensity over specific time periods based on the temporal changes in the VH polarization characteristics of different land cover types. The optical data then undergo feature selection, optimization, and reconstruction. Finally, the PFO-HKMAR classification method is established based on Simple Non-Iterative Clustering. PFO-HKMAR can achieve early-season rice mapping one month before harvest, with overall accuracy, Kappa, and F1 score reaching 0.9114, 0.8240 and 0.9120, respectively. Compared with two crop distribution datasets in Northeast China and ARM-SARFS, the overall accuracy, Kappa, and F1 scores of PFO-HKMAR are improved by 0.0507–0.1957, 0.1029–0.3945, and 0.0611–0.1791, respectively. The results show that PFO-HKMAR can be promoted in Northeast China to enable early-season rice mapping and provide valuable and timely information to different stakeholders and decision makers.
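The VH-sum index, defined as a temporal summation of VH backscatter, is straightforward to express on a (time, rows, cols) image stack. A minimal sketch; the date indices and summation window below are placeholders, not the periods used in the study:

```python
import numpy as np

def vh_sum(vh_stack, dates, start, end):
    """Sum VH backscattering intensity over acquisitions in [start, end].

    vh_stack: array of shape (time, rows, cols) of VH intensity images.
    dates:    array of shape (time,) of sortable date keys (e.g. day-of-year).
    Returns a (rows, cols) image of summed VH intensity.
    """
    mask = (dates >= start) & (dates <= end)
    return vh_stack[mask].sum(axis=0)

# Placeholder stack: 3 acquisitions of a 2x2 scene, day-of-year keys.
stack = np.arange(12.0).reshape(3, 2, 2)
doy = np.array([150, 170, 190])
summed = vh_sum(stack, doy, 160, 200)  # sums the last two acquisitions
```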
Estimation of potato leaf area index based on spectral information and Haralick textures from UAV hyperspectral images
The Leaf Area Index (LAI) is a crucial parameter for evaluating crop growth and informing fertilization management in agricultural fields. Compared to traditional methods, UAV-based hyperspectral imaging technology offers significant advantages for non-destructive, rapid monitoring of crop LAI by simultaneously capturing both spectral information and two-dimensional images of the crop canopy, which reflect changes in its structure. While numerous studies have demonstrated that various texture features, such as the Gray-Level Co-occurrence Matrix (GLCM), can be used independently or in combination with crop canopy spectral data for LAI estimation, limited research exists on the application of Haralick textures for evaluating crop LAI across multiple growth stages. In this study, experiments were conducted on two early-maturing potato varieties, subjected to different treatments (e.g., planting density and nitrogen levels), at the Xiaotangshan base in Beijing during three key growth stages. Crop canopy spectral reflectance and Haralick textures were extracted from ultra-low-altitude UAV hyperspectral imagery, while LAI was measured using ground-based methods. Three types of spectral data—original spectral reflectance (OSR), first-order differential spectral reflectance (FDSR), and vegetation indices (VIs)—along with three types of Haralick textures—simple, advanced, and higher-order—were analyzed for their correlation with LAI across multiple growth stages. A model for LAI estimation in potato at multiple growth stages, based on spectral and textural features screened by the successive projection algorithm (SPA), was constructed using partial least squares regression (PLSR), random forest regression (RFR) and Gaussian process regression (GPR) machine learning methods.
The results indicated that: (1) spectral data demonstrate greater sensitivity to LAI than Haralick textures, with sensitivity decreasing in the order of VIs, FDSR and OSR; (2) spectral data alone provide more accurate LAI estimates than Haralick textures, with VIs achieving an accuracy of R² = 0.63, RMSE = 0.38, NRMSE = 28.36%; and (3) although Haralick textures alone were not effective for LAI estimation, they can enhance LAI prediction when combined with spectral data, with the GPR method achieving R² = 0.70, RMSE = 0.30, NRMSE = 20.28%. These findings offer a valuable reference for large-scale, accurate monitoring of potato LAI.
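Haralick features are all derived from the gray-level co-occurrence matrix (GLCM). A minimal numpy sketch of one such feature (contrast) for a single pixel offset; real pipelines quantize reflectance to many more gray levels and average over several offsets:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one (dx, dy) offset.

    img must already be quantized to integer gray levels in [0, levels).
    """
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def haralick_contrast(p):
    """Haralick 'contrast': sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))
```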
Identification of the Initial Anthesis of Soybean Varieties Based on UAV Multispectral Time-Series Images
Accurate and high-throughput identification of the initial anthesis of soybean varieties is important for the breeding and screening of high-quality soybean cultivars in field trials. The objectives of this study were to identify the initial day of anthesis (IADAS) of soybean varieties based on remote sensing multispectral time-series images acquired by unmanned aerial vehicles (UAVs), and analyze the differences in the initial anthesis of the same soybean varieties between two different climatic regions, Shijiazhuang (SJZ) and Xuzhou (XZ). First, the temporal dynamics of several key crop growth indicators and spectral indices were analyzed to find an effective indicator that favors the identification of IADAS, including leaf area index (LAI), above-ground biomass (AGB), canopy height (CH), normalized-difference vegetation index (NDVI), red edge chlorophyll index (CIred edge), green normalized-difference vegetation index (GNDVI), enhanced vegetation index (EVI), two-band enhanced vegetation index (EVI2) and normalized-difference red-edge index (NDRE). Next, this study compared several functions, like the symmetric Gaussian function (SGF), asymmetric Gaussian function (AGF), double logistic function (DLF), and Fourier function (FF), for time-series curve fitting, and then estimated the IADAS of soybean varieties with the first-order derivative maximal feature (FDmax) of the CIred edge phenology curves. The relative thresholds of the CIred edge curves were also used to estimate IADAS, in two ways: a single threshold for all of the soybean varieties, and three different relative thresholds for early, middle, and late anthesis varieties, respectively. Finally, this study presented the variations in the IADAS of the same soybean varieties between two different climatic regions and discussed the probable causal factors.
The results showed that CIred edge was more suitable for soybean IADAS identification compared with the other investigated indicators because it had no saturation during the whole crop lifespan. Compared with DLF, AGF and FF, SGF provided a better fit of the CIred edge time-series curves without overfitting problems, although the coefficient of determination (R2) and root mean square error (RMSE) were not the best. The FDmax of the SGF-fitted CIred edge curve (SGF_CIred edge) provided good estimates of the IADAS, with an RMSE and mean average error (MAE) of 3.79 days and 3.00 days, respectively. The SGF_CIred edge curve can be used to group the soybean varieties into early, middle and late groups. Additionally, the accuracy of the IADAS was improved (RMSE = 3.69 days and MAE = 3.09 days) by using three different relative thresholds (i.e., RT50, RT55, RT60) for the three flowering groups compared to when using a single threshold (RT50). In addition, it was found that the IADAS of the same soybean varieties varied greatly when planted in two different climatic regions due to the genotype–environment interactions. Overall, this study demonstrated that the IADAS of soybean varieties can be identified efficiently and accurately based on UAV remote sensing multispectral time-series data.
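For a symmetric Gaussian fit, the FDmax feature has a closed form: the first derivative of a*exp(-(t-b)²/(2c²)) peaks at t = b − c on the rising limb. A numpy sketch that fits the SGF by a quadratic regression on log values (an assumption for illustration; the study does not state its fitting routine) and reads off FDmax:

```python
import numpy as np

def fit_sgf(t, y):
    """Fit y = a * exp(-(t - b)^2 / (2 c^2)) via a quadratic fit to log(y).

    Assumes y > 0 (true for CIred edge-style vegetation indices).
    log y is quadratic in t: p2*t^2 + p1*t + p0, so a, b, c follow directly.
    """
    p2, p1, p0 = np.polyfit(t, np.log(y), 2)
    c = np.sqrt(-1.0 / (2.0 * p2))          # p2 = -1/(2 c^2)
    b = p1 * c ** 2                          # p1 = b / c^2
    a = np.exp(p0 + b ** 2 / (2.0 * c ** 2))  # p0 = ln a - b^2/(2 c^2)
    return a, b, c

def fdmax_day(b, c):
    """The first derivative of a Gaussian is maximal at t = b - c."""
    return b - c
```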
Land-Use Mapping with Multi-Temporal Sentinel Images Based on Google Earth Engine in Southern Xinjiang Uygur Autonomous Region, China
Land-use maps are thematic materials reflecting the current situation, geographical diversity, and classification of land use and are an important scientific foundation that can assist decision-makers in adjusting land-use structures, agricultural zoning, regional planning, and territorial improvement according to local conditions. Spectral reflectance and radar signatures of time series are important in distinguishing land-use types. However, their impact on the accuracy of land-use mapping and decision making remains unclear. In addition, the many spatially and temporally heterogeneous landscapes in southern Xinjiang limit the accuracy of existing land-use classification products. Therefore, our objective herein is to develop reliable land-use products for the highly heterogeneous environment of the southern Xinjiang Uygur Autonomous Region using the freely available public Sentinel image datasets. Specifically, to determine the effect of temporal features on classification, several classification scenarios with different temporal features were developed using multi-temporal Sentinel-1, Sentinel-2, and terrain data in order to assess the importance, contribution, and impact of different temporal features (spectral and radar) on land-use classification models and determine the optimal time for land-use classification. Furthermore, to determine the optimal method and parameters suitable for local land-use classification research, we evaluated and compared the performance of three decision-tree-related classifiers (classification and regression tree, random forest, and gradient tree boost) with respect to classifying land use. The gradient tree boost model yielded the highest average overall accuracy (95%), kappa (95%), and F1 score (98%), and was thus determined to be the most suitable for land-use classification.
Of the four individual periods, the image features in autumn (25 September to 5 November) were the most accurate for identifying land-use classes across all three classifiers. The results also show that the inclusion of multi-temporal image features consistently improves the classification of land-use products, with pre-summer (28 May–20 June) images providing the most significant improvement (the average OA, kappa, and F1 score of all the classifiers were improved by 6%, 7%, and 3%, respectively) and autumn images the least (the average OA, kappa, and F1 score of all the classifiers were improved by 2%, 3%, and 2%, respectively). Overall, these analyses of how classifiers and image features affect land-use maps provide a reference for similar land-use classifications in highly heterogeneous areas. Moreover, these products are designed to describe the highly heterogeneous environments in the study area, for example, identifying pear trees that affect local economic development, and allow for the accurate mapping of alpine wetlands in the northwest.
Estimation of Maize Biomass at Multi-Growing Stage Using Stem and Leaf Separation Strategies with 3D Radiative Transfer Model and CNN Transfer Learning
The precise estimation of above-ground biomass (AGB) is imperative for the advancement of breeding programs. Optical variables, such as vegetation indices (VI), have been extensively employed in monitoring AGB. However, the limited robustness of inversion models remains a significant impediment to the widespread application of UAV-based multispectral remote sensing in AGB inversion. In this study, a novel stem–leaf separation strategy for AGB estimation is delineated. Convolutional neural network (CNN) and transfer learning (TL) methodologies are integrated to estimate leaf biomass (LGB) across multiple growth stages, followed by the development of an allometric growth model for estimating stem biomass (SGB). To enhance the precision of LGB inversion, the large-scale remote sensing data and image simulation framework over heterogeneous scenes (LESS) model, which is a three-dimensional (3D) radiative transfer model (RTM), was utilized to simulate a more extensive canopy spectral dataset, characterized by a broad distribution of canopy spectra. The CNN model was pre-trained in order to gain prior knowledge, and this knowledge was transferred to a re-trained model with a subset of field-observed samples. Finally, the allometric growth model was utilized to estimate SGB across various growth stages. To further validate the generalizability, transferability, and predictive capability of the proposed method, field samples from 2022 and 2023 were employed as target tasks. The results demonstrated that the 3D RTM + CNN + TL method performed best in LGB estimation, achieving an R² of 0.73 and an RMSE of 72.5 g/m² for the 2022 dataset, and an R² of 0.84 and an RMSE of 56.4 g/m² for the 2023 dataset. In contrast, the PROSAIL method yielded an R² of 0.45 and an RMSE of 134.55 g/m² for the 2022 dataset, and an R² of 0.74 and an RMSE of 61.84 g/m² for the 2023 dataset.
The accuracy of LGB inversion was poor when using only field-measured samples to train a CNN model without simulated data, with R² values of 0.30 and 0.74. Overall, learning prior knowledge from the simulated dataset and transferring it to a new model significantly enhanced LGB estimation accuracy and model generalization. Additionally, the allometric growth model's estimation of SGB achieved an R² of 0.87 and an RMSE of 120.87 g/m² for the 2022 dataset, and an R² of 0.74 and an RMSE of 86.87 g/m² for the 2023 dataset, exhibiting satisfactory results. Separate estimation of LGB and SGB based on the stem and leaf separation strategy yielded promising results. This method can be extended to the monitoring and inversion of other critical variables.
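The abstract does not give the allometric growth model's equation; a common choice for allometry is the power law SGB = a·LGB^b, fitted linearly in log-log space. A sketch under that assumption (the coefficients and data below are illustrative):

```python
import numpy as np

def fit_allometric(lgb, sgb):
    """Fit SGB = a * LGB**b by linear regression in log-log space.

    The power-law form is an assumption; the study does not state
    the functional form of its allometric growth model.
    """
    lgb = np.asarray(lgb, dtype=float)
    sgb = np.asarray(sgb, dtype=float)
    b, ln_a = np.polyfit(np.log(lgb), np.log(sgb), 1)
    return np.exp(ln_a), b

def predict_sgb(a, b, lgb):
    """Predict stem biomass from leaf biomass with the fitted power law."""
    return a * np.asarray(lgb, dtype=float) ** b
```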
Quantifying the severity of Marssonina blotch on apple leaves: development and validation of a novel spectral index
Apple Marssonina blotch (AMB) is a major disease causing premature defoliation. The occurrence of AMB leads to serious production declines and economic losses. The precise identification of AMB outbreaks and the measurement of their severity are essential for limiting the spread of the disease, yet this issue has remained unaddressed. Given this, we conducted experiments in Qian County, Shaanxi, China, to develop an Apple Marssonina Blotch Index (AMBI) based on hyperspectral imaging, aiming to quantify disease severity at the leaf scale and to monitor infection at the canopy scale. Based on the separability and combination of individual bands, characteristic wavelengths were identified in the green, red edge, and near-infrared bands to construct AMBI = (R762nm − R534nm)/(R534nm + R690nm). The results demonstrated that AMBI exhibited high overall accuracy (R² = 0.89, RMSE = 9.67%) in estimating the disease ratio at the leaf scale compared to commonly used indices. At the canopy scale, AMBI enabled effective classification of healthy and diseased trees, yielding an overall accuracy (OA) of 89.09% and a Kappa coefficient of 0.78. Furthermore, analysis of unmanned aerial vehicle (UAV)-acquired hyperspectral imagery using AMBI enabled the spatial mapping of diseased tree distribution, highlighting its potential as a scalable and timely tool for precision orchard disease surveillance.
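The AMBI formula is simple to apply per pixel once the three reflectance bands are extracted; a minimal numpy sketch (extracting the 762 nm, 534 nm, and 690 nm bands from the hyperspectral cube is assumed to have happened upstream):

```python
import numpy as np

def ambi(r762, r534, r690):
    """Apple Marssonina Blotch Index, following the abstract's formula:
    AMBI = (R762nm - R534nm) / (R534nm + R690nm).

    Inputs may be scalars or same-shaped reflectance arrays (per pixel).
    """
    r762, r534, r690 = (np.asarray(x, dtype=float) for x in (r762, r534, r690))
    return (r762 - r534) / (r534 + r690)
```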
Estimating potassium in potato plants based on multispectral images acquired from unmanned aerial vehicles
Plant potassium content (PKC) is a crucial indicator of crop potassium nutrient status and is vital in making informed fertilization decisions in the field. This study aims to enhance the accuracy of PKC estimation during key potato growth stages by using vegetation indices (VIs) and spatial structure features derived from UAV-based multispectral sensors. Specifically, the fraction of vegetation coverage (FVC), gray-level co-occurrence matrix texture, and multispectral VIs were extracted from multispectral images acquired at the potato tuber formation, tuber growth, and starch accumulation stages. Linear regression and stepwise multiple linear regression analyses were conducted to investigate how VIs, both individually and in combination with spatial structure features, affect potato PKC estimation. The findings lead to the following conclusions: (1) Estimating potato PKC using multispectral VIs is feasible but necessitates further enhancements in accuracy. (2) Augmenting VIs with either the FVC or texture features makes potato PKC estimation more accurate than when using single VIs. (3) Integrating VIs with both the FVC and texture features improves the accuracy of potato PKC estimation, resulting in notable R² values of 0.63, 0.84, and 0.80 for the three growth stages, respectively, with corresponding root mean square errors of 0.44%, 0.29%, and 0.25%. Overall, these results highlight the potential of integrating canopy spectral information and spatial-structure information obtained from multispectral sensors mounted on unmanned aerial vehicles for monitoring crop growth and assessing potassium nutrient status. These findings thus have significant implications for agricultural management.
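Stepwise multiple linear regression, as used here, can be approximated by greedy forward selection on R². A numpy-only sketch; the stopping rule and threshold are illustrative, not the study's exact procedure:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def forward_stepwise(X, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the column (VI, FVC, or
    texture feature) that most improves R^2; stop once the best remaining
    gain falls below `min_gain`. Returns the selected column indices.
    """
    chosen, best_r2 = [], -np.inf
    while True:
        gains = []
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            gains.append((r2(y, A @ coef), j))
        if not gains:
            return chosen
        top_r2, top_j = max(gains)
        if chosen and top_r2 - best_r2 < min_gain:
            return chosen
        chosen.append(top_j)
        best_r2 = top_r2
```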