14,409 results for "sampling design"
Assessing the Effect of Training Sampling Design on the Performance of Machine Learning Classifiers for Land Cover Mapping Using Multi-Temporal Remote Sensing Data and Google Earth Engine
Machine learning classifiers are increasingly used for Land Use and Land Cover (LULC) mapping from remote sensing images. However, arriving at the right choice of classifier requires understanding the main factors influencing their performance. The present study first investigated the effect of training sampling design on the classification results obtained by the Random Forest (RF) classifier and then compared its performance with other machine learning classifiers for LULC mapping using multi-temporal satellite remote sensing data and the Google Earth Engine (GEE) platform. We evaluated the impact of three sampling methods, namely Stratified Equal Random Sampling (SRS(Eq)), Stratified Proportional Random Sampling (SRS(Prop)), and Stratified Systematic Sampling (SSS), on the classification results obtained by the RF-trained LULC model. Our results showed that the SRS(Prop) method favors major classes while achieving good overall accuracy. The SRS(Eq) method provides good class-level accuracies, even for minority classes, whereas the SSS method performs well for areas with large intra-class variability. In the classifier comparison, RF outperformed Classification and Regression Trees (CART), Support Vector Machine (SVM), and Relevance Vector Machine (RVM) at a >95% confidence level. The performance of the CART and SVM classifiers was similar. RVM achieved good classification results with a limited number of training samples.
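As a rough illustration of the first two training sampling designs compared in this abstract, the sketch below draws a stratified equal and a stratified proportional random sample from a simulated label map; the class names, proportions, and sample budget are assumptions for demonstration, not values from the study.

```python
# Illustrative sketch only: contrasts stratified equal vs. stratified
# proportional random sampling on a simulated label map. Class names,
# proportions, and the sample budget are assumptions, not study values.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-pixel class labels of a reference map.
labels = rng.choice(["water", "forest", "cropland", "urban"],
                    size=10_000, p=[0.05, 0.55, 0.30, 0.10])
budget = 400  # total number of training samples to draw

def stratified_equal(labels, budget, rng):
    """Same number of samples per class, regardless of class extent."""
    classes = np.unique(labels)
    per_class = budget // len(classes)
    picks = [rng.choice(np.flatnonzero(labels == c), per_class, replace=False)
             for c in classes]
    return np.concatenate(picks)

def stratified_proportional(labels, budget, rng):
    """Samples allocated in proportion to each class's frequency."""
    classes, counts = np.unique(labels, return_counts=True)
    alloc = np.round(budget * counts / counts.sum()).astype(int)
    picks = [rng.choice(np.flatnonzero(labels == c), n, replace=False)
             for c, n in zip(classes, alloc)]
    return np.concatenate(picks)

for name, design in [("equal", stratified_equal(labels, budget, rng)),
                     ("proportional", stratified_proportional(labels, budget, rng))]:
    cls, cnt = np.unique(labels[design], return_counts=True)
    print(name, dict(zip(cls.tolist(), cnt.tolist())))
```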
Optimal sampling design for spatial capture–recapture
Spatial capture–recapture (SCR) has emerged as the industry standard for estimating population density by leveraging information from spatial locations of repeat encounters of individuals. The precision of density estimates depends fundamentally on the number and spatial configuration of traps. Despite this knowledge, existing sampling design recommendations are heuristic and their performance remains untested for most practical applications. To address this issue, we propose a genetic algorithm that minimizes any sensible, criteria-based objective function to produce near-optimal sampling designs. To motivate the idea of optimality, we compare the performance of designs optimized using three model-based criteria related to the probability of capture. We use simulation to show that these designs outperform those based on existing recommendations in terms of bias, precision, and accuracy in the estimation of population size. Our approach, available as a function in the R package oSCR, allows conservation practitioners and researchers to generate customized and improved sampling designs for wildlife monitoring.
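The sketch below conveys, under assumed values, the core idea of searching for a near-optimal trap configuration with an evolutionary algorithm that minimizes a capture-probability-based criterion. It is a stripped-down toy (mutation-only, half-normal detection on a small grid), not the oSCR implementation.

```python
# Toy illustration only: an evolutionary search (mutation + truncation
# selection, no crossover) that picks k trap locations from a candidate grid
# by minimizing a capture-probability-based criterion. The half-normal
# detection model, grid, and settings are assumptions, not oSCR code.
import numpy as np

rng = np.random.default_rng(0)
cand = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
centres = rng.uniform(0, 9, size=(200, 2))   # hypothetical activity centres
sigma, p0, k = 1.0, 0.5, 15                  # detection scale, baseline, design size

def criterion(design):
    """Negative mean probability of being captured at least once (minimize)."""
    d2 = ((centres[:, None, :] - cand[design][None, :, :]) ** 2).sum(axis=-1)
    p = p0 * np.exp(-d2 / (2 * sigma ** 2))
    return -np.mean(1.0 - np.prod(1.0 - p, axis=1))

def mutate(design):
    """Swap one trap for a random candidate, keeping the design size fixed."""
    child = design.copy()
    child[rng.integers(k)] = rng.integers(len(cand))
    return child if len(set(child.tolist())) == k else design

population = [rng.choice(len(cand), k, replace=False) for _ in range(40)]
for _ in range(200):
    population.sort(key=criterion)                       # best designs first
    population = population[:20] + [mutate(d) for d in population[:20]]

best = min(population, key=criterion)
print("mean capture probability of best design:", -criterion(best))
print("trap coordinates:\n", cand[np.sort(best)])
```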
Trends in Remote Sensing Accuracy Assessment Approaches in the Context of Natural Resources
The utility of land cover maps for natural resources management relies on knowing the uncertainty associated with each map. The continuous advances typical of remote sensing, including the increasing availability of higher spatial and temporal resolution satellite data and data analysis capabilities, have created both opportunities and challenges for improving the application of accuracy assessment. There are well-established accuracy assessment methods, but their underlying assumptions have not changed much in the last couple of decades. Consequently, it is timely to revisit how map error and accuracy have been assessed and reported over the last two decades, to highlight areas where there is scope for better utilization of emerging opportunities. We conducted a quantitative literature review on accuracy assessment practices for mapping via remote sensing classification methods, in both terrestrial and marine environments. We performed a structured search for land and benthic cover mapping, limiting our search to journals within the remote sensing field and to papers published between 1998 and 2017. After an initial screening process, we assembled a database of 282 papers, and extracted and standardized information on various components of their reported accuracy assessments. We discovered that only 56% of the papers explicitly included an error matrix, and a very limited number (14%) reported overall accuracy with confidence intervals. The use of kappa continues to be standard practice, being reported in 50.4% of the literature published on or after 2012. Reference datasets used for validation were collected using a probability sampling design in 54% of the papers. For approximately 11% of the studies, the sampling design used could not be determined. No association was found between classification complexity (i.e., number of classes) and measured accuracy, independent of the size of the study area. Overall, only 32% of papers included an accuracy assessment that could be considered reproducible; that is, they included a probability-based sampling scheme to collect the reference dataset, a complete error matrix, and sufficient characterization of the reference datasets and sampling unit. Our findings indicate that considerable work remains to identify and adopt more statistically rigorous accuracy assessment practices to achieve transparent and comparable land and benthic cover maps.
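For readers unfamiliar with the reporting elements this review tallies, the sketch below computes a complete error (confusion) matrix, overall accuracy with a normal-approximation confidence interval, and Cohen's kappa from simulated validation labels; the class set and the 85% agreement rate are illustrative assumptions.

```python
# Illustrative sketch with simulated validation data: builds a complete error
# (confusion) matrix and reports overall accuracy with a normal-approximation
# confidence interval plus Cohen's kappa. The class set and 85% agreement rate
# are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
classes = np.array(["water", "forest", "urban"])
reference = rng.choice(classes, size=300)                      # hypothetical reference labels
mapped = np.where(rng.random(300) < 0.85, reference,
                  rng.choice(classes, size=300))               # hypothetical map labels

# Error matrix: rows = mapped class, columns = reference class.
error_matrix = np.zeros((len(classes), len(classes)), dtype=int)
for m, r in zip(mapped, reference):
    error_matrix[np.flatnonzero(classes == m)[0], np.flatnonzero(classes == r)[0]] += 1

n = error_matrix.sum()
overall = np.trace(error_matrix) / n
se = np.sqrt(overall * (1.0 - overall) / n)                    # simple binomial SE
ci = (overall - 1.96 * se, overall + 1.96 * se)

expected = (error_matrix.sum(axis=0) * error_matrix.sum(axis=1)).sum() / n ** 2
kappa = (overall - expected) / (1.0 - expected)                # Cohen's kappa

print(error_matrix)
print(f"overall accuracy = {overall:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"kappa = {kappa:.3f}")
```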
SDM meets eDNA: optimal sampling of environmental DNA to estimate species–environment relationships in stream networks
Species distribution models (SDMs) are frequently data-limited. In aquatic habitats, emerging environmental DNA (eDNA) sampling methods can be quicker and more cost-efficient than traditional count and capture surveys, but their utility for fitting SDMs is complicated by dilution, transport, and loss processes that modulate DNA concentrations and mix eDNA from different locations. Past models for estimating organism densities from measured species-specific eDNA concentrations have accounted for how these processes affect expected concentrations. We built on this previous work to construct a linear hierarchical model that also accounts for how they give rise to spatially correlated concentration errors. We applied our model to 60 simulated stream networks and three types of species niches in order to answer two questions: 1) What is the D-optimal sampling design, i.e., where should eDNA samples be positioned to most precisely estimate species–environment relationships? 2) How does parameter estimation accuracy depend on the stream network's topological and hydrologic properties? We found that correcting for eDNA dynamics was necessary to obtain consistent parameter estimates, and that, relative to a heuristic benchmark design, optimizing sampling locations improved design efficiency by an average of 41.5%. Samples in the D-optimal design tended to be positioned near the downstream ends of stream reaches high in the watershed, where eDNA concentration was high and mostly from homogeneous source areas, and they collectively spanned the full ranges of covariates. When measurement error was large, it was often optimal to collect replicate samples from high-information reaches. eDNA-based estimates of species–environment regression parameters were most precise in stream networks that had many reaches, large geographic size, slow flows, and/or high eDNA loss rates. Our study demonstrates the importance and viability of accounting for eDNA dilution, transport, and loss in order to optimize sampling designs and improve the accuracy of eDNA-based species distribution models.
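A minimal sketch of D-optimal site selection for a plain linear species–environment model is shown below; the simulated covariates and the greedy forward-selection strategy are assumptions for illustration and do not reproduce the authors' hierarchical eDNA model.

```python
# Minimal sketch of D-optimal site selection for a plain linear model: greedily
# add the candidate sample whose inclusion most increases det(X'X). The
# covariates and the greedy strategy are illustrative assumptions and do not
# reproduce the hierarchical eDNA model described above.
import numpy as np

rng = np.random.default_rng(7)
n_candidates, n_select = 200, 20
X = np.column_stack([np.ones(n_candidates),                # intercept
                     rng.normal(size=(n_candidates, 3))])  # three environmental covariates

def log_det_information(rows):
    Xs = X[list(rows)]
    sign, logdet = np.linalg.slogdet(Xs.T @ Xs)
    return logdet if sign > 0 else -np.inf

# Seed with a few random sites so the information matrix is non-singular,
# then grow the design greedily.
design = [int(i) for i in rng.choice(n_candidates, 4, replace=False)]
while len(design) < n_select:
    remaining = [i for i in range(n_candidates) if i not in design]
    design.append(max(remaining, key=lambda i: log_det_information(design + [i])))

print("selected candidate indices:", sorted(design))
print("log det(X'X):", log_det_information(design))
```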
Pressure Sampling Design for Estimating Nodal Water Demand in Water Distribution Systems
The water distribution system (WDS) hydraulic model is extensively used for the design and management of WDSs. Nodal water demand is the crucial model parameter and must be estimated accurately from pressure measurements. Proper pressure sampling design is essential for estimating nodal water demand and improving model accuracy. Existing research has emphasized the need to enhance the observability of monitoring systems and mitigate the adverse effects of monitoring noise. However, methods that simultaneously consider both of these factors in sampling design have not been adequately studied. In this study, a novel two-objective sampling design method is developed to improve system observability and mitigate the adverse effects of monitoring noise. The approach is applied to a realistic network, and the results demonstrate that it can effectively improve the observability and robustness of the system, especially when considerable measurement noise is present.
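As a generic illustration of trading off the two objectives named in this abstract, the sketch below scores random candidate sensor subsets on an assumed observability proxy (log-determinant of the information formed from a simulated demand-to-pressure sensitivity matrix) and an assumed noise-sensitivity proxy (condition number), then keeps the non-dominated designs; none of this reproduces the authors' formulation or network.

```python
# Generic illustration only (not the authors' method or network): scores random
# candidate pressure-sensor subsets on two assumed proxies, an observability
# proxy (log det of Js' Js from a simulated demand-to-pressure sensitivity
# matrix J) and a noise-sensitivity proxy (condition number of Js), and keeps
# the non-dominated (Pareto) designs.
import numpy as np

rng = np.random.default_rng(5)
n_nodes, n_demands, n_sensors = 60, 5, 8
J = rng.normal(size=(n_nodes, n_demands))        # hypothetical sensitivity matrix

def objectives(rows):
    Js = J[list(rows)]
    observability = np.linalg.slogdet(Js.T @ Js)[1]   # higher is better
    noise_sensitivity = np.linalg.cond(Js)            # lower is better
    return observability, noise_sensitivity

candidates = {tuple(sorted(int(i) for i in rng.choice(n_nodes, n_sensors, replace=False)))
              for _ in range(500)}
scores = {d: objectives(d) for d in candidates}

pareto = [d for d, (o, c) in scores.items()
          if not any(o2 >= o and c2 <= c and (o2, c2) != (o, c)
                     for o2, c2 in scores.values())]
print(f"{len(pareto)} non-dominated designs out of {len(scores)} evaluated")
for d in pareto:
    o, c = scores[d]
    print(f"sensor nodes {d}: log-det = {o:.2f}, condition number = {c:.1f}")
```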
Maximizing dataset variability in agricultural surveys with spatial sampling based on MaxVol matrix approximation
Soil sampling is crucial for capturing soil variability and obtaining comprehensive soil information for agricultural planning. This article evaluates the potential of MaxVol, an optimal design method for soil sampling based on selecting locations with significant dissimilarities. We compared MaxVol with conditional Latin hypercube sampling (cLHS), simple random sampling (SRS), and the Kennard-Stone algorithm (KS) to evaluate their ability to capture the soil data distribution. We modeled spatial distributions of soil properties using simple kriging (SK) and regression kriging (RK) interpolation techniques and assessed interpolation quality using the root mean square error. According to the results, MaxVol performs similarly to or better than popular sampling designs in describing soil distributions, particularly with a smaller number of points, which is valuable for costly and time-consuming field surveys. Unlike cLHS and random sampling, both MaxVol and Kennard-Stone are deterministic algorithms, providing a reliable sampling scheme. Thus, the proposed MaxVol algorithm enables soil property distributions to be obtained from environmental features.
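The sketch below conveys the maximum-volume idea behind MaxVol with a simple greedy heuristic: pick the locations whose environmental feature vectors span the largest volume, i.e. are maximally dissimilar. The covariates are simulated and the routine is an illustration, not the paper's MaxVol implementation.

```python
# Rough sketch of the maximum-volume idea behind MaxVol: greedily pick the
# sampling locations whose environmental feature vectors span the largest
# volume (i.e. are maximally dissimilar). A simple greedy heuristic on
# simulated covariates, not the paper's MaxVol implementation.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_features, n_samples = 500, 10, 8
features = rng.normal(size=(n_sites, n_features))   # e.g. terrain and spectral covariates

def log_volume(rows):
    G = features[rows] @ features[rows].T           # Gram matrix of the chosen sites
    sign, logdet = np.linalg.slogdet(G)
    return logdet if sign > 0 else -np.inf

# Start from the site farthest from the feature-space origin, then add the
# site that most increases the spanned volume.
selected = [int(np.argmax(np.linalg.norm(features, axis=1)))]
while len(selected) < n_samples:
    remaining = [i for i in range(n_sites) if i not in selected]
    selected.append(max(remaining, key=lambda i: log_volume(selected + [i])))

print("selected site indices:", selected)
print("log volume of the design:", log_volume(selected))
```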
Towards a robust baseline for long-term monitoring of Antarctic coastal benthos
The Southern Ocean represents one of the world regions most sensitive to warming, and there is an urgent need for quantitative data to understand changes in coastal communities. This goal can be achieved through the establishment of permanent monitoring sites and robust sampling designs. In this study, we used an emerging, photogrammetry-based technique to simulate a pilot study and test the efficiency of different sampling schemes (Simple Random (SRS), Systematic (SyS), and Strip (SS)) for estimating the abundances of megabenthic taxa. For taxa showing an aggregated distribution, we also applied an adaptive cluster sampling (ACS) design. In almost all cases, the best accuracy of estimates was achieved with SyS combined with plots of 0.0625 m². The ACS design performed better but required calibration of both the initial sample size and the threshold value to increase efficiency. The 'one-size-fits-all' 1 m² plot size never emerged as the best in any sampling scheme; hence, previously published literature data may be biased. This study represents a fine-scale reference baseline for the study area, and the simulations performed will be pivotal in establishing sound monitoring programmes with sufficient statistical power to detect significant changes in the Antarctic benthos.
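A toy simulation in the spirit of the sampling-scheme comparison above is sketched below: it contrasts the precision of simple random and systematic plot sampling on a synthetic abundance surface with one aggregated patch. The surface, plot counts, and replicate numbers are assumptions, not values from the study.

```python
# Toy simulation only: compares the precision of simple random and systematic
# plot sampling for estimating mean abundance on a synthetic surface with one
# aggregated patch. The surface, plot counts, and replicate numbers are
# assumptions, not values from the study.
import numpy as np

rng = np.random.default_rng(11)
grid = rng.poisson(lam=2.0, size=(100, 100)).astype(float)
grid[30:40, 30:40] += rng.poisson(lam=8.0, size=(10, 10))   # aggregated patch
true_mean = grid.mean()
cells = grid.ravel()
n_plots, n_reps = 100, 2000

def srs_estimate():
    """Simple random sample of plots without replacement."""
    return cells[rng.choice(cells.size, n_plots, replace=False)].mean()

def systematic_estimate():
    """Regular 10 x 10 grid of plots with a random common offset."""
    r0, c0 = rng.integers(10), rng.integers(10)
    return grid[r0::10, c0::10].mean()

srs = np.array([srs_estimate() for _ in range(n_reps)])
sys_ = np.array([systematic_estimate() for _ in range(n_reps)])
print(f"true mean = {true_mean:.3f}")
print(f"SRS       : mean = {srs.mean():.3f}, sd = {srs.std():.3f}")
print(f"systematic: mean = {sys_.mean():.3f}, sd = {sys_.std():.3f}")
```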
Training data in satellite image classification for land cover mapping: a review
The current land cover (LC) mapping paradigm relies on automatic satellite imagery classification, predominantly through supervised methods, which depend on training data to calibrate classification algorithms. Hence, training data have a critical influence on classification accuracy. Although research on specific aspects of training data in the LC classification context exists, a study that organizes and synthesizes the multiplicity of aspects and findings of this research is needed. In this article, we review the training data used for LC classification of satellite imagery. A protocol for identifying and selecting relevant documents was followed, resulting in the inclusion of 114 peer-reviewed studies. Main research topics were identified and documents were characterized according to their contribution to each topic, which allowed subtopics and categories to be uncovered and the main findings regarding different aspects of the training dataset to be synthesized. The analysis identified four research topics, namely construction of the training dataset, sample quality, sampling design, and advanced learning techniques. Subtopics included the sample collection method, sample cleaning procedures, sample size, sampling method, and class balance and distribution, among others. A summary of the main findings and approaches provides an overview of the research in this area, which may serve as a starting point for new LC mapping initiatives.
The potential of image segmentation applied to sampling design for improving farm-level multi-soil property mapping accuracy
Sampling design plays a critical role in farm-level digital soil mapping (DSM). In many cases, a soil mapping model has not yet been decided upon at the sampling design stage; design-based sampling may then be more appropriate than model-based sampling because it is independent of the subsequent soil mapping model. However, existing sampling methods optimize the sample size and locations in geographical space or feature space without considering the impacts of environmental similarity in local geographical space. In this paper, a novel sampling design method based on local environmental similarity was developed. Image segmentation was introduced into the sampling design by partitioning agricultural soil into subregions with good spatial continuity, within-region homogeneity, and between-region heterogeneity to determine the optimal sample size and locations. First, the environmental similarity between adjacent soils was calculated. Second, the merging process was conducted iteratively, generating a series of segmentations. Finally, the optimal sample size and locations were determined based on the optimal segmentation results. To validate the proposed method, it was compared with stratified random sampling, k-means sampling, and spatially balanced sampling. Two mapping models, ordinary kriging and sandwich estimation, were employed to map five soil properties: pH, soil organic matter, total nitrogen, available phosphorus, and available potassium. These comparative experiments showed that the proposed method had greater potential than the competing sampling methods to generate accurate farm-level multi-soil property maps. In conclusion, consideration of local environmental similarity and the use of image segmentation for soil sampling were helpful in determining the optimal sample size and key sample locations.
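As a short illustration of one of the benchmark designs compared in this abstract, the sketch below performs k-means sampling in environmental feature space, taking the candidate location nearest each cluster centre; the covariates are simulated, and this does not reproduce the segmentation-based method proposed in the paper.

```python
# Short illustration of one of the benchmark designs compared above: k-means
# sampling in environmental feature space, taking the candidate location
# nearest each cluster centre. The covariates are simulated; this does not
# reproduce the segmentation-based method proposed in the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
n_sites, n_samples = 2000, 30
covariates = rng.normal(size=(n_sites, 6))      # e.g. terrain, climate, spectral layers

km = KMeans(n_clusters=n_samples, n_init=10, random_state=0).fit(covariates)

# One sample per cluster: the site closest to the cluster centre.
sample_idx = [int(np.argmin(np.linalg.norm(covariates - centre, axis=1)))
              for centre in km.cluster_centers_]
print("selected sample locations (site indices):", sorted(sample_idx))
```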