13,052 result(s) for "Sampling error"
Estimation of Variances using the Generalized Variance Function: Labor and Population Indicators in the Colombian Household Survey 2022
This study addresses the challenge of estimating variances in household surveys, particularly when sampling design variables are absent in publicly available microdata. By implementing the Generalized Variance Function (GVF), Colombia's Household Survey for 2022 serves as a case study. GVF models were developed and validated using the standard errors published by the National Administrative Department of Statistics (DANE) of Colombia. These models demonstrated high accuracy and robustness for estimates across various levels of disaggregation and periodicities. Additionally, their validation with 2023 data confirmed their predictive capacity and applicability in similar contexts, underscoring their effectiveness as tools for evaluating the quality of estimates in complex surveys.
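The GVF idea in this abstract can be sketched with one common specification: model the relative variance of an estimate x̂ as a + b/x̂ and fit a and b by least squares to pairs of published estimates and standard errors. The numbers below are invented for illustration only; they are not DANE figures, and real GVF work typically compares several functional forms.

```python
import numpy as np

# Hypothetical published point estimates (totals) and their standard errors
x = np.array([50_000, 120_000, 300_000, 800_000, 2_000_000], dtype=float)
se = np.array([4_000, 7_500, 13_000, 24_000, 43_000], dtype=float)

# Common GVF specification: relvariance(x) = a + b / x
relvar = (se / x) ** 2
A = np.column_stack([np.ones_like(x), 1.0 / x])
a, b = np.linalg.lstsq(A, relvar, rcond=None)[0]

def gvf_se(estimate):
    """Predict a standard error for a new estimate from the fitted GVF."""
    return estimate * np.sqrt(a + b / estimate)

# Predicted SE for an estimate whose design-based SE was never published
print(gvf_se(500_000.0))
```

Once fitted and validated against held-out published errors (as the study does with 2023 data), the same two coefficients can serve any estimate of that indicator family.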
Can incentives improve survey data quality in developing countries?
We report results of an experiment designed to assess whether the payment of contingent incentives to respondents in Karnataka, India, impacts the quality of survey data. Of 2276 households sampled at the city block level, 934 were randomly assigned to receive a small one-time payment at the time of the survey, whereas the remaining households did not receive this incentive. We analyse the effects of incentives across a range of questions that are common in survey research in less developed countries. Our study suggests that incentives reduced unit non-response. Conditionally on participation, we also find little impact of incentives on a broad range of sociodemographic, behavioural and attitudinal questions. In contrast, we consistently find that households that received incentives reported substantially lower consumption and income levels and fewer assets. Given random assignment and very high response rates, the most plausible interpretation of this finding is that incentivizing respondents in this setting may increase their motivation to present themselves as more needy, whether to justify the current payment or to increase the chance of receiving resources in the future. Therefore, despite early indications that contingent incentives may raise response rates, the net effect on data quality must be carefully considered.
Compensation method of average current sampling error under the operating condition of low sampling-to-fundamental frequency ratio
This paper describes the compensation method for average current sampling error under the operating condition of low sampling-to-fundamental frequency ratio. When the sampling-to-fundamental frequency ratio is lowered, the current ripple is very large, and an error between the sampled current and the average current during one sampling period occurs. The actual average current can be obtained from the relationship between the average voltage and current of the synchronous reference frame. The actual average voltage and reference voltage match when the inverter output voltage has no error. Thus, the proposed compensation method of average current sampling error can be implemented based on the reference voltage and sampled current. The proposed current compensation method is verified by simulations and experiments.
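The core problem the paper names, that an instantaneous current sample differs from the true average over the sampling period when the sampling-to-fundamental frequency ratio is low, can be shown with a toy sinusoid. This is only a numeric illustration of the error the paper compensates, not the paper's synchronous-reference-frame method; the 50 Hz fundamental and the ratios are arbitrary choices.

```python
import numpy as np

def avg_sampling_error(ratio, f_fund=50.0):
    """Worst-case gap between a start-of-period sample and the true
    per-period average of a sinusoidal current i(t) = sin(2*pi*f*t)."""
    ts = 1.0 / (ratio * f_fund)  # sampling period for the given ratio
    t0 = np.linspace(0.0, 1.0 / f_fund, 2000, endpoint=False)
    sampled = np.sin(2 * np.pi * f_fund * t0)
    # exact average over [t0, t0 + ts] via the antiderivative of sin
    avg = (np.cos(2 * np.pi * f_fund * t0)
           - np.cos(2 * np.pi * f_fund * (t0 + ts))) / (2 * np.pi * f_fund * ts)
    return float(np.max(np.abs(sampled - avg)))

# The error shrinks as the sampling-to-fundamental frequency ratio rises
print(avg_sampling_error(5), avg_sampling_error(50))
```

At a ratio of 5 the sample can miss the period average by a large fraction of the current amplitude, which is why a compensation scheme like the one proposed becomes necessary.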
Communicating Uncertainty in Official Economic Statistics: An Appraisal Fifty Years after Morgenstern
Federal statistical agencies in the United States and analogous agencies elsewhere commonly report official economic statistics as point estimates, without accompanying measures of error. Users of the statistics may incorrectly view them as error free or may incorrectly conjecture error magnitudes. This paper discusses strategies to mitigate misinterpretation of official statistics by communicating uncertainty to the public. Sampling error can be measured using established statistical principles. The challenge is to satisfactorily measure the various forms of nonsampling error. I find it useful to distinguish transitory statistical uncertainty, permanent statistical uncertainty, and conceptual uncertainty. I illustrate how each arises as the Bureau of Economic Analysis periodically revises GDP estimates, the Census Bureau generates household income statistics from surveys with nonresponse, and the Bureau of Labor Statistics seasonally adjusts employment statistics. I anchor my discussion of communication of uncertainty in the contribution of Oskar Morgenstern (1963a), who argued forcefully for agency publication of error estimates for official economic statistics.
Quantification of Aquarius, SMAP, SMOS and Argo-Based Gridded Sea Surface Salinity Product Sampling Errors
Evaluating and validating satellite sea surface salinity (SSS) measurements is fundamental. There are two types of errors in satellite SSS: measurement error due to the instrument’s inaccuracy and problems in retrieval, and sampling error due to unrepresentativeness in the way that the sea surface is sampled in time and space by the instrument. In this study, we focus on sampling errors, which impact both satellite and in situ products. We estimate the sampling errors of Level 3 satellite SSS products from Aquarius, SMOS and SMAP, and in situ gridded products. To do that, we use simulated L2 and L3 Aquarius, SMAP and SMOS SSS data, individual Argo observations and gridded Argo products derived from a 12-month high-resolution 1/48° ocean model. The use of the simulated data allows us to quantify the sampling error and eliminate the measurement error. We found that the sampling errors are high in regions of high SSS variability and are globally about 0.02/0.03 psu at weekly time scales and 0.01/0.02 psu at monthly time scales for satellite products. The in situ-based product sampling error is significantly higher than that of the three satellite products at monthly scales (0.085 psu) indicating the need to be cautious when using in situ-based gridded products to validate satellite products. Similar results are found using a Correlated Triple Collocation method that quantifies the standard deviation of products’ errors acquired with different instruments. By improving our understanding and quantifying the effect of sampling errors on satellite-in situ SSS consistency over various spatial and temporal scales, this study will help to improve the validation of SSS, the robustness of scientific applications and the design of future salinity missions.
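The Correlated Triple Collocation method this abstract mentions builds on classic triple collocation: given three products measuring the same field with mutually independent, zero-mean errors, each product's error variance follows from the pairwise covariances. A minimal sketch of the classic (uncorrelated) version on synthetic salinity-like data, with noise levels chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(35.0, 0.3, 50_000)          # synthetic "true" SSS field
x = truth + rng.normal(0, 0.05, truth.size)    # product 1, e.g. one satellite
y = truth + rng.normal(0, 0.10, truth.size)    # product 2
z = truth + rng.normal(0, 0.02, truth.size)    # product 3

def tc_error_std(a, b, c):
    """Covariance-based triple collocation estimate of the error standard
    deviation of product `a`, assuming independent zero-mean errors."""
    C = np.cov(np.vstack([a, b, c]))
    return float(np.sqrt(C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]))

print(tc_error_std(x, y, z))  # roughly the 0.05 noise level used for x
```

The appeal for SSS validation is that no product has to be treated as truth; the correlated variant used in the study relaxes the independence assumption between the satellite products.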
An Innovative Method for the Spatial Sampling Analysis of Sea Surface Temperature in the Pacific-Indian Oceans
Zhu, Y.; Zhang, J., and Tang, Q., 2021. An innovative method for the spatial sampling analysis of sea surface temperature in the Pacific-Indian Oceans. Journal of Coastal Research, 37(5), 1053–1062. Coconut Creek (Florida), ISSN 0749-0208. In this study, a pragmatic approach is presented for evaluating the spatial representativeness of point-scale drifting buoy sea-surface temperature (SST) over the joined area of Asia and the Pacific-Indian Ocean with the help of high-resolution satellite-derived SST at multipixel scales. The relative spatial sampling error (RSSE) and coefficient of sill (CS) are selected to investigate the consistency between the drifting buoy SST and the pixel mean of the satellite-derived SST, and the representativeness of the drifting buoy SST, at pixel scales of 25 km, 50 km, and 100 km. The results show that at the 25-km scale, the consistency between the drifting buoy SST and the satellite-derived SST is high, and the spatial heterogeneity within the pixel scale is not obvious: only two of all 43,748 measurements have an RSSE larger than the critical value. As the scale increases, however, the number of drifting buoy SST measurements inconsistent with the satellite-derived SST rises markedly (to 42 at the 50-km scale and 59 at the 100-km scale), and the spatial heterogeneity is enhanced; this spatial variation is more pronounced along the direction of latitude change. Combining spatial consistency and spatial heterogeneity, no point-scale measurement with the worst spatial representativeness (both RSSE and CS above their critical values) occurs at the 25-km pixel scale. Although the drifting buoy SST is highly consistent with the satellite-derived SST, some measurements show obvious local variation within pixel scales, and this variation is again more pronounced along the direction of latitude change.
The spatial representativeness of the drifting buoy SST is unstable. This method helps to remove poor-quality reference points during the validation of satellite-derived products that take point-scale measurements as reference.
A Comparison between the Standard Heterogeneity Test and the Simplified Segregation Free Analysis for Sampling Protocol Optimisation
Estimating the heterogeneity of base and precious metal mineralisation is a great challenge for mining engineers and geologists who undertake resource evaluation, grade control and reconciliation. The calculation of the minimum broken sample mass to represent a given lot of mineralisation at a given comminution size is based on the estimation of IHL, the constant factor of constitution heterogeneity. IHL can be derived by different heterogeneity testwork or calibration approaches. Three methodologies are well known in the mining industry: the standard heterogeneity test, the segregation free analysis, and the sampling tree experiment or duplicate sample analysis. However, the methodologies often show different results, especially when it comes to gold. These differences are due to many reasons. Assuming the variances added by sample preparation and analysis to be equivalent for all tests, the reasons for the differences may include the nugget effect (particularly the presence of coarse gold), the segregation effect and the procedure of collecting/splitting the samples when performing the tests. This paper analyses and compares two heterogeneity tests: the original heterogeneity test and the simplified segregation free analysis, both performed on mineralisation from different Brazilian operations. The results show clear differences between the tests, highlighting the complexity of estimating the heterogeneity of mineral deposits. The study reports the importance of using proper methodologies for constitution heterogeneity estimation so that minimum sample masses and relative standard deviations of the fundamental sampling error can be relied upon. It also provides recommendations for practitioners on the application of testwork/calibration studies.
A Standard Criterion for Measuring Turbulence Quantities Using the Four-Receiver Acoustic Doppler Velocimetry
Acoustic Doppler velocimetry (ADV) enables three-dimensional turbulent flow fields to be obtained with high spatial and temporal resolutions in the laboratory, rivers, and oceans. Although such advantages have led ADV to become a typical approach for analyzing various fluid dynamics mechanisms, the vagueness of ADV system operation methods has reduced its accuracy and efficiency. Accordingly, the present work suggests a proper measurement strategy for a four-receiver ADV system to obtain reliable turbulence quantities by performing laboratory experiments under two flow conditions. Firstly, in still water, the magnitude of noises was evaluated and a proper operation method was developed to obtain the Reynolds stress with lower noises. Secondly, in channel flows, an optimal sampling period was determined based on the integral time scale by applying the bootstrap sampling method and reverse arrangement test. The results reveal that the noises of the streamwise and transverse velocity components are an order of magnitude larger than those of the vertical velocity components. The orthogonally paired receivers enable the estimation of almost-error-free Reynolds stresses and the optimal sampling period is 150–200 times the integral time scale, regardless of the measurement conditions.
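The 150–200 times the integral time scale guideline from this abstract presupposes estimating that time scale from the velocity record itself: integrate the sample autocorrelation function up to its first zero crossing. A sketch on a synthetic AR(1) signal standing in for a turbulent velocity component (the process parameters are arbitrary, not ADV data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic velocity record: an AR(1) process as a stand-in for turbulence
n, dt, phi = 50_000, 0.01, 0.95
u = np.zeros(n)
eps = rng.normal(0.0, 1.0, n)
for i in range(1, n):
    u[i] = phi * u[i - 1] + eps[i]

def integral_time_scale(x, dt):
    """Integrate the sample autocorrelation up to its first zero crossing."""
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * x.size)               # zero-pad for linear (not
    acf = np.fft.irfft(f * np.conj(f))[: x.size]  # circular) autocorrelation
    acf /= acf[0]
    first_zero = int(np.argmax(acf <= 0.0))
    return float(acf[:first_zero].sum() * dt)

T_i = integral_time_scale(u, dt)
print(T_i, 150 * T_i, 200 * T_i)  # time scale and the suggested sampling window
```

For this AR(1) choice the theoretical scale is about dt/(1 - phi) = 0.2 s, so the guideline would call for a 30–40 s record; real ADV series additionally need the despiking and noise handling the paper discusses.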
Discrepancy in efficiency scores due to sampling error in data envelopment analysis methodology: evidence from the banking sector version 2; peer review: 1 approved, 1 approved with reservations
Background Data Envelopment Analysis (DEA) is considered the most suitable approach for calculating the relative performance efficiency of banks, as it is believed to be superior to traditional ratio-based analysis and other conventional performance evaluations. This study provides statistical evidence on the sampling error that can creep into performance evaluation studies using the DEA methodology. Inferences are drawn from samples, so preventive measures must be taken to eliminate or avoid sampling errors and misleading results. This study demonstrates the possibility of sampling error in DEA using the secondary data available in the financial statements and reports of a sample set of banks. Methods The sample included 15 public sector and five leading private sector banks in India, selected by market share, and the data for calculating efficiencies were retrieved from published audited reports. The sample data cover 2014 to 2017 because the banking sector in India witnessed a series of public sector bank mergers after 2017, and later data would be skewed and not comparable owing to the demonetization policy and the merger-related consolidation implemented by the Government of India. The efficiency measures thus computed are further analyzed using non-parametric statistical tests. Results We found statistically significant discrepancies in the efficiency scores calculated with the DEA approach when specific outlier values are included or excluded. Evidence is provided of statistically significant differences in the efficiencies due to the inclusion and exclusion of particular samples in the DEA. Conclusion The study offers a novel contribution, along with statistical evidence, on the possible sampling error that can creep into the performance evaluation of organizations when applying the DEA methodology.
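The sample-dependence the study documents is easy to see once DEA is written out: each bank's score is the optimum of a small linear program against the frontier spanned by the sampled banks, so adding or dropping a unit can move every score. A minimal input-oriented CCR (constant returns) sketch with four hypothetical banks and scipy assumed available:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0: minimise theta subject to
    a convex-cone combination of all units using <= theta * inputs of j0
    while producing >= outputs of j0.  Columns of X, Y are decision units."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lambdas]
    A_in = np.hstack([-X[:, [j0]], X])          # sum(l*x) - theta*x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -sum(l*y) <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    bounds = [(0, None)] * (n + 1)
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

# Two inputs, one output, four hypothetical banks (one column each)
X = np.array([[2.0, 4.0, 8.0, 4.0],
              [4.0, 2.0, 4.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(4)]
print(np.round(scores, 3))
```

Here the first two banks define the frontier; removing either one from the sample would raise the scores of the banks it dominates, which is precisely the sampling-error mechanism the study tests for.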