Catalogue Search | MBRL
Explore the vast range of titles available.
15 result(s) for "Non‐sampling error"
Can incentives improve survey data quality in developing countries?
2018
We report results of an experiment designed to assess whether the payment of contingent incentives to respondents in Karnataka, India, impacts the quality of survey data. Of 2276 households sampled at the city block level, 934 were randomly assigned to receive a small one-time payment at the time of the survey, whereas the remaining households did not receive this incentive. We analyse the effects of incentives across a range of questions that are common in survey research in less developed countries. Our study suggests that incentives reduced unit non-response. Conditional on participation, we also find little impact of incentives on a broad range of sociodemographic, behavioural and attitudinal questions. In contrast, we consistently find that households that received incentives reported substantially lower consumption and income levels and fewer assets. Given random assignment and very high response rates, the most plausible interpretation of this finding is that incentivizing respondents in this setting may increase their motivation to present themselves as more needy, whether to justify the current payment or to increase the chance of receiving resources in the future. Therefore, despite early indications that contingent incentives may raise response rates, the net effect on data quality must be carefully considered.
Journal Article
Making statistical inferences about linkage errors
2024
Record linkage aims to identify records that are from the same unit, in one or many sources. Sometimes, it is imperfect because the available identifying information is limited and erroneous. In such cases, it is important to report the linkage accuracy, which may be measured according to one of many proposed statistical models. These models offer clear advantages over clerical reviews, in terms of costs and timeliness. They also apply where clerical reviews are impossible, e.g., when two parties need to link their respective data sets, such that neither party can see the record pairs in the clear. For obvious reasons, these models must be validated before they are used, by performing goodness-of-fit tests. Unfortunately, this is currently difficult because all existing models rely on observations that are correlated. Thus, the Chi-squared and likelihood ratio tests are biased. In fact, it is challenging to perform any kind of statistical inference about these models or their parameters. In this work, this long-standing problem is addressed when modeling the linkage errors through the number of links of a record. The proposed solution bases the inferences on a subset of observations that are approximately independent.
Journal Article
The Deleterious Effects of Non-Response in Team-Level Research: What Every Researcher Should Know to Avoid Bias
by Hardgrave, Bill C., Nesterkin, Dmitriy A., Jones, Thomas W.
in Bias, group level, Impact analysis
2010
Teams are an important part of the information systems (IS) field, in both practice and research. In practice, IS teams are the norm in software development. IS researchers have studied various aspects of IS teams, such as productivity and composition, among many others. In studying teams, researchers assume that, in most cases, the aggregated responses from individual team members provide the 'team-level' data needed to examine the phenomenon of interest. Bias due to non-response is acknowledged, but rarely controlled or explicitly considered in the analysis. This study examines how individual within-team non-response, and the factors that condition its effect, bias team-level research. From this examination we produce a list of maxims for researchers to consider or follow. Explicit consideration of non-response bias in team-level research should help strengthen the research which, in turn, will help the IS field better utilize teams.
Journal Article
Measurement error evaluation of self-reported drug use: a latent class analysis of the US National Household Survey on Drug Abuse
2002
Latent class analysis (LCA) is a statistical tool for evaluating the error in categorical data when two or more repeated measurements of the same survey variable are available. This paper illustrates an application of LCA for evaluating the error in self-reports of drug use using data from the 1994, 1995 and 1996 implementations of the US National Household Survey on Drug Abuse. In our application, the LCA approach is used for estimating classification errors which in turn leads to identifying problems with the questionnaire and adjusting estimates of prevalence of drug use for classification error bias. Some problems in using LCA when the indicators of the use of a particular drug are embedded in a single survey questionnaire, as in the National Household Survey on Drug Abuse, are also discussed.
Journal Article
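The kind of classification-error adjustment this abstract describes can be illustrated with a minimal sketch; the sensitivity and false-positive values below are hypothetical, not figures from the survey:

```python
# Correcting an observed prevalence for known classification error,
# the kind of adjustment an LCA application like the one above yields.
# The sensitivity and false-positive rate here are invented examples.

def corrected_prevalence(p_obs, sensitivity, false_pos_rate):
    # Observed prevalence mixes true and false positives:
    #   p_obs = sensitivity * p + false_pos_rate * (1 - p)
    # Solve for the true prevalence p.
    return (p_obs - false_pos_rate) / (sensitivity - false_pos_rate)

# e.g. 12% report drug use, but suppose self-reports catch only 80%
# of users while 2% of non-users falsely report use
p = corrected_prevalence(0.12, sensitivity=0.80, false_pos_rate=0.02)
print(round(p, 4))  # 0.1282
```

Even modest classification error shifts the estimate noticeably, which is why the abstract emphasizes adjusting prevalence estimates for classification error bias.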
A primer for nonresponse in the US forest inventory and analysis program
by Patterson, Paul L., Hill, Andrew D., Roesch, Francis A.
in Atmospheric Protection/Air Quality Control/Air Pollution, Bias, classification
2012
Nonresponse caused by denied access and hazardous conditions is a concern for the USDA Forest Service, Forest Inventory and Analysis (FIA) program, whose mission is to quantify status and trends in forest resources across the USA. Any appreciable amount of nonresponse can cause bias in FIA’s estimates of population parameters. This paper quantifies the magnitude of nonresponse and describes the mechanisms that result in it, describes and qualitatively evaluates FIA’s assumptions regarding nonresponse, provides a recommendation concerning plot replacement strategies, and identifies appropriate strategies to minimize bias. The nonresponse rates ranged from 0% to 21% and differed by land owner group, with denied access to private land the leading cause of nonresponse. Current FIA estimators assume that nonresponse occurs at random. Although in most cases this assumption appears tenable, a qualitative assessment indicates a few situations where it is not. In the short term, we recommend that FIA use stratification schemes that make the missing-at-random assumption tenable. We recommend the examination of alternative estimation techniques that use appropriate weighting and auxiliary information to mitigate the effects of nonresponse. We also recommend that the replacement of nonresponse sample locations not be used.
Journal Article
An index of non-sampling error in area frame sampling based on remote sensing data
by Peng, Dailiang, Zhang, Chunyang, Qin, Yuchu
in Agricultural engineering, Agricultural Science, Agriculture
2018
Agricultural areas are often surveyed using area frame sampling. Using a non-updated area sampling frame causes significant non-sampling errors when land cover and usage change between updates. To address this problem, a novel method is proposed to estimate non-sampling errors in crop area statistics. Three parameters used in stratified sampling that are affected by land use changes were monitored using satellite remote sensing imagery: (1) the total number of sampling units; (2) the number of sampling units in each stratum; and (3) the mean value of selected sampling units in each stratum. A new index, called the non-sampling error by land use change index (NELUCI), was defined to estimate non-sampling errors. Using this method, the sizes of cropping areas in Bole, Xinjiang, China, were estimated with a coefficient of variation of 0.0237 and a NELUCI of 0.0379. These are 0.0474 and 0.0994 lower, respectively, than the errors calculated by traditional methods based on a non-updated area sampling frame and selected sampling units.
Journal Article
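For context, a coefficient of variation like the 0.0237 reported above comes from a standard stratified estimator built from exactly the three quantities the index monitors: stratum sizes, sample sizes, and stratum sample means. The sketch below computes a stratified total and its CV from invented data; it is not the NELUCI formula itself:

```python
import math

# Stratified estimate of a total (e.g. crop area) and its coefficient
# of variation. Each stratum contributes N_h * mean_h to the total;
# the variance uses the within-stratum sample variance with a finite
# population correction. Data below are invented for illustration.

def stratified_total(strata):
    """strata: list of (N_h, samples), samples being per-unit areas
    observed in stratum h (each stratum needs at least 2 samples)."""
    total, var = 0.0, 0.0
    for N_h, ys in strata:
        n_h = len(ys)
        mean = sum(ys) / n_h
        s2 = sum((y - mean) ** 2 for y in ys) / (n_h - 1)
        total += N_h * mean
        # variance of the estimated stratum total, with fpc
        var += N_h ** 2 * (1 - n_h / N_h) * s2 / n_h
    cv = math.sqrt(var) / total
    return total, cv

strata = [(100, [2.0, 2.2, 1.8, 2.1]), (50, [5.0, 4.8, 5.3])]
est, cv = stratified_total(strata)
print(round(est, 1), round(cv, 4))
```

When the frame is out of date, all three inputs (N_h, the stratum assignments, and the stratum means) drift from reality, which is the non-sampling error the NELUCI is designed to capture.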
Lies, damned lies and statistics: the accuracy of survey responses
2009
That survey research is error prone is not a new idea: different varieties of non-sampling error have been investigated in the literature, and many statistics textbooks consider the issue of sampling error. This paper considers research on corporate environmental reporting. It compares the information that corporate environmental reports provide with the information that survey respondents claim their organization’s environmental report contains, enabling the accuracy of the claims to be assessed. Two different industries are considered: the Water industry and the Energy industry. Errors due to inaccurate reporting by survey respondents are shown to be relatively infrequent, and respondents appear about as likely to claim they report information that they do not, in fact, report as to fail to indicate information that is actually reported.
Journal Article
Zum Vertrauen in die Statistik / As to the Confidence in Statistics
2004
It is discussed whether confidence in statistics is justified at the first stage of statistical work, namely data collection. Statements must be made about the procedures and their accuracy and errors for economic and social statistical data. Probability-based statements can be made about the sampling error. The systematic error is quantified by means of a follow-up survey, which poses some practical difficulties. For this reason, official statistics, which is the main producer of economic and social statistical data, applies error-reducing methods from the outset. The adequacy or interpretation error, however, which is most often committed by data users, cannot be quantified. Collaboration between official statistics and academic science is seen as a confidence-building measure.
Journal Article
Acceptance Sampling with Rectification When Inspection Errors are Present
by Anderson, Michael T., Greenberg, Betsy S., Stokes, S. Lynne
in Bias, Estimating techniques, Information systems
2001
In this paper we consider the problem of estimating the number of nonconformances remaining in outgoing lots after acceptance sampling with rectification when inspection errors can occur. We show that inspection errors can have a serious biasing effect on the predictor that Greenberg and Stokes (1992) propose for the number of undetected nonconformances. We develop a new bias-corrected predictor of this quantity. A simulation study shows that this predictor performs well with respect to mean square error under a wide range of scenarios. We also propose a predictor for the number of conforming items rejected.
Journal Article
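The biasing mechanism the authors describe, inspection errors leaving undetected nonconformances in the lot after rectification, can be simulated in a few lines. The rates below are arbitrary, and this is an illustration of the mechanism rather than the Greenberg and Stokes predictor:

```python
import random

# Toy simulation: an imperfect inspector misses some nonconforming
# items (false negatives), so a perfect-inspection model that predicts
# zero remaining nonconformances after rectification is biased.
# The miss leaves roughly P_NONCONFORM * P_MISS * LOT_SIZE behind.

random.seed(7)

LOT_SIZE = 10_000
P_NONCONFORM = 0.05   # true nonconformance rate (invented)
P_MISS = 0.20         # inspector misses 20% of nonconformances

detected = undetected = 0
for _ in range(LOT_SIZE):
    nonconforming = random.random() < P_NONCONFORM
    if nonconforming:
        if random.random() < P_MISS:
            undetected += 1   # slips through rectification
        else:
            detected += 1     # found and rectified

print(detected, undetected)
```

Under these rates about a fifth of the lot's nonconformances survive rectification, which is the bias a corrected predictor must account for.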