Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
676 result(s) for "Margin of error"
Disentangling Bias and Variance in Election Polls
by Gelman, Andrew; Shirani-Mehr, Houshmand; Rothschild, David
in Applications and Case Studies; Averages; Bias
2018
It is well known among researchers and practitioners that election polls suffer from a variety of sampling and nonsampling errors, often collectively referred to as total survey error. Reported margins of error typically only capture sampling variability, and in particular, generally ignore nonsampling errors in defining the target population (e.g., errors due to uncertainty in who will vote). Here, we empirically analyze 4221 polls for 608 state-level presidential, senatorial, and gubernatorial elections between 1998 and 2014, all of which were conducted during the final three weeks of the campaigns. Comparing to the actual election outcomes, we find that average survey error as measured by root mean square error is approximately 3.5 percentage points, about twice as large as that implied by most reported margins of error. We decompose survey error into election-level bias and variance terms. We find that average absolute election-level bias is about 2 percentage points, indicating that polls for a given election often share a common component of error. This shared error may stem from the fact that polling organizations often face similar difficulties in reaching various subgroups of the population, and that they rely on similar screening rules when estimating who will vote. We also find that average election-level variance is higher than implied by simple random sampling, in part because polling organizations often use complex sampling designs and adjustment procedures. We conclude by discussing how these results help explain polling failures in the 2016 U.S. presidential election, and offer recommendations to improve polling practice.
Journal Article
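The error decomposition this abstract describes can be sketched numerically. The poll figures below are invented for illustration and are not the study's data; only the grouping of polls by election and the bias/variance split follow the abstract's definitions.

```python
# Illustrative sketch (not the authors' code): decomposing poll error
# into election-level bias and variance, using made-up numbers.
import math
from statistics import mean

# Hypothetical data: {election: [(poll_estimate, actual_outcome), ...]}
polls = {
    "A": [(52.0, 50.0), (51.5, 50.0), (53.0, 50.0)],
    "B": [(47.0, 49.0), (48.5, 49.0)],
}

# Overall survey error, measured by root mean square error.
errors = [est - actual for ps in polls.values() for est, actual in ps]
rmse = math.sqrt(mean(e * e for e in errors))

# Election-level bias: the average signed error shared by polls
# within one election.
biases = {k: mean(est - actual for est, actual in ps)
          for k, ps in polls.items()}
avg_abs_bias = mean(abs(b) for b in biases.values())

# Election-level variance: spread of polls around their own election mean.
variances = {
    k: mean((est - actual - biases[k]) ** 2 for est, actual in ps)
    for k, ps in polls.items()
}
```

With these toy numbers, a large shared bias within each election inflates the RMSE well beyond what the within-election spread alone would suggest, which is the paper's central point.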
Spatial Variation in the Quality of American Community Survey Estimates
by Folch, David C.; Arribas-Bel, Daniel; Spielman, Seth E.
in Allocations; Cartography; Census of Population
2016
Social science research, public and private sector decisions, and allocations of federal resources often rely on data from the American Community Survey (ACS). However, this critical data source has high uncertainty in some of its most frequently used estimates. Using 2006-2010 ACS median household income estimates at the census tract scale as a test case, we explore spatial and nonspatial patterns in ACS estimate quality. We find that spatial patterns of uncertainty in the northern United States differ from those in the southern United States, and they are also different in suburbs than in urban cores. In both cases, uncertainty is lower in the former than the latter. In addition, uncertainty is higher in areas with lower incomes. We use a series of multivariate spatial regression models to describe the patterns of association between uncertainty in estimates and economic, demographic, and geographic factors, controlling for the number of responses. We find that these demographic and geographic patterns in estimate quality persist even after we account for the number of responses. Our results indicate that data quality varies across places, making cross-sectional analysis both within and across regions less reliable. Finally, we present advice for data users and potential solutions to the challenges identified.
Journal Article
Studying Neighborhoods Using Uncertain Data from the American Community Survey: A Contextual Approach
2015
In 2010 the American Community Survey (ACS) replaced the long form of the decennial census as the sole national source of demographic and economic data for small geographic areas such as census tracts. These small area estimates suffer from large margins of error, however, which makes the data difficult to use for many purposes. The value of a large and comprehensive survey like the ACS is that it provides a richly detailed, multivariate, composite picture of small areas. This article argues that one solution to the problem of large margins of error in the ACS is to shift from a variable-based mode of inquiry to one that emphasizes a composite multivariate picture of census tracts. Because the margin of error in a single ACS estimate, like household income, is assumed to be a symmetrically distributed random variable, positive and negative errors are equally likely. Because the variable-specific estimates are largely independent from each other, when looking at a large collection of variables these random errors average to zero. This means that although single variables can be methodologically problematic at the census tract scale, a large collection of such variables provides utility as a contextual descriptor of the place(s) under investigation. This idea is demonstrated by developing a geodemographic typology of all U.S. census tracts. The typology is firmly rooted in the social scientific literature and is organized around a framework of concepts, domains, and measures. The typology is validated using public domain data from the City of Chicago and the U.S. Federal Election Commission. The typology, as well as the data and methods used to create it, is open source and published freely online.
Journal Article
Data reporting and visualization in ecology
2016
The reporting and graphing of ecological data and statistical results often leave a lot to be desired. One reason can be a misunderstanding or confusion of some basic concepts in statistics such as standard deviation, standard error, margin of error, confidence interval, skewness of distribution and correlation. The implications of having small sample sizes are also often glossed over. In several situations, statistics and associated graphical representations are made for comparing groups of samples, where the issues become even more complex. Here, I aim to clarify these basic concepts and ways of reporting and visualizing summaries of variables in ecological research, both for single variables and for pairs of variables. Specific recommendations about better practice are made, for example describing precision of the mean by the margin of error and bootstrapping to obtain confidence intervals. The role of the logarithmic transformation of positive data is described, as well as its implications in the reporting of results in multiplicative rather than additive form. Comments are also made about ordination plots derived from multivariate analyses, such as principal component analysis and canonical correspondence analysis, with suggested improvements. Some data sets from this Kongsfjord special issue are amongst those used as examples.
Journal Article
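Two of the practices this abstract recommends, reporting the precision of a mean via its margin of error and bootstrapping to obtain confidence intervals, can be sketched briefly. The data and resample count below are illustrative, not from the article.

```python
# Minimal sketch of margin-of-error reporting and a percentile bootstrap.
import random
from statistics import mean, stdev

random.seed(0)
sample = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7, 3.3, 4.2, 2.6, 3.9]

# Classical margin of error for the mean (t ~ 2 gives a rough 95% interval).
se = stdev(sample) / len(sample) ** 0.5
moe = 2 * se

# Percentile bootstrap: resample with replacement, take 2.5/97.5 quantiles
# of the resampled means.
boot_means = sorted(
    mean(random.choices(sample, k=len(sample))) for _ in range(2000)
)
ci = (boot_means[49], boot_means[1949])  # approx. 2.5th and 97.5th percentiles
```

The bootstrap interval makes no symmetry assumption, which matters for the small, skewed samples the abstract warns about.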
Measurement Uncertainty and Probability
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
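The Monte Carlo principle the book describes can be sketched as follows: simulate the measurement many times with the inputs perturbed by their stated uncertainties, then read a 95% interval off the simulated results. The measurement model and uncertainty values below are assumptions for illustration only.

```python
# Sketch of Monte Carlo evaluation of measurement uncertainty.
import random
from statistics import mean

random.seed(1)
N = 20000

# Hypothetical measurement: resistance R = V / I, with V and I each
# measured with independent Gaussian uncertainty.
samples = []
for _ in range(N):
    v = random.gauss(5.00, 0.05)    # volts, standard uncertainty 0.05
    i = random.gauss(0.100, 0.002)  # amperes, standard uncertainty 0.002
    samples.append(v / i)

samples.sort()
lo, hi = samples[int(0.025 * N)], samples[int(0.975 * N)]
# (lo, hi) is a simulated 95% interval of measurement uncertainty for R.
```

Because R depends nonlinearly on I, the simulated distribution is slightly asymmetric, which is exactly the kind of case where simulation beats a simple plus-or-minus margin.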
The Impact of Covariance on American Community Survey Margins of Error: Computational Alternatives
by Spielman, Seth; Graber, Molly; Folch, David C.
in Analysis of covariance; Censuses; Construction standards
2023
The American Community Survey (ACS) is an indispensable tool for studying the United States (US) population. Each year the US Census Bureau (BOC) publishes approximately 11 billion ACS estimates, each of which is accompanied by a margin of error (MOE) specific to that estimate. Researchers, policy makers, and government agencies combine these estimates in myriad ways, which requires an accurate measurement of the MOE on that combined estimate. We compare three options for computing this MOE: the analytic approach uses standard statistically derived formulas, the simulation approach builds an empirical distribution of the combined estimate based on simulated values of the inputs, and the replicate approach uses simulated values published by the BOC based on their internal model that statistically replicates the entire ACS 80 times. We find that since the replicate approach is the only one of the three to incorporate covariance between the input variables, it performs the best. We further find that the simulation and analytic approaches generally match one another and can both overestimate and underestimate the MOE; they have their places when the replicate approach is not feasible.
Journal Article
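The "analytic" approach the paper discusses is the standard root-sum-of-squares formula for the MOE of a sum of estimates, which assumes zero covariance between inputs. The sketch below shows that formula and how a nonzero covariance would shift the result; the estimates, MOEs, and covariance value are made up for illustration.

```python
# Sketch of the analytic MOE for a sum of ACS estimates, with and
# without a covariance term.
import math

# Hypothetical tract-level estimates and their published 90% MOEs.
estimates = [120, 340, 95]
moes = [30, 55, 22]

combined_estimate = sum(estimates)
# Analytic MOE: sqrt of the sum of squared MOEs (zero-covariance assumption).
analytic_moe = math.sqrt(sum(m * m for m in moes))

# If two inputs covary, the variance of the sum gains a 2*Cov term.
# With MOEs stated at the 90% level (z = 1.645), MOE^2 = z^2 * Var,
# so each covariance pair contributes 2 * z^2 * cov.
z = 1.645
cov_12 = 150.0  # hypothetical covariance between the first two estimates
moe_with_cov = math.sqrt(analytic_moe**2 + 2 * z**2 * cov_12)
```

A positive covariance inflates the true MOE above the analytic value, which is why the replicate approach, the only one that captures covariance, performs best in the paper's comparison.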
The Statistical Literacy of Mathematics Education Students: An Investigation on Understanding the Margin of Error
by Purbani, Widyastuti; Apino, Ezi; Hidayati, Kana
in Colleges & universities; Education; Error analysis
2024
Understanding the margin of error (MoE) is a part of statistical literacy that helps the public select credible information from various surveys and polls. The study aims to reveal the levels of statistical literacy of mathematics education students, especially in understanding MoE, and to compare them based on four variables: gender, enrollment in a statistics course, year in the program, and type of university. The online survey involved undergraduate students of mathematics education study programs from 21 universities in Indonesia’s western, central, and eastern regions as the sample (n = 970). Descriptive statistics were used to describe the literacy levels and inferential statistics (t-test and F-test) to compare them based on the four variables. The results reveal that: (1) student literacy in understanding the MoE concept is predominantly at the non-literate level; and (2) there are significant differences in students’ literacy levels in terms of gender, enrollment in a statistics course, year in the program, and type of university. The study indicates that the literacy of mathematics education students is still low, so statistics courses should focus more on developing statistical literacy.
Journal Article
Wheat Quantity Monitoring Methods Based on Inventory Measurement and SVR Prediction Model
2023
Because of the storage environment, changes in water content, particle settlement, natural loss, and other factors, the distribution density of wheat and the volume of the grain pile gradually change during storage. As a result, a single weight calculation cannot objectively evaluate the stored quantity of wheat, and it complicates the regular inspection of wheat stocks. To meet the practical needs of wheat inventory monitoring, a method based on inventory measurement and a support vector machine regression (SVR) prediction model is proposed. Working papers from physical inspections of wheat in grain warehouses in Shanxi, Hebei, Henan, Jiangsu, and other provinces were collected; storage time, storage weight, storage moisture content, measured moisture content, measured volume weight, measured net volume, and measured inspection weight were selected as training samples for the SVR prediction model, and kernel function selection and parameter optimization were carried out to develop an optimal prediction model for the amount of wheat in the grain depots. During actual measurement, the net volume of wheat in the current grain store was obtained with a laser volumetric measuring apparatus, and the actual bulk density and moisture content of the wheat were measured by sampling. These three measurements, together with the storage time, storage moisture content, and storage weight, were fed into the trained SVR prediction model as new samples, and the predicted weight of the wheat in the current grain store was obtained from the output. An error-rate calculation procedure was introduced to flag anomalous grain depots. The experimental results showed that the SVR prediction model based on the linear kernel function had a very low mean squared error and a high coefficient of determination, and the average prediction accuracy of the grain stock error rate reached 93.2 percent, which meets the requirements of wheat quantity monitoring in grain warehouses.
Journal Article
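The modeling step this abstract describes can be sketched with scikit-learn's linear-kernel SVR. This is an illustrative sketch only, not the authors' code or data: the synthetic features, their linear relation to weight, and the hyperparameter values are all assumptions.

```python
# Illustrative sketch: fitting a linear-kernel SVR to predict stored
# grain weight from inventory measurements (synthetic data, assumed relation).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(42)

# Synthetic features: storage time, moisture content, bulk density,
# net volume (all rescaled to comparable ranges for the solver).
X = rng.uniform(0, 10, size=(200, 4))
# Assume a linear relation to weight, plus small measurement noise.
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + 3.0 * X[:, 3]
     + rng.normal(0, 0.1, size=200))

model = SVR(kernel="linear", C=10.0, epsilon=0.1)
model.fit(X, y)

# Comparing predicted weight against a depot's measured weight then
# yields an error rate that can flag anomalies, as the paper describes.
r2 = model.score(X, y)
```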
Agential Free Choice
2021
The Free Choice effect, whereby ♢(p or q) seems to entail both ♢p and ♢q, has traditionally been characterized as a phenomenon affecting the deontic modal ‘may’. This paper presents an extension of the semantic account of free choice defended by Fusco (Philosophers’ Imprint, 15, 1–27, 2015) to the agentive modal ‘can’, the ‘can’ which, intuitively, describes an agent’s powers. On this account, free choice is a nonspecific de re phenomenon (Bäuerle 1983; Fodor 1970) that, unlike typical cases, affects disjunction. I begin by sketching a model of inexact ability, which grounds a modal approach to agency (Belnap, Theoria, 54, 175–199, 1998; Perloff 2001) in a Williamson (Mind, 101, 217–242, 1992; Erkenntnis, 79, 971–999, 2014)-style margin of error. A classical propositional semantics combined with this framework can reflect the intuitions highlighted by Kenny (1976)’s dartboard cases, as well as the counterexamples to simple conditional views recently discussed by Mandelkern et al. (Philosophical Review, 126, 301–343, 2017). In Section 3, I turn to an independently motivated actual-world-sensitive account of disjunction, and show how it extends free choice inferences into an object language for propositional modal logic.
Journal Article