Catalogue Search | MBRL
1,069 result(s) for "Frequentism"
Revised standards for statistical evidence
2013
Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25–50:1, and to 100–200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
Journal Article
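The correspondence this abstract describes can be made concrete for the simplest case, a one-sided z-test, where the UMPBT rejection region matches the classical one when the evidence threshold satisfies gamma = exp(z_alpha^2 / 2). A minimal sketch, assuming this simplified z-test case (other testing problems give different constants):

    from math import exp
    from scipy.stats import norm

    for alpha in (0.05, 0.005, 0.001):
        z = norm.ppf(1 - alpha)       # one-sided classical critical value
        gamma = exp(z ** 2 / 2)       # matching Bayes-factor evidence threshold
        print(f"alpha = {alpha}: z = {z:.3f}, gamma ~ {gamma:.1f}")

At alpha = 0.005 this gives gamma of roughly 28, and at alpha = 0.001 roughly 119, consistent with the 25–50:1 and 100–200:1 ranges quoted above.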
How Many Countries for Multilevel Modeling? A Comparison of Frequentist and Bayesian Approaches
2013
Researchers in comparative research increasingly use multilevel models to test effects of country-level factors on individual behavior and preferences. However, the asymptotic justification of widely employed estimation strategies presumes large samples, while applications in comparative politics routinely involve only a small number of countries. Thus, researchers and reviewers often wonder whether these models are applicable at all. In other words, how many countries do we need for multilevel modeling? I present results from a large-scale Monte Carlo experiment comparing the performance of multilevel models when few countries are available. I find that maximum likelihood estimates and confidence intervals can be severely biased, especially in models including cross-level interactions. In contrast, the Bayesian approach proves to be far more robust and yields considerably more conservative tests.
Journal Article
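A Monte Carlo comparison of this kind can be sketched in a few lines. The following is an illustrative setup, not the article's design, using statsmodels; with only five groups, the maximum likelihood estimate of the group-level variance is typically biased downward:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    J, n, tau = 5, 200, np.sqrt(0.5)      # few countries; true intercept SD
    estimates = []
    for _ in range(100):                  # Monte Carlo replications
        u = rng.normal(0.0, tau, J)       # country-level random intercepts
        g = np.repeat(np.arange(J), n)
        x = rng.normal(size=J * n)
        y = 1.0 + 0.5 * x + u[g] + rng.normal(size=J * n)
        df = pd.DataFrame({"y": y, "x": x, "g": g})
        fit = smf.mixedlm("y ~ x", df, groups="g").fit(reml=False)
        estimates.append(float(fit.cov_re.iloc[0, 0]))   # estimated tau^2
    print(f"mean ML estimate of tau^2: {np.mean(estimates):.3f} (truth 0.5)")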
A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research
by Asendorpf, Jens B.; Kaplan, David; van de Schoot, Rens
in Bayes Theorem; Bayesian analysis; Bayesian method
2014
Bayesian statistical methods are becoming ever more popular in applied and fundamental research. This study provides a gentle introduction to Bayesian analysis. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret the results properly. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of specifying prior knowledge are discussed. To illustrate the Bayesian methods explained in this study, a second example considers a series of studies examining the theoretical framework of dynamic interactionism. In the Discussion, the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report Bayesian statistics are provided.
Journal Article
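The "ingredients" this abstract refers to can be illustrated with the simplest conjugate case, beta-binomial updating, where the influence of the prior specification is directly visible. A minimal sketch (the numbers are illustrative, not from the article):

    from scipy import stats

    k, n = 7, 10                                  # observed successes / trials
    for a, b, label in [(1, 1, "flat Beta(1,1)"),
                        (20, 20, "informative Beta(20,20)")]:
        post = stats.beta(a + k, b + n - k)       # conjugate posterior
        lo, hi = post.ppf(0.025), post.ppf(0.975)
        print(f"{label}: mean {post.mean():.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")

The flat prior leaves the posterior centred near the sample proportion, while the informative prior pulls it toward 0.5, which is exactly the kind of prior sensitivity the article discusses.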
Abducting Economics
2017
Abduction is the process of generating and choosing models, hypotheses, and data analyzed in response to surprising findings. All good empirical economists abduct. Explanations usually evolve as studies evolve. The abductive approach challenges economists to step outside the framework of received notions about the “identification problem” that rigidly separates the act of model and hypothesis creation from the act of inference from data. It asks the analyst to engage models and data in an iterative, dynamic process, using multiple models and sources of data in a back-and-forth in which both models and data are augmented as learning evolves.
Journal Article
GENERALIZED DOUBLE PARETO SHRINKAGE
by Dunson, David B.; Lee, Jaeyong; Armagan, Artin
in A posteriori knowledge; Analytical estimating; Density estimation
2013
We propose a generalized double Pareto prior for Bayesian shrinkage estimation and inferences in linear models. The prior can be obtained via a scale mixture of Laplace or normal distributions, forming a bridge between the Laplace and Normal-Jeffreys' priors. While it has a spike at zero like the Laplace density, it also has a Student's t-like tail behavior. Bayesian computation is straightforward via a simple Gibbs sampling algorithm. We investigate the properties of the maximum a posteriori estimator, as sparse estimation plays an important role in many problems, reveal connections with some well-established regularization procedures, and show some asymptotic results. The performance of the prior is tested through simulations and an application.
Journal Article
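The scale-mixture construction mentioned in the abstract gives a direct way to sample from this prior: a Gamma-mixed Laplace yields the generalized double Pareto marginal. A minimal sketch, assuming the parameterisation GDP(xi = eta/alpha, alpha) with illustrative hyperparameters alpha = eta = 1:

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, eta = 1.0, 1.0                        # illustrative hyperparameters
    xi = eta / alpha                             # GDP scale parameter
    lam = rng.gamma(alpha, 1.0 / eta, 200_000)   # lambda ~ Gamma(alpha, rate eta)
    beta = rng.laplace(0.0, 1.0 / lam)           # beta | lambda ~ Laplace(1/lambda)
    # Check against the closed-form GDP tail P(beta > x) = 0.5*(1 + x/(alpha*xi))**(-alpha)
    x = 3.0
    print((beta > x).mean(), 0.5 * (1 + x / (alpha * xi)) ** -alpha)

Both printed values should agree (about 0.125 here), confirming that the hierarchy reproduces the heavy Student-t-like tail while keeping the spike at zero that makes Gibbs sampling straightforward.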
Commentary on Richard K. Atkins’ Peirce on Inference
2025
Atkins boldly and resourcefully challenges the consensus that Peirce rejects Bayesianism. This commentary situates Atkins’ approach within the broad problem of reconciling Peirce’s theory of inquiry with his philosophy of probability. It finds much to appreciate in Atkins’ approach, but it also raises some textual worries about the proposal and offers an alternative conception of belief and degrees of belief.
Journal Article
ON THE BERNSTEIN-VON MISES PHENOMENON FOR NONPARAMETRIC BAYES PROCEDURES
2014
We continue the investigation of Bernstein-von Mises theorems for nonparametric Bayes procedures from [Ann. Statist. 41 (2013) 1999-2028]. We introduce multiscale spaces on which nonparametric priors and posteriors are naturally defined, and prove Bernstein-von Mises theorems for a variety of priors in the setting of Gaussian nonparametric regression and in the i.i.d. sampling model. From these results we deduce several applications where posterior-based inference coincides with efficient frequentist procedures, including Donsker- and Kolmogorov-Smirnov theorems for the random posterior cumulative distribution functions. We also show that multiscale posterior credible bands for the regression or density function are optimal frequentist confidence bands.
Journal Article
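For context, the classical parametric Bernstein-von Mises theorem that this line of work extends states, informally, that the posterior is asymptotically a normal distribution centred at an efficient estimator:

    \[
    \bigl\| \Pi(\cdot \mid X_1,\dots,X_n) - \mathcal{N}\bigl(\hat\theta_n, (n I(\theta_0))^{-1}\bigr) \bigr\|_{\mathrm{TV}} \xrightarrow{P_{\theta_0}} 0,
    \]

where \Pi(\cdot \mid X_1,\dots,X_n) is the posterior, \hat\theta_n the maximum likelihood estimator and I(\theta_0) the Fisher information. This statement is what makes posterior credible sets asymptotically valid confidence sets; the difficulty addressed above is that it fails in general infinite-dimensional models unless the spaces and priors are chosen carefully.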
What Is Meant by "Missing at Random"?
by Carlin, John; Seaman, Shaun; Galati, John
in Bayesian inference; Conditional probabilities; direct-likelihood inference
2013
The concept of missing at random is central in the literature on statistical analysis with missing data. In general, inference using incomplete data should be based not only on observed data values but should also take account of the pattern of missing values. However, it is often said that if data are missing at random, valid inference using likelihood approaches (including Bayesian) can be obtained ignoring the missingness mechanism. Unfortunately, the term "missing at random" has been used inconsistently and not always clearly; there has also been a lack of clarity around the meaning of "valid inference using likelihood". These issues have created potential for confusion about the exact conditions under which the missingness mechanism can be ignored, and perhaps fed confusion around the meaning of "analysis ignoring the missingness mechanism". Here we provide standardised precise definitions of "missing at random" and "missing completely at random", in order to promote unification of the theory. Using these definitions we clarify the conditions that suffice for "valid inference" to be obtained under a variety of inferential paradigms.
Journal Article
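For reference, one common textbook formulation of the definitions at issue, with Y = (Y_obs, Y_mis) the data, R the missingness indicator and \psi the parameters of the missingness mechanism (the article's point is precisely that usage of these terms has varied):

    \[
    \text{MCAR:}\quad f(r \mid y, \psi) = f(r \mid \psi), \qquad
    \text{MAR:}\quad f(r \mid y, \psi) = f(r \mid y_{\mathrm{obs}}, \psi).
    \]

Under MAR plus distinctness of \psi from the data parameters \theta, the observed-data likelihood factorises as f(y_{\mathrm{obs}}, r \mid \theta, \psi) = f(r \mid y_{\mathrm{obs}}, \psi)\, f(y_{\mathrm{obs}} \mid \theta), which is what licenses likelihood and Bayesian inference that ignores the missingness mechanism.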
NONPARAMETRIC BERNSTEIN-VON MISES THEOREMS IN GAUSSIAN WHITE NOISE
2013
Bernstein-von Mises theorems for nonparametric Bayes priors in the Gaussian white noise model are proved. It is demonstrated how such results justify Bayes methods as efficient frequentist inference procedures in a variety of concrete nonparametric problems. In particular, Bayesian credible sets are constructed that have asymptotically exact 1 − α frequentist coverage level and whose L2-diameter shrinks at the minimax rate of convergence (within logarithmic factors) over Hölder balls. Other applications include general classes of linear and nonlinear functionals and credible bands for auto-convolutions. The assumptions cover nonconjugate product priors defined on general orthonormal bases of L2 satisfying weak conditions.
Journal Article
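The phenomenon of credible sets with exact frequentist coverage has a simple finite-dimensional prototype: for a normal mean under a flat prior, the 95% posterior credible interval coincides with the classical confidence interval. A toy check (the article proves far more delicate nonparametric analogues):

    import numpy as np

    rng = np.random.default_rng(2)
    theta0, n, reps = 0.3, 50, 10_000
    hits = 0
    for _ in range(reps):
        x = rng.normal(theta0, 1.0, n)
        m, half = x.mean(), 1.96 / np.sqrt(n)   # posterior is N(m, 1/n) under a flat prior
        hits += (m - half <= theta0 <= m + half)
    print(f"empirical coverage of the 95% credible interval: {hits / reps:.3f}")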
Microarrays, Empirical Bayes and the Two-Groups Model
2008
The classic frequentist theory of hypothesis testing developed by Neyman, Pearson and Fisher has a claim to being the twentieth century's most influential piece of applied mathematics. Something new is happening in the twenty-first century: high-throughput devices, such as microarrays, routinely require simultaneous hypothesis tests for thousands of individual cases, not at all what the classical theory had in mind. In these situations empirical Bayes information begins to force itself upon frequentists and Bayesians alike. The two-groups model is a simple Bayesian construction that facilitates empirical Bayes analysis. This article concerns the interplay of Bayesian and frequentist ideas in the two-groups setting, with particular attention focused on Benjamini and Hochberg's False Discovery Rate method. Topics include the choice and meaning of the null hypothesis in large-scale testing situations, power considerations, the limitations of permutation methods, significance testing for groups of cases (such as pathways in microarray studies), correlation effects, multiple confidence intervals and Bayesian competitors to the two-groups model.
Journal Article
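The Benjamini-Hochberg step-up procedure discussed in this abstract is short enough to state in code: reject the k smallest p-values, where k is the largest i with p_(i) <= (i/m)q. A minimal sketch with simulated z-scores (the data are illustrative):

    import numpy as np
    from scipy.stats import norm

    def benjamini_hochberg(pvals, q=0.10):
        """Boolean mask: True where the null is rejected at FDR level q."""
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()     # largest i with p_(i) <= (i/m) q
            reject[order[:k + 1]] = True
        return reject

    # Toy two-groups data: 1,000 true nulls plus 50 shifted cases
    rng = np.random.default_rng(3)
    z = np.concatenate([rng.normal(0, 1, 1000), rng.normal(3, 1, 50)])
    pvals = 2 * norm.sf(np.abs(z))
    print(benjamini_hochberg(pvals, q=0.10).sum(), "rejections at FDR 0.10")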