Catalogue Search | MBRL
85 result(s) for "Morey, Richard D."
The fallacy of placing confidence in confidence intervals
by Hoekstra, Rink; Morey, Richard D.; Lee, Michael D.
in Bayes Theorem; Behavioral Science and Psychology; Cognitive Psychology
2016
Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95 %) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and can lead to unjustified or arbitrary inferences. For this reason, we caution against relying upon confidence interval theory to justify interval estimates, and suggest that other theories of interval estimation should be used instead.
Journal Article
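The repeated-sampling property described in the abstract above can be illustrated with a short simulation. The sketch below is an editorial illustration, not material from the paper; it assumes a normal population with a known true mean and a textbook t-based 95% interval.

```python
# Illustration (not from the paper): 95% CIs cover the true mean in roughly
# 95% of repeated samples. That is a statement about the procedure, not a
# probability statement about any single realized interval.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
true_mean, sigma, n, reps = 100.0, 15.0, 25, 10_000
crit = t.ppf(0.975, n - 1)

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sigma, n)
    half_width = crit * sample.std(ddof=1) / np.sqrt(n)
    lower, upper = sample.mean() - half_width, sample.mean() + half_width
    covered += (lower <= true_mean <= upper)

print(f"Coverage over {reps} samples: {covered / reps:.3f}")  # ~0.95
```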
Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications
by Verhagen, Josine; Morey, Richard D.; Matzke, Dora
in Bayes Theorem; Bayesian analysis; Behavioral Science and Psychology
2018
Bayesian parameter estimation and Bayesian hypothesis testing present attractive alternatives to classical inference using confidence intervals and p values. In part I of this series we outline ten prominent advantages of the Bayesian approach. Many of these advantages translate to concrete opportunities for pragmatic researchers. For instance, Bayesian hypothesis testing allows researchers to quantify evidence and monitor its progression as data come in, without needing to know the intention with which the data were collected. We end by countering several objections to Bayesian hypothesis testing. Part II of this series discusses JASP, a free and open source software program that makes it easy to conduct Bayesian estimation and testing for a range of popular statistical scenarios (Wagenmakers et al., this issue).
Journal Article
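One advantage the abstract above highlights, monitoring evidence as data accumulate, can be sketched with a toy binomial problem. The example below is hypothetical and not an analysis from the article: it compares H0: theta = 0.5 against H1: theta ~ Uniform(0, 1) and reports the Bayes factor after every new observation.

```python
# Hypothetical illustration: monitor BF01 for H0: theta = 0.5 vs.
# H1: theta ~ Uniform(0, 1) as binary data arrive, one observation at a time.
from math import lgamma, exp, log

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # made-up successes/failures
k = n = 0
for x in data:
    n += 1
    k += x
    log_m0 = n * log(0.5)                # marginal likelihood under H0
    log_m1 = log_beta(k + 1, n - k + 1)  # marginal likelihood under H1 (uniform prior)
    bf01 = exp(log_m0 - log_m1)
    print(f"after {n} observations: BF01 = {bf01:.2f}")
```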
Robust misinterpretation of confidence intervals
by Hoekstra, Rink; Morey, Richard D.; Rouder, Jeffrey N.
in Behavioral Research; Behavioral Science and Psychology; Biological and medical sciences
2014
Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual. Nevertheless, little is known about how researchers interpret CIs. In this study, 120 researchers and 442 students—all in the field of psychology—were asked to assess the truth value of six particular statements involving different interpretations of a CI. Although all six statements were false, both researchers and students endorsed, on average, more than three statements, indicating a gross misunderstanding of CIs. Self-declared experience with statistics was not related to researchers’ performance, and, even more surprisingly, researchers hardly outperformed the students, even though the students had not received any education on statistical inference whatsoever. Our findings suggest that many researchers do not know the correct interpretation of a CI. The misunderstandings surrounding p-values and CIs are particularly unfortunate because they constitute the main tools by which psychologists draw conclusions from data.
Journal Article
How to measure working memory capacity in the change detection paradigm
by Cowan, Nelson; Morey, Richard D.; Morey, Candice C.
in Aptitude; Behavioral Science and Psychology; Biological and medical sciences
2011
Although the measurement of working memory capacity is crucial to understanding working memory and its interaction with other cognitive faculties, there are inconsistencies in the literature on how to measure capacity. We address the measurement in the change detection paradigm, popularized by Luck and Vogel (Nature, 390, 279–281, 1997). Two measures for this task—from Pashler (Perception & Psychophysics, 44, 369–378, 1988) and Cowan (The Behavioral and Brain Sciences, 24, 87–114, 2001), respectively—have been used interchangeably, even though they may yield qualitatively different conclusions. We show that the choice between these two measures is not arbitrary. Although they are motivated by the same underlying discrete-slots working memory model, each is applicable only to a specific task; the two are never interchangeable. In the course of deriving these measures, we discuss subtle but consequential flaws in the underlying discrete-slots model. These flaws motivate revision in the modal model and capacity measures.
Journal Article
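The two capacity measures contrasted in the abstract above are commonly written as Cowan's k = N(H - F) for single-probe change detection and Pashler's k = N(H - F)/(1 - F) for whole-display probing. The snippet below sketches those textbook formulas; it is not code from the paper, and the hit and false-alarm rates are made up.

```python
# Sketch of the two textbook capacity estimators; H and F below are
# hypothetical hit and false-alarm rates, not data from the paper.
def cowan_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Cowan's k, commonly used for single-probe change detection."""
    return set_size * (hit_rate - fa_rate)

def pashler_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Pashler's k, commonly used for whole-display change detection."""
    return set_size * (hit_rate - fa_rate) / (1.0 - fa_rate)

H, F, N = 0.80, 0.20, 6  # hypothetical rates at set size 6
print(f"Cowan's k  : {cowan_k(H, F, N):.2f}")    # 3.60
print(f"Pashler's k: {pashler_k(H, F, N):.2f}")  # 4.50
```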
An assessment of fixed-capacity models of visual working memory
by Cowan, Nelson; Morey, Richard D.; Zwilling, Christopher E.
in accuracy; Experimentation; Fall lines
2008
Visual working memory is often modeled as having a fixed number of slots. We test this model by assessing the receiver operating characteristics (ROC) of participants in a visual-working-memory change-detection task. ROC plots yielded straight lines with a slope of 1.0, a tell-tale characteristic of all-or-none mnemonic representations. Formal model assessment yielded evidence highly consistent with a discrete fixed-capacity model of working memory for this task.
Journal Article
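The linear, slope-1 ROC described in the abstract above follows directly from an all-or-none (discrete slots) account: if the probed item is in memory the response is correct; otherwise the observer guesses "change" with some bias g, so the hit rate exceeds the false-alarm rate by a constant. The sketch below is an editorial illustration of that prediction, not the authors' analysis, and the capacity and set size are assumed values.

```python
# Editorial illustration: under a fixed-capacity all-or-none model, varying
# the guessing bias g traces an ROC that is a straight line with slope 1.
import numpy as np

K, N = 3, 6                      # hypothetical capacity and set size
d = min(K / N, 1.0)              # probability the probed item is in memory

g = np.linspace(0.05, 0.95, 7)   # range of guessing biases toward "change"
hits = d + (1 - d) * g
false_alarms = (1 - d) * g

slope = np.polyfit(false_alarms, hits, 1)[0]
print(f"ROC slope: {slope:.2f}")                       # 1.00
print(f"hit rate - FA rate (constant, equal to d): {d}")
```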
Bayesian Benefits for the Pragmatic Researcher
by Wagenmakers, Eric-Jan; Morey, Richard D.; Lee, Michael D.
in Bayesian analysis; Criminals; Hypotheses
2016
The practical advantages of Bayesian inference are demonstrated here through two concrete examples. In the first example, we wish to learn about a criminal's IQ: a problem of parameter estimation. In the second example, we wish to quantify and track support in favor of the null hypothesis that Adam Sandler movies are profitable regardless of their quality: a problem of hypothesis testing. The Bayesian approach unifies both problems within a coherent predictive framework, in which parameters and models that predict the data successfully receive a boost in plausibility, whereas parameters and models that predict poorly suffer a decline. Our examples demonstrate how Bayesian analyses can be more informative, more elegant, and more flexible than the orthodox methodology that remains dominant within the field of psychology.
Journal Article
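The first example mentioned in the abstract above, estimating an IQ, is a standard normal-normal updating problem. The numbers in the sketch below (prior mean 100, prior SD 15, test SD 10, and the observed scores) are illustrative assumptions, not values from the paper.

```python
# Illustrative normal-normal updating for an IQ estimate; all numbers are
# assumptions for this sketch, not values from the paper.
import numpy as np

prior_mean, prior_sd = 100.0, 15.0      # assumed population prior for IQ
test_sd = 10.0                          # assumed measurement error of the test
scores = np.array([73.0, 67.0, 79.0])   # hypothetical observed test scores

n = len(scores)
post_precision = 1 / prior_sd**2 + n / test_sd**2
post_var = 1 / post_precision
post_mean = post_var * (prior_mean / prior_sd**2 + scores.sum() / test_sd**2)

print(f"posterior mean {post_mean:.1f}, posterior sd {np.sqrt(post_var):.1f}")
```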
A Bayes factor meta-analysis of Bem’s ESP claim
by Rouder, Jeffrey N.; Morey, Richard D.
in Bayes Theorem; Behavioral Science and Psychology; Biological and medical sciences
2011
In recent years, statisticians and psychologists have provided the critique that p-values do not capture the evidence afforded by data and are, consequently, ill suited for analysis in scientific endeavors. The issue is particularly salient in the assessment of the recent evidence provided for ESP by Bem (2011) in the mainstream Journal of Personality and Social Psychology. Wagenmakers, Wetzels, Borsboom, and van der Maas (Journal of Personality and Social Psychology, 100, 426–432, 2011) have provided an alternative Bayes factor assessment of Bem’s data, but their assessment was limited to examining each experiment in isolation. We show here that the variant of the Bayes factor employed by Wagenmakers et al. is inappropriate for making assessments across multiple experiments, and cannot be used to gain an accurate assessment of the total evidence in Bem’s data. We develop a meta-analytic Bayes factor that describes how researchers should update their prior beliefs about the odds of hypotheses in light of data across several experiments. We find the evidence that people can feel the future with neutral and erotic stimuli to be slight, with Bayes factors of 3.23 and 1.57, respectively. There is some evidence, however, for the hypothesis that people can feel the future with emotionally valenced nonerotic stimuli, with a Bayes factor of about 40. Although this value is certainly noteworthy, we believe it is orders of magnitude lower than what is required to overcome appropriate skepticism of ESP.
Journal Article
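The updating rule that a meta-analytic Bayes factor feeds into is simply posterior odds = Bayes factor x prior odds. The sketch below applies the Bayes factors reported in the abstract above to an arbitrary, illustrative prior odds; the prior value is not a figure from the paper.

```python
# Posterior odds = Bayes factor x prior odds, using the Bayes factors
# reported in the abstract; the prior odds value is an arbitrary
# illustration of "appropriate skepticism", not a figure from the paper.
bayes_factors = {
    "neutral stimuli": 3.23,
    "erotic stimuli": 1.57,
    "emotionally valenced nonerotic stimuli": 40.0,
}
prior_odds = 1 / 1_000_000  # hypothetical skeptical prior odds for ESP

for condition, bf in bayes_factors.items():
    posterior_odds = bf * prior_odds
    print(f"{condition}: posterior odds = {posterior_odds:.2e}")
```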
Bayesian inference for psychology. Part II: Example applications with JASP
by Derks, Koen; Matzke, Dora; Marsman, Maarten
in Bayes Theorem; Bayesian analysis; Behavioral Science and Psychology
2018
Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP (http://www.jasp-stats.org), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder’s BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
Journal Article
Model comparison in ANOVA
by Morey, Richard D.; Rouder, Jeffrey N.; Engelhardt, Christopher R.
in Analysis of Variance; Asymmetry; Behavioral Science and Psychology
2016
Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet, testing, including traditional ANOVA, has been recently critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-a-vis a more general alternative by penalizing more flexible models. We argue this ability to support more simple models allows for more nuanced theoretical conclusions than provided by traditional ANOVA F-tests. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.
Journal Article
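A minimal sketch of the model-comparison idea described in the abstract above, using BIC as the penalty rather than the Bayes factors the authors favor: a null (grand-mean) model can be preferred over a one-factor model when the extra flexibility is not supported by the data. The data and group structure below are made up for illustration.

```python
# Minimal sketch of penalized model comparison for a one-way design, using
# BIC as the penalty (the paper itself argues from Bayes factors); data are
# simulated with the null model true, so the null should usually win.
import numpy as np

rng = np.random.default_rng(7)
groups = [rng.normal(0.0, 1.0, 20) for _ in range(3)]  # three equal-mean groups
y = np.concatenate(groups)
n = y.size

def bic(rss: float, n_params: int) -> float:
    """Gaussian BIC up to an additive constant shared by both models."""
    return n * np.log(rss / n) + n_params * np.log(n)

rss_null = np.sum((y - y.mean()) ** 2)                       # grand-mean model
rss_full = sum(np.sum((g - g.mean()) ** 2) for g in groups)  # cell-means model

print(f"BIC null: {bic(rss_null, 1):.1f}")
print(f"BIC full: {bic(rss_full, 3):.1f}")  # lower BIC = preferred model
```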
On Making the Right Choice: A Meta-Analysis and Large-Scale Replication Attempt of the Unconscious Thought Advantage
by Richard D. Morey; Jelte M. Wicherts; Mark R. Nieuwenstein
in Analysis; Bayes factor; Bayesian analysis
2015
Are difficult decisions best made after a momentary diversion of thought? Previous research addressing this important question has yielded dozens of experiments in which participants were asked to choose the best of several options (e.g., cars or apartments) either after conscious deliberation, or after a momentary diversion of thought induced by an unrelated task. The results of these studies were mixed. Some found that participants who had first performed the unrelated task were more likely to choose the best option, whereas others found no evidence for this so-called unconscious thought advantage (UTA). The current study examined two accounts of this inconsistency in previous findings. According to the reliability account, the UTA does not exist and previous reports of this effect concern nothing but spurious effects obtained with an unreliable paradigm. In contrast, the moderator account proposes that the UTA is a real effect that occurs only when certain conditions are met in the choice task. To test these accounts, we conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the UTA. Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
Journal Article