Catalogue Search | MBRL
Explore the vast range of titles available.
45 result(s) for "Simonsohn, Uri"
Specification curve analysis
2020
Empirical results hinge on analytical decisions that are defensible, arbitrary and motivated. These decisions probably introduce bias (towards the narrative put forward by the authors), and they certainly involve variability not reflected by standard errors. To address this source of noise and bias, we introduce specification curve analysis, which consists of three steps: (1) identifying the set of theoretically justified, statistically valid and non-redundant specifications; (2) displaying the results graphically, allowing readers to identify consequential specification decisions; and (3) conducting joint inference across all specifications. We illustrate the use of this technique by applying it to three findings from two different papers, one investigating discrimination based on distinctively Black names, the other investigating the effect of assigning female versus male names to hurricanes. Specification curve analysis reveals that one finding is robust, one is weak and one is not robust at all.
Specification curve analysis enables large numbers of alternative empirical analyses to be performed on the same data, showcasing how analytical decisions influence results and allowing joint inference over all analyses.
Journal Article
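The three steps in the abstract above lend themselves to a compact illustration. The sketch below is a toy on synthetic data, not the authors' code: the control sets, the age>30 exclusion rule, and the median-based joint test statistic are all assumptions made for the example.

```python
# A rough, self-contained sketch of the three steps of specification curve
# analysis, on synthetic data. Everything here is illustrative.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n),
                   "age": rng.normal(40, 10, size=n),
                   "income": rng.normal(50, 15, size=n)})
df["y"] = 0.2 * df["x"] + 0.01 * df["age"] + rng.normal(size=n)

CONTROLS = [[], ["age"], ["income"], ["age", "income"]]  # analytic decisions

def run_specs(data):
    """Step 1: fit every justified, valid, non-redundant specification."""
    rows = []
    subsets = {"all": data, "age>30": data[data["age"] > 30]}  # exclusion rules
    for ctrl, (label, sub) in itertools.product(CONTROLS, subsets.items()):
        formula = "y ~ x" + "".join(f" + {c}" for c in ctrl)
        fit = smf.ols(formula, data=sub).fit()
        rows.append({"spec": f"{formula} [{label}]",
                     "b": fit.params["x"], "p": fit.pvalues["x"]})
    return pd.DataFrame(rows)

# Step 2: the specification "curve" is the estimates sorted by size;
# plotting curve["b"] against its index gives the usual figure.
curve = run_specs(df).sort_values("b").reset_index(drop=True)
print(curve)

# Step 3: joint inference by permutation -- shuffle x to impose the null and
# ask how extreme the observed median estimate is across all specifications.
observed = curve["b"].median()
null_medians = []
for _ in range(200):
    shuffled = df.assign(x=rng.permutation(df["x"].to_numpy()))
    null_medians.append(run_specs(shuffled)["b"].median())
p_joint = np.mean(np.abs(null_medians) >= abs(observed))
print(f"median b = {observed:.3f}, permutation p = {p_joint:.3f}")
```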
False-Positive Citations
by Simmons, Joseph P.; Nelson, Leif D.; Simonsohn, Uri
in Citations; Experimental psychology; False positive results
2018
We describe why we wrote “False-Positive Psychology,” analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.
Journal Article
Small Telescopes: Detectability and the Evaluation of Replication Results
2015
This article introduces a new approach for evaluating replication results. It combines effect-size estimation with hypothesis testing, assessing the extent to which the replication results are consistent with an effect size big enough to have been detectable in the original study. The approach is demonstrated by examining replications of three well-known findings. Its benefits include the following: (a) differentiating "unsuccessful" replication attempts (i.e., studies yielding p > .05) that are too noisy from those that actively indicate the effect is undetectably different from zero, (b) "protecting" true findings from underpowered replications, and (c) arriving at intuitively compelling inferences in general and for the revisited replications in particular.
Journal Article
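As a rough worked example of the approach described above: the test asks whether the replication estimate is significantly smaller than d33, the effect size the original study would have had 33% power to detect. The sketch below uses a normal approximation in place of the paper's exact noncentral-t treatment, and every sample size and effect size in it is made up.

```python
# Hedged sketch of a "small telescopes" style check, assuming equal-n
# two-cell designs. Numbers are hypothetical; this is not the paper's code.
from math import sqrt
from scipy import stats
from statsmodels.stats.power import TTestIndPower

n_orig = 20    # per-cell n of the original study (hypothetical)
n_rep = 80     # per-cell n of the replication (hypothetical)
d_rep = 0.15   # replication's observed effect size, Cohen's d (hypothetical)

# d33: the effect size the original study had 33% power to detect.
d33 = TTestIndPower().solve_power(nobs1=n_orig, alpha=0.05, power=1/3,
                                  ratio=1.0, alternative="two-sided")

# One-sided test of whether the replication estimate is smaller than d33
# (normal approximation to the SE of d for equal cells and small effects).
se_rep = sqrt(2 / n_rep)
z = (d_rep - d33) / se_rep
p_small = stats.norm.cdf(z)   # small p => effect undetectable by the original
print(f"d33 = {d33:.3f}, z = {z:.2f}, p(effect < d33) = {p_small:.3f}")
```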
Friends of Victims: Personal Experience and Prosocial Behavior
2008
Why do different people give to different causes? We show that the sympathy inherent to a close relationship with a victim extends to other victims suffering from the same misfortunes that have afflicted their friends and loved ones. Both sympathy and donations are greater among those related to a victim, and they are greater among those in a communal relationship as compared to those in an exchange relationship. Experiments that control for information support causality and rule out the alternative explanation that any effect is driven by the information advantage possessed by friends of victims.
Journal Article
A manifesto for reproducible science
by Bishop, Dorothy V. M.; Ware, Jennifer J.; Nosek, Brian A.
in Behavioral Sciences; Bias
2017
Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.
Leading voices in the reproducibility landscape call for the adoption of measures to optimize key elements of the scientific process.
Journal Article
Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone
2013
I argue that requiring authors to post the raw data supporting their published results has the benefit, among many others, of making fraud much less likely to go undetected. I illustrate this point by describing two cases of suspected fraud I identified exclusively through statistical analysis of reported means and standard deviations. Analyses of the raw data behind these published results provided invaluable confirmation of the initial suspicions, ruling out benign explanations (e.g., reporting errors, unusual distributions), identifying additional signs of fabrication, and also ruling out one of the suspected fraudster's explanations for his anomalous results. If journals, granting agencies, universities, or other entities overseeing research promoted or required data posting, it seems inevitable that fraud would be reduced.
Journal Article
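The statistical detection described in this abstract turned, in part, on reported summary statistics (such as condition standard deviations) clustering more tightly than honest sampling would produce. The simulation below is a hypothetical illustration of that style of check, not an analysis from the paper; every number in it is invented.

```python
# Illustrative check: how often would k independent condition SDs be as
# similar as reported, under normal sampling? All inputs are made up.
import numpy as np

rng = np.random.default_rng(1)
reported_sds = [1.01, 1.03, 0.99, 1.02]   # suspiciously similar (invented)
n_per_cell = 15                           # per-condition n (invented)
k = len(reported_sds)
observed_spread = np.std(reported_sds)    # how tightly the SDs cluster

# Simulate the null distribution of that spread under honest sampling.
sims = 10_000
pooled_sd = float(np.mean(reported_sds))
spreads = np.empty(sims)
for i in range(sims):
    samples = rng.normal(0, pooled_sd, size=(k, n_per_cell))
    spreads[i] = np.std(samples.std(axis=1, ddof=1))

p = np.mean(spreads <= observed_spread)
print(f"P(SDs cluster this tightly by chance) ~= {p:.4f}")
```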
P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016)
2019
p-curve, the distribution of significant p-values, can be analyzed to assess whether findings have evidential value, that is, whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed that p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, and hence replicable, association (gun ownership and number of sexual partners) leads to a right-skewed p-curve, while a false-positive one (respondent ID number and trust in the Supreme Court) leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.
Journal Article
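For readers unfamiliar with the mechanics: under the null of no evidential value, significant p-values are uniform on (0, .05), so right skew can be tested by probit-transforming p/.05 and combining with Stouffer's method. The sketch below is a minimal version of that test on invented p-values; the full p-curve app does considerably more.

```python
# Minimal right-skew test for a p-curve (Stouffer version), on invented
# p-values; a sketch of the basic logic, not the p-curve app itself.
import numpy as np
from scipy import stats

sig_p = np.array([0.001, 0.004, 0.012, 0.021, 0.038])  # significant results only

# Under H0 (no evidential value), each significant p is Uniform(0, .05),
# so pp = p/.05 is Uniform(0, 1); small pp-values indicate right skew.
pp = sig_p / 0.05
z = stats.norm.ppf(pp)
stouffer_z = z.sum() / np.sqrt(len(z))
p_right_skew = stats.norm.cdf(stouffer_z)  # small => evidential value
print(f"Stouffer Z = {stouffer_z:.2f}, right-skew p = {p_right_skew:.4f}")
```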
Mistake #37: The Effect of Previously Encountered Prices on Current Housing Demand
2006
Based on contrast-effects studies from psychology, we predicted that movers arriving from more expensive cities would rent pricier apartments than those arriving from cheaper cities. We also predicted that as people stayed in their new city they would get used to the new prices and readjust their housing expenditures, countering the initial impact of previous prices. We found support for both predictions in a sample of 928 movers from the PSID. Alternative explanations based on unobserved wealth and taste, and on imperfect information, are ruled out.
Journal Article
New Yorkers Commute More Everywhere: Contrast Effects in the Field
2006
Previous experimental research has shown that people's decisions can be influenced by options they have encountered in the past. This paper uses PSID data to study this phenomenon in the field by observing how long people commute after moving between cities. As predicted, it is found that (i) the longer the average commute was in the city people came from, the longer the commute they choose in the city they have just moved to, and (ii) when they move again within the new city, they revise their commute length, countering the effect their origin city had on their initial decision.
Journal Article