Catalogue Search | MBRL
Explore the vast range of titles available.
35 result(s) for "Van Assen, Marcel A. L. M."
Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis
by Wicherts, Jelte M.; van Assen, Marcel A. L. M.; van Aert, Robbie C. M.
in Analysis; Bias; Biology and Life Sciences
2019
Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existing effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is currently less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets (3.8% of all subsets of meta-analyses published in Psychological Bulletin) of primary studies included in meta-analyses, because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes was 6). Our findings are consistent with publication bias ranging, in the most extreme case, from no bias to only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and CDSR, respectively), led to the conclusion that evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.
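The listing does not say which publication bias tests the authors applied. As an illustration only, one widely used test of funnel-plot asymmetry, Egger's regression, can be sketched in a few lines; the data below are simulated, not from the study:

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE);
    an intercept that differs significantly from zero signals small-study
    effects consistent with publication bias.
    """
    effects, ses = np.asarray(effects), np.asarray(ses)
    fit = stats.linregress(1.0 / ses, effects / ses)
    t_stat = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t_stat), df=len(effects) - 2)
    return fit.intercept, p

# Hypothetical homogeneous subset: estimates scattered symmetrically
# around d = 0.3, so the test should not tend to flag asymmetry.
rng = np.random.default_rng(1)
ses = rng.uniform(0.05, 0.30, size=20)
intercept, p_egger = egger_test(rng.normal(0.3, ses), ses)
```

With only a handful of effect sizes per subset (median 6 in the study), such tests have little power, which is exactly the difficulty the abstract describes.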
Journal Article
The Prevalence of Marginally Significant Results in Psychology Over Time
by Olsson-Collentine, Anton; van Assen, Marcel A. L. M.; Hartgerink, Chris H. J.
in Bias; Clinical psychology; Evidentiality
2019
We examined the percentage of p values (.05 < p ≤ .10) reported as marginally significant in 44,200 articles, across nine psychology disciplines, published in 70 journals belonging to the American Psychological Association between 1985 and 2016. Using regular expressions, we extracted 42,504 p values between .05 and .10. Almost 40% of p values in this range were reported as marginally significant, although there were considerable differences between disciplines. The practice is most common in organizational psychology (45.4%) and least common in clinical psychology (30.1%). Contrary to what was reported by previous researchers, our results showed no evidence of an increasing trend in any discipline; in all disciplines, the percentage of p values reported as marginally significant was decreasing or constant over time. We recommend against reporting these results as marginally significant because of the low evidential value of p values between .05 and .10.
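The authors' exact regular expression is not given in this listing; a minimal sketch of the extraction step they describe might look like this (the pattern below is a simplified assumption, covering only APA-style "p = .xx" reports):

```python
import re

# Simplified pattern for APA-style p-value reports such as "p = .06".
P_VALUE = re.compile(r"p\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)

def marginal_p_values(text):
    """Extract reported p values in the marginal range .05 < p <= .10."""
    hits = []
    for _sign, value in P_VALUE.findall(text):
        p = float(value)
        if 0.05 < p <= 0.10:
            hits.append(p)
    return hits

sample = "The effect was marginally significant, p = .06, while p = .03 was not."
marginal_p_values(sample)  # → [0.06]
```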
Journal Article
Explaining quality of life of older people in the Netherlands using a multidimensional assessment of frailty
by
Gobbens, Robbert J. J.
,
van Assen, Marcel A. L. M.
,
Luijikx, Katrien G.
in
Aged
,
Aged, 80 and over
,
Comorbidity
2013
Purpose
Although frailty was originally a medical concept, nowadays more and more researchers are convinced of its multidimensional nature, including a psychological and a social domain of frailty as well as a physical domain. The objective of this study was to test the hypothesis that the prediction of quality of life by physical frailty components is improved by adding psychological and social frailty components.
Methods
This cross-sectional study was carried out with a sample of Dutch citizens. A total of 1,031 people aged 65 years and older completed a Web-based questionnaire containing the Tilburg Frailty Indicator for measuring physical, psychological, and social frailty, and the WHOQOL-BREF for measuring four quality of life domains (physical health, psychological, social relations, environmental).
Results
The findings show that the prediction of all quality of life domains by eight physical components of frailty was improved after adding four psychological and three social frailty components. The psychological frailty component 'feeling down' significantly improved the prediction of all four quality of life domains, after controlling for the effects of background characteristics and all other frailty components.
Conclusion
This study emphasizes the importance of a multidimensional assessment of frailty in the prediction of quality of life in older people.
Journal Article
Ensuring the quality and specificity of preregistrations
by
Bakker, Marjan
,
Wicherts, Jelte M.
,
Veldkamp, Coosje L. S.
in
Analysis
,
Behavioral sciences
,
Biology and Life Sciences
2020
Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of “researcher degrees of freedom” aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called “OSF Preregistration,” http://osf.io/prereg/ ). The Prereg Challenge format was a “structured” workflow with detailed instructions and an independent review to confirm completeness; the “Standard” format was “unstructured” with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the “structured” format restricted the opportunistic use of researcher degrees of freedom better (Cliff’s Delta = 0.49) than the “unstructured” format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
Journal Article
Associations between lifestyle factors and multidimensional frailty: a cross-sectional study among community-dwelling older people
by
Gobbens, Robbert J. J.
,
van Assen, Marcel A. L. M.
,
Helmink, Judith H. M.
in
Aged
,
Aging
,
Alcohol use
2022
Background
Multidimensional frailty, including physical, psychological, and social components, is associated with disability, lower quality of life, increased healthcare utilization, and mortality. In order to prevent or delay frailty, more knowledge of its determinants is necessary; one of these determinants is lifestyle. The aim of this study is to determine the association of the lifestyle factors smoking, alcohol use, nutrition, and physical activity with multidimensional frailty.
Methods
This cross-sectional study was conducted in two samples comprising in total 45,336 Dutch community-dwelling individuals aged 65 years or older. These samples completed a questionnaire including questions about smoking, alcohol use, physical activity, sociodemographic factors (both samples), and nutrition (one sample). Multidimensional frailty was assessed with the Tilburg Frailty Indicator (TFI).
Results
Higher alcohol consumption, physical activity, healthy nutrition, and less smoking were associated with less total, physical, psychological, and social frailty after controlling for the effects of other lifestyle factors and sociodemographic characteristics of the participants (age, gender, marital status, education, income). The effects of physical activity on total and physical frailty were considerable, whereas the effects of the other lifestyle factors on frailty were small.
Conclusions
The four lifestyle factors were not only associated with physical frailty but also with psychological and social frailty. The different associations of frailty domains with lifestyle factors emphasize the importance of assessing frailty broadly and thus of paying attention to the multidimensional nature of this concept. The findings offer healthcare professionals starting points for interventions aimed at preventing or delaying the onset of frailty, so that community-dwelling older people can age in place with a good quality of life.
Journal Article
Recommendations in pre-registrations and internal review board proposals promote formal power analyses but do not increase sample size
by
Bakker, Marjan
,
Wicherts, Jelte M.
,
Veldkamp, Coosje L. S.
in
Biology and Life Sciences
,
Comparative analysis
,
Computer and Information Sciences
2020
In this preregistered study, we investigated whether the statistical power of a study is higher when researchers are asked to make a formal power analysis before collecting data. We compared the sample size descriptions from two sources: (i) a sample of pre-registrations created according to the guidelines for the Center for Open Science Preregistration Challenge (PCRs) and a sample of institutional review board (IRB) proposals from the Tilburg School of Social and Behavioral Sciences, which both include a recommendation to do a formal power analysis, and (ii) a sample of pre-registrations created according to the guidelines for Open Science Framework Standard Pre-Data Collection Registrations (SPRs), in which no guidance on sample size planning is given. We found that PCRs and IRB proposals (72%) more often included sample size decisions based on power analyses than the SPRs (45%). However, this did not result in larger planned sample sizes. The determined sample size of the PCRs and IRB proposals (Md = 90.50) was not higher than the determined sample size of the SPRs (Md = 126.00; W = 3389.5, p = 0.936). Typically, power analyses in the registrations were conducted with G*Power, assuming a medium effect size, α = .05, and a power of .80. Only 20% of the power analyses contained enough information to fully reproduce the results and only 62% of these power analyses pertained to the main hypothesis test in the pre-registration. Therefore, we see ample room for improvements in the quality of the registrations and we offer several recommendations to do so.
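The typical registration scenario the abstract describes (an independent-samples t test, medium effect d = 0.5, α = .05, power = .80) can be reproduced without G*Power. A minimal sketch using the noncentral t distribution, under the standard equal-group-size assumption:

```python
import math
from scipy import stats

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided independent-samples t test with Cohen's d."""
    df = 2 * n_per_group - 2
    ncp = d * math.sqrt(n_per_group / 2)        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # probability of exceeding either critical value under the noncentral t
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

def required_n(d, power=0.80, alpha=0.05):
    """Smallest per-group n reaching the target power."""
    n = 2
    while power_two_sample_t(d, n, alpha) < power:
        n += 1
    return n

required_n(0.5)  # → 64 per group (128 in total), the conventional G*Power result
```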
Journal Article
Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results
by
Wicherts, Jelte M.
,
van Assen, Marcel A. L. M.
,
Nuijten, Michèle B.
in
Behavioral sciences
,
Economic models
,
Hypotheses
2014
De Winter and Happee examined whether science based on selective publishing of significant results may be effective in accurate estimation of population effects, and whether this is even more effective than a science in which all results are published (i.e., a science without publication bias). Based on their simulation study they concluded that "selective publishing yields a more accurate meta-analytic estimation of the true effect than publishing everything, (and that) publishing nonreplicable results while placing null results in the file drawer can be beneficial for the scientific collective" (p. 4).
Using their scenario with a small to medium population effect size, we show that publishing everything is more effective for the scientific collective than selective publishing of significant results. Additionally, we examined a scenario with a null effect, which provides a more dramatic illustration of the superiority of publishing everything over selective publishing.
Publishing everything is more effective than only reporting significant outcomes.
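The core of the argument can be reproduced with a toy simulation (a sketch with assumed numbers, not the authors' setup): draw study estimates around a small-to-medium true effect and compare the meta-analytic mean when everything is published versus when only significant estimates are:

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_estimate(true_effect, n_studies=1000, se=0.2, selective=False):
    """Mean of published effect estimates under full vs. selective publishing.

    Each 'study' yields a normally distributed estimate with standard error
    `se`; under selective publishing only statistically significant estimates
    (|z| > 1.96) enter the meta-analysis.
    """
    estimates = rng.normal(true_effect, se, size=n_studies)
    if selective:
        estimates = estimates[np.abs(estimates / se) > 1.96]
    return estimates.mean()

full = meta_estimate(0.3)                    # ≈ 0.3, essentially unbiased
biased = meta_estimate(0.3, selective=True)  # clearly overestimates 0.3
```

Publishing everything recovers the true effect on average; selecting on significance discards the smaller estimates and inflates the meta-analytic mean.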
Journal Article
Bayesian evaluation of effect size after replicating an original study
by
van Assen, Marcel A. L. M.
,
van Aert, Robbie C. M.
in
Applications programs
,
Bayes Theorem
,
Bayesian analysis
2017
The vast majority of published results in the literature is statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and that quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes of especially the included studies in RPP are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method.
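The core idea can be sketched with a heavily simplified normal-approximation version of the snapshot approach (the real method works on the combined original-plus-replication data; this sketch handles only a single significant estimate, one-sided, with equal prior probabilities assumed):

```python
from scipy import stats

def snapshot_probabilities(est, se, alpha=0.05):
    """Posterior probabilities of a zero/small/medium/large true effect.

    The likelihood of the original estimate is divided by the probability
    of obtaining a significant result under each candidate effect, which
    corrects for the fact that the study was published because it was
    significant (one-sided here for simplicity).
    """
    effects = {"zero": 0.0, "small": 0.2, "medium": 0.5, "large": 0.8}
    crit = stats.norm.ppf(1 - alpha / 2) * se   # significance threshold
    likes = {}
    for label, mu in effects.items():
        density = stats.norm.pdf(est, loc=mu, scale=se)
        publish_prob = stats.norm.sf(crit, loc=mu, scale=se)  # P(significant)
        likes[label] = density / publish_prob
    total = sum(likes.values())
    return {k: v / total for k, v in likes.items()}

# A just-significant original estimate: most posterior mass goes to the
# zero and small effects, very little to a large effect.
probs = snapshot_probabilities(est=0.45, se=0.22)
```

The division by `publish_prob` is the key step: without it, a just-significant result would look like moderate evidence for a medium effect rather than weak evidence overall.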
Journal Article
Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science
by
Wicherts, Jelte M.
,
Veldkamp, Coosje L. S.
,
van Assen, Marcel A. L. M.
in
Analysis
,
Aviation
,
Best practice
2014
Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
Journal Article
Autonomy-connectedness mediates sex differences in symptoms of psychopathology
2017
This study aimed to examine whether autonomy-connectedness, the capacity for self-governance under the condition of connectedness, mediates sex differences in symptoms of various mental disorders (depression, anxiety, eating disorders, antisocial personality disorder).
Participants (N = 5,525) from a representative community sample in the Netherlands filled out questionnaires regarding the variables under study.
Autonomy-connectedness (self-awareness, SA; sensitivity to others, SO; capacity for managing new situations, CMNS) fully mediated the sex differences in depression and anxiety, and partly mediated those in eating disorder (drive for thinness, bulimia, and body dissatisfaction) and antisocial personality disorder characteristics. The mediations followed the expected sex-specific patterns. SO related positively to the internalizing disorder indices and negatively to antisocial personality disorder. SA related negatively to all disorder indices; CMNS related negatively to all internalizing disorder indices but positively to antisocial personality disorder.
Treatment of depression and anxiety, but also of eating disorders and antisocial personality disorder, may benefit from a stronger focus on strengthening autonomy.
Journal Article