Catalogue Search | MBRL
32 result(s) for "Vanpaemel, Wolf"
Increasing Transparency Through a Multiverse Analysis
2016
Empirical research inevitably includes constructing a data set by processing raw data into a form ready for statistical analysis. Data processing often involves choices among several reasonable options for excluding, transforming, and coding data. We suggest that instead of performing only one analysis, researchers could perform a multiverse analysis, which involves performing all analyses across the whole set of alternatively processed data sets corresponding to a large set of reasonable scenarios. Using an example focusing on the effect of fertility on religiosity and political attitudes, we show that analyzing a single data set can be misleading and propose a multiverse analysis as an alternative practice. A multiverse analysis offers an idea of how much the conclusions change because of arbitrary choices in data construction and gives pointers as to which choices are most consequential in the fragility of the result.
Journal Article
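The procedure described in the abstract above can be sketched in a few lines: each arbitrary processing choice becomes a set of reasonable options, and the same analysis is run on every combination. The data and choice sets below are hypothetical, purely for illustration.

```python
from itertools import product
from statistics import mean

# Hypothetical data: (score, reaction time in ms) per participant, two groups.
group_a = [(3.1, 420), (2.8, 1950), (3.6, 610), (2.9, 700), (3.3, 530)]
group_b = [(2.5, 480), (2.7, 510), (3.9, 2100), (2.4, 640), (2.6, 560)]

# Each arbitrary processing step becomes a set of reasonable options.
exclusion_rules = {
    "none": lambda rt: True,
    "rt_under_2000": lambda rt: rt < 2000,
    "rt_under_1000": lambda rt: rt < 1000,
}
transforms = {
    "raw": lambda s: s,
    "centered": lambda s: s - 3.0,
}

def effect(rule, transform):
    """The same analysis (mean group difference) for one processed data set."""
    a = [transform(s) for s, rt in group_a if rule(rt)]
    b = [transform(s) for s, rt in group_b if rule(rt)]
    return mean(a) - mean(b)

# The multiverse: run the analysis on every alternatively processed data set.
multiverse = {
    (ex_name, tr_name): effect(rule, transform)
    for (ex_name, rule), (tr_name, transform) in product(
        exclusion_rules.items(), transforms.items()
    )
}

for spec, estimate in sorted(multiverse.items()):
    print(spec, round(estimate, 3))
```

With these toy numbers the estimated effect roughly doubles depending on the exclusion rule, which is exactly the kind of fragility a multiverse analysis is meant to surface.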
Multiverse analyses in the classroom
2022
Most empirical papers in psychology involve statistical analyses performed on a new or existing dataset. Sometimes the robustness of a finding is demonstrated via data-analytical triangulation (e.g., obtaining comparable outcomes across different operationalizations of the dependent variable), but systematically considering the plethora of alternative analysis pathways is rather uncommon. However, researchers increasingly recognize the importance of establishing the robustness of a finding. The latter can be accomplished through a so-called multiverse analysis, which involves methodically examining the arbitrary choices pertaining to data processing and/or model building. In the present paper, we describe how the multiverse approach can be implemented in student research projects within psychology programs, drawing on our personal experience as instructors. Embedding a multiverse project in students’ curricula addresses an important scientific need, as studies examining the robustness or fragility of phenomena are largely lacking in psychology. Additionally, it offers students an ideal opportunity to put various statistical methods into practice, thereby also raising awareness about the abundance and consequences of arbitrary decisions in data-analytic processing. An attractive practical feature is that one can reuse existing datasets, which proves especially useful when resources are limited, or when circumstances such as the COVID-19 lockdown measures restrict data collection possibilities.
Journal Article
Data sharing upon request and statistical consistency errors in psychology: A replication of Wicherts, Bakker and Molenaar (2011)
by Vanpaemel, Wolf; Heyman, Tom; Tuerlinckx, Francis
in Analysis; Biology and Life Sciences; Consistency
2023
Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not common practice yet. The reasons for not sharing data are myriad: Some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. To evaluate this explanation, it would be interesting to know whether authors who do not share data genuinely make more errors than authors who do. Wicherts, Bakker and Molenaar (2011) examined errors that can be discovered from the published manuscript alone, because unavailable data cannot be reanalyzed. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set, originating from Vanpaemel et al. (2015), on data sharing upon request for reanalysis, to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
Journal Article
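statcheck itself is an R package; as a rough Python illustration of the kind of consistency check it performs, one can recompute a two-sided p-value from a reported z statistic and compare it with the reported p. The report format and tolerance below are assumptions, and only z tests are handled.

```python
import math
import re

def check_z_report(report: str, tol: float = 0.0005):
    """Recompute a two-sided p-value from a reported z statistic and flag
    inconsistency with the reported p (statcheck-style idea; z tests only,
    hypothetical report format)."""
    m = re.match(r"z\s*=\s*(-?[\d.]+),\s*p\s*=\s*([\d.]+)", report)
    z, p_reported = float(m.group(1)), float(m.group(2))
    p_computed = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    # Consistent when the recomputed p, rounded to the reported precision,
    # matches the reported value.
    decimals = len(m.group(2).split(".")[-1])
    consistent = abs(round(p_computed, decimals) - p_reported) <= tol
    return p_computed, consistent

print(check_z_report("z = 2.20, p = .03"))  # consistent
print(check_z_report("z = 2.20, p = .05"))  # inconsistent
```

The real statcheck additionally parses t, F, chi-square, and r tests and distinguishes mere inconsistencies from decision errors (those that flip significance); this sketch only conveys the recompute-and-compare idea.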
The representational instability in the generalization of fear learning
2024
Perception and perceptual memory play crucial roles in fear generalization, yet their dynamic interaction remains understudied. This research (N = 80) explored their relationship through a classical differential conditioning experiment. Results revealed that while fear context perception fluctuates over time with a drift effect, perceptual memory remains stable, creating a disjunction between the two systems. Surprisingly, this disjunction does not significantly impact fear generalization behavior. Although most participants demonstrated generalization aligned with perceptual rather than physical stimulus distances, incorporating perceptual memory data into perceptual distance calculations did not enhance model performance. This suggests a potential shift in the mapping of the perceptual memory component of fear context, occurring alongside perceptual dynamics. Overall, this work provides evidence for understanding fear generalization behavior through different stimulus representational processes. Such mechanistic investigations can enhance our understanding of how individuals behave when facing threats and potentially aid in developing mechanism-specific diagnoses and treatments.
Journal Article
Discussion points for Bayesian inference
2020
Why is there no consensual way of conducting Bayesian analyses? We present a summary of agreements and disagreements of the authors on several discussion points regarding Bayesian inference. We also provide a thinking guideline to assist researchers in conducting Bayesian inference in the social and behavioural sciences.
Journal Article
Abstraction and model evaluation in category learning
by Vanpaemel, Wolf; Storms, Gert
in Behavioral Science and Psychology; Classification; Cognitive Psychology
2010
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Journal Article
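The contrast the abstract above draws between complexity-insensitive evaluation (best fit) and complexity-sensitive evaluation (marginal likelihood) can be illustrated with a toy example outside the VAM: a fixed-p coin model versus a free-p coin model. All numbers are illustrative, not from the paper.

```python
from math import comb

def max_likelihood_fixed(k, n):
    """Best fit of the simple model: success probability fixed at 0.5."""
    return 0.5 ** n

def max_likelihood_free(k, n):
    """Best fit of the flexible model: p free, maximised at p = k / n."""
    p = k / n
    return p ** k * (1 - p) ** (n - k)

def marginal_likelihood_free(k, n):
    """Marginal likelihood of the flexible model under a uniform prior on p:
    the integral of p**k * (1-p)**(n-k) dp equals 1 / ((n + 1) * C(n, k))."""
    return 1 / ((n + 1) * comb(n, k))

k, n = 11, 20  # 11 heads in 20 flips: a nearly fair coin
best_fit_favours_flexible = max_likelihood_free(k, n) >= max_likelihood_fixed(k, n)
marginal_favours_simple = marginal_likelihood_free(k, n) < max_likelihood_fixed(k, n)
print(best_fit_favours_flexible, marginal_favours_simple)
```

The flexible model always fits at least as well, yet its marginal likelihood is lower here: averaging over the prior penalises the flexibility that best-fit comparisons ignore, which is the point of a complexity-sensitive evaluation.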
Idealness and similarity in goal-derived categories: A computational examination
by Vanpaemel, Wolf; Voorspoels, Wouter; Storms, Gert
in Behavioral Science and Psychology; Biological and medical sciences; Classification
2013
The finding that the typicality gradient in goal-derived categories is mainly driven by ideals rather than by exemplar similarity has stood uncontested for nearly three decades. Due to the rather rigid earlier implementations of similarity, a key question has remained: whether a more flexible approach to similarity would alter the conclusions. In the present study, we evaluated whether a similarity-based approach that allows for dimensional weighting could account for findings in goal-derived categories. To this end, we compared a computational model of exemplar similarity (the generalized context model; Nosofsky, Journal of Experimental Psychology: General 115:39–57, 1986) and a computational model of ideal representation (the ideal-dimension model; Voorspoels, Vanpaemel, & Storms, Psychonomic Bulletin & Review 18:1006–1014, 2011) in their accounts of exemplar typicality in ten goal-derived categories. In terms of both goodness-of-fit and generalizability, we found strong evidence for an ideal approach in nearly all categories. We conclude that focusing on a limited set of features is necessary but not sufficient to account for the observed typicality gradient. A second aspect of ideal representations, namely that extreme rather than common, central-tendency values drive typicality, seems to be crucial.
Journal Article
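The dimensionally weighted exemplar similarity at the heart of the generalized context model mentioned above can be sketched as follows. The stimuli and parameter values are toy assumptions, not the fitted models from the paper.

```python
from math import exp

def gcm_similarity(x, y, weights, c=1.0):
    """Exemplar similarity in the style of the generalized context model:
    exponential decay of a weighted city-block distance (toy parameters)."""
    distance = sum(w * abs(a - b) for w, a, b in zip(weights, x, y))
    return exp(-c * distance)

def typicality(item, exemplars, weights):
    """Summed similarity of an item to all stored category exemplars."""
    return sum(gcm_similarity(item, e, weights) for e in exemplars)

# Toy two-dimensional exemplars of one category.
exemplars = [(0.0, 0.0), (1.0, 0.2), (0.9, 0.1)]
item = (0.8, 0.0)

# Shifting attention weight across dimensions changes predicted typicality;
# this dimensional weighting is the flexibility the study pits against an
# ideal-based account.
print(round(typicality(item, exemplars, (0.9, 0.1)), 3))
print(round(typicality(item, exemplars, (0.1, 0.9)), 3))
```

An ideal-dimension model, by contrast, would rank items by how extreme they are on a relevant dimension rather than by summed similarity to stored exemplars.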
Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts
2008
Features are at the core of many empirical and modeling endeavors in the study of semantic concepts. This article is concerned with the delineation of features that are important in natural language concepts and the use of these features in the study of semantic concept representation. The results of a feature generation task in which the exemplars and labels of 15 semantic categories served as cues are described. The importance of the generated features was assessed by tallying the frequency with which they were generated and by obtaining judgments of their relevance. The generated attributes also featured in extensive exemplar by feature applicability matrices covering the 15 different categories, as well as two large semantic domains (that of animals and artifacts). For all exemplars of the 15 semantic categories, typicality ratings, goodness ratings, goodness rank order, generation frequency, exemplar associative strength, category associative strength, estimated age of acquisition, word frequency, familiarity ratings, imageability ratings, and pairwise similarity ratings are described as well. By making these data easily available to other researchers in the field, we hope to provide ample opportunities for continued investigations into the nature of semantic concept representation. These data may be downloaded from the Psychonomic Society’s Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
Journal Article