Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
32 result(s) for "Moerkerke, Beatrijs"
Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI
2018
The importance of integrating research findings is incontrovertible, and procedures for coordinate-based meta-analysis (CBMA) such as Activation Likelihood Estimation (ALE) have become a popular approach to combine results of fMRI studies when only peaks of activation are reported. As meta-analytical findings help build cumulative knowledge and guide future research, not only the quality of such analyses but also the way conclusions are drawn is extremely important. Like classical meta-analyses, coordinate-based meta-analyses can be subject to different forms of publication bias, which may impact results and invalidate findings. The file drawer problem refers to the problem where studies fail to get published because they do not obtain anticipated results (e.g. due to lack of statistical significance). To enable assessing the stability of meta-analytical results and determining their robustness against the potential presence of the file drawer problem, we present an algorithm to determine the number of noise studies that can be added to an existing ALE fMRI meta-analysis before spatial convergence of reported activation peaks over studies in specific regions is no longer statistically significant. While methods to gain insight into the validity and limitations of results exist for other coordinate-based meta-analysis toolboxes, such as Galbraith plots for Multilevel Kernel Density Analysis (MKDA) and funnel plots and Egger tests for seed-based d mapping, this procedure is the first to assess robustness against potential publication bias for the ALE algorithm. The method assists in interpreting meta-analytical results with the appropriate caution by examining how stable results remain in the presence of unreported information that may differ systematically from the information that is included. At the same time, the procedure provides further insight into the number of studies that drive the meta-analytical results. We illustrate the procedure through an example and test the effect of several parameters through extensive simulations. Code to generate noise studies is made freely available, which enables users to easily apply the algorithm when interpreting their results.
Journal Article
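The record above describes an algorithm that keeps adding "noise" studies to an ALE meta-analysis until spatial convergence in a region is no longer significant, yielding a fail-safe number. The sketch below illustrates that loop in a deliberately simplified form: it is not the published ALE procedure. The ALE statistic is replaced by a binomial test on how many studies report a peak near a single region of interest, the brain is approximated by a rectangular box, and the region centre, radius, peak counts and thresholds are all made-up illustrative values.

```python
"""Toy fail-safe loop: add random-peak noise studies until convergence is lost."""
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Bounding box standing in for the brain (mm) and an assumed region of interest.
BOX = np.array([[-90, 90], [-126, 90], [-72, 108]], float)
region_center = np.array([0.0, -52.0, 26.0])
radius = 25.0  # mm, illustrative

def noise_study(n_peaks=8):
    """One pseudo-study: peaks drawn uniformly inside the bounding box."""
    return rng.uniform(BOX[:, 0], BOX[:, 1], size=(n_peaks, 3))

def hits(studies):
    """Number of studies with at least one peak inside the region of interest."""
    return sum(np.any(np.linalg.norm(s - region_center, axis=1) < radius)
               for s in studies)

def convergence_p(studies, n_peaks=8):
    """Binomial test: do more studies hit the region than expected by chance?"""
    box_vol = np.prod(BOX[:, 1] - BOX[:, 0])
    p_hit_one = (4 / 3) * np.pi * radius ** 3 / box_vol   # one uniform peak
    p_chance = 1 - (1 - p_hit_one) ** n_peaks             # >=1 of n_peaks
    return binomtest(hits(studies), len(studies), p_chance,
                     alternative="greater").pvalue

# Fake "real" studies whose peaks genuinely cluster around the region.
real = [region_center + rng.normal(0, 6, size=(8, 3)) for _ in range(8)]

# Fail-safe number: noise studies needed before significance is lost.
studies, n_noise = list(real), 0
while convergence_p(studies) < 0.05 and n_noise < 1000:
    studies.append(noise_study())
    n_noise += 1
print(f"Convergence lost after adding {n_noise} noise studies.")
```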
The empirical replicability of task-based fMRI as a function of sample size
by Whelan, Robert; Nees, Frauke; Martinot, Jean-Luc
in Brain Mapping; Brain Mapping - methods; Brain Mapping - standards
2020
Replicating results (i.e. obtaining consistent results using a new independent dataset) is an essential part of good science. As replicability has consequences for theories derived from empirical studies, it is of utmost importance to better understand the underlying mechanisms influencing it. A popular tool for non-invasive neuroimaging studies is functional magnetic resonance imaging (fMRI). While the effect of underpowered studies is well documented, the empirical assessment of the interplay between sample size and replicability of results for task-based fMRI studies remains limited. In this work, we extend existing work on this assessment in two ways. Firstly, we use a large database of 1,400 subjects performing four types of tasks from the IMAGEN project to subsample a series of independent samples of increasing size. Secondly, replicability is evaluated using a multi-dimensional framework consisting of three different measures: (un)conditional test-retest reliability, coherence and stability. We demonstrate not only a positive effect of sample size, but also a trade-off between spatial resolution and replicability. When replicability is assessed voxelwise or when observing small areas of activation, a larger sample size than typically used in fMRI is required to replicate results. On the other hand, when focussing on clusters of voxels, we observe higher replicability. In addition, we observe variability in the size of clusters of activation between experimental paradigms and between contrasts of parameter estimates within these paradigms.
Journal Article
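The subsampling logic in the record above can be illustrated with a small simulation: draw two independent groups of increasing size from a large subject pool, threshold each group map, and compare the thresholded maps. The sketch below uses simulated 1-D "maps", an uncorrected voxelwise threshold, and a Dice overlap as a single crude stand-in for the paper's reliability, coherence and stability measures; the pool size, effect size and thresholds are arbitrary assumptions.

```python
"""Toy subsampling study: replicability (Dice overlap) as a function of sample size."""
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)

N_SUBJECTS, N_VOXELS = 1400, 2000
true_effect = np.zeros(N_VOXELS)
true_effect[400:600] = 0.5                       # one truly active region, modest effect
# Subject-level maps: true effect plus between-subject noise.
pool = true_effect + rng.normal(0, 1.0, size=(N_SUBJECTS, N_VOXELS))

def thresholded_map(sample, alpha=0.001):
    """Voxelwise one-sample t-test with an uncorrected threshold (toy choice)."""
    t, p = ttest_1samp(sample, 0.0, axis=0)
    return (p < alpha) & (t > 0)

def dice(a, b):
    """Overlap between two binary maps."""
    return 2 * np.sum(a & b) / max(np.sum(a) + np.sum(b), 1)

for n in (20, 40, 80, 160, 320):
    subj = rng.choice(N_SUBJECTS, size=2 * n, replace=False)
    m1 = thresholded_map(pool[subj[:n]])         # first independent sample
    m2 = thresholded_map(pool[subj[n:]])         # second independent sample
    print(f"n = {n:4d}  Dice overlap = {dice(m1, m2):.2f}")
```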
The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses
by Seurinck, Ruth; Banaschewski, Tobias; Lemaitre, Herve
in Algorithms; Child & adolescent psychiatry; coordinate-based meta-analysis
2018
Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and type II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
Journal Article
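The comparison in the record above hinges on how study-level estimates are pooled. As one concrete reference point, the sketch below implements a standard DerSimonian-Laird random-effects meta-analysis for a single voxel, applied to simulated pseudo-studies obtained by splitting a large subject pool. It is a generic textbook estimator rather than the exact effect-size CBMA pipeline used in the paper, and the study sizes and effect values are invented for illustration.

```python
"""Toy per-voxel random-effects meta-analysis over pseudo-studies (DerSimonian-Laird)."""
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate, its SE, and a two-sided z-test p-value."""
    w = 1.0 / variances                              # fixed-effects weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)           # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    p = 2 * norm.sf(abs(mu_re / se))
    return mu_re, se, p

# Split a pool of 1,400 subjects into 20 pseudo-studies of 70 subjects each and
# simulate each study's mean effect at one voxel (assumed true effect = 0.3).
n_per_study, k = 70, 20
study_means = rng.normal(0.3, 1.0 / np.sqrt(n_per_study), size=k)
study_vars = np.full(k, 1.0 / n_per_study)           # within-study variance of the mean
print(dersimonian_laird(study_means, study_vars))
```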
Introducing Alternative-Based Thresholding for Defining Functional Regions of Interest in fMRI
by Seurinck, Ruth; Durnez, Joke; Bandettini, Peter A.
in alternative distribution; Brain mapping; Brain research
2017
In fMRI research, one often aims to examine activation in specific functional regions of interest (fROIs). Current statistical methods tend to localize fROIs inconsistently, focusing on avoiding detection of false activation. However, not missing true activation is equally important in this context. In this study, we explored the potential of an alternative-based thresholding (ABT) procedure, in which evidence against the null hypothesis of no effect and evidence against a prespecified alternative hypothesis are measured to control both false positives and false negatives directly. The procedure was validated in the context of localizer tasks on simulated brain images and using a real data set of 100 runs per subject. Voxels categorized as active with ABT can be confidently included in the definition of the fROI, while inactive voxels can be confidently excluded. Additionally, the ABT method complements classic null hypothesis significance testing with valuable information by making a distinction between voxels that show evidence against both the null and the alternative and voxels for which the alternative hypothesis cannot be rejected despite a lack of evidence against the null.
Journal Article
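The record above rests on testing each voxel twice: once against the null of no effect and once against a prespecified alternative effect size, with the pair of outcomes classifying the voxel. The toy sketch below shows that dual-testing logic on simulated data; the value of delta, the alpha level, the one-sided test construction and the four labels are illustrative choices and not the ABT procedure exactly as published.

```python
"""Toy dual testing: evidence against the null and against a prespecified alternative."""
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(3)

n_subj, n_vox, delta, alpha = 30, 1000, 0.8, 0.05   # delta = assumed alternative effect
true = np.zeros(n_vox)
true[:200] = 1.0                                     # truly active voxels
data = true + rng.normal(0, 1, size=(n_subj, n_vox))

# Evidence against H0 (mean = 0): one-sided test for activation.
t0, p0_two = ttest_1samp(data, 0.0, axis=0)
p0 = np.where(t0 > 0, p0_two / 2, 1 - p0_two / 2)

# Evidence against H1 (mean = delta): one-sided test for being *below* delta.
t1, p1_two = ttest_1samp(data, delta, axis=0)
p1 = np.where(t1 < 0, p1_two / 2, 1 - p1_two / 2)

label = np.full(n_vox, "undecided", dtype=object)
label[(p0 < alpha) & (p1 >= alpha)] = "active"       # reject H0, cannot reject H1
label[(p0 >= alpha) & (p1 < alpha)] = "inactive"     # reject H1, cannot reject H0
label[(p0 < alpha) & (p1 < alpha)] = "small effect"  # evidence against both
for name in ("active", "inactive", "small effect", "undecided"):
    print(name, int(np.sum(label == name)))
```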
Evaluation of Second-Level Inference in fMRI Analysis
2016
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of three phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take this variability into account. Second, one proceeds via inference based on parametric assumptions or via permutation-based inference. Third, we evaluate three commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with a minimal cluster size. Based on a simulation study and real data, we find that the two-step procedure with a minimal cluster size yields the most stable results, followed by familywise error rate correction. FDR correction yields the most variable results, for both permutation-based and parametric inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference.
Journal Article
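Two of the multiplicity strategies compared in the record above are easy to sketch on simulated data: Benjamini-Hochberg FDR control on voxelwise p-values, and a two-step rule combining an uncorrected primary threshold with a minimal cluster size. The sketch below is a generic illustration of those two rules on a simulated 1-D map, not the paper's evaluation pipeline; the thresholds (q = 0.05, p < 0.001, 10 voxels) are arbitrary illustrative choices.

```python
"""Toy comparison: BH-FDR versus a two-step (primary threshold + cluster size) rule."""
import numpy as np
from scipy.stats import ttest_1samp
from scipy.ndimage import label

rng = np.random.default_rng(4)

n_subj, n_vox = 25, 1500
true = np.zeros(n_vox)
true[600:700] = 0.9                              # one truly active stretch
data = true + rng.normal(0, 1, size=(n_subj, n_vox))
t, p = ttest_1samp(data, 0.0, axis=0)            # voxelwise p-values

# (a) Benjamini-Hochberg FDR at q = 0.05 (step-up procedure).
order = np.argsort(p)
bh_line = 0.05 * np.arange(1, n_vox + 1) / n_vox
passed = p[order] <= bh_line
k_max = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
fdr_mask = np.zeros(n_vox, bool)
fdr_mask[order[:k_max]] = True

# (b) Two-step: uncorrected p < 0.001, then keep clusters of >= 10 voxels.
primary = p < 0.001
clusters, n_clusters = label(primary)
sizes = np.bincount(clusters.ravel())
keep = [lab for lab in range(1, n_clusters + 1) if sizes[lab] >= 10]
two_step_mask = np.isin(clusters, keep)

print("FDR voxels:", fdr_mask.sum(), " two-step voxels:", two_step_mask.sum())
```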
Review Paper: Reporting Practices for Task fMRI Studies
by Seurinck, Ruth; Acar, Freya; Heuten, Talia
in Data analysis; Data processing; Functional magnetic resonance imaging
2023
What are the standards for reporting the methods and results of fMRI studies, and how have they evolved over the years? To answer this question we reviewed 160 papers published between 2004 and 2019. Reporting styles for the methods and results of fMRI studies can differ greatly between published studies. However, adequate reporting is essential for the comprehension, replication and reuse of a study (for instance in a meta-analysis). To aid authors in reporting the methods and results of their task-based fMRI study, the COBIDAS report was published in 2016; it provides researchers with clear guidelines on how to report the design, acquisition, preprocessing, statistical analysis and results (including data sharing) of fMRI studies (Nichols et al. in Best Practices in Data Analysis and Sharing in Neuroimaging using fMRI, 2016). Reviews have previously been published that evaluate how fMRI methods are reported based on the 2008 guidelines, but they did not focus on how task-based fMRI results are reported. This review updates the assessment of how fMRI methods are reported and adds an extra focus on how fMRI results are reported. We discuss reporting practices concerning the design stage, specific participant characteristics, scanner characteristics, data processing methods, data analysis methods and reported results.
Journal Article
Improving the Eligibility of Task-Based fMRI Studies for Meta-Analysis: A Review and Reporting Recommendations
by Seurinck, Ruth; Acar, Freya; Heuten, Talia
in Functional magnetic resonance imaging; Medical imaging; Meta-analysis
2024
Decisions made during the analysis or reporting of an fMRI study influence the eligibility of that study to be entered into a meta-analysis. In a meta-analysis, results of different studies on the same topic are combined. To combine the results, it is necessary that all studies provide equivalent pieces of information. However, in task-based fMRI studies we see a large variety of reporting styles. Several specific meta-analysis methods have been developed to deal with the reporting practices occurring in task-based fMRI studies, each therefore requiring a specific type of input. In this manuscript we provide an overview of these meta-analysis methods and the specific input they require. Subsequently, we discuss how decisions made during the study influence the eligibility of a study for a meta-analysis, and finally we formulate some recommendations on how to report an fMRI study so that it complies with as many meta-analysis methods as possible.
Journal Article
Examining evolutions in the adoption of metacognitive regulation in reciprocal peer tutoring groups
by Van Keer, Hilde; Valcke, Martin; Moerkerke, Beatrijs
in Behavioral Objectives; Collaborative learning; College students
2016
We aimed to investigate how metacognitive regulation is characterised during collaborative learning in a higher education reciprocal peer tutoring (RPT) setting. Sixty-four Educational Sciences students participated in a semester-long RPT intervention and tutored one another in small groups of six. All sessions of five randomly selected RPT groups were videotaped (70 h of video recordings). Analyses focussed on identifying time-bound evolutions with regard to (a) the frequency of occurrence of metacognitive regulation, (b) the low-/deep-level approach to regulation, and (c) the initiative (by tutors/tutees) for metacognitive regulation. Logistic regression models allowing change points were adopted to study evolutions over time. The results indicated that RPT groups increasingly adopted metacognitive regulation (i.e. orientation and evaluation) as the RPT intervention progressed. Regarding the RPT groups' regulative approach, the results revealed a significant evolution towards deep-level metacognitive regulation (i.e. orientation and monitoring), despite a dominant adoption of low-level regulation strategies. With regard to initiative, the results demonstrated that tutees started to initiate the RPT groups' monitoring significantly more frequently as they became familiar with the RPT setting. Orientation, planning, and evaluation remained tutor-centred responsibilities.
Journal Article
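The analysis mentioned in the record above, logistic regression allowing change points, can be sketched compactly: the log-odds of observing a regulation event follow one slope before an assumed change point in the session index and a different slope after it. The code below fits such a piecewise model to simulated data with statsmodels; the variable names, the change-point location and all coefficients are invented for illustration and do not reproduce the authors' model.

```python
"""Toy logistic regression with a single change point in the time trend."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

sessions = np.repeat(np.arange(1, 13), 50)            # 12 sessions, 50 events each
cp = 6                                                 # assumed change point
log_odds = -1.5 + 0.05 * sessions + 0.35 * np.clip(sessions - cp, 0, None)
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))       # simulated regulation events

# Design: intercept, linear session trend, and a hinge term starting at the change point.
X = sm.add_constant(np.column_stack([sessions,
                                     np.clip(sessions - cp, 0, None)]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)   # intercept, pre-change slope, post-change slope increment
```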