Catalogue Search | MBRL
123 result(s) for "Repeated measures (Research method)"
What is replication?
by Nosek, Brian A.; Errington, Timothy M.
in Biology and Life Sciences; Data Interpretation, Statistical; Diagnostic systems
2020
Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than recognized previously. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.
Journal Article
Using multiple agreement methods for continuous repeated measures data: a tutorial for practitioners
by Scott, Charles; Inácio, Vanda; Parker, Richard A.
in Agreement; Agreements; Chronic obstructive lung disease
2020
Background
Studies of agreement examine the distance between readings made by different devices or observers measuring the same quantity. If the values generated by each device are close together most of the time then we conclude that the devices agree. Several different agreement methods have been described in the literature, in the linear mixed modelling framework, for use when there are time-matched repeated measurements within subjects.
Methods
We provide a tutorial to help guide practitioners when choosing among different methods of assessing agreement based on a linear mixed model assumption. We illustrate the use of five methods in a head-to-head comparison using real data from a study involving Chronic Obstructive Pulmonary Disease (COPD) patients and matched repeated respiratory rate observations. The methods used were the concordance correlation coefficient, limits of agreement, total deviation index, coverage probability, and coefficient of individual agreement.
Results
The five methods generated similar conclusions about the agreement between devices in the COPD example; however, some methods emphasized different aspects of the between-device comparison, and the interpretation was clearer for some methods compared to others.
Conclusions
Five different methods used to assess agreement have been compared in the same setting to facilitate understanding and encourage the use of multiple agreement methods in practice. Although there are similarities between the methods, each method has its own strengths and weaknesses which are important for researchers to be aware of. We suggest that researchers consider using the coverage probability method alongside a graphical display of the raw data in method comparison studies. In the case of disagreement between devices, it is important to look beyond the overall summary agreement indices and consider the underlying causes. Summarising the data graphically and examining model parameters can both help with this.
Journal Article
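Two of the five agreement methods named in the abstract above, the concordance correlation coefficient and the limits of agreement, can be illustrated with a short sketch. Note this is a simplified marginal version on synthetic paired readings, not the linear mixed-model formulation the tutorial describes for time-matched repeated measures; the device names and parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired readings: two hypothetical devices measuring
# respiratory rate on the same 200 subjects (illustrative data only).
true_rate = rng.normal(18, 3, size=200)
device_a = true_rate + rng.normal(0.0, 1.0, size=200)
device_b = true_rate + rng.normal(0.5, 1.2, size=200)  # small systematic offset

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient for paired readings."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

def limits_of_agreement(x, y):
    """Bland-Altman 95% limits of agreement for the paired differences."""
    d = x - y
    return d.mean() - 1.96 * d.std(), d.mean() + 1.96 * d.std()

ccc = concordance_correlation(device_a, device_b)
lo, hi = limits_of_agreement(device_a, device_b)
print(f"CCC = {ccc:.3f}, limits of agreement = ({lo:.2f}, {hi:.2f})")
```

A CCC near 1 and narrow limits of agreement both point toward the devices agreeing; the two indices emphasise different aspects of the comparison, which is the tutorial's motivation for using several methods side by side.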
Selecting a sample size for studies with repeated measures
by Logan, Henrietta L; Glueck, Deborah H; Muller, Keith E
in Analysis; Analysis of Variance; Computer simulation
2013
Many researchers favor repeated measures designs because they allow the detection of within-person change over time and typically have higher statistical power than cross-sectional designs. However, the plethora of inputs needed for repeated measures designs can make sample size selection, a critical step in designing a successful study, difficult. Using a dental pain study as a driving example, we provide guidance for selecting an appropriate sample size for testing a time by treatment interaction for studies with repeated measures. We describe how to (1) gather the required inputs for the sample size calculation, (2) choose appropriate software to perform the calculation, and (3) address practical considerations such as missing data, multiple aims, and continuous covariates.
Journal Article
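The sample-size question in the abstract above (power for a time-by-treatment interaction) can also be approached by simulation. The sketch below is a generic illustration, not the authors' method: it analyses each simulated subject's least-squares slope over time and compares slopes between arms with a two-sample t-test; the effect size and variance values are arbitrary placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def interaction_power(n_per_arm, n_times=4, slope_diff=0.5,
                      sd_subject=1.0, sd_noise=1.0, n_sims=1000, alpha=0.05):
    """Estimate power to detect a time-by-treatment interaction by simulation:
    fit a per-subject slope against time, then compare slopes between arms."""
    times = np.arange(n_times, dtype=float)

    def simulate_arm(slope):
        intercepts = rng.normal(0, sd_subject, size=(n_per_arm, 1))
        noise = rng.normal(0, sd_noise, size=(n_per_arm, n_times))
        y = intercepts + slope * times + noise
        # np.polyfit with 2-D y fits one line per column; row 0 holds slopes
        return np.polyfit(times, y.T, 1)[0]

    hits = 0
    for _ in range(n_sims):
        _, p = stats.ttest_ind(simulate_arm(0.0), simulate_arm(slope_diff))
        hits += p < alpha
    return hits / n_sims

for n in (10, 20, 40):
    print(n, "per arm:", interaction_power(n))
```

Increasing `n_per_arm` until the estimated power crosses the target (e.g. 0.80) gives a sample size; dedicated tools the abstract alludes to can handle missing data and covariates more rigorously than this sketch.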
Systematic heterogenisation to improve reproducibility in animal studies
by Suman, Patrick Remus; Lino de Oliveira, Cilene
in Animal Experimentation; Animal research; Animals
2022
A recent study published in PLOS Biology investigated whether the systematic use of multiple experimenters boosts the reproducibility of behavioural assays in mice. These findings open up prospects for solutions to reproducibility issues in animal research.
Journal Article
Better methods can’t make up for mediocre theory
2019
[...]good theory must make sense, or at least acknowledge its contradictions. (The general consensus is that these studies did not establish the presence of extrasensory perception in college students, but the prevalence of overly flexible statistics; Bem defends the statistics as sound.) The work flouted well-supported ideas about physics and causality. Because the researchers required their results to be consistent with a broad theoretical framework, they probed deeper and discovered that their finding stemmed from a loose fibre-optic cable.
Journal Article
Reporting and analysis of repeated measurements in preclinical animal experiments
by Totton, Sarah C.; Cullen, Jonah N.; O’Connor, Annette M.
in Analysis; Animal experimentation; Animal Experimentation - standards
2019
A common feature of preclinical animal experiments is repeated measurement of the outcome, e.g., body weight measured in mice pups weekly for 20 weeks. Separate time-point analysis or repeated-measures analysis approaches can be used to analyze such data. Each approach requires assumptions about the underlying data, and violations of these assumptions have implications for the precision of estimates and for type I and type II error rates. Given the ethical responsibility to maximize valid results obtained from animals used in research, our objective was to evaluate how investigators report repeated measures designs and to assess how assumptions about variation in the outcome over time impact type I and II error rates and the precision of estimates. We assessed the reporting of repeated measures designs in 58 preclinical animal experiments. We used simulation modelling to evaluate three approaches to the statistical analysis of repeated measurement data: (a) repeated-measures analysis assuming the outcome has non-constant variation across time points (heterogeneous variance), (b) repeated-measures analysis assuming constant variation in the outcome (homogeneous variance), and (c) separate ANOVA at individual time points. The three model fits were evaluated by comparing p-value distributions, type I and type II error rates, and, by implication, the shrinkage or inflation of standard error estimates across 1000 simulated datasets. Of the 58 studies with repeated measures designs, three provided a rationale for repeated measurement and 23 reported using a repeated-measures analysis approach. Of the 35 studies that did not use repeated-measures analysis, fourteen used only two time points to calculate weight change, which potentially means the collected data were not fully utilized.
Other studies reported only selected time points (n = 12), raising the issue of selective reporting. Simulation studies showed that an incorrect assumption about the variance structure modified error rates and precision estimates. The reporting of the validity of assumptions for repeated measurement data is very poor. The homogeneous variance assumption, which is often invalid for body weight measurements, should be confirmed before conducting a repeated-measures analysis with a homogeneous covariance structure, and the analysis should be adjusted using corrections or model specifications if the assumption is not met.
Journal Article
Methods matter in repeating ocean acidification studies
by Paolo Domenici; Douglas P. Chivers; Megan J. Welch
in Carbon Dioxide
2020
Journal Article
A controlled trial for reproducibility
by Sheehan, Paul E.; Vora, Gary J.; Raphael, Marc P.
in Bioengineering
2020
For three years, part of DARPA has funded two teams for each project: one for research and one for reproducibility. The investment is paying off.
Journal Article
Modelling of longitudinal data to predict cardiovascular disease risk: a methodological review
by Stevens, David; Harrison, Stephanie L.; Lane, Deirdre A.
in Analysis; Blood pressure; Cardiovascular disease
2021
Objective
The identification of methodology for modelling cardiovascular disease (CVD) risk using longitudinal data and risk factor trajectories.
Methods
We screened MEDLINE-Ovid from inception until 3 June 2020. MeSH and text search terms covered three areas: data type, modelling type and disease area including search terms such as “longitudinal”, “trajector*” and “cardiovasc*” respectively. Studies were filtered to meet the following inclusion criteria: longitudinal individual patient data in adult patients with ≥3 time-points and a CVD or mortality outcome. Studies were screened and analyzed by one author. Any queries were discussed with the other authors. Comparisons were made between the methods identified looking at assumptions, flexibility and software availability.
Results
From the initial 2601 studies returned by the searches, 80 studies were included. Four statistical approaches were identified for modelling the longitudinal data: 3 (4%) studies compared time points with simple statistical tests, 40 (50%) used single-stage approaches, such as including single time points or summary measures in survival models, 29 (36%) used two-stage approaches including an estimated longitudinal parameter in survival models, and 8 (10%) used joint models which modelled the longitudinal and survival data together. The proportion of CVD risk prediction models created from longitudinal data using two-stage and joint models increased over time.
Conclusions
Single-stage models are still heavily used by many CVD risk prediction studies for modelling longitudinal data. Future studies should employ two-stage or joint approaches, which can often make fuller use of the available longitudinal data when analyzing CVD risk.
Journal Article
Single time point comparisons in longitudinal randomized controlled trials: power and bias in the presence of missing data
2016
Background
The primary analysis in a longitudinal randomized controlled trial is sometimes a comparison of arms at a single time point. While a two-sample t-test is often used, missing data are common in longitudinal studies and decrease power by reducing the sample size. Mixed models for repeated measures (MMRM) can test treatment effects at specific time points, have been shown to give unbiased estimates in certain missing-data contexts, and may be more powerful than a two-sample t-test.
Methods
We conducted a simulation study to compare the performance of a complete-case t-test to an MMRM in terms of power and bias under different missing data mechanisms. The impact of within- and between-person variance, dropout mechanism, and variance-covariance structure was considered.
Results
While both the complete-case t-test and the MMRM provided unbiased estimation of treatment differences when data were missing completely at random, the MMRM yielded an absolute power gain of up to 12%. The MMRM provided up to 25% absolute increased power over the t-test when data were missing at random, as well as unbiased estimation.
Conclusions
Investigators interested in single time point comparisons should use an MMRM with a contrast, rather than a complete-case two-sample t-test, to gain power and unbiased estimation of treatment effects.
Journal Article
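The background claim in the last abstract, that missing data decrease the power of a complete-case t-test by shrinking the sample, is easy to see by simulation. The sketch below shows only that power loss under completely-at-random dropout; it does not implement an MMRM, which would require a mixed-model fit. The effect size, variances, and dropout rates are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power_complete_case(n_per_arm, dropout, effect=0.5, n_sims=2000, alpha=0.05):
    """Power of a complete-case two-sample t-test at a single time point
    when a fraction of subjects drops out completely at random (MCAR)."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_arm)       # control arm
        b = rng.normal(effect, 1.0, n_per_arm)    # treatment arm
        keep_a = rng.random(n_per_arm) > dropout  # MCAR retention masks
        keep_b = rng.random(n_per_arm) > dropout
        _, p = stats.ttest_ind(a[keep_a], b[keep_b])
        hits += p < alpha
    return hits / n_sims

print("no dropout: ", power_complete_case(50, 0.0))
print("30% dropout:", power_complete_case(50, 0.3))
```

An MMRM recovers part of this lost power by borrowing information from subjects' earlier measurements, which is the abstract's reason for preferring it over complete-case analysis.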