41 results for "Moerbeek, Mirjam"
Optimal allocation of clusters in stepped wedge designs with a decaying correlation structure
The cluster randomized stepped wedge design is a multi-period uni-directional switch design in which all clusters start in the control condition and, at the beginning of each new period, a random sample of clusters crosses over to the intervention condition. Such designs often use uniform allocation, with an equal number of clusters at each treatment switch. However, the uniform allocation is not necessarily the most efficient. This study derives the optimal allocation of clusters to treatment sequences in the cluster randomized stepped wedge design, for both cohort and cross-sectional designs. The correlation structure is exponential decay, meaning the correlation decreases with the time lag between two measurements. The optimal allocation is shown to depend on the intraclass correlation coefficient, the number of subjects per cluster-period, and the cluster and (in the case of a cohort design) individual autocorrelation coefficients. For small to medium values of these autocorrelations, the sequences that have their treatment switch earlier or later in the study are allocated a larger proportion of clusters than those that have their treatment switch halfway through the study. When the autocorrelation coefficients increase, the clusters become more equally distributed across the treatment sequences. For the cohort design, the optimal allocation is almost equal to the uniform allocation when both autocorrelations approach the value 1. For almost all scenarios that were studied, the efficiency of the uniform allocation is 0.8 or higher. R code to derive the optimal allocation is available online.
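A minimal sketch of the exponential decay correlation structure described above, assuming illustrative values for the intraclass correlation and the cluster autocorrelation (this is not the paper's published R code): two subjects in the same cluster are correlated at the intraclass correlation coefficient when measured in the same period, and that correlation is damped by a factor r for every period of lag between their measurements.

    # Correlation between two different subjects in the same cluster as a
    # function of the lag between their measurement periods (assumed values).
    exp_decay_corr <- function(n_periods, icc, r) {
      lag <- abs(outer(1:n_periods, 1:n_periods, "-"))  # lag between period pairs
      icc * r^lag                                       # decays as the lag grows
    }
    exp_decay_corr(n_periods = 5, icc = 0.05, r = 0.8)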
Optimal placebo-treatment comparisons in trials with an incomplete within-subject design and heterogeneous costs and variances
The aim of a clinical trial is to compare a placebo to one or more treatments. The within-subject design is known to be more efficient than the between-subject design. However, in some trials that implement a within-subject design it is not possible to evaluate the placebo and all treatments within each subject. The design then becomes an incomplete within-subject design. An important question is how many subjects should be allocated to each combination of placebo and treatments. This paper studies optimal allocations of subjects in trials with a placebo and two treatments under heterogeneous costs and variances. Two optimality criteria that consider the placebo-treatment contrasts simultaneously are considered, and the design is derived under a budgetary constraint. More subjects are allocated to those combinations with higher variances and lower costs. The optimal allocation is compared to the uniform allocation, which allocates an equal number of subjects to each placebo and treatment combination, and to the complete within-subject design, where the placebo and all treatments are evaluated within each subject. The methodology is illustrated on the basis of an example on consultation time in primary care. A Shiny app is available to facilitate use of the methodology.
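The qualitative rule above (more subjects where variances are high and costs are low) can be sketched with a Neyman-type allocation under a budget constraint. This is a simplified illustration with assumed numbers, not the paper's criteria, which consider the placebo-treatment contrasts simultaneously.

    # n_i proportional to sqrt(v_i / c_i) minimizes the sum of variances
    # v_i / n_i subject to the budget sum(c_i * n_i) = B (all values assumed).
    v    <- c(1.0, 1.5, 2.0)        # outcome variance per placebo/treatment combination
    cost <- c(10, 20, 40)           # cost per subject per combination
    B    <- 5000                    # total budget
    w    <- sqrt(v / cost)
    n    <- B * w / sum(w * cost)   # allocation that exactly spends the budget
    round(n)                        # subjects per combination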
Bayesian updating: increasing sample size during the course of a study
Background: A priori sample size calculation requires an a priori estimate of the size of the effect. An incorrect estimate may result in a sample size that is too low to detect effects or that is unnecessarily high. An alternative to a priori sample size calculation is Bayesian updating, a procedure that allows increasing the sample size during the course of a study until sufficient support for a hypothesis is achieved. This procedure does not require an a priori estimate of the effect size. This paper introduces Bayesian updating to researchers in the biomedical field and presents a simulation study that gives insight into the sample sizes that may be expected for two-group comparisons. Methods: Bayesian updating uses the Bayes factor, which quantifies the degree of support for one hypothesis versus another, given the data. It can be recalculated each time new subjects are added, without the need to correct for multiple interim analyses. A simulation study was conducted to study what sample size may be expected and how large the error rate is, that is, how often the Bayes factor shows most support for the hypothesis that was not used to generate the data. Results: The results of the simulation study are presented in a Shiny app and summarized in this paper. A lower sample size is expected when the effect size is larger and the required degree of support is lower. However, larger error rates may be observed when a low degree of support is required and/or when the sample size at the start of the study is small. Furthermore, it may occur that sufficient support for neither hypothesis is achieved when the sample size is bounded by a maximum. Conclusions: Bayesian updating is a useful alternative to a priori sample size calculation, especially in studies where additional subjects can be recruited easily and data become available in a limited amount of time. The results of the simulation study show how large a sample size can be expected and how large the error rate is.
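The updating loop itself is simple; the sketch below uses the BayesFactor package for a two-group comparison, adding subjects in batches until either hypothesis reaches a chosen degree of support. The effect size, threshold, and batch size are assumed values, and this is an illustration of the procedure rather than the paper's simulation code.

    library(BayesFactor)
    set.seed(1)
    threshold <- 10                              # required degree of support (assumed)
    batch     <- 10                              # subjects added per group per update (assumed)
    x <- rnorm(20); y <- rnorm(20, mean = 0.5)   # starting samples; true effect 0.5
    repeat {
      bf10 <- extractBF(ttestBF(x, y))$bf        # support for H1 relative to H0
      if (bf10 > threshold || 1 / bf10 > threshold) break  # no correction for interim looks needed
      x <- c(x, rnorm(batch)); y <- c(y, rnorm(batch, mean = 0.5))
    }
    c(n_per_group = length(x), BF10 = bf10)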
Optimal allocations for two treatment comparisons within the proportional odds cumulative logits model
This paper studies optimal treatment allocations for two treatment comparisons when the outcome is ordinal and analyzed by a proportional odds cumulative logits model. The variance of the treatment effect estimator is used as the optimality criterion. The optimal design is sought so that this variance is minimal for a given total sample size or a given budget, meaning that the power for the test on the treatment effect is maximal, or so that a required power level is achieved at a minimal total sample size or budget. Results are presented for three, five and seven ordered response categories; three treatment effect sizes; and a skewed, bell-shaped or polarized distribution of the response probabilities. The optimal proportion of subjects in the intervention condition decreases with the number of response categories and with the costs for the intervention relative to those for the control. The relation between the optimal proportion and the effect size depends on the distribution of the response probabilities. The widely used balanced design is not always the most efficient; its efficiency as compared to the optimal design decreases with an increasing cost ratio. The optimal design is highly robust to misspecification of the response probabilities and the treatment effect size. The optimal design methodology is illustrated using two pharmaceutical examples. A Shiny app is available to find the optimal treatment allocation, to evaluate the efficiency of the balanced design, and to study the relation between budget or sample size and power.
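For a concrete sense of the optimality criterion, the sketch below simulates an ordinal outcome under the proportional odds model and extracts the variance of the treatment effect estimator from a fit with MASS::polr. The allocation, response probabilities, and effect size are assumed values, not those from the paper.

    library(MASS)
    set.seed(1)
    n0 <- 60; n1 <- 40                        # assumed (unbalanced) allocation
    p0 <- c(0.5, 0.3, 0.2)                    # skewed control-arm response probabilities (assumed)
    treat  <- rep(c(0, 1), c(n0, n1))
    latent <- 0.5 * treat + rlogis(n0 + n1)   # treatment shifts the latent logit by 0.5
    y <- cut(latent, c(-Inf, qlogis(cumsum(p0)[1:2]), Inf), ordered_result = TRUE)
    fit <- polr(y ~ treat, Hess = TRUE)
    vcov(fit)["treat", "treat"]               # optimality criterion: variance of the effect estimator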
How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level
Background: The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on the power to detect treatment effects. Methods: The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. Results: The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since the required number of clusters to achieve a desired power level is then smallest. Conclusions: The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
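A single replication of the kind of simulation the abstract describes might look as follows; the degree of imbalance, effect sizes, cluster size, and intraclass correlation are assumed values, whereas the paper varies these factors systematically.

    library(lme4)
    set.seed(1)
    k <- 10; m <- 20; icc <- 0.05          # clusters per arm, cluster size, ICC (assumed)
    treat <- rep(c(0, 1), each = k)
    covar <- c(rep(1, 3), rep(0, 7),       # imbalanced binary cluster-level covariate:
               rep(1, 7), rep(0, 3))       # present in 3 control but 7 treated clusters
    d <- data.frame(cluster = rep(1:(2 * k), each = m),
                    treat   = rep(treat, each = m),
                    covar   = rep(covar, each = m))
    u   <- rnorm(2 * k, sd = sqrt(icc))    # cluster effects; total outcome variance 1
    d$y <- 0.3 * d$treat + 0.4 * d$covar + u[d$cluster] +
           rnorm(nrow(d), sd = sqrt(1 - icc))
    fixef(lmer(y ~ treat + (1 | cluster), data = d))          # unadjusted: biased treatment effect
    fixef(lmer(y ~ treat + covar + (1 | cluster), data = d))  # adjusted: approximately unbiased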
Bayesian sequential designs in studies with multilevel data
In many studies in the social and behavioral sciences, the data have a multilevel structure, with subjects nested within clusters. In the design phase of such a study, the number of clusters needed to achieve a desired power level has to be calculated. This requires a priori estimates of the effect size and the intraclass correlation coefficient. If these estimates are incorrect, the study may be under- or overpowered. This may be overcome by using a group-sequential design, where interim tests are done at various points during the study. Based on the interim test results, a decision is made to either include additional clusters or to reject the null hypothesis and conclude the study. This contribution introduces Bayesian sequential designs as an alternative to group-sequential designs. This approach compares hypotheses based on the support in the data for each of them. If neither hypothesis receives a sufficient degree of support, additional clusters are included in the study and the Bayes factor is recalculated. This procedure continues until one of the hypotheses receives sufficient support. This paper explains how the Bayes factor is used as a measure of support for a hypothesis and how a Bayesian sequential design is conducted. A simulation study in the setting of a two-group comparison was conducted to study the effects of the minimum and maximum number of clusters per group and the desired degree of support. It is concluded that Bayesian sequential designs are a flexible alternative to group-sequential designs.
The Design of Cluster Randomized Trials With Random Cross-Classifications
Data from cluster randomized trials do not always have a pure hierarchical structure. For instance, students are nested within schools that may be crossed by neighborhoods, and soldiers are nested within army units that may be crossed by mental health-care professionals. It is important that the random cross-classification is taken into account while planning a cluster randomized trial. This article presents sample size equations such that a desired power level is achieved for the test on the treatment effect. Furthermore, it also presents optimal sample sizes given a budgetary constraint, with a special focus on conditional optimal designs where one of the sample sizes is fixed beforehand. The optimal design methodology is illustrated using a post-deployment training to reduce ill health in armed forces personnel.
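A model with the random cross-classification the article designs for can be written with crossed random effects; the sketch below fits one in lme4 on simulated data with assumed variance components (not the armed-forces example).

    library(lme4)
    set.seed(1)
    n <- 400
    school <- sample(1:20, n, replace = TRUE)   # clusters the subjects are nested in
    neigh  <- sample(1:15, n, replace = TRUE)   # factor crossing the schools
    treat  <- as.numeric(school %% 2 == 0)      # treatment randomized at the school level
    u_s <- rnorm(20, sd = 0.5); u_n <- rnorm(15, sd = 0.5)
    y <- 0.3 * treat + u_s[school] + u_n[neigh] + rnorm(n)
    summary(lmer(y ~ treat + (1 | school) + (1 | neigh)))     # crossed random effects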
Optimal design of cluster randomized crossover trials with a continuous outcome: Optimal number of time periods and treatment switches under a fixed number of clusters or fixed budget
In the cluster randomized crossover trial, a sequence of treatment conditions, rather than just one treatment condition, is assigned to each cluster. This contribution studies the optimal number of time periods in studies with a treatment switch at the end of each time period, and the optimal number of treatment switches in a trial with a fixed number of time periods. This is done for trials with a fixed number of clusters, and for trials in which the costs per cluster, subject, and treatment switch are taken into account using a budgetary constraint. The focus is on trials with a cross-sectional design where a continuous outcome variable is measured at the end of each time period. An exponential decay correlation structure is used to model dependencies among subjects within the same cluster. A linear multilevel mixed model is used to estimate the treatment effect and its associated variance. The optimal design minimizes this variance. Matrix algebra is used to identify the optimal design and other highly efficient designs. For a fixed number of clusters, a design with the maximum number of time periods is optimal, and treatment switches should occur at each time period. However, when a budgetary constraint is taken into account, the optimal design may have fewer time periods and fewer treatment switches. A Shiny app was developed to facilitate the use of the methodology in this contribution.
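The matrix algebra referred to above can be sketched directly: build the covariance matrix of the cluster-period means under exponential decay, stack the design matrices of clusters assigned to opposite treatment sequences, and read off the generalized least squares variance of the treatment effect, the quantity the optimal design minimizes. All design values below are assumptions.

    n_per <- 4                                       # time periods (assumed)
    icc <- 0.05; r <- 0.8; m <- 15                   # ICC, decay rate, subjects per cluster-period (assumed)
    lag <- abs(outer(1:n_per, 1:n_per, "-"))
    Vc  <- icc * r^lag + diag((1 - icc) / m, n_per)  # covariance of one cluster's period means
    V   <- matrix(0, 2 * n_per, 2 * n_per)           # two independent clusters
    V[1:n_per, 1:n_per] <- Vc
    V[n_per + 1:n_per, n_per + 1:n_per] <- Vc
    per <- diag(n_per)[, -1]                         # period dummies
    X <- rbind(cbind(1, per, c(0, 1, 0, 1)),         # treatment sequence ABAB
               cbind(1, per, c(1, 0, 1, 0)))         # treatment sequence BABA
    M <- solve(t(X) %*% solve(V) %*% X)              # GLS covariance of the fixed effects
    M[ncol(X), ncol(X)]                              # variance of the treatment effect estimator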
The Design of Cluster Randomized Crossover Trials
The inefficiency induced by between-cluster variation in cluster randomized (CR) trials can be reduced by implementing a crossover (CO) design. In a simple CO trial, each subject receives each treatment in random order. A powerful characteristic of this design is that each subject serves as its own control. In a CR CO trial, clusters of subjects are randomly allocated to a sequence of interventions. Under this design, each subject is either included in only one of the treatment periods (CO at the cluster level) or in both periods (CO at the subject level). In this study, the efficiency of both types of CR CO trials relative to the CR trial without CO is demonstrated. Furthermore, the optimal allocation of clusters and subjects given a fixed budget or a desired power level is discussed.
Serial Order Effect in Divergent Thinking in Five- to Six-Year-Olds: Individual Differences as Related to Executive Functions
This study examined the unfolding in real time of original ideas during divergent thinking (DT) in five- to six-year-olds and related individual differences in DT to executive functions (EFs). The Alternative Uses Task was administered with verbal prompts that encouraged children to report on their thinking processes while generating uses for daily objects. In addition to coding the originality of each use, the domain-specific DT processes of memory retrieval and mental operations were coded from children's explanations. Six EF tasks were administered and combined into composites to measure working memory, shifting, inhibition, and selective attention. The results replicated findings of a previous study with the same children, then aged four years: (1) there was a serial order effect for the originality of uses; and (2) the process of mental operations predicted the originality of uses. Next, the results revealed that both domain-general EFs and domain-specific executive processes played a role in the real-time unfolding of original ideas during DT. In particular, the DT process of mental operations was positively related to the early generation of original ideas, while selective attention was negatively related to the later generation of original ideas. These findings deepen our understanding of how controlled executive processes operate during DT.