Catalogue Search | MBRL
9 result(s) for "Randomization-based test"
A roadmap to using randomization in clinical trials
2021
Background
Randomization is the foundation of any clinical trial involving treatment comparison. It helps mitigate selection bias, promotes similarity of treatment groups with respect to important known and unknown confounders, and contributes to the validity of statistical tests. Various restricted randomization procedures with different probabilistic structures and different statistical properties are available. The goal of this paper is to present a systematic roadmap for the choice and application of a restricted randomization procedure in a clinical trial.
Methods
We survey available restricted randomization procedures for sequential allocation of subjects in a randomized, comparative, parallel group clinical trial with equal (1:1) allocation. We explore the statistical properties of these procedures, including the balance/randomness tradeoff, type I error rate, and power. We perform head-to-head comparisons of different procedures through simulation under various experimental scenarios, including cases when common model assumptions are violated. We also provide some real-life clinical trial examples to illustrate the thinking process for selecting a randomization procedure for implementation in practice.
Results
Restricted randomization procedures targeting 1:1 allocation vary in the degree of balance/randomness they induce, and more importantly, they vary in terms of validity and efficiency of statistical inference when common model assumptions are violated (e.g. when outcomes are affected by a linear time trend, the measurement error distribution is misspecified, or selection bias is introduced into the experiment). Some procedures are more robust than others. Covariate-adjusted analysis may be essential to ensure validity of the results. Special considerations are required when selecting a randomization procedure for a clinical trial with a very small sample size.
Conclusions
The choices of randomization design, data analytic technique (parametric or nonparametric), and analysis strategy (randomization-based or population model-based) are all very important considerations. Randomization-based tests are robust and valid alternatives to likelihood-based tests and should be considered more frequently by clinical investigators.
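To make the conclusion concrete, here is a minimal sketch (not the paper's code) of a randomization-based test paired with one common restricted randomization procedure, permuted-block 1:1 allocation. The block size, sample size, and effect size are assumptions chosen for illustration; the key point is that the test re-randomizes using the same restricted procedure that produced the actual allocation.

```python
import numpy as np

rng = np.random.default_rng(0)

def permuted_block_allocation(n, block_size=4):
    """Permuted-block 1:1 randomization: shuffle balanced blocks."""
    assignments = []
    for _ in range(-(-n // block_size)):  # ceil(n / block_size) blocks
        block = np.array([0, 1] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return np.array(assignments[:n])

def randomization_p(y, t, n_rerand=10_000):
    """Monte Carlo randomization p-value, re-randomizing with the
    same restricted procedure used for the trial."""
    observed = y[t == 1].mean() - y[t == 0].mean()
    hits = 0
    for _ in range(n_rerand):
        t_star = permuted_block_allocation(len(y))
        stat = y[t_star == 1].mean() - y[t_star == 0].mean()
        hits += abs(stat) >= abs(observed)
    return hits / n_rerand

n = 50
t = permuted_block_allocation(n)
y = rng.normal(0.5 * t, 1.0)  # hypothetical outcomes, assumed effect 0.5
print("randomization p-value:", randomization_p(y, t))
```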
Journal Article
Latitudinal gradients in species richness in assemblages of sessile animals in rocky intertidal zone: mechanisms determining scale-dependent variability
by Yamamoto, Tomoko; Hori, Masakazu; Okuda, Takehiro
in additive diversity components, Aggregation, Animal and plant ecology
2009
1. Although latitudinal gradients in species richness within a region are observed in a range of taxa and habitats, little is known about the variability in their scale dependence or its causal processes. The scale-dependent variability of latitudinal gradients in species richness can be affected by latitudinal differences in (i) the regional relative abundance distribution and (ii) the degree of aggregated distribution (i.e., intraspecific aggregation and interspecific segregation; henceforth, the degree of aggregation) reflecting differences in ecological processes among regions; these explanations are not mutually exclusive.
2. In rocky intertidal sessile animal assemblages along Japan's Pacific coast (between 31°N and 43°N), the scale-dependent variability of the latitudinal gradient in species richness and its causal mechanisms were examined by explicitly incorporating three hierarchical spatial scales into the monitoring design: plots (50 × 100 cm), shores (78 to 235 m), and regions (16.7 to 42.5 km).
3. To evaluate latitudinal differences in the degree of aggregation, the degree of intraspecific aggregation at each spatial scale in each region was examined using the standardized Morishita index. Furthermore, the observed species richness was compared with the species richness expected under random sampling from the regional species pool using randomization tests.
4. Latitudinal gradients in species richness were observed at all spatial scales, but the gradients became steadily more moderate with decreasing spatial scale. The slope of the relative abundance distribution decreased with decreasing latitude.
5. Tests of an index of intraspecific aggregation and randomization tests indicated that although species richness at smaller scales differed significantly from the species richness expected under a random distribution, the degree of aggregation did not vary with latitude. Although some ecological processes (possibly species sorting) may have played a role in determining species richness at small spatial scales, the importance of these processes did not vary with latitude.
6. Thus, scale-dependent variability in the latitudinal gradient of species richness appears to be explained mainly by latitudinal differences in the regional relative abundance distribution, through the statistical constraint imposed by decreasing grain size.
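As a hedged illustration of the randomization test described in point 3, the sketch below compares a plot's observed species richness with the richness expected when the same number of individuals is drawn at random from the regional species pool. The pool abundances, plot contents, and sample sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regional pool: species and their regional abundances.
regional_abundance = {"sp1": 500, "sp2": 200, "sp3": 50, "sp4": 10, "sp5": 5}
species = list(regional_abundance)
weights = np.array([regional_abundance[s] for s in species], dtype=float)
weights /= weights.sum()

def null_richness(n_individuals, n_draws=5_000):
    """Richness of random samples of n individuals from the pool."""
    richness = np.empty(n_draws)
    for i in range(n_draws):
        draw = rng.choice(len(species), size=n_individuals, p=weights)
        richness[i] = len(np.unique(draw))
    return richness

observed_richness = 2   # hypothetical plot containing 2 species
plot_size = 100         # hypothetical number of individuals in the plot
null = null_richness(plot_size)
p = (np.sum(null <= observed_richness) + 1) / (len(null) + 1)
print(f"expected {null.mean():.2f} species; one-sided p = {p:.4f}")
```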
Journal Article
Do Spatial Designs Outperform Classic Experimental Designs?
by Bhatta, Madhav; Hoefler, Raegan; Covarrubias, Eduardo
in accuracy, Agricultural land, Agriculture
2020
Controlling spatial variation in agricultural field trials is the most important step toward comparing treatments efficiently and accurately. Spatial variability can be controlled at the experimental design level, through the assignment of treatments to experimental units, and at the modeling level, through spatial corrections and other modeling strategies. The goal of this study was to compare the efficiency of methods used to control spatial variation in a wide range of scenarios using a simulation approach based on real wheat data. Specifically, classic and spatial experimental designs, with and without a two-dimensional autoregressive (AR1 × AR1) spatial correction, were evaluated in scenarios with differing experimental unit sizes, experiment sizes, relationships among genotypes, genotype-by-environment (GE) interaction levels, and trait heritabilities. Fully replicated designs outperformed partially replicated and unreplicated designs in terms of accuracy; the alpha-lattice incomplete block design was best in all scenarios of the medium-sized experiments. However, in terms of response to selection, partially replicated experiments that evaluate large population sizes were superior in most scenarios. The AR1 × AR1 spatial correction had little benefit in most scenarios, except for medium-sized experiments with the largest experimental unit size and low GE interaction. Overall, the results of this study provide a guide for researchers designing and analyzing large field experiments.
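A brief sketch of the error structure the spatial correction targets may help: under a separable AR1 × AR1 model, the correlation between plot errors decays geometrically along field rows and columns. The grid dimensions and autocorrelation parameters below are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_corr(n, rho):
    """n x n AR1 correlation matrix: corr(i, j) = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

n_rows, n_cols = 10, 8       # hypothetical field layout
rho_row, rho_col = 0.6, 0.4  # assumed autocorrelation parameters

# Separable AR1 x AR1 covariance via a Kronecker product, then one
# draw of spatially correlated plot errors for the whole field.
sigma = np.kron(ar1_corr(n_rows, rho_row), ar1_corr(n_cols, rho_col))
errors = np.linalg.cholesky(sigma) @ rng.standard_normal(n_rows * n_cols)
field = errors.reshape(n_rows, n_cols)
print(field.round(2))
```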
Journal Article
A Closer Look at Testing the "No-Treatment-Effect" Hypothesis in a Comparative Experiment
2015
Standard tests of the "no-treatment-effect" hypothesis for a comparative experiment include permutation tests, the Wilcoxon rank sum test, two-sample t tests, and Fisher-type randomization tests. Practitioners are aware that these procedures test different no-effect hypotheses and are based on different modeling assumptions. However, this awareness is not always, or even usually, accompanied by a clear understanding or appreciation of these differences. Borrowing from the rich literatures on causality and finite-population sampling theory, this paper develops a modeling framework that affords answers to several important questions, including: exactly what hypothesis is being tested, what model assumptions are being made, and are there other, perhaps better, approaches to testing a no-effect hypothesis? The framework lends itself to clear descriptions of three main inference approaches: process-based, randomization-based, and selection-based. It also promotes careful consideration of model assumptions and targets of inference, and highlights the importance of randomization. Along the way, Fisher-type randomization tests are compared to permutation tests and a less well-known Neyman-type randomization test. A simulation study compares the operating characteristics of the Neyman-type randomization test to those of the other more familiar tests.
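As a rough illustration of two of the procedures being compared, the sketch below applies a randomization (permutation) test of the mean difference and the classical two-sample t test to the same invented data; under complete randomization the Fisher-type re-randomization distribution coincides with the permutation distribution used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented data: hypothetical treated and control outcomes.
y_treat = rng.normal(1.0, 1.0, size=15)
y_ctrl = rng.normal(0.0, 1.0, size=15)
y = np.concatenate([y_treat, y_ctrl])
labels = np.array([1] * 15 + [0] * 15)

# Randomization test: re-randomize labels, compare mean differences.
observed = y[labels == 1].mean() - y[labels == 0].mean()
hits = 0
for _ in range(10_000):
    perm = rng.permutation(labels)
    stat = y[perm == 1].mean() - y[perm == 0].mean()
    hits += abs(stat) >= abs(observed)

print("randomization p:", hits / 10_000)
print("t-test p:", stats.ttest_ind(y_treat, y_ctrl).pvalue)
```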
Journal Article
Evaluating the Validity of Post-Hoc Subgroup Inferences: A Case Study
2016
In randomized experiments, the random assignment of units to treatment groups justifies many of the widely used traditional analysis methods for evaluating causal effects. Specifying subgroups of units for further examination after observing outcomes, however, may partially nullify any advantages of randomized assignment when data are analyzed naively. Some previous statistical literature has treated all post-hoc subgroup analyses homogeneously as entirely invalid and thus uninterpretable. The extent of the validity of such analyses and the factors that affect the degree of validity remain largely unstudied. Here, we describe a recent pharmaceutical case with First Amendment legal implications, in which post-hoc subgroup analyses played a pivotal and controversial role. Through Monte Carlo simulation, we show that post-hoc results that seem highly significant make dramatic movements toward insignificance after accounting for the subgrouping procedure presumably used. Finally, we propose a novel, randomization-based method that generates valid post-hoc subgroup p-values, provided we know exactly how the subgroups were constructed. If we do not know the exact subgrouping procedure, our method may still place helpful bounds on the significance level of estimated effects. This randomization-based approach allows us to evaluate causal effects in situations where valid evaluations were previously considered impossible.
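The paper's exact method requires knowing how the subgroups were constructed; the sketch below illustrates the general idea under an assumed selection rule ("report the candidate subgroup with the largest effect"): the naive subgroup statistic is re-calibrated by repeating the entire selection step over many re-randomizations of the treatment labels. All data and the selection rule are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100
subgroup = rng.integers(0, 4, size=n)     # 4 pre-defined candidate subgroups
t = rng.permutation([1] * 50 + [0] * 50)  # 1:1 complete randomization
y = rng.normal(0.0, 1.0, size=n)          # null data: no true effect anywhere

def best_subgroup_effect(y, t, subgroup):
    """Largest absolute treatment effect over the candidate subgroups."""
    effects = []
    for g in np.unique(subgroup):
        m = subgroup == g
        effects.append(abs(y[m & (t == 1)].mean() - y[m & (t == 0)].mean()))
    return max(effects)

# Re-run the *whole* subgrouping procedure under each re-randomization.
observed = best_subgroup_effect(y, t, subgroup)
hits = sum(
    best_subgroup_effect(y, rng.permutation(t), subgroup) >= observed
    for _ in range(5_000)
)
print("selection-adjusted p-value:", hits / 5_000)
```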
Journal Article
Inference for Response-Adaptive Randomization
by Rosenberger, William F; Lachin, John M
in linear rank tests, maximum likelihood estimators, population‐based inference
2015, 2016
Inference for response‐adaptive randomization is very complicated because both the treatment assignments and responses are correlated. This leads to nonstandard problems and new insights into conditioning. This chapter first examines likelihood‐based inference and then randomization‐based inference. More details on the theory of likelihood‐based inference for response‐adaptive randomization can be found in Hu and Rosenberger (2006). Response‐adaptive randomization induces additional correlation among the responses, and this leads to an increase in the variance of the test statistic. This increased variance contributes to a decrease in power for standard tests based on a population model. The chapter explores the power of response‐adaptive randomization procedures. As with restricted randomization procedures, randomization‐based inference can be performed following a response‐adaptive randomization procedure using the family of linear rank tests.
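As a hedged example of response-adaptive randomization (one classic rule, the randomized play-the-winner urn, which is not necessarily the procedure emphasized in this chapter), the sketch below shows how treatment assignments come to depend on earlier responses, which is the source of the correlation discussed above. The success probabilities are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

p_success = {0: 0.4, 1: 0.7}  # hypothetical true success probabilities
urn = [0, 1]                  # start with one ball per treatment

assignments, responses = [], []
for _ in range(100):
    arm = urn[rng.integers(len(urn))]  # draw a ball to assign a treatment
    success = rng.random() < p_success[arm]
    assignments.append(arm)
    responses.append(success)
    # A success adds a ball for the same arm; a failure adds one for the
    # other arm, so later assignments depend on earlier responses.
    urn.append(arm if success else 1 - arm)

print("share assigned to better arm:", np.mean(np.array(assignments) == 1))
```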
Book Chapter
Randomization as a Basis for Inference
by Rosenberger, William F; Lachin, John M
in conditional tests, group sequential monitoring, Monte Carlo re‐randomization
2015, 2016
This chapter explores the differences between the randomization model and the population model. In so doing, it develops the principles of randomization‐based inference using randomization tests, originally proposed in the early part of the last century by Fisher (1935). The chapter compares methods for the computation of randomization tests for a test of the simple treatment effect. Then, it describes how to compute unconditional and conditional tests using Monte Carlo re‐randomization. The techniques can be implemented in SAS or R and run very quickly. The chapter tabulates the error rates for the randomization test and the t‐test when responses are assumed to be normally distributed, for n = 50. Finally, the chapter describes a group sequential monitoring strategy for monitoring randomization tests.
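The chapter notes the techniques can be implemented in SAS or R; as a language-neutral illustration, here is a minimal Python sketch of an unconditional Monte Carlo re-randomization test and an estimate of its type I error rate for n = 50 normally distributed responses. The numbers of re-randomizations and simulated trials are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def rerandomization_p(y, t, n_rerand=1_000):
    """Unconditional Monte Carlo re-randomization p-value (complete 1:1)."""
    observed = y[t == 1].mean() - y[t == 0].mean()
    hits = 0
    for _ in range(n_rerand):
        t_star = rng.permutation(t)
        stat = y[t_star == 1].mean() - y[t_star == 0].mean()
        hits += abs(stat) >= abs(observed)
    return hits / n_rerand

# Estimate the type I error rate under the null for n = 50.
n, alpha, n_trials = 50, 0.05, 200
rejections = 0
for _ in range(n_trials):
    t = rng.permutation([1] * (n // 2) + [0] * (n // 2))
    y = rng.standard_normal(n)  # null: no treatment effect
    rejections += rerandomization_p(y, t) <= alpha
print("estimated type I error:", rejections / n_trials)
```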
Book Chapter
Problems and Deficiencies in Academic Research – Orientology Shows the Way
Since 2006, the number of papers received and published at the All India Oriental Conference (AIOC) has increased by 30 to 40 percent each year. Observations by the editors of the 44th AIOC, held at Tirupati in 2008, point to a new trend of plagiarism. Research institutions and universities are likely to face this menace seriously in the years to come. As a preventive measure, the UGC has recommended software to curb the plagiarism noticed in research papers. In view of this, there is a need to improve the quality of research not only in Indian-language studies but also in the humanities and social sciences, including commerce and management. Unintentional plagiarism arises from new researchers' ignorance or unawareness of the research process; they are too eager to produce or reproduce something very quickly. To improve the quality of research and make it useful, there is a great need for interaction and exchange between researchers across disciplines, including languages, the social sciences, technology, commerce, and management. The problem of plagiarism becomes a minor one when considered against the greater problem of improving the quality of research.
Journal Article
Single Case Quantitative Methods for Practice‐Based Evidence
by Morley, Stephen; McMillan, Dean
in AB design, most basic approach to single case research and withdrawal design; alternating treatment design (ATD) ‐ comparing two or more treatments in same person; clinicians, clients and services ‐ needing data both rigorous and relevant
2010
This chapter contains sections titled:
Introduction
The relevance of single case designs for practice‐based evidence
The methodology of the single case design: the research ideal
Problems in applying the research ideal to standard clinical practice
Modifying the research ideal for practice‐based evidence
Conclusion
Acknowledgements
References
Book Chapter