1,768 result(s) for "Sequential design"
Local Gaussian Process Approximation for Large Computer Experiments
We provide a new approach to approximate emulation of large computer experiments. By focusing expressly on desirable properties of the predictive equations, we derive a family of local sequential design schemes that dynamically define the support of a Gaussian process predictor based on a local subset of the data. We further derive expressions for fast sequential updating of all needed quantities as the local designs are built up iteratively. Then we show how independent application of our local design strategy across the elements of a vast predictive grid facilitates a trivially parallel implementation. The end result is a global predictor able to take advantage of modern multicore architectures, providing a nonstationary modeling feature as a bonus. We demonstrate our method on two examples using designs with thousands of data points, and compare to the method of compactly supported covariances. Supplementary materials for this article are available online.
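The core idea — predicting from a GP fit only to a local subset of the data, independently at each prediction site — can be sketched as follows. This is an illustrative nearest-neighbour simplification, not the authors' greedy criterion-based local design; the kernel, lengthscale and subset size are hypothetical choices.

```python
import numpy as np

def gp_predict_local(X, y, xstar, n_local=20, lengthscale=0.2, jitter=1e-6):
    """Predict at xstar with a GP fit only to the n_local nearest points.

    Illustrative only: the paper builds local designs greedily by a
    predictive criterion, not by plain nearest neighbours.
    """
    d = np.linalg.norm(X - xstar, axis=1)
    idx = np.argsort(d)[:n_local]           # local subset of the data
    Xl, yl = X[idx], y[idx]

    def k(A, B):                            # squared-exponential kernel
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * lengthscale ** 2))

    K = k(Xl, Xl) + jitter * np.eye(n_local)
    ks = k(Xl, xstar[None, :])
    mean = (ks.T @ np.linalg.solve(K, yl)).item()
    var = (1.0 - ks.T @ np.linalg.solve(K, ks)).item()
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = np.sin(4 * X[:, 0]) * np.cos(4 * X[:, 1])   # toy "simulator"
m, v = gp_predict_local(X, y, np.array([0.5, 0.5]))
```

Because each prediction site uses its own small linear solve, sites can be processed independently — which is exactly what makes the approach trivially parallel across a large predictive grid.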
Bayesian group sequential designs for phase III emergency medicine trials: a case study using the PARAMEDIC2 trial
Background Phase III trials often require large sample sizes, leading to high costs and delays in clinical decision-making. Group sequential designs can improve trial efficiency by allowing for early stopping for efficacy and/or futility and thus may decrease the sample size, trial duration and associated costs. Bayesian approaches may offer additional benefits by incorporating previous information into the analyses and using decision criteria that are more practically relevant than those used in frequentist approaches. Frequentist group sequential designs have often been used for phase III studies, but the use of Bayesian group sequential designs is less common. The aim of this work was to explore how Bayesian group sequential designs could be constructed for phase III trials conducted in emergency medicine. Methods The PARAMEDIC2 trial was a phase III randomised controlled trial that compared adrenaline with placebo for 30-day survival in out-of-hospital cardiac arrest patients. It used a frequentist group sequential design to allow early stopping for efficacy or harm. We constructed several alternative Bayesian group sequential designs and studied their operating characteristics via simulation. We then virtually re-executed the trial by applying the Bayesian designs to the PARAMEDIC2 data to demonstrate what might have happened if these designs had been used in practice. Results We produced three alternative Bayesian group sequential designs, each of which had greater than 90% power to detect the target treatment effect. A Bayesian design which performed interim analyses every 500 patients recruited produced the lowest average sample size. Using the alternative designs, the PARAMEDIC2 trial could have declared adrenaline superior for 30-day survival with approximately 1500 fewer patients.
Conclusions Using the PARAMEDIC2 trial as a case study, we demonstrated how Bayesian group sequential designs can be constructed for phase III emergency medicine trials. The Bayesian framework enabled us to obtain efficient designs using decision criteria based on the probability of benefit or harm. It also enabled us to incorporate information from previous studies on the treatment effect via the prior distributions. We recommend the wider use of Bayesian approaches in phase III clinical trials. Trial registration PARAMEDIC2 Trial registration ISRCTN, ISRCTN73485024. Registered 13 March 2014, http://www.isrctn.com/ISRCTN73485024
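The simulation approach the abstract describes — interim looks at fixed recruitment intervals, stopping when the posterior probability of benefit crosses a threshold — can be sketched minimally as below. The event rates (0.20 vs 0.30), the 99% posterior threshold and the Beta(1, 1) priors are hypothetical illustration values, not PARAMEDIC2's design parameters.

```python
import numpy as np

def simulate_trial(p_ctrl, p_trt, look_every=500, max_n=8000,
                   stop_eff=0.99, seed=0):
    """One simulated two-arm trial with Bayesian interim looks.

    Stops for efficacy when P(p_trt > p_ctrl | data) > stop_eff,
    estimated by Monte Carlo from independent Beta(1, 1) posteriors.
    """
    rng = np.random.default_rng(seed)
    n_c = n_t = s_c = s_t = 0
    while n_c + n_t < max_n:
        m = look_every // 2                 # patients per arm this stage
        s_c += rng.binomial(m, p_ctrl)
        n_c += m
        s_t += rng.binomial(m, p_trt)
        n_t += m
        draws_c = rng.beta(1 + s_c, 1 + n_c - s_c, 10_000)
        draws_t = rng.beta(1 + s_t, 1 + n_t - s_t, 10_000)
        if (draws_t > draws_c).mean() > stop_eff:
            return n_c + n_t, True          # early stop for efficacy
    return n_c + n_t, False

n_used, stopped = simulate_trial(0.20, 0.30)
```

Running `simulate_trial` many times over a grid of true effects is how operating characteristics (power, average sample size, type I error) are estimated for such a design.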
Comparison of Bayesian and frequentist group-sequential clinical trial designs
Background There is a growing interest in the use of Bayesian adaptive designs in late-phase clinical trials. This includes the use of stopping rules based on Bayesian analyses in which the frequentist type I error rate is controlled as in frequentist group-sequential designs. Methods This paper presents a practical comparison of Bayesian and frequentist group-sequential tests. Focussing on the setting in which data can be summarised by normally distributed test statistics, we evaluate and compare boundary values and operating characteristics. Results Although Bayesian and frequentist group-sequential approaches are based on fundamentally different paradigms, in a single arm trial or two-arm comparative trial with a prior distribution specified for the treatment difference, Bayesian and frequentist group-sequential tests can have identical stopping rules if particular critical values with which the posterior probability is compared or particular spending function values are chosen. If the Bayesian critical values at different looks are restricted to be equal, O’Brien and Fleming’s design corresponds to a Bayesian design with an exceptionally informative negative prior, Pocock’s design to a Bayesian design with a non-informative prior and frequentist designs with a linear alpha spending function are very similar to Bayesian designs with slightly informative priors. This contrasts with the setting of a comparative trial with independent prior distributions specified for treatment effects in different groups. In this case Bayesian and frequentist group-sequential tests cannot have the same stopping rule as the Bayesian stopping rule depends on the observed means in the two groups and not just on their difference. In this setting the Bayesian test can only be guaranteed to control the type I error for a specified range of values of the control group treatment effect.
Conclusions Comparison of frequentist and Bayesian designs can encourage careful thought about design parameters and help to ensure appropriate design choices are made.
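The equivalence the abstract points to is easy to verify numerically in the simplest case. With a flat (non-informative) prior on the standardised treatment difference and a normal test statistic, the posterior probability of a positive effect given statistic z is Φ(z), so the Bayesian rule "stop when P(δ > 0 | data) > p" and the frequentist rule "stop when z > Φ⁻¹(p)" pick out the same z values. This is a sketch of that one-look equivalence, not the paper's full multi-look boundary comparison.

```python
from scipy.stats import norm

# Flat prior on delta, z | delta ~ N(delta, 1)  =>  delta | z ~ N(z, 1),
# so P(delta > 0 | z) = Phi(z).
def bayes_stop(z, p=0.975):
    """Bayesian rule: stop when posterior P(delta > 0) exceeds p."""
    return norm.cdf(z) > p

def freq_stop(z, p=0.975):
    """Frequentist rule: stop when z exceeds the p-quantile boundary."""
    return z > norm.ppf(p)

# the two rules agree for every test statistic value
for z in [-1.0, 0.0, 1.95, 1.96, 2.5]:
    assert bayes_stop(z) == freq_stop(z)
```

With informative priors the posterior mean is shrunk toward the prior, which is what shifts the implied Bayesian boundaries relative to Pocock or O'Brien–Fleming boundaries as the abstract describes.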
A supermartingale approach to Gaussian process based sequential design of experiments
Gaussian process (GP) models have become a well-established framework for the adaptive design of costly experiments, and notably of computer experiments. GP-based sequential designs have been found practically efficient for various objectives, such as global optimization (estimating the global maximum or maximizer(s) of a function), reliability analysis (estimating a probability of failure) or the estimation of level sets and excursion sets. In this paper, we study the consistency of an important class of sequential designs, known as stepwise uncertainty reduction (SUR) strategies. Our approach relies on the key observation that the sequence of residual uncertainty measures, in SUR strategies, is generally a supermartingale with respect to the filtration generated by the observations. This observation enables us to establish generic consistency results for a broad class of SUR strategies. The consistency of several popular sequential design strategies is then obtained by means of this general result. Notably, we establish the consistency of two SUR strategies proposed by Bect, Ginsbourger, Li, Picheny and Vazquez (Stat. Comput. 22 (2012) 773–793); to the best of our knowledge, these are the first proofs of consistency for GP-based sequential design algorithms dedicated to the estimation of excursion sets and their measure. We also establish a new, more general proof of consistency for the expected improvement algorithm for global optimization which, unlike previous results in the literature, applies to any GP with continuous sample paths.
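The expected improvement criterion whose consistency the abstract discusses has a standard closed form under a GP posterior N(μ(x), σ(x)²). A minimal implementation of that textbook formula (stated here for minimisation; nothing specific to this paper's analysis):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimisation under a GP posterior N(mu, sigma^2).

    EI(x) = (f_best - mu) * Phi(u) + sigma * phi(u),
    with u = (f_best - mu) / sigma.
    """
    sigma = np.maximum(sigma, 1e-12)        # guard against zero variance
    u = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(u) + sigma * norm.pdf(u)
```

EI is zero only where the posterior is certain and no better than the incumbent, which is why its sequential maximisation keeps sampling wherever either the mean is promising or the uncertainty is large.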
An active machine learning approach for optimal design of magnesium alloys using Bayesian optimisation
In the pursuit of magnesium (Mg) alloys with targeted mechanical properties, a multi-objective Bayesian optimisation workflow is presented to enable optimal Mg-alloy design. A probabilistic Gaussian process regressor model was trained through an active learning loop, while balancing the exploration and exploitation trade-off via an acquisition function of the upper confidence bound. New candidate alloys suggested by the optimiser within each iteration were appended to the training data, and the performance of this sequential strategy was validated via a regret analysis. Using the proposed approach, the dependency of the prediction error on the training data was overcome by considering both the predictions and their associated uncertainties. The method developed here has been packaged into a web tool with a graphical user interface (GUI) that allows the proposed optimal Mg-alloy design strategy to be deployed.
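The active learning loop described — fit a GP, pick the candidate maximising the upper confidence bound μ + κσ, evaluate it, append it to the training data, repeat — can be sketched generically as below. The one-dimensional toy objective, candidate sampling scheme and κ value are all hypothetical stand-ins, not the paper's Mg-alloy model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb_loop(objective, bounds, n_init=5, n_iter=20, kappa=2.0, seed=0):
    """Sequential maximisation with a GP surrogate and UCB acquisition.

    Each iteration evaluates the candidate maximising mu + kappa * sigma
    over a random candidate set and appends it to the training data.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = objective(X).ravel()
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                  alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, 1))
        mu, sd = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + kappa * sd)][None, :]  # UCB pick
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).ravel())
    return X[np.argmax(y)], y.max()

f = lambda x: -(x - 0.3) ** 2              # toy objective, maximum at x = 0.3
x_best, y_best = ucb_loop(f, (0.0, 1.0))
```

Large κ weights the σ term and favours exploration; small κ favours exploiting the current predicted optimum — the trade-off the abstract refers to.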
Modeling an Augmented Lagrangian for Blackbox Constrained Optimization
Constrained blackbox optimization is a difficult problem, with most approaches coming from the mathematical programming literature. The statistical literature is sparse, especially in addressing problems with nontrivial constraints. This situation is unfortunate because statistical methods have many attractive properties: global scope, handling noisy objectives, sensitivity analysis, and so forth. To narrow that gap, we propose a combination of response surface modeling, expected improvement, and the augmented Lagrangian numerical optimization framework. This hybrid approach allows the statistical model to think globally and the augmented Lagrangian to act locally. We focus on problems where the constraints are the primary bottleneck, requiring expensive simulation to evaluate and substantial modeling effort to map out. In that context, our hybridization presents a simple yet effective solution that allows existing objective-oriented statistical approaches, like those based on Gaussian process surrogates and expected improvement heuristics, to be applied to the constrained setting with minor modification. This work is motivated by a challenging, real-data benchmark problem from hydrology where, even with a simple linear objective function, learning a nontrivial valid region complicates the search for a global minimum. Supplementary materials for this article are available online.
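The augmented Lagrangian device at the heart of this hybrid can be illustrated on a toy problem. This sketch uses the standard AL scheme for a single inequality constraint, with a grid search standing in for the paper's EI-guided surrogate optimisation of the inner subproblem; it is not the authors' exact formulation or update rule.

```python
import numpy as np

def augmented_lagrangian(f, c, x_grid, lam=0.0, rho=1.0, n_outer=10):
    """Toy augmented-Lagrangian loop for one inequality constraint c(x) <= 0.

    Inner step: minimise f + lam*c + max(0, c)^2 / (2*rho) over the grid.
    Outer step: update the multiplier; tighten the penalty if infeasible.
    """
    for _ in range(n_outer):
        al = f(x_grid) + lam * c(x_grid) \
            + np.maximum(0.0, c(x_grid)) ** 2 / (2 * rho)
        x_star = x_grid[np.argmin(al)]          # inner "solver"
        lam = max(0.0, lam + c(x_star) / rho)   # multiplier update
        if c(x_star) > 0:
            rho /= 2                            # tighten the penalty
    return x_star

# minimise f(x) = x subject to c(x) = 1 - x <= 0  (solution: x = 1)
grid = np.linspace(0.0, 2.0, 2001)
x_opt = augmented_lagrangian(lambda x: x, lambda x: 1.0 - x, grid)
```

In the paper's setting the inner minimisation is replaced by GP surrogates and expected improvement over the AL composite, which is what lets the statistical model "think globally" while the AL machinery "acts locally" on the constraints.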
High-Dimensional Materials and Process Optimization Using Data-Driven Experimental Design with Well-Calibrated Uncertainty Estimates
The optimization of composition and processing to obtain materials that exhibit desirable characteristics has historically relied on a combination of domain knowledge, trial and error, and luck. We propose a methodology that can accelerate this process by fitting data-driven models to experimental data as it is collected to suggest which experiment should be performed next. This methodology can guide the practitioner to test the most promising candidates earlier and can supplement scientific and engineering intuition with data-driven insights. A key strength of the proposed framework is that it scales to high-dimensional parameter spaces, as are typical in materials discovery applications. Importantly, the data-driven models incorporate uncertainty analysis, so that new experiments are proposed based on a combination of exploring high-uncertainty candidates and exploiting high-performing regions of parameter space. Over four materials science test cases, our methodology led to the optimal candidate being found with three times fewer required measurements than random guessing on average.
A Novel Hybrid Sequential Design Strategy for Global Surrogate Modeling of Computer Experiments
Many complex real-world systems can be accurately modeled by simulations. However, high-fidelity simulations may take hours or even days to compute. Because this can be impractical, a surrogate model is often used to approximate the dynamic behavior of the original simulator. This model can then be used as a cheap, drop-in replacement for the simulator. Because simulations can be very expensive, the data points, which are required to build the model, must be chosen as optimally as possible. Sequential design strategies offer a huge advantage over one-shot experimental designs because they can use information gathered from previous data points in order to determine the location of new data points. Each sequential design strategy must perform a trade-off between exploration and exploitation, where the former involves selecting data points in unexplored regions of the design space, while the latter suggests adding data points in regions which were previously identified to be interesting (for example, highly nonlinear regions). In this paper, a novel hybrid sequential design strategy is proposed which uses a Monte Carlo-based approximation of a Voronoi tessellation for exploration and local linear approximations of the simulator for exploitation. The advantage of this method over other sequential design methods is that it is independent of the model type, and can therefore be used in heterogeneous modeling environments, where multiple model types are used at the same time. The new method is demonstrated on a number of test problems, showing that it is a robust, competitive, and efficient sequential design strategy.
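The exploration component described — a Monte Carlo approximation of the Voronoi tessellation — amounts to estimating each design point's cell volume by sampling random points and assigning each to its nearest design point; the largest cell marks the most unexplored region. A minimal sketch of that step alone (the paper pairs it with local linear models for exploitation, omitted here):

```python
import numpy as np

def most_unexplored(X, n_mc=20_000, seed=0):
    """Index of the design point whose Voronoi cell has the largest
    Monte Carlo-estimated volume in [0, 1]^d, plus all volume estimates."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(size=(n_mc, X.shape[1]))
    # nearest design point for every Monte Carlo sample
    dist = np.linalg.norm(samples[:, None, :] - X[None, :, :], axis=2)
    owner = dist.argmin(axis=1)
    vols = np.bincount(owner, minlength=len(X)) / n_mc
    return vols.argmax(), vols

# a cluster near the origin plus one isolated point: the isolated
# point's Voronoi cell should dominate the unit square
X = np.array([[0.1, 0.1], [0.15, 0.1], [0.1, 0.15], [0.9, 0.9]])
idx, vols = most_unexplored(X)
```

Because this estimate needs only distances to existing design points, it works for any model type — which is the model-independence advantage the abstract emphasises.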
A systematic survey of adaptive trials shows substantial improvement in methods is needed
To investigate the design, conduct, and analysis of adaptive trials through a systematic survey and provide recommendations for future adaptive trials. We systematically searched MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases up to January 2020. We included trials that were self-described as adaptive trials or applied adaptive designs. We identified three frequently used adaptive designs and summarized their methodological details in terms of design, conduct, and analysis. Lastly, we provided recommendations for future adaptive trials. We included a total of 128 trials in this study. The primary motivations for using adaptive design were to speed up the trials and facilitate decision-making (n = 29, 31.5%). The three most frequently used methods were group sequential design (GSD) (n = 71, 55.5%), adaptive dose-finding design (ADFD) (n = 35, 27.3%), and adaptive randomization design (ARD) (n = 26, 20.3%). The timing and frequency of interim analysis were detailed in three-fourths of the GSD trials (n = 55, 77.5%) and in half of the ADFD trials (n = 19, 54.3%); however, more than half of the ARD trials (n = 15, 57.7%) did not provide this information. Some trials selected a different outcome than the primary outcome for interim analysis (GSD: n = 7, 12.7%; ADFD: n = 8, 27.6%; ARD: n = 7, 50.0%), but the majority of these trials did not provide explicit reasons for this choice (GSD: n = 7, 100.0%; ADFD: n = 7, 87.5%; ARD: n = 5, 71.4%). More than half (n = 76, 59.4%) of trials did not mention the accessibility of supporting documents, and two-thirds (n = 86, 67.2%) did not state the establishment of independent data monitoring committees (IDMCs). Moreover, unplanned adjustments were observed during the conduct of one-sixth of the adaptive trials (n = 22, 17.2%). Based on our findings, we provide 14 recommendations for improving adaptive trials in the future.
Substantial improvements were needed in methods of adaptive trials, particularly in the areas of interim analysis, the establishment of independent data monitoring committees, and unplanned adjustments. In this study, we offer recommendations from both general and specific aspects for researchers to carefully design, conduct, and analyze adaptive trials.
Understanding consumers’ value co-creation and value co-destruction with augmented reality service marketing
Purpose Although businesses increasingly use augmented reality (AR) to enhance service experiences, the way AR service marketing inspires consumers remains underexplored. Drawing on the consumer inspiration literature, the authors examine how AR service marketing activities such as entertainment, interaction, trendiness and customization enhance consumer inspiration. In addition, the authors explore the role of consumer empowerment and skepticism as key underlying mechanisms between consumer inspiration and value co-creation (VCC) or co-destruction (VCD) intentions. Design/methodology/approach The study used a mixed-methods explanatory sequential design to gain a more comprehensive understanding of the proposed theoretical framework. The quantitative survey study involved 344 AR app users, followed by a qualitative open-ended essay study with 34 AR app users. Findings Results suggest that AR service marketing activities positively influence consumer inspiration, which in turn increases consumer empowerment and reduces skepticism. The authors also found that consumer empowerment leads to VCC, while skepticism leads to VCD. These findings provide valuable insights for practitioners seeking to implement AR service marketing activities effectively to inspire consumers, foster value creation and manage value destruction. Practical implications The study highlights inspiration as a key factor in motivating consumers to co-create value, transcending typical service experiences and limitations. Empowered consumers, feeling inspired, are more inclined to contribute effectively to VCC, also fostering trust in the service provider. AR serves not just as a sales channel, but also as a tool for relationship-building and brand retention. Managers should leverage AR to elicit feelings of trendiness, customization and interaction, fostering empowerment and inspiring consumers to co-create value.
Originality/value This study significantly contributes to the growing body of literature on consumer inspiration and AR service marketing. It emphasizes the need to consider external (i.e. marketing-induced) stimuli in understanding the sources and consequences of consumer inspiration through AR.