5,142 results for "statistics: design of experiments"
Near-Optimal A-B Testing
We consider the problem of A-B testing when the impact of the treatment is marred by a large number of covariates. Randomization can be highly inefficient in such settings, and thus we consider the problem of optimally allocating test subjects to either treatment with a view to maximizing the precision of our estimate of the treatment effect. Our main contribution is a tractable algorithm for this problem in the online setting, where subjects arrive, and must be assigned, sequentially, with covariates drawn from an elliptical distribution with finite second moment. We further characterize the gain in precision afforded by optimized allocations relative to randomized allocations, and show that this gain grows large as the number of covariates grows. Our dynamic optimization framework admits several generalizations that incorporate important operational constraints such as the consideration of selection bias, budgets on allocations, and endogenous stopping times. In a set of numerical experiments, we demonstrate that our method simultaneously offers better statistical efficiency and less selection bias than state-of-the-art competing biased coin designs. This paper was accepted by Noah Gans, stochastic models and simulation.
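As a rough illustration of covariate-aware sequential assignment (a minimal sketch, not the authors' algorithm; the biased-coin rule and all parameters are assumptions for the toy), each arriving subject is steered toward the arm that reduces the running covariate imbalance:

```python
# Minimal sketch of online, covariate-aware treatment assignment
# (illustrative only; NOT the paper's algorithm). Each arriving
# subject is assigned to whichever arm better balances the running
# covariate sums, with a biased coin retaining some randomness.
import numpy as np

rng = np.random.default_rng(0)

def assign(x, imbalance, p=0.8):
    """Assign a subject with covariates x; imbalance is the running
    sum of (treatment - control) covariates. Returns +1 or -1."""
    preferred = -1 if np.dot(imbalance, x) > 0 else +1   # reduce imbalance
    return preferred if rng.random() < p else -preferred  # biased coin

d, n = 5, 1000
imbalance = np.zeros(d)
for _ in range(n):
    x = rng.standard_normal(d)          # covariates of arriving subject
    arm = assign(x, imbalance)
    imbalance += arm * x                # update running imbalance

print("final covariate imbalance:", np.round(imbalance, 2))
```

Setting p = 0.5 recovers pure randomization; p close to 1 balances more aggressively at the cost of predictability, which is the selection-bias trade-off the abstract alludes to.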
Simple Procedures for Selecting the Best Simulated System When the Number of Alternatives is Large
In this paper, we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of alternatives is finite, but large enough that ranking-and-selection (R&S) procedures may require too much computation to be practical. Our approach is to use the data provided by the first stage of sampling in an R&S procedure to screen out alternatives that are not competitive, and thereby avoid the (typically much larger) second-stage sample for these systems. Our procedures represent a compromise between standard R&S procedures, which are easy to implement but can be computationally inefficient, and fully sequential procedures, which can be statistically efficient but are more difficult to implement and depend on more restrictive assumptions. We present a general theory for constructing combined screening and indifference-zone selection procedures, several specific procedures, and a portion of an extensive empirical evaluation.
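A toy screen-then-select sketch, assuming a simplistic screening rule that keeps systems within a fixed indifference-zone margin of the best first-stage mean (real procedures set this threshold from first-stage variances and a selection constant):

```python
# Toy sketch of screen-then-select (not the paper's exact procedure):
# stage 1 screens out systems whose first-stage mean is clearly worse
# than the current best; stage 2 samples only the survivors.
import numpy as np

rng = np.random.default_rng(1)
true_means = rng.uniform(0, 1, size=500)     # hypothetical systems
n1, n2, delta = 20, 200, 0.15                # stage sizes, indifference zone

def simulate(i, n):                          # stand-in for n simulation runs
    return true_means[i] + rng.standard_normal(n)

stage1 = np.array([simulate(i, n1).mean() for i in range(len(true_means))])
survivors = np.flatnonzero(stage1 >= stage1.max() - delta)   # screening

stage2 = {i: simulate(i, n2).mean() for i in survivors}
best = max(stage2, key=stage2.get)
print(f"{len(survivors)} of {len(true_means)} systems survived screening;"
      f" selected system {best}")
```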
Resource Allocation Among Simulation Time Steps
Motivated by the problem of efficient estimation of expected cumulative rewards or cashflows, this paper proposes and analyzes a variance reduction technique for estimating the expectation of the sum of sequentially simulated random variables. In some applications, simulation effort is of greater value when applied to early time steps rather than shared equally among all time steps; this occurs, for example, when discounting renders immediate rewards or cashflows more important than those in the future. This suggests that deliberately stopping some paths early may improve efficiency. We formulate and solve the problem of optimal allocation of resources to time horizons with the objective of minimizing variance subject to a cost constraint. The solution has a simple characterization in terms of the convex hull of points defined by the covariance matrix of the cashflows. We also develop two ways to enhance variance reduction through early stopping. One takes advantage of the statistical theory of missing data. The other redistributes the cumulative sum to make optimal use of early stopping.
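A minimal sketch of the early-stopping idea under strong simplifying assumptions (independent toy cashflows and a fixed cut point k, rather than the paper's covariance-based optimal allocation):

```python
# Minimal sketch of spending more simulation effort on early time
# steps: all paths simulate the first k steps, only a subset continues
# to the full horizon, and the two pieces are averaged separately.
# With independent toy cashflows the split estimator is unbiased; the
# paper derives the optimal allocation from the cashflow covariances.
import numpy as np

rng = np.random.default_rng(2)
T, k, gamma = 20, 5, 0.9          # horizon, cut point, discount factor
n_all, n_tail = 10_000, 2_000     # paths for early part vs. full horizon

def cashflows(n, t0, t1):
    """i.i.d. toy cashflows for steps t0..t1-1, discounted."""
    c = rng.standard_normal((n, t1 - t0)) + 1.0
    disc = gamma ** np.arange(t0, t1)
    return (c * disc).sum(axis=1)

early = cashflows(n_all, 0, k).mean()      # every path covers steps 0..k-1
tail = cashflows(n_tail, k, T).mean()      # fewer paths cover steps k..T-1
print("estimate of expected discounted sum:", early + tail)
```

Because discounting shrinks the variance contribution of late steps, the tail tolerates far fewer paths for the same overall precision, which is the efficiency gain the abstract describes.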
Safe At Home? An Experiment in Domestic Airline Security
The paper describes a scientific experiment about a contentious policy issue: What costs and disruptions might arise if U.S. domestic airlines adopted positive passenger bag-match (PPBM), an antiterrorist measure aimed at preventing baggage unaccompanied by passengers from traveling in aircraft luggage compartments? The heart of the effort was a two-week live test of domestic bag-match that involved 11 airlines, 8,000 flights, and nearly 750,000 passengers. Working with the Federal Aviation Administration, the authors played a major role in designing, monitoring, and analyzing the live test. However, the live test provided "raw materials" for an assessment of PPBM rather than the assessment itself. As we discuss, there are difficulties in extrapolating from a short experiment involving 4% of domestic flights to the steady-state consequences of systemwide bag-match. Our findings challenge the widely held industry view that PPBM would have grave impacts on domestic operations. We ultimately estimated that, under usual operating conditions, PPBM would delay domestic departures by an average of approximately 1 minute per flight. (Approximately one-seventh of flights would suffer bag-match departure delays, which would average about 7 minutes apiece.) Implementing bag-match would cost the airlines roughly 40 cents per passenger enplanement, and would require virtually no reduction in the number of flights performed. Restricting bag-match to 5% of passengers chosen under a security profile would cut these delays by about 75% and these dollar costs by about 50%.
Separability in Optimal Allocation
The optimal allocation for stratification, parameterized by the respective sampling strategy to use in each stratum, is derived directly from the notion of efficiency. Especially with simulation, there are often opportunities to maximize efficiency (myopically) within each stratum. To maximize efficiency globally, first maximize the efficiency of the sampling strategy for each stratum separately and then use the optimal allocation given these respective maximizers. Given any other allocation, maximizing the efficiency of the sampling strategy in each stratum separately does not give the highest efficiency attainable with that allocation except in degenerate cases. Given a class 𝒞 of deterministic rounding strategies, the rounding of the (continuous) optimal allocation over 𝒞, which maximizes efficiency, cannot be improved by a strategy that randomizes over 𝒞.
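For the allocation step itself, a small sketch of the classical cost-aware (Neyman-type) rule, with sample sizes proportional to w_h * sigma_h / sqrt(c_h); the weights, standard deviations, and costs are assumed known for the example:

```python
# Sketch of cost-aware optimal allocation across strata. The paper's
# point is that each stratum's sampling strategy should be made as
# efficient as possible first, and this allocation applied afterward.
import numpy as np

w = np.array([0.5, 0.3, 0.2])        # stratum weights (assumed known)
sigma = np.array([2.0, 1.0, 4.0])    # per-stratum standard deviations
cost = np.array([1.0, 1.0, 4.0])     # per-sample cost in each stratum
budget = 1_000.0                     # total sampling budget

raw = w * sigma / np.sqrt(cost)      # optimal proportions
share = raw / raw.sum()
n = share * budget / (cost @ share)  # scale so total cost hits the budget

print("samples per stratum:", np.floor(n).astype(int))
```

The final rounding of these continuous sample sizes to integers is exactly the step the abstract's last claim addresses: no randomization over a class of deterministic rounding rules can beat the best deterministic rounding.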
Controlled Experimental Design for Statistical Comparison of Integer Programming Algorithms
Testing and comparison of integer programming algorithms is an integral part of the algorithm development process. When test problems are randomly generated, the techniques of statistical experimental design can provide a basis around which to structure computational experiments. This paper formulates the problem of constructing and analyzing controlled integer programming tests in the experimental design context and develops approaches to dealing with a number of issues that arise. Both analytic results and empirical evidence from a large experiment are employed in deriving the suggested techniques.
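A hedged sketch of the blocking idea in such experiments: two hypothetical solver stubs are timed on identical seeded instances in each cell of a small factorial design and compared via paired differences of log run times (the stubs and effect sizes are invented for illustration):

```python
# Sketch of a controlled computational experiment: a 2x2 factorial over
# instance-generation parameters, with both (hypothetical) algorithms
# run on the same seeded instances in each cell (blocking), analyzed by
# paired differences of log run times.
import numpy as np

rng = np.random.default_rng(3)

def solve_time(algo, size, tight, seed):
    """Stub for 'run algorithm on the instance generated from seed'."""
    inst = np.random.default_rng(seed)         # same instance for both algos
    hardness = inst.lognormal(sigma=0.4)       # shared instance difficulty
    speed = 1.0 if algo == "A" else 0.8        # hypothetical true gap
    return (0.01 * size * (2 if tight else 1) * hardness * speed
            * rng.lognormal(sigma=0.1))        # run-to-run noise

diffs = []
for size in (50, 200):                 # factor 1: instance size
    for tight in (False, True):        # factor 2: constraint tightness
        for rep in range(10):          # replicated instances per cell
            seed = 1000 * size + 100 * int(tight) + rep
            diffs.append(np.log(solve_time("A", size, tight, seed))
                         - np.log(solve_time("B", size, tight, seed)))

d = np.array(diffs)
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))   # paired t statistic
print(f"mean log-time difference {d.mean():.3f}, paired t = {t:.2f}")
```

Pairing on identical instances removes the large instance-to-instance variance from the comparison, which is why randomly generated test problems benefit from this kind of structured design.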
Where Medical Statistics Meets Artificial Intelligence
Challenges at the interface of medical statistics and AI include population inference versus prediction; generalizability, reproducibility, and interpretation of evidence; and stability and statistical guarantees.
Experimental designs for identifying causal mechanisms
Experimentation is a powerful methodology that enables scientists to establish causal claims empirically. However, one important criticism is that experiments merely provide a black box view of causality and fail to identify causal mechanisms. Specifically, critics argue that, although experiments can identify average causal effects, they cannot explain the process through which such effects come about. If true, this represents a serious limitation of experimentation, especially for social and medical science research that strives to identify causal mechanisms. We consider several experimental designs that help to identify average natural indirect effects. Some of these designs require the perfect manipulation of an intermediate variable, whereas others can be used even when only imperfect manipulation is possible. We use recent social science experiments to illustrate the key ideas that underlie each of the designs proposed.
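To make the estimand concrete, here is a toy simulation of an average natural indirect effect in a linear mediation model (illustrative only; not one of the proposed designs, and the coefficients are assumptions):

```python
# Toy simulation of an average natural indirect effect (ANIE) in a
# linear mediation model: T affects the mediator M, and Y depends on
# both. The ANIE contrasts Y(T=1, M(1)) with Y(T=1, M(0)).
import numpy as np

rng = np.random.default_rng(4)
n, a, b, c = 100_000, 0.5, 0.8, 0.3   # assumed structural coefficients

u_m, u_y = rng.standard_normal((2, n))
m0 = a * 0 + u_m                       # mediator under control
m1 = a * 1 + u_m                       # mediator under treatment
y_t1_m1 = c * 1 + b * m1 + u_y         # Y(1, M(1))
y_t1_m0 = c * 1 + b * m0 + u_y         # Y(1, M(0))

print("estimated ANIE:", (y_t1_m1 - y_t1_m0).mean())   # close to a*b = 0.4
```

Because the noise terms are shared across the two potential outcomes, the contrast isolates exactly the effect transmitted through the mediator; the designs in the paper are about identifying this quantity when, unlike here, both potential outcomes cannot be observed for the same subject.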
State And Federal Coverage For Pregnant Immigrants: Prenatal Care Increased, No Change Detected For Infant Health
Expanded health insurance coverage for pregnant immigrant women who are in the United States lawfully as well as those who are in the country without documentation may address barriers in access to pregnancy-related care. We present new evidence on the impact of states' public health insurance expansions for pregnant immigrant women (both state-funded and expansions under the Children's Health Insurance Program) on their prenatal care use, mode of delivery, and infant health. Our quasi-experimental design compared changes in immigrant women's outcomes in states expanding coverage to changes in outcomes for nonimmigrant women in the same state and to women in nonexpanding states. We found that prenatal care use increased among all immigrant women following coverage expansion and that cesarean section increased among immigrant women with less than a high school diploma. We found no effects on the incidence of low birthweight, preterm birth, being small for gestational age, or infant death. State public insurance programs that cover pregnant immigrant women appear to have improved prenatal care utilization without observable changes in infant health or mortality.
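A stripped-down difference-in-differences computation on simulated data, echoing the comparison logic; the paper's actual specification is richer (state and year effects, additional controls), and all numbers here are invented:

```python
# Stripped-down difference-in-differences on simulated data: compare
# the pre/post change for the affected group (immigrant women) to the
# pre/post change for the comparison group (nonimmigrant women).
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
immigrant = rng.random(n) < 0.3
post = rng.random(n) < 0.5                     # after coverage expansion
effect = 0.10                                  # assumed true effect
prenatal = (0.6 + 0.05 * post + 0.02 * immigrant   # baseline trends
            + effect * (immigrant & post)
            + rng.normal(0, 0.1, n))

def mean(group):
    return prenatal[group].mean()

did = ((mean(immigrant & post) - mean(immigrant & ~post))
       - (mean(~immigrant & post) - mean(~immigrant & ~post)))
print(f"DiD estimate of expansion effect: {did:.3f}")   # close to 0.10
```

The second difference removes common time trends, so the estimate is driven only by the change specific to the affected group, under the usual parallel-trends assumption.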
Design of experiments and machine learning with application to industrial experiments
In the context of product innovation, there is an emerging trend to use Machine Learning (ML) models with the support of Design of Experiments (DOE). The paper first reviews the designs and ML models most suitable for joint use in an Active Learning (AL) approach; it then reviews ALPERC, a novel AL approach, and demonstrates the method's validity through a case study on amorphous metallic alloys, in which the algorithm is combined with a Random Forest model.
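A generic uncertainty-sampling loop with a Random Forest, in the spirit of DOE-plus-ML active learning (this is not ALPERC; the response surface and all settings are assumptions for the sketch):

```python
# Generic uncertainty-based active-learning loop with a Random Forest:
# candidate design points where the trees disagree most are queried
# (i.e., "run as experiments") next. Illustrative sketch only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
f = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # unknown response

X_pool = rng.random((500, 2))                           # candidate designs
idx = list(rng.choice(len(X_pool), 10, replace=False))  # initial design

for _ in range(5):
    X_train = X_pool[idx]
    y_train = f(X_train)                                # run "experiments"
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    per_tree = np.stack([t.predict(X_pool) for t in model.estimators_])
    uncertainty = per_tree.std(axis=0)                  # tree disagreement
    uncertainty[idx] = -np.inf                          # skip chosen points
    idx.append(int(uncertainty.argmax()))               # query next point

print("design size after active learning:", len(idx))
```

Using the spread of per-tree predictions as an uncertainty proxy is a common, inexpensive choice for forests; the reviewed approaches refine both the uncertainty criterion and the initial experimental design.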