13,754 result(s) for "Stochastic optimization"
Data-driven stochastic optimization for distributional ambiguity with integrated confidence region
We discuss stochastic optimization problems under distributional ambiguity. The distributional uncertainty is captured by considering an entire family of distributions. Because we assume the existence of data, we can consider confidence regions for the different estimators of the parameters of the distributions. Based on the definition of an appropriate estimator in the interior of the resulting confidence region, we propose a new data-driven stochastic optimization problem. This new approach applies the idea of a posteriori Bayesian methods to the confidence region. We are able to prove that the expected value, over all observations and all possible distributions, of the optimal objective function of the proposed stochastic optimization problem is bounded by a constant. This constant is small for a sufficiently large i.i.d. sample size and depends on the chosen confidence level and the size of the confidence region. We demonstrate the utility of the new optimization approach on a newsvendor problem and a reliability problem.
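As a toy illustration of optimizing against a confidence region, the sketch below solves a newsvendor problem in which demand is normal with a known standard deviation (a simplifying assumption made here) and the unknown mean is only known to lie in a 95% confidence interval estimated from data; it then orders against the worst-case mean in that interval. This worst-case treatment is a generic distributionally robust baseline, not the paper's specific a posteriori estimator construction, and all parameters are illustrative.

```python
import numpy as np
from math import erf, sqrt, exp, pi

Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))      # standard normal CDF
phi = lambda t: exp(-t * t / 2) / sqrt(2 * pi)    # standard normal PDF

# Expected newsvendor cost c*q - p*E[min(q, D)] for demand D ~ N(mu, sigma),
# using E[min(q, D)] = mu - E[(D - q)^+].
def exp_cost(q, mu, sigma, c=1.0, p=3.0):
    z = (q - mu) / sigma
    e_short = sigma * phi(z) + (mu - q) * (1 - Phi(z))   # E[(D - q)^+]
    return c * q - p * (mu - e_short)

rng = np.random.default_rng(4)
data = rng.normal(100.0, 20.0, size=200)                 # i.i.d. demand sample
mu_hat, se = data.mean(), data.std(ddof=1) / sqrt(len(data))
ci = np.linspace(mu_hat - 1.96 * se, mu_hat + 1.96 * se, 21)  # 95% CI for mu

# Guard against every mean in the confidence region (worst-case variant).
qs = np.linspace(60, 160, 201)
worst = [max(exp_cost(q, mu, 20.0) for mu in ci) for q in qs]
q_robust = qs[int(np.argmin(worst))]
print(q_robust)
```

With a known mean the optimal order would sit at the critical fractile mu + sigma * Phi^{-1}((p - c)/p); the robust order here shifts to hedge against the least favorable mean in the interval.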
An exact solution approach for risk-averse mixed-integer multi-stage stochastic programming problems
Risk-averse mixed-integer multi-stage stochastic programming problems are challenging, large-scale, non-convex optimization problems. In this study, we propose an exact solution algorithm for a class of these problems with a dynamic mean-CVaR risk measure objective and binary first-stage decision variables. The proposed algorithm is based on an evaluate-and-cut procedure and uses lower bounds obtained from a scenario tree decomposition method called the group subproblem approach. We also show that, under the assumption that the first-stage integer variables are bounded, our algorithm solves problems with mixed-integer variables in all stages. Computational experiments on risk-averse multi-stage stochastic server location and generation expansion problems reveal that the proposed algorithm is able to solve problem instances with more than one million binary variables within a reasonable time under a modest computational setting.
A randomized method for handling a difficult function in a convex optimization problem, motivated by probabilistic programming
We propose a randomized gradient method for handling a convex function whose gradient computation is demanding. The method bears a resemblance to the stochastic approximation family but, in contrast to stochastic approximation, builds a model problem. The approach is adapted to probability maximization and to probabilistic constrained problems. We discuss simulation procedures for gradient estimation.
From Predictive to Prescriptive Analytics
We combine ideas from machine learning (ML) and operations research and management science (OR/MS) in developing a framework, along with specific methods, for using data to prescribe optimal decisions in OR/MS problems. In a departure from other work on data-driven optimization, we consider data consisting, not only of observations of quantities with direct effect on costs/revenues, such as demand or returns, but also predominantly of observations of associated auxiliary quantities. The main problem of interest is a conditional stochastic optimization problem, given imperfect observations, where the joint probability distributions that specify the problem are unknown. We demonstrate how our proposed methods are generally applicable to a wide range of decision problems and prove that they are computationally tractable and asymptotically optimal under mild conditions, even when data are not independent and identically distributed and for censored observations. We extend these to the case in which some decision variables, such as price, may affect uncertainty and their causal effects are unknown. We develop the coefficient of prescriptiveness P to measure the prescriptive content of data and the efficacy of a policy from an operations perspective. We demonstrate our approach in an inventory management problem faced by the distribution arm of a large media company, shipping 1 billion units yearly. We leverage both internal data and public data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational decisions that outperform baseline measures. Specifically, the data we collect, leveraged by our methods, account for an 88% improvement as measured by our coefficient of prescriptiveness. This paper was accepted by Noah Gans, optimization.
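One concrete method in this prescriptive line of work weights historical cost scenarios by how close their observed covariates are to the current covariates, then minimizes the weighted sample cost. The sketch below implements a k-nearest-neighbor version of such a predictive prescription on a hypothetical covariate-dependent newsvendor; the data-generating process, cost parameters, and choice of k are all illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Predictive prescription via kNN weights: given current covariates x,
# find the k historical observations whose covariates are nearest, and
# choose the decision z minimizing their average cost.
def knn_prescription(X, Y, x, cost, z_grid, k=50):
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]   # k nearest covariates
    avg = [np.mean([cost(z, Y[i]) for i in idx]) for z in z_grid]
    return z_grid[int(np.argmin(avg))]

# Newsvendor cost with demand y that depends on an observed covariate x.
cost = lambda z, y: 1.0 * z - 3.0 * min(z, y)
X = rng.uniform(0, 1, size=(3000, 1))
Y = 100.0 * X[:, 0] + rng.normal(scale=5.0, size=3000)    # demand ~ 100x + noise

z = knn_prescription(X, Y, x=np.array([0.8]), cost=cost,
                     z_grid=np.linspace(0, 120, 241))
print(z)   # near the conditional 2/3 quantile of demand given x = 0.8
```

Conditioning on the covariates is what separates this from a plain sample average: ignoring x would prescribe one global order quantity, while the kNN weights adapt the decision to the local conditional demand distribution.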
Spider wasp optimizer: a novel meta-heuristic optimization algorithm
This work presents a new nature-inspired meta-heuristic algorithm named the spider wasp optimization (SWO) algorithm, which is based on replicating the hunting, nesting, and mating behaviors of female spider wasps in nature. This proposed algorithm has various unique updating strategies, making it applicable to a wide range of optimization problems with different exploration and exploitation requirements. The proposed SWO is compared with nine newly published and well-established metaheuristics over four different benchmarks: (1) a standard benchmark, including 23 unimodal and multimodal test functions; (2) the CEC2017 test suite; (3) the CEC2020 test suite; and (4) the CEC2014 test suite, to validate its reliability. Moreover, two classical engineering design problems, namely welded beam and pressure vessel design, and parameter estimation of the single-diode, double-diode, and triple-diode photovoltaic models are used to further evaluate the performance of SWO on real-world optimization problems. Experimental findings demonstrate that SWO is more competitive than state-of-the-art meta-heuristic methods on the four benchmarks and superior on all of the real-world optimization problems considered. Specifically, SWO achieves an overall effective percentage of 78.2% on the standard benchmark, 92.31% on CEC2014, 77.78% on CEC2017, 60% on CEC2020, and 100% on the real-world problems. The source code of SWO is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/126010-spider-wasp-optimizer-swo.
Riemannian stochastic fixed point optimization algorithm
This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. The problem enables us to consider Riemannian hierarchical optimization problems over complicated sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth convex function, and the intersection of sublevel sets of nonsmooth convex functions. We focus on adaptive learning rate optimization algorithms, which adapt step-sizes (referred to as learning rates in the machine learning field) to find optimal solutions quickly. We then propose a Riemannian stochastic fixed point optimization algorithm, which combines fixed point approximation methods on Riemannian manifolds with the adaptive learning rate optimization algorithms. We also give convergence analyses of the proposed algorithm for nonsmooth convex and smooth nonconvex optimization. The analysis results indicate that, with small constant step-sizes, the proposed algorithm approximates a solution to the problem. Consideration of the case in which step-size sequences are diminishing demonstrates that the proposed algorithm solves the problem with a guaranteed convergence rate. This paper also provides numerical comparisons that demonstrate the effectiveness of the proposed algorithms with formulas based on the adaptive learning rate optimization algorithms, such as Adam and AMSGrad.
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function involving expected values or a composition of two expected-value functions, i.e., the problem min_x E_v[f_v(E_w[g_w(x)])]. In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. SCGD updates the solutions based on noisy sample gradients of f_v and g_w, and uses an auxiliary variable to track the unknown quantity E_w[g_w(x)]. We prove that SCGD converges almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of O(k^{-1/4}) in the general case and O(k^{-2/3}) in the strongly convex case, after taking k samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of O(k^{-2/7}) in the general case and O(k^{-4/5}) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. Indeed, the stochastic setting in which one wants to optimize compositions of expected-value functions is very common in practice. The proposed SCGD methods find wide applications in learning, estimation, dynamic programming, etc.
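The two-time-scale SCGD iteration described above can be sketched on a toy composition problem: a fast auxiliary sequence tracks E_w[g_w(x)] while a slower sequence takes quasi-gradient steps through the composition. The toy objective, step-size exponents, and iteration count below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composition: g_w(x) = x + w with E_w[g_w(x)] = x, f_v(y) = ||y||^2,
# so the objective E_v f_v(E_w[g_w(x)]) = ||x||^2 is minimized at x = 0.
def g_sample(x):  return x + rng.normal(scale=0.1, size=x.shape)
def g_jac(x):     return np.eye(x.size)   # Jacobian of g_w at x
def f_grad(y):    return 2.0 * y          # gradient of f_v at y

x = np.ones(3)           # initial point
y = g_sample(x)          # auxiliary variable tracking E_w[g_w(x)]
for k in range(1, 5001):
    alpha, beta = 0.5 / k**0.75, 1.0 / k**0.5   # two time scales
    y = (1 - beta) * y + beta * g_sample(x)     # fast tracking update
    x = x - alpha * g_jac(x).T @ f_grad(y)      # slower quasi-gradient step

print(np.linalg.norm(x))   # should approach 0
```

The key point the auxiliary variable addresses is that a sample g_w(x) plugged directly into the gradient of f would give a biased estimate of the composition's gradient; the running average y reduces that bias while the diminishing steps on x let the tracking keep up.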
A periodic review integrated inventory model for buyer's unidentified protection interval demand distribution
Given the inflation observed in most economies, it is important to investigate the effect of this phenomenon on inventory/production decisions. Consequently, this paper studies the influence of inflationary conditions on a specific periodic review integrated vendor-buyer inventory system in the presence of the vendor's imperfect manufacturing process. The lead time crashing cost is represented as a piecewise linear function of the reduced lead time. In practice, identifying the protection interval demand distribution from restricted data is a difficult task. As a result, this investigation assumes that the information about the buyer's protection interval demand is limited to its mean and standard deviation. The buyer's ordering cost can be reduced through further investment. For the discussed model, an effective solution procedure is developed to determine the optimal policy. Theorems on the conditional convexity and concavity of the objective cost function in the decision variables of the inflationary integrated inventory system are stated and proved. A numerical example is presented to illustrate the results of the proposed model.
Robust sample average approximation
Sample average approximation (SAA) is a widely popular approach to data-driven decision-making under uncertainty. Under mild assumptions, SAA is both tractable and enjoys strong asymptotic performance guarantees. Similar guarantees, however, do not typically hold in finite samples. In this paper, we propose a modification of SAA, which we term Robust SAA, which retains SAA’s tractability and asymptotic properties and, additionally, enjoys strong finite-sample performance guarantees. The key to our method is linking SAA, distributionally robust optimization, and hypothesis testing of goodness-of-fit. Beyond Robust SAA, this connection provides a unified perspective enabling us to characterize the finite sample and asymptotic guarantees of various other data-driven procedures that are based upon distributionally robust optimization. This analysis provides insight into the practical performance of these various methods in real applications. We present examples from inventory management and portfolio allocation, and demonstrate numerically that our approach outperforms other data-driven approaches in these applications.
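For context, the plain SAA baseline that Robust SAA modifies simply minimizes the average sampled cost. A minimal newsvendor sketch, with illustrative cost and price parameters (for this cost the SAA solution reduces to an empirical quantile at the critical fractile (p - c)/p):

```python
import numpy as np

# Plain SAA for a newsvendor (unit cost c, unit price p): choose the
# order quantity q minimizing the average cost over demand samples.
# Robust SAA would instead optimize against a goodness-of-fit
# confidence region around the empirical distribution.
def saa_newsvendor(demand_samples, c=1.0, p=3.0):
    qs = np.sort(demand_samples)   # an optimizer lies at a sample point
    costs = [np.mean(c * q - p * np.minimum(q, demand_samples)) for q in qs]
    return qs[int(np.argmin(costs))]

rng = np.random.default_rng(1)
samples = rng.exponential(scale=100.0, size=2000)
q_hat = saa_newsvendor(samples)
# SAA recovers (approximately) the critical-fractile quantile (p-c)/p = 2/3
print(q_hat, np.quantile(samples, 2/3))
```

This makes the finite-sample fragility discussed above concrete: the prescribed quantity is an empirical quantile, so with few samples it inherits all of the empirical distribution's sampling noise, which is exactly the gap the robust variant targets.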
The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle, sensitive to stepsize choice and other algorithmic parameters, and they exhibit instability outside of well-behaved families of objectives. To address these challenges, we investigate models for stochastic optimization and learning problems that exhibit better robustness to problem families and algorithmic parameters. With appropriately accurate models—which we call the APROX family—stochastic methods can be made stable, provably convergent, and asymptotically optimal; even modeling that the objective is nonnegative is sufficient for this stability. We extend these results beyond convexity to weakly convex objectives, which include compositions of convex losses with smooth functions common in modern machine learning. We highlight the importance of robustness and accurate modeling with experimental evaluation of convergence time and algorithm sensitivity.
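A concrete member of this model family is the truncated model, which exploits only the knowledge that the objective is nonnegative: each step minimizes max(f_i(x_k) + g^T(x - x_k), 0) plus a proximal term, giving a closed-form step that never overshoots the model's zero. The sketch below applies it to a least-absolute-deviations problem; the problem instance and the deliberately aggressive stepsize schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: least absolute deviations, f_i(x) = |a_i @ x - b_i| >= 0,
# with a consistent (noiseless) system so the optimum is x_star.
A = rng.normal(size=(500, 5))
x_star = rng.normal(size=5)
b = A @ x_star

x = np.zeros(5)
for k in range(1, 20001):
    i = rng.integers(500)
    resid = A[i] @ x - b[i]
    fi = abs(resid)                  # nonnegative stochastic loss
    g = np.sign(resid) * A[i]        # subgradient of f_i at x
    alpha = 10.0 / np.sqrt(k)        # a deliberately large stepsize
    if fi > 0:
        # Truncated-model step: cap at the point where the linear model
        # of the loss hits zero, so the update never overshoots it.
        step = min(alpha, fi / (g @ g))
        x = x - step * g
print(np.linalg.norm(x - x_star))
```

A plain SGD step with the same large alpha would oscillate wildly on this problem; the truncation makes the method insensitive to the stepsize choice, which is the robustness property the abstract emphasizes.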