139 result(s) for "Nemirovski, Arkadi"
Convex Approximations of Chance Constrained Programs
We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given probability close to one, a system of randomly perturbed convex constraints. This problem may be computationally intractable; our goal is to build a computationally tractable approximation, i.e., an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance constrained problem. We construct a general class of such convex conservative approximations. Moreover, under the assumptions that the constraints are affine in the perturbations and that the entries of the perturbation vector are mutually independent random variables, we build a large-deviation-type approximation, referred to as the "Bernstein approximation," of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value of the chance constrained problem and report numerical experiments comparing the Bernstein approximation with the well-known scenario approximation. Finally, we extend the construction to ambiguous chance constrained problems, where the random perturbations are independent but their distributions are known only to belong to a given convex compact set rather than known exactly, while the chance constraint must be satisfied for every distribution from this set.
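A hedged illustration of the mechanism (this is the generic Chernoff-bound form of such large-deviation surrogates, not a restatement of the paper's theorem; $f(x,\xi)\le 0$ stands for the randomly perturbed constraint and $\epsilon$ for the tolerated violation probability):

    % chance constraint: violation probability at most \epsilon
    \mathrm{Prob}\{ f(x,\xi) > 0 \} \le \epsilon
    % Bernstein-type conservative surrogate, obtained from the Chernoff bound:
    \inf_{t>0} \Big[\, t \log \mathbb{E}\, e^{f(x,\xi)/t} \;+\; t \log(1/\epsilon) \,\Big] \;\le\; 0

Any $x$ feasible for the surrogate is feasible for the chance constraint, and the left-hand side is convex in $x$ whenever $f(\cdot,\xi)$ is convex, which is what makes the approximation both conservative and tractable.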
Robust Optimization
Robust optimization is still a relatively new approach to optimization problems affected by uncertainty, but it has already proved so useful in real applications that it is difficult to tackle such problems today without considering this powerful methodology. Written by the principal developers of robust optimization, and describing the main achievements of a decade of research, this is the first book to provide a comprehensive and up-to-date account of the subject. Robust optimization is designed to meet some major challenges associated with uncertainty-affected optimization problems: to operate under lack of full information on the nature of uncertainty; to model the problem in a form that can be solved efficiently; and to provide guarantees about the performance of the solution. The book starts with a relatively simple treatment of uncertain linear programming, proceeding with a deep analysis of the interconnections between the construction of appropriate uncertainty sets and the classical chance constraints (probabilistic) approach. It then develops the robust optimization theory for uncertain conic quadratic and semidefinite optimization problems and dynamic (multistage) problems. The theory is supported by numerous examples and computational illustrations. An essential book for anyone working on optimization and decision making under uncertainty, Robust Optimization also makes an ideal graduate textbook on the subject.
Conditional gradient algorithms for norm-regularized smooth convex optimization
Motivated by applications in signal processing and machine learning, we consider two convex optimization problems where, given a cone K, a norm ‖·‖ and a smooth convex function f, we want either (1) to minimize the norm over the intersection of the cone and a level set of f, or (2) to minimize over the cone the sum of f and a multiple of the norm. We focus on the case where (a) the dimension of the problem is too large to allow for interior point algorithms, and (b) ‖·‖ is "too complicated" to allow for the computationally cheap Bregman projections required by first-order proximal gradient algorithms. On the other hand, we assume that it is relatively easy to minimize linear forms over the intersection of K and the unit ‖·‖-ball. Motivating examples are given by the nuclear norm with K being the entire space of matrices, or the positive semidefinite cone in the space of symmetric matrices, and the Total Variation norm on the space of 2D images. We discuss versions of the Conditional Gradient algorithm capable of handling these problems, provide the related theoretical efficiency estimates, and outline some applications.
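To make the oracle idea concrete, here is a minimal sketch (the function names and the toy least-squares objective are mine, not the paper's): a Conditional Gradient step over the nuclear-norm ball needs only the top singular pair of the gradient, not a full SVD or a proximal projection.

    import numpy as np
    from scipy.sparse.linalg import svds

    def lmo_nuclear_ball(grad, radius):
        """Linear minimization oracle over {S : ||S||_* <= radius}:
        argmin <grad, S> = -radius * u1 v1^T, (u1, v1) the top singular pair."""
        u, _, vt = svds(grad, k=1)              # top singular pair only
        return -radius * np.outer(u[:, 0], vt[0, :])

    def conditional_gradient(grad_f, radius, shape, iters=200):
        """Frank-Wolfe for min f(X) over the nuclear-norm ball."""
        x = np.zeros(shape)
        for t in range(iters):
            s = lmo_nuclear_ball(grad_f(x), radius)
            gamma = 2.0 / (t + 2.0)             # standard step-size schedule
            x = (1 - gamma) * x + gamma * s
        return x

    # toy usage: f(X) = 0.5 * ||X - M||_F^2 with a low-rank target M
    rng = np.random.default_rng(0)
    M = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))
    tau = np.linalg.svd(M, compute_uv=False).sum()   # nuclear norm of M
    X = conditional_gradient(lambda X: X - M, tau, M.shape)
    print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))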
Selected topics in robust convex optimization
Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic "uncertain-but-bounded" data perturbations. In this paper, we survey several selected topics in this popular area, specifically: (1) recent extensions of the basic concept of the robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology to Robust Linear Control.
Robust optimization – methodology and applications
Robust Optimization (RO) is a modeling methodology, combined with computational tools, for processing optimization problems in which the data is uncertain and only known to belong to some uncertainty set. The paper surveys the main results of RO as applied to uncertain linear, conic quadratic and semidefinite programming. For these cases, computationally tractable robust counterparts of uncertain problems are explicitly obtained, or good approximations of these counterparts are proposed, making RO a useful tool for real-world applications. We discuss some of these applications, specifically: antenna design, truss topology design, and stability analysis/synthesis in uncertain dynamic systems. We also describe a case study of 90 LPs from the NETLIB collection. The study reveals that the feasibility properties of the usual solutions of real-world LPs can be severely affected by small perturbations of the data, and that the RO methodology can successfully overcome this phenomenon.
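For concreteness, the single most standard example of such a tractable counterpart (a sketch of textbook material, not a quotation from the survey): an uncertain linear constraint with ellipsoidal uncertainty reformulates exactly as one conic quadratic constraint,

    a^T x \le b \quad \forall\, a \in \{\bar a + Pu : \|u\|_2 \le 1\}
    \;\iff\; \bar a^T x + \max_{\|u\|_2 \le 1} (P^T x)^T u \le b
    \;\iff\; \bar a^T x + \|P^T x\|_2 \le b ,

so the robust counterpart of an uncertain LP with ellipsoidal uncertainty is a conic quadratic program.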
Extending Scope of Robust Optimization: Comprehensive Robust Counterparts of Uncertain Problems
In this paper, we propose a new methodology for handling optimization problems with uncertain data. With the usual Robust Optimization paradigm, one looks for decisions ensuring a required performance for all realizations of the data from a given bounded uncertainty set, whereas with the proposed approach we also require a controlled deterioration in performance when the data is outside the uncertainty set. The extension of the Robust Optimization methodology developed in this paper opens up new possibilities for efficiently solving multi-stage finite-horizon uncertain optimization problems, in particular for analyzing and synthesizing linear controllers for discrete-time dynamical systems.
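One way to formalize the "controlled deterioration" requirement (a sketch in the spirit of the globalized robust counterpart from the authors' later work; the global sensitivity $\alpha > 0$ and the distance function are parameters of this illustration, not taken from the abstract):

    f(x, \zeta) \le \alpha \cdot \mathrm{dist}(\zeta, \mathcal{Z}) \qquad \text{for all realizations } \zeta ,

which coincides with the usual robust constraint $f(x,\zeta) \le 0$ for $\zeta$ inside the uncertainty set $\mathcal{Z}$, while allowing performance to degrade at most linearly, at rate $\alpha$, as the data moves away from $\mathcal{Z}$.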
Dual subgradient algorithms for large-scale nonsmooth learning problems
"Classical" First Order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov's optimal algorithm for smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the domain of the problem should admit a "good proximal setup". The latter essentially means that (1) the problem domain should satisfy certain geometric conditions of "favorable geometry", and (2) the practical use of these methods is conditioned on our ability to compute, at moderate cost, the proximal transformation at each iteration. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for solving various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with nuclear norm constraints, where the numerical cost of computing the proximal transformation becomes prohibitive at large scale. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the accuracy certificates supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a "good proximal setup" for the primal problem, but requires the problem domain to admit a Linear Optimization oracle, i.e., the ability to efficiently maximize a linear form on the domain of the primal problem.
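A schematic of the dual route (a sketch assuming the simplest bilinear Fenchel-type representation; the paper's certificate machinery is more general):

    f(x) = \max_{y \in Y} \big[ \langle Ax, y \rangle - \psi(y) \big]
    \;\Longrightarrow\;
    \min_{x \in X} f(x) \;=\; \max_{y \in Y} \Big[ -\psi(y) + \min_{x \in X} \langle Ax, y \rangle \Big] .

The inner minimum on the right is computed by exactly the Linear Optimization oracle the abstract postulates; an FO method run on the dual problem in $y$ produces, via its accuracy certificate, weights for averaging the oracle's answers into an approximate primal solution $\hat x$.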
Nonparametric Estimation by Convex Programming
The problem we concentrate on is as follows: given (1) a convex compact set $X$ in $\mathbb{R}^n$, an affine mapping $x \mapsto A(x)$, a parametric family $\{p_\mu(\cdot)\}$ of probability densities, and (2) $N$ i.i.d. observations of the random variable $\omega$, distributed with the density $p_{A(x)}(\cdot)$ for some (unknown) $x \in X$, estimate the value $g^Tx$ of a given linear form at $x$. For several families $\{p_\mu(\cdot)\}$, with no additional assumptions on $X$ and $A$, we develop computationally efficient estimation routines which are minimax optimal within an absolute constant factor. We then apply these routines to recovering $x$ itself in the Euclidean norm.
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave $C^{1,1}$ functions and solutions of variational inequalities with monotone Lipschitz continuous operators. Application examples include matrix games, eigenvalue minimization, and computing the Lovász capacity number of a graph; these are illustrated by numerical experiments with large-scale matrix games and Lovász capacity problems.
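An illustrative instance of the two-step prox pattern on a matrix game $\min_{x \in \Delta_n} \max_{y \in \Delta_m} x^TAy$ (a minimal sketch with an entropy setup on the simplices; the step size and test problem are mine):

    import numpy as np

    def mirror_prox_matrix_game(A, iters=2000):
        """Extragradient-type prox method with entropy prox on the simplices;
        the monotone operator is F(x, y) = (A y, -A^T x)."""
        n, m = A.shape
        x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
        gamma = 1.0 / np.abs(A).max()        # ~ 1 / Lipschitz constant of F
        u_avg, v_avg = np.zeros(n), np.zeros(m)
        for _ in range(iters):
            # 1) extrapolation: prox step from (x, y) along F(x, y)
            u = x * np.exp(-gamma * (A @ y)); u /= u.sum()
            v = y * np.exp(gamma * (A.T @ x)); v /= v.sum()
            # 2) update: prox step, again from (x, y), along F(u, v)
            x = x * np.exp(-gamma * (A @ v)); x /= x.sum()
            y = y * np.exp(gamma * (A.T @ u)); y /= y.sum()
            u_avg += u; v_avg += v           # guarantee holds for the averages
        u_avg /= iters; v_avg /= iters
        gap = (A.T @ u_avg).max() - (A @ v_avg).min()    # duality gap
        return u_avg, v_avg, gap

    A = np.random.default_rng(1).standard_normal((50, 60))
    print("duality gap:", mirror_prox_matrix_game(A)[2])

The reported quantity is the saddle-point (duality) gap $\max_y \bar x^TAy - \min_x x^TA\bar y$, which is what the $O(1/t)$ rate in the title controls.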
On Polyhedral Approximations of the Second-Order Cone
We demonstrate that a conic quadratic problem (CQP) $\min_x\{e^Tx : Ax \ge b,\ \|A_\ell x - b_\ell\|_2 \le c_\ell^Tx - d_\ell,\ \ell = 1,\dots,m\}$, where $\|y\|_2 = \sqrt{y^Ty}$, is "polynomially reducible" to Linear Programming. We demonstrate this by constructing, for every $\epsilon \in (0, 1/2]$, an LP program (explicitly given in terms of $\epsilon$ and the data of (CQP)) (LP) $\min_{x,u}\{e^Tx : P(x,u) + p \ge 0\}$ with the following properties: (i) the number $\dim x + \dim u$ of variables and the number $\dim p$ of constraints in (LP) do not exceed $O(1)\,[\dim x + \dim b + \sum_{\ell=1}^m \dim b_\ell]\ln(1/\epsilon)$; (ii) every feasible solution $x$ to (CQP) can be extended to a feasible solution $(x,u)$ to (LP); (iii) if $(x,u)$ is feasible for (LP), then $x$ satisfies the "$\epsilon$-relaxed" constraints of (CQP), namely, $Ax \ge b$, $\|A_\ell x - b_\ell\|_2 \le (1+\epsilon)[c_\ell^Tx - d_\ell]$, $\ell = 1,\dots,m$.
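A numerical sketch of the angle-halving mechanism that drives such constructions (my own toy demo for the 2D cone, not the paper's (LP) verbatim): rotating a nonnegative pair and reflecting the second coordinate halves its angle while preserving its Euclidean norm, so k stages capture $\sqrt{y_1^2 + y_2^2}$ within a factor $1/\cos(\pi/2^{k+1}) = 1 + O(4^{-k})$ using only O(k) linear pieces, versus the $2^k$ facets a naive polyhedral model would need.

    import numpy as np

    def tower_norm_estimate(y1, y2, k=8):
        """k rotate-and-reflect stages for the 2-D second-order cone.
        Each stage halves the angle of (a, b) and preserves its norm, so
        the final a lies in [r * cos(pi / 2**(k+1)), r], r = sqrt(y1^2 + y2^2)."""
        a, b = abs(y1), abs(y2)              # angle of (a, b) is in [0, pi/2]
        for j in range(1, k + 1):
            theta = np.pi / 2 ** (j + 1)     # rotate by -theta, then reflect
            a, b = (np.cos(theta) * a + np.sin(theta) * b,
                    abs(-np.sin(theta) * a + np.cos(theta) * b))
        return a

    for k in (2, 4, 8):                      # true norm of (3, 4) is 5
        print(k, tower_norm_estimate(3.0, 4.0, k), 1 / np.cos(np.pi / 2 ** (k + 1)))

In the LP itself, each stage becomes a handful of linear equalities and inequalities in auxiliary variables, which is where the $\ln(1/\epsilon)$ size bound comes from.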