1,842 results for "Numerical methods in mathematical programming, optimization and calculus of variations"
Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
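As a concrete illustration of the kind of tractable reformulation such surveys emphasize (a sketch, not drawn from the paper itself): a linear constraint $a^\top x \le b$ whose coefficient vector ranges over a box $\mathcal{U} = \{\bar a + \delta : |\delta_j| \le \rho_j \text{ for all } j\}$ has an exact deterministic robust counterpart,

$$\forall a \in \mathcal{U}:\; a^\top x \le b \quad\Longleftrightarrow\quad \bar a^\top x + \sum_j \rho_j |x_j| \le b,$$

since the worst case is attained at $\delta_j = \rho_j \,\mathrm{sign}(x_j)$. The absolute values can be linearized, so the robust counterpart remains a linear program.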
SparseNet: Coordinate Descent With Nonconvex Penalties
We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
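The coordinate update at the heart of this approach is easy to state. The sketch below is a hedged illustration rather than the authors' code: it uses the standard univariate MCP (MC+) threshold function and assumes the columns of X are standardized so that $x_j^\top x_j / n = 1$; all function names are invented for the example.

    import numpy as np

    def mcp_threshold(z, lam, gamma):
        """Univariate MCP solution of 0.5*(b - z)^2 + MCP(b; lam, gamma), gamma > 1."""
        if abs(z) <= gamma * lam:
            return np.sign(z) * max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
        return z  # beyond gamma*lam the penalty is flat, so no shrinkage

    def mcp_coordinate_descent(X, y, lam, gamma=3.0, n_iter=200):
        """Cyclic coordinate descent; assumes standardized columns, x_j'x_j/n == 1."""
        n, p = X.shape
        beta = np.zeros(p)
        r = y - X @ beta                         # full residual
        for _ in range(n_iter):
            for j in range(p):
                zj = beta[j] + X[:, j] @ r / n   # partial-residual correlation
                bj = mcp_threshold(zj, lam, gamma)
                r += X[:, j] * (beta[j] - bj)    # incremental residual update
                beta[j] = bj
        return beta

For $\gamma > 1$ the univariate subproblem is convex, so each step is well defined; conditions of this kind on the penalty are exactly what the article characterizes.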
Bounds on Elasticities with Optimization Frictions: A Synthesis of Micro and Macro Evidence on Labor Supply
How can price elasticities be identified when agents face optimization frictions such as adjustment costs or inattention? I derive bounds on structural price elasticities that are a function of the observed effect of a price change on demand, the size of the price change, and the degree of frictions. The degree of frictions is measured by the utility losses agents tolerate to deviate from the frictionless optimum. The bounds imply that frictions affect intensive margin elasticities much more than extensive margin elasticities. I apply these bounds to the literature on labor supply. The utility costs of ignoring the tax changes used to identify intensive margin labor supply elasticities are typically less than 1% of earnings. As a result, small frictions can explain the differences between micro and macro elasticities, extensive and intensive margin elasticities, and other disparate findings. Pooling estimates from existing studies, I estimate a Hicksian labor supply elasticity of 0.33 on the intensive margin and 0.25 on the extensive margin after accounting for frictions.
Bat algorithm for constrained optimization tasks
In this study, we use a new metaheuristic optimization algorithm, called the bat algorithm (BA), to solve constrained optimization tasks. BA is verified using several classical benchmark constrained problems. For further validation, BA is applied to three benchmark constrained engineering problems reported in the specialized literature. The performance of the bat algorithm is compared with that of various existing algorithms. The optimal solutions obtained by BA are found to be better than the best solutions provided by the existing methods. Finally, the unique search features used in BA are analyzed, and their implications for future research are discussed in detail.
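For orientation, the core BA update rules (frequency-tuned velocities, a local random walk gated by the pulse rate, and loudness/pulse-rate adaptation) can be sketched as follows; the static quadratic penalty for the constraints and all parameter values are illustrative choices, not the paper's exact settings.

    import numpy as np

    def bat_optimize(obj, cons, lb, ub, n_bats=30, n_iter=500,
                     fmin=0.0, fmax=2.0, alpha=0.9, gam=0.9, mu=1e6, seed=0):
        """Bat-algorithm sketch for: min obj(x) s.t. cons(x) <= 0 elementwise,
        with constraints handled by a static quadratic penalty of weight mu."""
        rng = np.random.default_rng(seed)
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        dim = lb.size

        def penalized(x):
            return obj(x) + mu * np.sum(np.maximum(cons(x), 0.0) ** 2)

        x = rng.uniform(lb, ub, (n_bats, dim))
        v = np.zeros((n_bats, dim))
        A = np.ones(n_bats)              # loudness (decreases on acceptance)
        r = np.full(n_bats, 0.5)         # pulse emission rate (increases)
        fit = np.array([penalized(xi) for xi in x])
        i0 = int(np.argmin(fit))
        best, best_fit = x[i0].copy(), fit[i0]

        for t in range(1, n_iter + 1):
            for i in range(n_bats):
                f = fmin + (fmax - fmin) * rng.random()    # frequency
                v[i] += (x[i] - best) * f
                cand = np.clip(x[i] + v[i], lb, ub)
                if rng.random() > r[i]:                    # local walk near best
                    cand = np.clip(best + 0.01 * A.mean()
                                   * rng.standard_normal(dim), lb, ub)
                fc = penalized(cand)
                if fc <= fit[i] and rng.random() < A[i]:   # accept, get quieter
                    x[i], fit[i] = cand, fc
                    A[i] *= alpha
                    r[i] = 0.5 * (1.0 - np.exp(-gam * t))
                if fit[i] < best_fit:
                    best, best_fit = x[i].copy(), fit[i]
        return best, best_fit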
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. Second derivatives are assumed to be unavailable or too expensive to calculate. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. The Hessian of the Lagrangian is approximated using a limited-memory quasi-Newton method. SNOPT is a particular implementation that uses a reduced-Hessian semidefinite QP solver (SQOPT) for the QP subproblems. It is designed for problems with many thousands of constraints and variables but is best suited for problems with a moderate number of degrees of freedom (say, up to 2000). Numerical results are given for most of the CUTEr and COPS test collections (about 1020 examples of all sizes up to 40000 constraints and variables, and up to 20000 degrees of freedom).
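SNOPT itself is a Fortran package with its own interfaces, but the shape of a call to an SQP solver can be illustrated with SciPy's SLSQP, a different, dense SQP implementation; this is a usage sketch, not SNOPT's API, and the toy problem is invented for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Toy smooth problem: a quadratic objective with one nonlinear and one
    # linear inequality constraint, the setting an SQP method is built for.
    objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2 / 4.0},
        {"type": "ineq", "fun": lambda x: x[0] + x[1] - 0.5},
    ]

    res = minimize(objective, x0=np.array([0.0, 0.0]), method="SLSQP",
                   constraints=constraints, options={"ftol": 1e-9})
    print(res.x, res.fun)   # constrained minimizer and objective value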
Multivariate quantiles and multiple-output regression quantiles: From L1 optimization to halfspace depth
A new multivariate concept of quantile, based on a directional version of Koenker and Bassett's traditional regression quantiles, is introduced for multivariate location and multiple-output regression problems. In their empirical version, those quantiles can be computed efficiently via linear programming techniques. Consistency, Bahadur representation and asymptotic normality results are established. Most importantly, the contours generated by those quantiles are shown to coincide with the classical halfspace depth contours associated with the name of Tukey. This relation not only allows for efficient depth contour computations by means of parametric linear programming, but also for transferring from the quantile to the depth universe such asymptotic results as Bahadur representations. Finally, linear programming duality opens the way to promising developments in depth-related multivariate rank-based inference.
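In the single-output case, the Koenker and Bassett regression quantile underlying this construction is itself a linear program; a minimal version via scipy.optimize.linprog is sketched below (the directional multivariate algorithm in the paper is a parametric extension of this and is not shown).

    import numpy as np
    from scipy.optimize import linprog

    def regression_quantile(X, y, tau):
        """tau-th regression quantile as the LP
           min tau*1'u+ + (1-tau)*1'u-  s.t.  X b + u+ - u- = y, u+, u- >= 0."""
        n, p = X.shape
        c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
        A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
        bounds = [(None, None)] * p + [(0, None)] * (2 * n)
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
        return res.x[:p]

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.standard_normal(200)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)
    print(regression_quantile(X, y, tau=0.5))   # median fit, approx [1, 2]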
Pathwise Coordinate Optimization
We consider \"one-at-a-time\" coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the $L_{1}\\text{-penalized}$ regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the \"fused lasso,\" however, so we derive a generalized algorithm that yields the solution in much less time that a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
A Survey of the S-Lemma
In this survey we review the many faces of the S-lemma, a result about the correctness of the S-procedure. The basic idea of this widely used method came from control theory but it has important consequences in quadratic and semidefinite optimization, convex geometry, and linear algebra as well. These were all active research areas, but as there was little interaction between researchers in these different areas, their results remained mainly isolated. Here we give a unified analysis of the theory by providing three different proofs for the S-lemma and revealing hidden connections with various areas of mathematics. We prove some new duality results and present applications from control theory, error estimation, and computational geometry.
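For reference, one common statement of the result: let $f, g : \mathbb{R}^n \to \mathbb{R}$ be quadratic functions and suppose $g(\bar x) < 0$ for some $\bar x$ (strict feasibility). Then

$$\bigl(\,g(x) \le 0 \;\Rightarrow\; f(x) \ge 0\,\bigr) \quad\Longleftrightarrow\quad \exists\,\lambda \ge 0:\; f(x) + \lambda g(x) \ge 0 \;\;\text{for all } x \in \mathbb{R}^n.$$

The right-to-left direction is immediate; the left-to-right direction is the substance of the lemma, and it generally fails with two or more quadratic constraints, which is part of what makes the result special.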
Piecewise Linear Regularized Solution Paths
We consider the generic regularized optimization problem $\hat{\beta}(\lambda) = \arg\min_{\beta}\, L(y, X\beta) + \lambda J(\beta)$. Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. 32 (2004) 407-499] have shown that for the LASSO, that is, if $L$ is squared error loss and $J(\beta) = \|\beta\|_1$ is the $\ell_1$ norm of $\beta$, the optimal coefficient path is piecewise linear, that is, $\partial\hat{\beta}(\lambda)/\partial\lambda$ is piecewise constant. We derive a general characterization of the properties of (loss $L$, penalty $J$) pairs which give piecewise linear coefficient paths. Such pairs allow for efficient generation of the full regularized coefficient paths. We investigate the nature of the efficient path-following algorithms which arise. We use our results to suggest robust versions of the LASSO for regression and classification, and to develop new, efficient algorithms for existing problems in the literature, including Mammen and van de Geer's locally adaptive regression splines.
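A quick way to see the piecewise-linear path in the simplest case (this sketch assumes an orthonormal design, $X^\top X = I$, which is not required by the paper): the lasso solution then decouples into coordinate-wise soft-thresholding of the OLS fit, so each coordinate path is linear in $\lambda$ with a single kink.

    import numpy as np

    # With orthonormal X, beta_j(lambda) = sign(b_j) * max(|b_j| - lambda, 0)
    # where b = X'y is the OLS fit: piecewise linear with one kink at |b_j|.
    rng = np.random.default_rng(1)
    Q, _ = np.linalg.qr(rng.standard_normal((50, 5)))   # orthonormal columns
    beta_true = np.array([3.0, -2.0, 0.0, 1.0, 0.0])
    y = Q @ beta_true + 0.1 * rng.standard_normal(50)

    b = Q.T @ y
    lams = np.linspace(0.0, 4.0, 9)
    path = np.sign(b) * np.clip(np.abs(b)[None, :] - lams[:, None], 0.0, None)
    print(np.round(path, 2))   # each column: linear in lambda until it hits zero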
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
We propose an ℓ1-penalized estimation procedure for high-dimensional linear mixed-effects models. The models are useful whenever there is a grouping structure among high-dimensional observations, that is, for clustered data. We prove a consistency and an oracle optimality result, and we develop an algorithm with provable numerical convergence. Furthermore, we demonstrate the performance of the method on simulated data and on a real high-dimensional data set.
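Written out for the Gaussian linear mixed model, an estimator of this type minimizes the ℓ1-penalized negative log-likelihood (a standard formulation; the paper's exact parametrization and algorithm may differ):

$$\bigl(\hat\beta, \hat\theta\bigr) = \arg\min_{\beta,\theta}\; \tfrac{1}{2}\log\det V(\theta) + \tfrac{1}{2}(y - X\beta)^\top V(\theta)^{-1}(y - X\beta) + \lambda \|\beta\|_1, \qquad V(\theta) = Z\,\Psi(\theta)\,Z^\top + \sigma^2 I,$$

where $Z$ is the random-effects design matrix and $\Psi(\theta)$ the random-effects covariance; the grouping structure enters through the block-diagonal form of $Z\Psi Z^\top$.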