Search Results

3,883 results for "Numerical methods in optimization and calculus of variations"
Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
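To make the setting concrete, here is a minimal sketch, assuming the canonical robust linear program with constraint-wise uncertainty sets; the ellipsoidal example is illustrative and not taken from the abstract.

```latex
% Nominal LP vs. its robust counterpart (illustrative; the uncertainty
% sets U_i and their ellipsoidal form are assumptions, not from the paper).
\begin{align*}
\text{nominal:} \quad & \min_{x}\; c^{\top}x
  \quad \text{s.t. } a_i^{\top}x \le b_i, \; i = 1,\dots,m,\\
\text{robust:} \quad & \min_{x}\; c^{\top}x
  \quad \text{s.t. } a_i^{\top}x \le b_i \;\; \forall\, a_i \in \mathcal{U}_i, \; i = 1,\dots,m,\\
\text{ellipsoidal } \mathcal{U}_i = \{\bar a_i + P_i u : \|u\|_2 \le 1\}
  \; & \Longrightarrow \; \bar a_i^{\top}x + \big\|P_i^{\top}x\big\|_2 \le b_i,
\end{align*}
% so the ellipsoidal case reduces to a second-order cone constraint.
```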
SparseNet: Coordinate Descent With Nonconvex Penalties
We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
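As a concrete illustration of the coordinate-descent idea with a nonconvex penalty, here is a minimal sketch assuming standardized predictors and a single value of lambda; the firm (MC+) threshold below is the standard one for this setting, but the function names and loop structure are illustrative rather than the authors' SparseNet implementation.

```python
import numpy as np

def mcp_threshold(z, lam, gamma):
    """Firm (MC+) thresholding for a standardized coordinate (gamma > 1)."""
    if abs(z) <= lam:
        return 0.0
    if abs(z) <= gamma * lam:
        return np.sign(z) * (abs(z) - lam) / (1.0 - 1.0 / gamma)
    return z  # no shrinkage beyond gamma * lam

def coordinate_descent_mcp(X, y, lam, gamma=3.0, n_iter=100, tol=1e-8):
    """Cyclic coordinate descent for least squares with the MC+ penalty.

    Assumes the columns of X are standardized so that X[:, j] @ X[:, j] == n,
    in which case each coordinate update is a simple thresholding of the
    partial-residual correlation.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                         # current residual
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]   # partial residual coefficient
            bj = mcp_threshold(zj, lam, gamma)
            if bj != beta[j]:
                r -= X[:, j] * (bj - beta[j])  # update residual incrementally
                max_change = max(max_change, abs(bj - beta[j]))
                beta[j] = bj
        if max_change < tol:
            break
    return beta
```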
Bounds on Elasticities with Optimization Frictions: A Synthesis of Micro and Macro Evidence on Labor Supply
How can price elasticities be identified when agents face optimization frictions such as adjustment costs or inattention? I derive bounds on structural price elasticities that are a function of the observed effect of a price change on demand, the size of the price change, and the degree of frictions. The degree of frictions is measured by the utility losses agents tolerate to deviate from the frictionless optimum. The bounds imply that frictions affect intensive margin elasticities much more than extensive margin elasticities. I apply these bounds to the literature on labor supply. The utility costs of ignoring the tax changes used to identify intensive margin labor supply elasticities are typically less than 1% of earnings. As a result, small frictions can explain the differences between micro and macro elasticities, extensive and intensive margin elasticities, and other disparate findings. Pooling estimates from existing studies, I estimate a Hicksian labor supply elasticity of 0.33 on the intensive margin and 0.25 on the extensive margin after accounting for frictions.
Bat algorithm for constrained optimization tasks
In this study, we use a new metaheuristic optimization algorithm, called the bat algorithm (BA), to solve constrained optimization tasks. BA is first verified on several classical constrained benchmark problems. For further validation, BA is applied to three constrained benchmark engineering problems reported in the specialized literature. The performance of the bat algorithm is compared with various existing algorithms. The optimal solutions obtained by BA are found to be better than the best solutions provided by the existing methods. Finally, the unique search features used in BA are analyzed, and their implications for future research are discussed in detail.
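For reference, here is a minimal sketch of the standard BA update rules (frequency, velocity, position, loudness and pulse rate), with a simple additive penalty standing in for constraint handling; the parameter values and the penalty scheme are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def bat_algorithm(objective, penalty, dim, bounds, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=0):
    """Minimal bat algorithm sketch for a constrained minimization task.

    objective(x) is the cost to minimize; penalty(x) returns a nonnegative
    constraint-violation term that is simply added to the cost (a static
    penalty scheme, assumed here for illustration).
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    fit_fn = lambda x: objective(x) + penalty(x)

    x = rng.uniform(lo, hi, size=(n_bats, dim))        # positions
    v = np.zeros((n_bats, dim))                        # velocities
    A = np.ones(n_bats)                                # loudness
    r0 = rng.uniform(0.0, 1.0, n_bats)                 # initial pulse rates
    r = r0.copy()
    fit = np.array([fit_fn(xi) for xi in x])
    best_i = np.argmin(fit)
    best, best_fit = x[best_i].copy(), fit[best_i]

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq               # pull toward current best
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > r[i]:                    # local random walk around best
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim), lo, hi)
            f_cand = fit_fn(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                          # loudness decays
                r[i] = r0[i] * (1 - np.exp(-gamma * t))  # pulse rate grows
            if fit[i] < best_fit:
                best, best_fit = x[i].copy(), fit[i]
    return best, best_fit
```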
Multivariate quantiles and multiple-output regression quantiles: From L1 optimization to halfspace depth
A new multivariate concept of quantile, based on a directional version of Koenker and Bassett's traditional regression quantiles, is introduced for multivariate location and multiple-output regression problems. In their empirical version, those quantiles can be computed efficiently via linear programming techniques. Consistency, Bahadur representation and asymptotic normality results are established. Most importantly, the contours generated by those quantiles are shown to coincide with the classical halfspace depth contours associated with the name of Tukey. This relation not only allows for efficient depth contour computation by means of parametric linear programming, but also for transferring asymptotic results, such as Bahadur representations, from the quantile to the depth universe. Finally, linear programming duality opens the way to promising developments in depth-related multivariate rank-based inference.
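As a pointer to the computational side, here is a minimal sketch of the classical single-output Koenker-Bassett regression quantile written as a linear program, the building block that the directional multivariate quantiles extend; the variable-splitting formulation and the use of scipy's linprog are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def regression_quantile(X, y, tau):
    """Koenker-Bassett regression quantile of order tau via linear programming.

    Decision variables: [beta_plus, beta_minus, u, v], all nonnegative, with
    X (beta_plus - beta_minus) + u - v = y and objective
    tau * sum(u) + (1 - tau) * sum(v).
    """
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]

# Illustrative use: median regression (tau = 0.5) on toy data.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=200)
print(regression_quantile(X, y, tau=0.5))
```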
Pathwise Coordinate Optimization
We consider "one-at-a-time" coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the $L_1$-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the "fused lasso," however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
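A minimal sketch of the pathwise idea for the lasso itself, assuming standardized columns: cyclic coordinate descent with soft-thresholding, warm-started along a decreasing lambda grid. The grid construction and tolerances are illustrative, not the paper's exact implementation.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: sign(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_path(X, y, n_lambdas=50, eps=1e-3, n_iter=200, tol=1e-7):
    """Pathwise cyclic coordinate descent for the lasso, warm-starting each
    lambda at the previous solution. Assumes standardized columns
    (X[:, j] @ X[:, j] == n for every j)."""
    n, p = X.shape
    lam_max = np.max(np.abs(X.T @ y)) / n            # smallest lambda giving beta = 0
    lambdas = lam_max * np.logspace(0, np.log10(eps), n_lambdas)
    beta = np.zeros(p)
    r = y.copy()                                     # residual for beta = 0
    path = []
    for lam in lambdas:                              # decreasing lambda, warm starts
        for _ in range(n_iter):
            max_change = 0.0
            for j in range(p):
                zj = X[:, j] @ r / n + beta[j]       # partial residual coefficient
                bj = soft_threshold(zj, lam)
                if bj != beta[j]:
                    r -= X[:, j] * (bj - beta[j])    # incremental residual update
                    max_change = max(max_change, abs(bj - beta[j]))
                    beta[j] = bj
            if max_change < tol:
                break
        path.append((lam, beta.copy()))
    return path
```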
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
We propose an ℓ1-penalized estimation procedure for high-dimensional linear mixed-effects models. The models are useful whenever there is a grouping structure among high-dimensional observations, that is, for clustered data. We prove a consistency and an oracle optimality result, and we develop an algorithm with provable numerical convergence. Furthermore, we demonstrate the performance of the method on simulated data and on a real high-dimensional data set.
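For orientation, here is a sketch of the kind of ℓ1-penalized criterion such a procedure minimizes for grouped (clustered) data; the notation and parametrization below are illustrative and may differ from the paper's.

```latex
% Sketch for groups i = 1,...,N with y_i = X_i beta + Z_i b_i + eps_i,
% b_i ~ N(0, Psi), eps_i ~ N(0, sigma^2 I) (illustrative parametrization):
\begin{equation*}
\hat{\beta} \in \arg\min_{\beta}\;
\frac{1}{2}\sum_{i=1}^{N}\Big[\log\det V_i
 + (y_i - X_i\beta)^{\top} V_i^{-1}(y_i - X_i\beta)\Big]
 + \lambda \|\beta\|_1,
\qquad V_i = Z_i \Psi Z_i^{\top} + \sigma^2 I_{n_i}.
\end{equation*}
```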
Analysis of Half-Quadratic Minimization Methods for Signal and Image Recovery
We address the minimization of regularized convex cost functions which are customarily used for edge-preserving restoration and reconstruction of signals and images. In order to accelerate computation, the multiplicative and the additive half-quadratic reformulations of the original cost function were pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367--383] and Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932--946]. The alternate minimization of the resultant (augmented) cost functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For the multiplicative and additive half-quadratic regularizations, we determine upper bounds on their root-convergence factors. The bound for the multiplicative form is seen to be always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence with the multiplicative form is always less than with the additive form. However, the computational cost of each iteration is much higher for the multiplicative form than for the additive form. The global assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form; when the additive form is applicable, it is hence recommended. Extensive experiments demonstrate that in our MATLAB implementation, both methods are substantially faster (in terms of computational time) than the standard MATLAB Optimization Toolbox routines used in our comparison study.
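A sketch of the two half-quadratic decompositions (up to normalization constants, which vary between references) and why the alternating minimization has explicit steps; this is an outline of the general construction, not the paper's exact notation.

```latex
% Original cost with an edge-preserving potential phi:
%   J(x) = || A x - y ||^2 + beta * sum_i phi(d_i^T x).
% Auxiliary-variable reformulations (up to normalization constants):
\begin{align*}
\text{multiplicative (Geman--Reynolds):} \quad
  \phi(t) &= \min_{b \ge 0}\,\big( b\,t^{2} + \psi(b) \big),\\
\text{additive (Geman--Yang):} \quad
  \phi(t) &= \min_{s}\,\big( (t - s)^{2} + \psi(s) \big).
\end{align*}
% The augmented cost is quadratic in x for fixed auxiliary variables and
% separable in the auxiliary variables for fixed x, so each alternating
% minimization step has a simple explicit form.
```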
A Tutorial on MM Algorithms
Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. This article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation.
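A minimal sketch of the ascent property behind MM, written in the minorize-maximize form that matches the EM special case.

```latex
% Choose a surrogate g that minorizes the objective f:
%   g(theta | theta^(m)) <= f(theta) for all theta, with equality at theta = theta^(m).
% Maximizing the surrogate then drives f uphill:
\begin{equation*}
\theta^{(m+1)} = \arg\max_{\theta}\, g(\theta \mid \theta^{(m)})
\;\Longrightarrow\;
f(\theta^{(m+1)}) \;\ge\; g(\theta^{(m+1)} \mid \theta^{(m)})
\;\ge\; g(\theta^{(m)} \mid \theta^{(m)}) \;=\; f(\theta^{(m)}).
\end{equation*}
```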
Analysis of service satisfaction in web auction logistics service using a combination of Fruit fly optimization algorithm and general regression neural network
When constructing classification and prediction models, most previous studies have used genetic algorithms, particle swarm optimization, or ant colony optimization to tune the parameters of artificial neural network models. In this paper, a new approach based on the fruit fly optimization algorithm (FOA) is adopted to optimize an artificial neural network model. First, we carried out principal component regression on data from a questionnaire survey of the logistics quality and service satisfaction of online auction sellers to construct our logistics quality and service satisfaction detection model. Relevant principal components from the principal component regression analysis were selected as independent variables, and the overall satisfaction with auction sellers' logistics service reported in the questionnaire was used as the dependent variable for the sample data of this study. Finally, an FOA-optimized general regression neural network (FOAGRNN), a PSO-optimized general regression neural network (PSOGRNN), and other data mining techniques based on the ordinary general regression neural network were used to construct the detection model. In the study, 4-6 principal components from the principal component regression analysis were selected as independent variables of the model. The analysis shows that, of the four data mining techniques compared, the FOA-optimized GRNN model has the best detection capacity.
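For context, here is a minimal sketch of the basic fruit fly optimization loop used to tune a single positive model parameter (for example, a GRNN spread); the one-dimensional search, the fitness function, and all parameter values are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

def foa_tune(fitness, n_flies=20, n_iter=100, seed=0):
    """Basic fruit fly optimization sketch for tuning one positive parameter.

    fitness(s) should return a validation error to minimize for a candidate
    parameter value s; the 1/distance "smell concentration" mapping keeps
    candidates positive.
    """
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(0, 1, size=2)     # swarm location
    best_s, best_fit = None, np.inf
    for _ in range(n_iter):
        # Random flight directions around the current swarm location.
        xs = x_axis + rng.uniform(-1, 1, n_flies)
        ys = y_axis + rng.uniform(-1, 1, n_flies)
        dist = np.sqrt(xs ** 2 + ys ** 2)
        s = 1.0 / dist                             # smell concentration judgment value
        smell = np.array([fitness(si) for si in s])
        i = np.argmin(smell)
        if smell[i] < best_fit:                    # keep the best candidate and move the swarm there
            best_fit, best_s = smell[i], s[i]
            x_axis, y_axis = xs[i], ys[i]
    return best_s, best_fit

# Illustrative use with a toy quadratic "validation error":
print(foa_tune(lambda s: (s - 2.5) ** 2))
```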