9,398 result(s) for "Convex optimization"
Sparse generalized eigenvalue problem
The sparse generalized eigenvalue problem (GEP) plays a pivotal role in a large family of high dimensional statistical models, including sparse Fisher’s discriminant analysis, canonical correlation analysis and sufficient dimension reduction. The sparse GEP involves solving a non-convex optimization problem. Most existing methods and theory, developed in the context of specific statistical models that are special cases of the sparse GEP, require restrictive structural assumptions on the input matrices. We propose a two-stage computational framework to solve the sparse GEP. At the first stage, we solve a convex relaxation of the sparse GEP. Taking the solution as an initial value, we then exploit a non-convex optimization perspective and propose the truncated Rayleigh flow method (which we call ‘rifle’) to estimate the leading generalized eigenvector. We show that rifle converges linearly to a solution with the optimal statistical rate of convergence. Theoretically, our method significantly improves on the existing literature by eliminating structural assumptions on the input matrices. To achieve this, our analysis involves two key ingredients: a new analysis of the gradient-based method on non-convex objective functions, and a fine-grained characterization of the evolution of sparsity patterns along the solution path. Thorough numerical studies are provided to validate the theoretical results.
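As a rough illustration of the second stage, the sketch below implements a truncated-power-style update for the leading sparse generalized eigenvector of a pencil (A, B); the step size, the truncation level k, and the plain l2 renormalization are illustrative assumptions, not the authors' exact rifle specification.

```python
import numpy as np

def truncate(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def rifle_style(A, B, x0, k, eta=0.01, n_iter=200):
    """Truncated Rayleigh-flow-style iteration for the leading sparse
    generalized eigenvector of the pencil (A, B). Illustrative sketch only."""
    x = truncate(x0, k)
    x = x / np.linalg.norm(x)
    for _ in range(n_iter):
        rho = (x @ A @ x) / (x @ B @ x)          # current generalized Rayleigh quotient
        x = x + (eta / rho) * (A - rho * B) @ x  # gradient-style ascent step
        x = truncate(x, k)                        # enforce sparsity by hard truncation
        x = x / np.linalg.norm(x)                 # renormalize
    return x
```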
Provable sparse tensor decomposition
We propose a novel sparse tensor decomposition method, namely the tensor truncated power method, that incorporates variable selection in the estimation of decomposition components. The sparsity is achieved via an efficient truncation step embedded in the tensor power iteration. Our method applies to a broad family of high dimensional latent variable models, including high dimensional Gaussian mixtures and mixtures of sparse regressions. A thorough theoretical investigation is further conducted. In particular, we show that the final decomposition estimator is guaranteed to achieve a local statistical rate, and we further strengthen it to the global statistical rate by introducing a proper initialization procedure. In high dimensional regimes, the statistical rate obtained significantly improves on those achieved by existing non-sparse decomposition methods. The empirical advantages of the tensor truncated power method are confirmed in extensive simulation results and two real applications to click-through rate prediction and high dimensional gene clustering.
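A minimal sketch of the core idea, a truncation step embedded in a tensor power iteration, is given below for a symmetric third-order tensor; the truncation rule, the iteration count, and the rank-one setting are illustrative assumptions rather than the authors' full decomposition procedure.

```python
import numpy as np

def truncate(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def tensor_truncated_power(T, k, u0, n_iter=100):
    """Sparse rank-one power iteration for a symmetric 3-way tensor T:
    contract T along two modes with the current iterate, truncate to the
    k largest entries, and renormalize. Illustrative sketch only."""
    u = u0 / np.linalg.norm(u0)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, u, u)  # mode contraction T(I, u, u)
        u = truncate(v, k)                     # truncation step inside the power iteration
        u = u / np.linalg.norm(u)
    return u
```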
Distributed quantized mirror descent for strongly convex optimization over time-varying directed graph
This paper investigates a distributed strongly convex constrained optimization problem in a non-Euclidean setting, where the bit rate of the communication between nodes is assumed to be limited and the communication topology is represented by a time-varying directed graph. To account for the limited communication capacity, a quantization technique is applied when exchanging information over the network. A distributed quantized mirror descent (DQMD) algorithm, which uses the Bregman divergence and time-varying quantizers, is then developed for strongly convex optimization over a convex constraint set. The convergence of the developed algorithm is also analyzed. It is shown that the nodes’ state errors are bounded by terms related to the quantization resolutions, and that sublinear upper bounds can be guaranteed by choosing appropriate quantization resolutions. Finally, a distributed ridge regression problem is provided as an example to verify the validity of the proposed method.
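The flavour of such an update can be sketched as follows, assuming an entropic mirror map on the probability simplex, a uniform quantizer, and a row-stochastic mixing matrix W for the directed graph at the current round; this is a simplified single-round illustration, not the paper's DQMD algorithm.

```python
import numpy as np

def quantize(x, resolution=1e-2):
    """Uniform quantizer standing in for a finite-bit-rate channel."""
    return np.round(x / resolution) * resolution

def dqmd_style_round(X, W, grads, eta, resolution=1e-2):
    """One simplified round of quantized mirror descent on the probability
    simplex. X: n_nodes x dim iterates, W: row-stochastic mixing matrix,
    grads: n_nodes x dim local gradients. Illustrative sketch only."""
    Xq = quantize(X, resolution)                  # each node broadcasts a quantized state
    mixed = np.clip(W @ Xq, 1e-12, None)          # consensus step with received messages
    Y = mixed * np.exp(-eta * grads)              # entropic (exponentiated-gradient) step
    return Y / Y.sum(axis=1, keepdims=True)       # Bregman projection back to the simplex
```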
Free Final-Time Fuel-Optimal Powered Landing Guidance Algorithm Combining Lossless Convex Optimization with Deep Neural Network Predictor
A real-time guidance algorithm is the key technology for powered landing. Because convex optimization algorithms with a free final time lack real-time performance, a lossless convex optimization (LCvx) algorithm based on a deep neural network (DNN) predictor is proposed. First, the DNN predictor is built to predict the optimal final time. Then, the LCvx algorithm is used to solve the fuel-optimal powered landing problem with the given final time. The optimality and real-time performance of the proposed algorithm are verified by numerical examples. Finally, a closed-loop simulation framework is constructed, and the landing accuracy under various disturbances is verified. Compared with the traditional free-final-time algorithm, the proposed method does not need complex iterative operations, so the computational efficiency can be improved by an order of magnitude.
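A toy end-to-end sketch of this two-step structure is shown below: a stand-in predictor supplies the final time, after which the fixed-final-time problem becomes convex and is solved with cvxpy. The heuristic predictor, the one-dimensional double-integrator dynamics, and all numerical values are illustrative assumptions, not the paper's rocket model or trained network.

```python
import numpy as np
import cvxpy as cp

def predict_final_time(h0, v0):
    """Stand-in for the DNN predictor: the paper trains a network to map the
    state to the optimal final time; here a crude heuristic is used purely so
    the sketch runs end to end."""
    return h0 / max(-v0, 1.0) + 10.0

def solve_fixed_time_descent(h0, v0, t_f, N=60, g=9.81, u_max=25.0):
    """Toy fuel-optimal 1-D powered descent with the final time fixed:
    once t_f is known, the discretized problem below is convex."""
    dt = t_f / N
    h = cp.Variable(N + 1)          # altitude
    v = cp.Variable(N + 1)          # vertical velocity
    u = cp.Variable(N)              # thrust acceleration (control)
    cons = [h[0] == h0, v[0] == v0, h[N] == 0, v[N] == 0,
            h >= 0, u >= 0, u <= u_max]
    for k in range(N):
        cons += [h[k + 1] == h[k] + dt * v[k],
                 v[k + 1] == v[k] + dt * (u[k] - g)]
    prob = cp.Problem(cp.Minimize(dt * cp.sum(u)), cons)  # fuel proxy: integral of thrust
    prob.solve()
    return u.value, prob.value

t_f = predict_final_time(1000.0, -30.0)
u_opt, fuel = solve_fixed_time_descent(1000.0, -30.0, t_f)
```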
Infinite Product and Its Convergence in CAT(1) Spaces
In this paper, we study the convergence of infinite products of strongly quasi-nonexpansive mappings on geodesic spaces with curvature bounded above by one. The main applications motivating this study are solving convex feasibility problems by alternating projections, and finding minimizers of convex functions and common minimizers of several objective functions. To prove our main results, we introduce a new concept of orbital Δ-demiclosed mappings, which covers finite products of strongly quasi-nonexpansive, Δ-demiclosed mappings and is hence applicable to the convergence of infinite products.
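In the familiar Euclidean special case, the alternating-projections application reads as below; the two sets (a ball and a halfspace) and the iteration count are illustrative assumptions, and the geodesic-space setting of the paper is far more general.

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto a closed ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Metric projection onto the halfspace {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def alternating_projections(x0, n_iter=100):
    """Find a point in the intersection of a ball and a halfspace by
    repeatedly composing the two projections (a finite product of
    quasi-nonexpansive maps)."""
    x = x0
    for _ in range(n_iter):
        x = project_halfspace(project_ball(x, np.zeros(2), 1.0),
                              np.array([1.0, 1.0]), 0.5)
    return x

print(alternating_projections(np.array([3.0, 3.0])))
```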
On arbitrary compression for decentralized consensus and stochastic optimization over directed networks
We study decentralized consensus and stochastic optimization problems with compressed communication over static directed graphs. We propose an iterative gradient-based algorithm that compresses messages according to a desired compression ratio. The proposed method provably reduces the communication overhead on the network at every communication round. In contrast to the existing literature, we allow for arbitrary compression ratios in the communicated messages. We show a linear convergence rate for the proposed method on the consensus problem. Moreover, we provide explicit convergence rates for decentralized stochastic optimization problems on smooth functions that are either (i) strongly convex, (ii) convex, or (iii) non-convex. Finally, we provide numerical experiments to illustrate convergence under arbitrary compression ratios and the communication efficiency of our algorithm.
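The compression idea can be sketched in a toy gossip form, shown below, where nodes exchange top-k-compressed differences against shared reference copies so that only k coordinates per message are sent each round; the top-k compressor, the doubly stochastic mixing matrix W, and the consensus step size are illustrative assumptions, and this is not the directed-graph algorithm analysed in the paper.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries (an arbitrary-ratio compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_consensus(X, W, k, gamma=0.1, n_rounds=500):
    """Toy compressed-gossip consensus. X: n_nodes x dim initial states,
    W: doubly stochastic mixing matrix. Each round, nodes communicate only
    compressed differences between their state and a shared reference copy."""
    X_hat = np.zeros_like(X)                               # publicly known reference copies
    for _ in range(n_rounds):
        Q = np.apply_along_axis(top_k, 1, X - X_hat, k)    # compressed messages
        X_hat = X_hat + Q                                  # all nodes update the references
        X = X + gamma * (W - np.eye(W.shape[0])) @ X_hat   # mixing using the references
    return X
```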
Maximum likelihood estimation of a multi-dimensional log-concave density
Let X₁, …, Xₙ be independent and identically distributed random vectors with a (Lebesgue) density f. We first prove that, with probability 1, there is a unique log-concave maximum likelihood estimator f̂ₙ of f. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof is non-constructive, we can reformulate the issue of computing f̂ₙ in terms of a non-differentiable convex optimization problem, and thus combine techniques of computational geometry with Shor's r-algorithm to produce a sequence that converges to f̂ₙ. An R version of the algorithm is available in the package LogConcDEAD (log-concave density estimation in arbitrary dimensions). We demonstrate that the estimator has attractive theoretical properties both when the true density is log-concave and when this model is misspecified. For the moderate or large sample sizes in our simulations, f̂ₙ is shown to have smaller mean integrated squared error than kernel-based methods, even when we allow the use of a theoretical, optimal fixed bandwidth for the kernel estimator that would not be available in practice. We also present a real data clustering example, which shows that our methodology can be used in conjunction with the expectation-maximization algorithm to fit finite mixtures of log-concave densities.
Adaptive Stochastic Gradient Descent Method for Convex and Non-Convex Optimization
Stochastic gradient descent is the method of choice for solving large-scale optimization problems in machine learning. However, effectively selecting the step sizes in stochastic gradient descent methods is challenging and can greatly influence the performance of the resulting algorithms. In this paper, we propose a class of faster adaptive gradient descent methods, named AdaSGD, for solving both convex and non-convex optimization problems. The novelty of this method is a new adaptive step size that depends on the expectation of the past stochastic gradient and its second moment, which makes it efficient and scalable for big data and high-dimensional parameters. We show theoretically that the proposed AdaSGD algorithm has a convergence rate of O(1/T) in both convex and non-convex settings, where T is the maximum number of iterations. In addition, we extend the proposed AdaSGD to the momentum case and obtain the same convergence rate for AdaSGD with momentum. To illustrate our theoretical results, several numerical experiments on problems arising in machine learning are conducted to verify the promise of the proposed method.
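A generic sketch of an SGD method whose step size is shaped by running estimates of the gradient mean and second moment is given below; the Adam-style exponential moving averages are an illustrative assumption, and the snippet is not the paper's exact AdaSGD update.

```python
import numpy as np

def adaptive_sgd(grad_fn, x0, eta=0.1, beta1=0.9, beta2=0.999,
                 eps=1e-8, n_iter=1000):
    """SGD with a step size scaled by running estimates of the mean and
    second moment of the stochastic gradients. Illustrative sketch only."""
    x = x0.copy()
    m = np.zeros_like(x)   # running mean of stochastic gradients
    v = np.zeros_like(x)   # running second moment
    for _ in range(n_iter):
        g = grad_fn(x)                        # one stochastic gradient sample
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        x = x - eta * m / (np.sqrt(v) + eps)  # moment-scaled step
    return x

# Example: noisy gradients of f(x) = 0.5 * ||x||^2
rng = np.random.default_rng(0)
sol = adaptive_sgd(lambda x: x + 0.1 * rng.standard_normal(x.shape),
                   np.ones(10))
```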
A Single-Phase, Proximal Path-Following Framework
We propose a new proximal path-following framework for a class of constrained convex problems. We consider settings where the nonlinear—and possibly nonsmooth—objective part is endowed with a proximity operator, and the constraint set is equipped with a self-concordant barrier. Our approach relies on the following two main ideas. First, we reparameterize the optimality condition as an auxiliary problem, such that a good initial point is available; by doing so, a family of alternative paths toward the optimum is generated. Second, we combine the proximal operator with path-following ideas to design a single-phase, proximal path-following algorithm. We prove that our algorithm has the same worst-case iteration complexity bounds as in standard path-following methods from the literature but does not require an initial phase. Our framework also allows inexactness in the evaluation of proximal Newton directions, without sacrificing the worst-case iteration complexity. We demonstrate the merits of our algorithm via three numerical examples, where proximal operators play a key role.
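A toy sketch of the interplay between a proximity operator and barrier path-following is given below: for a decreasing barrier parameter, proximal gradient steps are run on a smooth-plus-barrier objective while an l1 term is handled by its prox. The use of first-order proximal gradient steps (rather than proximal Newton), the box constraint, and all numerical values are illustrative assumptions, not the framework of the paper.

```python
import numpy as np

def soft_threshold(y, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def prox_path_following(A, b, c, lam, mu0=1.0, mu_factor=0.5,
                        n_path=15, inner=200, alpha=1e-3):
    """Toy illustration of combining a prox with barrier path-following:
    for decreasing mu, run proximal gradient steps on
    0.5||Ax-b||^2 - mu*sum(log(c-x)), with lam*||x||_1 handled by its prox."""
    x = np.zeros(A.shape[1])          # strictly feasible start (assumes c > 0)
    mu = mu0
    for _ in range(n_path):           # walk down the central path
        for _ in range(inner):
            grad = A.T @ (A @ x - b) + mu / (c - x)   # smooth + barrier gradient
            step = alpha
            y = x - step * grad
            while np.any(y >= c):     # crude backtracking to stay in the barrier domain
                step *= 0.5
                y = x - step * grad
            x = soft_threshold(y, step * lam)
        mu *= mu_factor               # tighten the barrier along the path
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x_hat = prox_path_following(A, b, c=np.ones(10), lam=0.5)
```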