Search Results

1,603 results for "Sparse optimization"
Global Convergence of ADMM in Nonconvex Nonsmooth Optimization
In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ϕ(x_0, …, x_p, y), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables x_0, …, x_p, y, followed by updating the dual variable. We separate the variable y from the x_i's as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions, such as piecewise linear functions, the ℓ_q quasi-norm, the Schatten-q quasi-norm (0 < q < 1), the minimax concave penalty (MCP), and the smoothly clipped absolute deviation (SCAD) penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassmann manifolds) and linear complementarity constraints. Moreover, the x_0-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifolds, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant of the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges while ALM diverges with a bounded penalty parameter β. As indicated by this example and the other analysis in this paper, ADMM may be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement but also more likely to converge in the scenarios considered.
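As a rough illustration of the ADMM template the abstract analyzes, here is a minimal sketch on a simple ℓ1-regularized least-squares split (x = y); the paper's setting would swap the soft-thresholding step for the proximal map of a nonconvex penalty such as ℓ_q, MCP, or SCAD, and all parameter values here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_sketch(A, b, lam=0.1, beta=1.0, iters=200):
    """ADMM for min 0.5*||A x - b||^2 + lam*||y||_1 subject to x = y."""
    n = A.shape[1]
    x, y, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # x-update: quadratic subproblem, solved in closed form
        x = np.linalg.solve(AtA + beta * np.eye(n), Atb + beta * (y - u))
        # y-update: replace with a nonconvex prox (l_q, MCP, SCAD) to match the paper
        y = soft_threshold(x + u, lam / beta)
        # dual ascent step with penalty parameter beta
        u = u + x - y
    return y
```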
Learning partial differential equations via data discovery and sparse optimization
We investigate the problem of learning an evolution equation directly from given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equation and to approximate their coefficients using only the data. The algorithm uses sparse optimization to perform feature selection and parameter estimation. The features are data-driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
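To make the pipeline concrete, here is a hedged sketch of the kind of sparse regression such a method rests on: a library Theta of candidate terms (built elsewhere from the data and its spatial derivatives) is regressed against the time derivative u_t with a sparsity-promoting solver. The thresholded least-squares routine below is a common stand-in, not the authors' exact algorithm, and the threshold lam is illustrative.

```python
import numpy as np

def thresholded_lstsq(Theta, ut, lam=0.05, iters=10):
    """Sparse regression for u_t = Theta @ xi: alternate a least-squares
    fit with pruning of coefficients whose magnitude falls below lam."""
    xi = np.linalg.lstsq(Theta, ut, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam
        xi[small] = 0.0                      # prune negligible terms
        keep = ~small
        if keep.any():                       # refit on the surviving terms
            xi[keep] = np.linalg.lstsq(Theta[:, keep], ut, rcond=None)[0]
    return xi
```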
Extracting Sparse High-Dimensional Dynamics from Limited Data
Extracting governing equations from dynamic data is an essential task in model selection and parameter estimation. The form of the governing equation is rarely known a priori; however, based on the sparsity-of-effect principle, one may assume that the number of candidate functions needed to represent the dynamics is very small. In this work, we leverage the sparse structure of the governing equations along with recent results from random sampling theory to develop methods for selecting dynamical systems from undersampled data. In particular, we detail three sampling strategies that lead to the exact recovery of first-order dynamical systems when we are given fewer samples than unknowns. The first method makes no assumptions on the behavior of the data and requires a certain number of random initial samples. The second method utilizes the structure of the governing equation to limit the number of random initializations needed. The third method leverages chaotic behavior in the data to construct a nearly deterministic sampling strategy. Using results from compressive sensing, we show that these strategies lead to exact recovery that is stable with respect to the sparse structure of the governing equations and robust to noise in the estimation of the velocity. Computational results validate each of the sampling strategies and highlight potential applications.
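Compressive-sensing recovery of this kind typically reduces to an ℓ1-regularized least-squares problem; the ISTA loop below is one standard solver for that step, shown purely as an illustration (the matrix Phi of candidate-function samples, the velocity vector v, and the parameters are placeholders, not the paper's construction).

```python
import numpy as np

def ista(Phi, v, lam=1e-3, iters=500):
    """ISTA for min_c 0.5*||Phi c - v||^2 + lam*||c||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1/L, L = Lipschitz constant
    c = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = c - step * (Phi.T @ (Phi @ c - v))    # gradient step on the LS term
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return c
```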
DC formulations and algorithms for sparse optimization problems
We propose a DC (difference of two convex functions) formulation approach for sparse optimization problems with a cardinality or rank constraint. Using the largest-k norm, we provide an exact DC representation of the cardinality constraint. We then transform the cardinality-constrained problem into a penalty function form and derive exact penalty parameter values for some optimization problems, especially for the quadratic minimization problems that often appear in practice. A DC algorithm (DCA) is presented, in which the dual step at each iteration can be carried out efficiently thanks to the accessible subgradient of the largest-k norm. Furthermore, each DCA subproblem can be solved in linear time via a soft thresholding operation if there are no additional constraints. The framework is extended to the rank-constrained problem as well as to the cardinality- and rank-minimization problems. Numerical experiments demonstrate the efficiency of the proposed DCA in comparison with existing methods that use other penalty terms.
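The DCA iteration described here is easy to sketch on a toy instance. The identity ||x||_1 - |||x|||_k = 0 iff x has at most k nonzeros (with |||x|||_k the largest-k norm, the sum of the k largest magnitudes) gives the exact DC penalty; linearizing the concave part at the current iterate leaves a convex subproblem that, for the simple quadratic objective chosen below, is solved in closed form by soft thresholding. The denoising objective, rho, and the iteration count are illustrative assumptions.

```python
import numpy as np

def dca_sparse(z, k, rho=0.5, iters=50):
    """DCA sketch for min_x 0.5*||x - z||^2 + rho*(||x||_1 - |||x|||_k),
    a penalty form of the cardinality constraint ||x||_0 <= k."""
    x = z.copy()
    for _ in range(iters):
        s = np.zeros_like(x)                  # subgradient of |||.|||_k:
        top = np.argsort(-np.abs(x))[:k]      # sign(x_i) on the k entries
        s[top] = np.sign(x[top])              # largest in magnitude
        # convex subproblem min_x 0.5*||x - z||^2 + rho*||x||_1 - rho*<s, x>,
        # solved in closed form by soft thresholding
        v = z + rho * s
        x = np.sign(v) * np.maximum(np.abs(v) - rho, 0.0)
    return x
```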
A proximal method for composite minimization
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
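A minimal sketch of the prox-linear template: each step minimizes h(c(x) + ∇c(x)d) + (mu/2)*||d||^2 over the step d. Taking h = 0.5*||.||^2, a special case in which the subproblem is a regularized least-squares solve, keeps the sketch self-contained; the framework in the abstract allows general convex or prox-regular h, which would require an inner solver instead of the closed-form step below.

```python
import numpy as np

def prox_linear(c, J, x0, mu=1.0, iters=50):
    """Prox-linear iteration for min_x h(c(x)) with h = 0.5*||.||^2.
    c(x) returns the residual vector, J(x) its Jacobian."""
    x = x0.copy()
    for _ in range(iters):
        r, Jx = c(x), J(x)
        # subproblem: min_d 0.5*||r + Jx d||^2 + (mu/2)*||d||^2
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(Jx.shape[1]), -Jx.T @ r)
        x = x + d
    return x
```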
AN AUGMENTED LAGRANGIAN METHOD FOR NON-LIPSCHITZ NONCONVEX PROGRAMMING
We consider a class of constrained optimization problems where the objective function is a sum of a smooth function and a nonconvex non-Lipschitz function. Many problems in sparse portfolio selection, edge preserving image restoration, and signal processing can be modelled in this form. First, we propose the concept of the Karush–Kuhn–Tucker (KKT) stationary condition for the non-Lipschitz problem and show that it is necessary for optimality under a constraint qualification called the relaxed constant positive linear dependence (RCPLD) condition, which is weaker than the Mangasarian–Fromovitz constraint qualification and holds automatically if all the constraint functions are affine. Then we propose an augmented Lagrangian (AL) method in which the augmented Lagrangian subproblems are solved by a nonmonotone proximal gradient method. Under the assumption that a feasible point is known, we show that any accumulation point of the sequence generated by our method must be a feasible point. Moreover, if RCPLD holds at such an accumulation point, then it is a KKT point of the original problem. Finally, we conduct numerical experiments to compare the performance of our AL method and the interior point (IP) method for solving two sparse portfolio selection models. The numerical results demonstrate that our method is not only comparable to the IP method in terms of solution quality, but also substantially faster than the IP method.
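The outer structure of such a method is easy to sketch. Below, inner_solve stands in for the nonmonotone proximal gradient step on the augmented Lagrangian subproblem (which the paper specifies and this sketch does not); g returns the vector of equality-constraint residuals, and rho is an illustrative penalty parameter.

```python
import numpy as np

def al_outer_loop(inner_solve, g, x0, rho=10.0, outer=20):
    """Augmented Lagrangian skeleton: alternate an approximate subproblem
    solve with a first-order multiplier update mu <- mu + rho * g(x)."""
    x = x0.copy()
    mu = np.zeros_like(g(x0))
    for _ in range(outer):
        x = inner_solve(x, mu, rho)    # approximate min of L_rho(., mu) in x
        mu = mu + rho * g(x)           # multiplier (dual) update
    return x, mu
```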
A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning
The miniaturization of spectrometers can broaden the application areas of spectrometry, which has substantial academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising one that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose a spectral reconstruction algorithm based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well suited to spectral reconstruction whether or not the spectra are directly sparse. For spectra that are not directly sparse, sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach holds strong promise for fabricating a practical miniature spectrometer.
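In the linear-systems form the abstract mentions, the measurements are m = T s, with T the matrix of filter transmission functions, and the spectrum is modeled as s = D c with c sparse in a (learned) dictionary D. A greedy sparse solver such as orthogonal matching pursuit can then recover c; the sketch below is illustrative only (T, m, D, and the sparsity level k are placeholders, and the paper's exact solver and learned dictionary are not reproduced here).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of A."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-correlated atom
        support.append(j)
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ coef          # re-fit, update residual
    c = np.zeros(A.shape[1])
    c[support] = coef
    return c

# Usage sketch: c = omp(T @ D, m, k); reconstructed spectrum s = D @ c.
```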
Inexact proximal point method with a Bregman regularization for quasiconvex multiobjective optimization problems via limiting subdifferentials
In this paper, we investigate a class of unconstrained multiobjective optimization problems (abbreviated as MPQs) in which the components of the objective function are locally Lipschitz and quasiconvex. To solve MPQs, we introduce an inexact proximal point method with Bregman distances (abbreviated as IPPMB) via Mordukhovich limiting subdifferentials. We establish the well-definedness of the sequence generated by the IPPMB algorithm. Based on two versions of error criteria, we introduce two variants of IPPMB, namely IPPMB1 and IPPMB2. Moreover, we establish that the sequences generated by the IPPMB1 and IPPMB2 algorithms converge to a Pareto–Mordukhovich critical point of the problem MPQ. In addition, we derive that if the components of the objective function of an MPQ are convex, then the sequences converge to a weak Pareto efficient solution of the MPQ. Furthermore, we discuss the linear and superlinear convergence of the sequence generated by the IPPMB2 algorithm. We furnish several non-trivial numerical examples, solved in MATLAB R2023b, to demonstrate the effectiveness of the proposed algorithms. Finally, to demonstrate the applicability and significance of the IPPMB algorithm, we solve a nonsmooth, large-scale sparse quasiconvex multiobjective optimization problem.
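A single-objective sketch of the core iteration may help: each outer step approximately solves min_x f(x) + (1/lam) * D_h(x, x_k), where D_h(x, y) = h(x) - h(y) - <grad h(y), x - y> is the Bregman distance. The inexact inner solve below uses a few plain gradient steps; the paper's multiobjective, quasiconvex setting with limiting subdifferentials and its two error criteria are not captured by this toy version.

```python
import numpy as np

def inexact_prox_point(grad_f, grad_h, x0, lam=1.0, outer=30, inner=50, step=1e-2):
    """Inexact proximal point iteration with a Bregman regularizer:
    x_{k+1} approximately minimizes f(x) + (1/lam) * D_h(x, x_k)."""
    x = x0.copy()
    for _ in range(outer):
        xk = x.copy()
        for _ in range(inner):   # a few gradient steps = the inexact solve
            g = grad_f(x) + (grad_h(x) - grad_h(xk)) / lam
            x = x - step * g
    return x
```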
Neural-inspired sensors enable sparse, efficient classification of spatiotemporal data
Sparse sensor placement is a central challenge in the efficient characterization of complex systems when the cost of acquiring and processing data is high. Leading sparse sensing methods typically exploit either spatial or temporal correlations, but rarely both. This work introduces a sparse sensor optimization that is designed to leverage the rich spatiotemporal coherence exhibited by many systems. Our approach is inspired by the remarkable performance of flying insects, which use a few embedded strain-sensitive neurons to achieve rapid and robust flight control despite large gust disturbances. Specifically, we identify neural-inspired sensors at a few key locations on a flapping wing that are able to detect body rotation. This task is particularly challenging as the rotational twisting mode is three orders of magnitude smaller than the flapping modes. We show that nonlinear filtering in time, built to mimic strain-sensitive neurons, is essential to detect rotation, whereas instantaneous measurements fail. Optimized sparse sensor placement results in efficient classification with approximately 10 sensors, achieving the same accuracy and noise robustness as full measurements consisting of hundreds of sensors. Sparse sensing with neural-inspired encoding establishes an alternative paradigm in hyperefficient, embodied sensing of spatiotemporal data and sheds light on principles of biological sensing for agile flight control.
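For intuition only, here is a toy version of the neural-inspired encoding described above: each sensor's strain time series is convolved with a temporal filter and passed through a saturating nonlinearity, mimicking a strain-sensitive neuron. The kernel and the gain/threshold parameters are made-up placeholders, not the paper's calibrated model.

```python
import numpy as np

def neural_encode(strain, kernel, gain=10.0, threshold=0.5):
    """Temporal filtering followed by a firing-rate nonlinearity."""
    filtered = np.convolve(strain, kernel, mode="same")          # time filter
    return 1.0 / (1.0 + np.exp(-gain * (filtered - threshold)))  # sigmoid
```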
Frank–Wolfe and friends: a journey into projection-free first-order optimization methods
Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank–Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating its versatility and applicability in a wide range of contexts, combined with an account of recent progress in variants that improve on both the speed and efficiency of this surprisingly simple principle of first-order optimization.
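The "surprisingly simple principle" is worth seeing in code: instead of projecting, each Frank–Wolfe step calls a linear minimization oracle over the feasible set and moves toward its output. Below is the textbook method over an ℓ1 ball with the classic 2/(t+2) step size; the feasible set, radius, and iteration count are illustrative choices, not anything prescribed by the survey.

```python
import numpy as np

def frank_wolfe_l1(grad_f, radius, n, iters=200):
    """Projection-free Frank-Wolfe over {x : ||x||_1 <= radius}."""
    x = np.zeros(n)
    for t in range(iters):
        g = grad_f(x)
        j = int(np.argmax(np.abs(g)))
        s = np.zeros(n)
        s[j] = -radius * np.sign(g[j])   # LMO: a single signed vertex
        gamma = 2.0 / (t + 2.0)          # classic diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```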