260 results for "DC programming"
The DC (Difference of Convex Functions) Programming and DCA Revisited with DC Models of Real World Nonconvex Optimization Problems
DC (Difference of Convex functions) programming and its DC algorithm (DCA) address the problem of minimizing a function f = g − h (with g, h lower semicontinuous proper convex functions on R^n) over the whole space. Based on local optimality conditions and DC duality, DCA has been successfully applied to a wide range of nondifferentiable nonconvex optimization problems, for which it quite often yields global solutions and proves more robust and more efficient than related standard methods, especially in the large-scale setting. The computational efficiency of DCA motivates a deeper and more complete study of DC programming via the special class of DC programs, called polyhedral DC programs, in which either g or h is polyhedral convex. DC duality is investigated in a simpler way that is more convenient for the study of optimality conditions. New practical results on local optimality are presented. We emphasize regularization techniques in DC programming, used to construct suitable equivalent DC programs for nondifferentiable nonconvex optimization problems, and new significant questions that remain to be answered. A deeper insight into DCA is introduced, which sheds new light on DCA and may partly explain its efficiency. Finally, DC models of real-world nonconvex optimization are reported.
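The basic DCA scheme the abstract describes is simple enough to sketch: linearize the concave part −h at the current iterate, then solve the resulting convex subproblem. Below is a minimal illustrative loop; the particular g, h, and closed-form subproblem solve are assumptions chosen for clarity, not examples from the paper.

```python
import numpy as np

# Minimal DCA sketch for f(x) = g(x) - h(x) with the illustrative choices
#   g(x) = 0.5 * ||x - c||^2   (strongly convex quadratic)
#   h(x) = lam * ||x||_1       (convex, nonsmooth)
# DCA itself only needs a subgradient of h and a solver for the convex
# subproblem min_x g(x) - <s, x>.

c, lam = np.array([2.0, -1.0, 0.5]), 0.7

def f(x):
    return 0.5 * np.sum((x - c) ** 2) - lam * np.sum(np.abs(x))

x = np.zeros(3)
for k in range(100):
    s = lam * np.sign(x)        # s is a subgradient of h at the current x
    # Convex subproblem: minimize g(x) - <s, x>; for this quadratic g the
    # minimizer is available in closed form: x = c + s.
    x_new = c + s
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

print(k, x, f(x))
```

Each iteration provably decreases f, and the loop stops at a critical point where a subgradient of h coincides with the gradient of g.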
Alternating DC algorithm for partial DC programming problems
DC (Difference of Convex functions) programming and DCA (DC Algorithm) play a key role in the nonconvex programming framework. These tools have a rich and successful history of thirty-five years of development, and recent research increasingly explores new trends: designing novel DCA variants that improve on standard DCA, that scale better, and that handle broader classes than DC programs. Following these trends, in this paper we address two wide classes of nonconvex problems, called partial DC programs and generalized partial DC programs, and investigate an alternating approach based on DCA for them. A partial DC program in two variables (x, y) ∈ R^n × R^m takes the form of a standard DC program in each variable while the other variable is fixed. A so-called alternating DCA and its inexact/generalized versions are developed, and the convergence properties of these algorithms are established: both exact and inexact alternating DCA converge to a weak critical point of the considered problem; in particular, when the Kurdyka–Łojasiewicz inequality is satisfied, the algorithms furnish a Fréchet/Clarke critical point. The proposed algorithms are implemented on the problem of finding an intersection point of two nonconvex sets, and numerical experiments are performed on an important application, robust principal component analysis. The numerical results show the efficiency and superiority of alternating DCA compared with standard DCA as well as with a well-known alternating projection algorithm.
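The alternating idea can be prototyped on a toy partial DC program: take one DCA step in x with y fixed, then one DCA step in y with x fixed. The objective and closed-form updates below are hypothetical illustrations, not the paper's robust-PCA experiment.

```python
import numpy as np

# Toy partial DC program: DC in x for fixed y and DC in y for fixed x.
# Illustrative objective (not from the paper):
#   F(x, y) = 0.5*||x - c1||^2 + 0.5*||y - c2||^2 + 0.5*||x - y||^2
#             - lam * (||x||_1 + ||y||_1)
c1 = np.array([3.0, -2.0]); c2 = np.array([-1.0, 4.0]); lam = 0.5

def F(x, y):
    return (0.5 * np.sum((x - c1) ** 2) + 0.5 * np.sum((y - c2) ** 2)
            + 0.5 * np.sum((x - y) ** 2)
            - lam * (np.abs(x).sum() + np.abs(y).sum()))

x, y = np.zeros(2), np.zeros(2)
for k in range(200):
    # DCA step in x (y fixed): linearize the concave part at the current x;
    # the convex subproblem is quadratic with closed-form minimizer.
    x = (c1 + y + lam * np.sign(x)) / 2.0
    # DCA step in y (x fixed), same pattern.
    y = (c2 + x + lam * np.sign(y)) / 2.0

print(F(x, y), x, y)
```

Each half-step decreases F in its own block, which is the monotonicity property that the paper's convergence analysis builds on.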
Lagrange-type duality in DC programming problems with equivalent DC inequalities
In this paper, we provide Lagrange-type duality theorems for mathematical programming problems with DC objective and constraint functions. The class of problems to which these duality theorems apply is broader than in previous research. The main idea is to consider equivalent inequality systems given by the maximization of the original functions. To compare the present results with previously reported ones, we describe the difference between their constraint qualifications, the technical assumptions under which the duality holds.
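One standard device behind such "equivalent DC inequality" reformulations, offered here as background rather than the paper's exact construction, is that finitely many DC constraints collapse into a single DC constraint via pointwise maximization:

```latex
% A DC program with DC inequality constraints:
\min_{x}\; g_0(x) - h_0(x)
\quad \text{s.t.} \quad g_i(x) - h_i(x) \le 0,\; i = 1,\dots,m.

% The m constraints are equivalent to the single inequality
% \max_i \{ g_i(x) - h_i(x) \} \le 0, and the max of DC functions is
% again DC, since
\max_{1 \le i \le m} \bigl( g_i(x) - h_i(x) \bigr)
  = \max_{1 \le i \le m} \Bigl( g_i(x) + \sum_{j \ne i} h_j(x) \Bigr)
    - \sum_{j=1}^{m} h_j(x),
% where both terms on the right-hand side are convex.
```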
Unified SVM algorithm based on LS-DC loss
Over the past two decades, support vector machines (SVMs) have become a popular supervised machine learning model, and many distinct algorithms have been designed separately, based on different KKT conditions of the SVM model, for classification or regression with different losses, both convex and nonconvex. In this paper, we propose an algorithm that can train different SVM models in a unified scheme. First, we introduce the least squares type of difference-of-convex loss (LS-DC) and show that the losses most commonly used in the SVM community either are LS-DC losses or can be approximated by them. Based on the difference-of-convex algorithm (DCA), we then propose a unified algorithm, called UniSVM, which can solve the SVM model with any convex or nonconvex LS-DC loss; only a single vector, determined by the chosen loss, needs to be computed. UniSVM has a decisive advantage over existing algorithms for training robust SVM models with nonconvex losses because it has a closed-form solution per iteration, while existing algorithms must solve an L1SVM/L2SVM per iteration. Furthermore, by a low-rank approximation of the kernel matrix, UniSVM can solve large-scale nonlinear problems efficiently. To verify the efficacy and feasibility of the proposed algorithm, we perform experiments on small artificial problems and large benchmark tasks, with and without outliers, for classification and regression, comparing against state-of-the-art algorithms. The experimental results demonstrate that UniSVM achieves comparable performance in less training time. A further advantage of UniSVM is that its core Matlab code is fewer than 10 lines, so it can easily be grasped by users and researchers.
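The closed-form-per-iteration mechanics can be illustrated on a small regression example. The sketch below uses a truncated squared loss l(r) = min(r², τ²), which admits the LS-DC split l(r) = r² − max(r² − τ², 0); each DCA iteration then reduces to a ridge solve with shifted targets. The loss choice, names, and linear model are assumptions for illustration only; the paper's UniSVM covers a whole family of losses and kernelized models.

```python
import numpy as np

# Illustrative DCA for robust ridge regression with the truncated squared
# loss l(r) = min(r^2, tau^2), written as the LS-DC split
#   l(r) = r^2 - max(r^2 - tau^2, 0).
# Each iteration is a closed-form ridge solve with shifted targets; the
# only loss-dependent quantity is the vector u below.

rng = np.random.default_rng(0)
n, d, tau, lam = 200, 5, 1.0, 1e-2
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:10] += 8.0                          # inject gross outliers

A = X.T @ X + lam * np.eye(d)          # fixed system matrix of the convex part
w = np.linalg.solve(A, X.T @ y)        # plain ridge warm start

for k in range(50):
    r = X @ w - y
    u = np.where(np.abs(r) > tau, 2.0 * r, 0.0)   # subgradient of the concave part
    w_new = np.linalg.solve(A, X.T @ (y + u / 2.0))
    if np.linalg.norm(w_new - w) < 1e-10:
        break
    w = w_new

print(np.linalg.norm(w - w_true))      # close to 0 despite the outliers
```

For points with |r| > τ the shifted target y + u/2 equals the current prediction, so outliers exert no pull on the fit; that is the sense in which one cheap vector update per iteration replaces a full SVM solve.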
DC programming and DCA: thirty years of developments
The year 2015 marks the 30th anniversary of DC (Difference of Convex functions) programming and DCA (DC Algorithms), which constitute the backbone of nonconvex programming and global optimization. In this article we offer a short survey of thirty years of development of these theoretical and algorithmic tools. The survey comprises three parts. In the first part we present a brief history of the field; in the second we summarize the state-of-the-art results and recent advances, focusing on the main theoretical results and DCA solvers for important classes of difficult nonconvex optimization problems, and then give an overview of real-world applications whose solution methods are based on DCA. The third part is devoted to new trends and important open issues, as well as suggestions for future developments.
Computing B-Stationary Points of Nonsmooth DC Programs
Motivated by a class of applied problems arising from physical-layer security in digital communication systems, in particular a secrecy sum-rate maximization problem, this paper studies a nonsmooth, difference-of-convex (DC) minimization problem. The contributions of this paper are: (i) to clarify several kinds of stationary solutions and their relations; (ii) to develop a novel algorithm for computing a d-stationary solution of a problem with a convex feasible set, arguably the sharpest kind among the various stationary solutions, and to establish its convergence; (iii) to extend the algorithm in several directions, including a randomized choice of subproblems that can help the practical convergence of the algorithm, a distributed penalty approach for problems whose objective functions are sums of DC functions, and problems with a specially structured (nonconvex) DC constraint. For the latter class of problems, a pointwise Slater constraint qualification is introduced that facilitates the verification and computation of a B(ouligand)-stationary point.
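For orientation, the standard definitions for f = g − h with g, h finite convex functions (unconstrained case) make the hierarchy the abstract alludes to precise; the display below is a textbook summary offered as an assumed reference point, not the paper's full taxonomy.

```latex
% Stationarity notions for min_x f(x), f = g - h, with g, h finite convex
% (standard definitions, unconstrained case):
\text{criticality:}\qquad
  \partial g(\bar x) \,\cap\, \partial h(\bar x) \neq \emptyset,
\\[4pt]
\text{d-stationarity:}\qquad
  f'(\bar x; v) \ge 0 \ \ \forall v \in \mathbb{R}^n
  \ \iff\ \partial h(\bar x) \subseteq \partial g(\bar x).
% Every d-stationary point is critical; the converse can fail when h is
% nondifferentiable at \bar x, which is why d-stationarity is the sharper
% notion.
```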
Accelerating the DC algorithm for smooth functions
We introduce two new algorithms that accelerate the convergence of the classical DC algorithm (DCA) when minimising smooth difference-of-convex (DC) functions. We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms combine DCA with a line search step along this descent direction. Convergence of the algorithms is proved, and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and number of iterations. The experiments also show that the algorithms converge to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology.
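The acceleration is easy to prototype: run a standard DCA step to obtain a point y, then backtrack along the direction d = y − x, which the abstract states is a descent direction at y for smooth DC functions. The test function below is a hypothetical smooth DC example assumed for illustration, not one of the paper's biochemical models.

```python
import numpy as np

# Line-search-accelerated DCA sketch for a smooth DC function
#   f(x) = 0.5 * x' A x  -  sum_i log(cosh(x_i)),
# where both parts are convex (A is SPD, log cosh has positive curvature).
rng = np.random.default_rng(1)
M = rng.normal(size=(10, 10))
A = M @ M.T + 10 * np.eye(10)          # illustrative SPD matrix

f = lambda x: 0.5 * x @ A @ x - np.sum(np.log(np.cosh(x)))

x = rng.normal(size=10)
alpha, beta = 1e-4, 0.5
for k in range(100):
    y = np.linalg.solve(A, np.tanh(x))  # standard DCA step: A y = grad h(x)
    d = y - x                           # candidate descent direction at y
    t = 1.0
    # Backtracking line search: try to improve on the plain DCA point y.
    while t > 1e-8 and f(y + t * d) > f(y) - alpha * t**2 * (d @ d):
        t *= beta
    x_new = y + t * d if t > 1e-8 else y
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

print(k, f(x))
```

Setting t = 0 recovers plain DCA, so the line search can only improve the per-iteration decrease; this is the mechanism behind the reported speedups.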
On a Generalization of the Jensen–Shannon Divergence and the Jensen–Shannon Centroid
The Jensen–Shannon divergence is a renowned bounded symmetrization of the Kullback–Leibler divergence that does not require the probability densities to have matching supports. In this paper, we introduce a vector-skew generalization of the scalar α-Jensen–Bregman divergences and from it derive the vector-skew α-Jensen–Shannon divergences. We prove that the vector-skew α-Jensen–Shannon divergences are f-divergences and study the properties of these novel divergences. Finally, we report an iterative algorithm to numerically compute the Jensen–Shannon-type centroids of a set of probability densities belonging to a mixture family; this includes the case of the Jensen–Shannon centroid of a set of categorical distributions or normalized histograms.
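For reference, the classical (unskewed) Jensen–Shannon divergence of two categorical distributions is straightforward to compute; the sketch below covers only this basic case, not the paper's vector-skew generalization or centroid algorithm.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence for categorical distributions.
    Terms with p_i = 0 contribute 0 by the usual convention."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def jsd(p, q):
    """Classical Jensen-Shannon divergence: symmetric, always finite,
    and bounded by log 2 even when p and q have different supports."""
    m = 0.5 * (p + q)                  # mixture is positive wherever p or q is
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])
print(jsd(p, q))                       # ~0.3466, finite despite mismatched supports
```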
Open issues and recent advances in DC programming and DCA
DC (difference of convex functions) programming and the DC algorithm (DCA) are powerful tools for nonsmooth nonconvex optimization. The field was created in its preliminary form by Pham Dinh Tao in 1985; intensive research by the authors of this paper has led to decisive developments since 1993, and the field has become classic and increasingly popular worldwide. Over the 35 years since their birth, these theoretical and algorithmic tools have been greatly enriched, thanks to their many applications by researchers and practitioners around the world to model and solve nonconvex programs in many fields of applied science. This paper is devoted to key open issues, recent advances, and trends in the development of these tools to meet the growing need for nonconvex programming and global optimization. We first outline the foundations of DC programming and DCA, which lets us highlight the philosophy of these tools, discuss key issues, formulate open problems, and provide relevant answers. After outlining the key open issues that require deeper and more appropriate investigation, we present recent advances and ongoing work on them. These revolve around novel solution techniques to improve DCA's efficiency and scalability, and a new generation of algorithms beyond the standard framework of DC programming and DCA, for large-dimensional DC programs and DC learning with big data, as well as for broader classes of nonconvex problems beyond DC programs.