MBRL Search Results
13,927 result(s) for "Convex analysis"
Decomposition of an integrally convex set into a Minkowski sum of bounded and conic integrally convex sets
Every polyhedron can be decomposed into a Minkowski sum (or vector sum) of a bounded polyhedron and a polyhedral cone. This paper establishes similar statements for some classes of discrete sets in discrete convex analysis, such as integrally convex sets, L♮-convex sets, and M♮-convex sets.
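A hedged Python sketch of the Minkowski sum operation referred to above, applied to two small finite integer point sets; the sets and the helper name minkowski_sum are illustrative assumptions, not taken from the paper.

    # Minimal sketch: the Minkowski sum A + B of two finite point sets
    # collects every pairwise vector sum a + b.
    from itertools import product

    def minkowski_sum(A, B):
        """Return the Minkowski sum of two finite sets of integer points."""
        return {tuple(x + y for x, y in zip(p, q)) for p, q in product(A, B)}

    # Toy stand-ins for a bounded set and a few points along a ray (a "conic" part).
    bounded = {(0, 0), (1, 0), (0, 1)}
    ray = {(0, 0), (1, 1), (2, 2)}
    print(sorted(minkowski_sum(bounded, ray)))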
Recent progress on integrally convex functions
Integrally convex functions constitute a fundamental function class in discrete convex analysis, including M-convex functions, L-convex functions, and many others. This paper aims at a rather comprehensive survey of recent results on integrally convex functions with some new technical results. Topics covered in this paper include characterizations of integral convex sets and functions, operations on integral convex sets and functions, optimality criteria for minimization with a proximity-scaling algorithm, integral biconjugacy, and the discrete Fenchel duality. While the theory of M-convex and L-convex functions has been built upon fundamental results on matroids and submodular functions, developing the theory of integrally convex functions requires more general and basic tools such as the Fourier–Motzkin elimination.
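Since Fourier–Motzkin elimination is singled out above as a basic tool, the following Python sketch shows one elimination step for a system of inequalities a·x ≤ b; the function name fm_eliminate and the toy system are assumptions made for illustration.

    # One Fourier-Motzkin step: eliminate variable k from rows (coeffs, rhs)
    # encoding inequalities coeffs . x <= rhs.
    def fm_eliminate(rows, k):
        pos = [(a, b) for a, b in rows if a[k] > 0]
        neg = [(a, b) for a, b in rows if a[k] < 0]
        new_rows = [(a, b) for a, b in rows if a[k] == 0]
        # Pair every positive-coefficient row with every negative-coefficient row
        # so that the k-th coefficient cancels in the combination.
        for ap, bp in pos:
            for an, bn in neg:
                lam, mu = -an[k], ap[k]
                coeffs = [lam * x + mu * y for x, y in zip(ap, an)]
                new_rows.append((coeffs, lam * bp + mu * bn))
        return new_rows

    # Toy system: x + y <= 4, -x + y <= 2, -y <= 0; eliminate x (index 0).
    print(fm_eliminate([([1, 1], 4), ([-1, 1], 2), ([0, -1], 0)], 0))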
Note on the polyhedral description of the Minkowski sum of two L-convex sets
L-convex sets are one of the most fundamental concepts in discrete convex analysis. Furthermore, the Minkowski sum of two L-convex sets, called L₂-convex sets, is an intriguing object that is closely related to polymatroid intersection. This paper reveals the polyhedral description of an L₂-convex set, together with the observation that the convex hull of an L₂-convex set is a box-TDI polyhedron. Two different proofs are given for the polyhedral description. The first is a structural short proof, relying on the conjugacy theorem in discrete convex analysis, and the second is a direct algebraic proof, based on Fourier–Motzkin elimination. The obtained results admit natural graph representations. Implications of the obtained results in discrete convex analysis are also discussed.
Multiple Exchange Property for M♮-Concave Functions and Valuated Matroids
The multiple exchange property for matroid bases is generalized for valuated matroids and M-natural concave set functions. The proof is based on the Fenchel-type duality theorem in discrete convex analysis. The present result has an implication in economics: The strong no complementarities condition of Gul and Stacchetti is, in fact, equivalent to the gross substitutes condition of Kelso and Crawford.
A linear-time algorithm to compute the conjugate of convex piecewise linear-quadratic bivariate functions
We propose the first algorithm to compute the conjugate of a bivariate Piecewise Linear-Quadratic (PLQ) function in optimal linear worst-case time complexity. The key step is to use a planar graph, called the entity graph, not only to represent the entities (vertex, edge, or face) of the domain of a PLQ function but most importantly to record adjacent entities. We traverse the graph using breadth-first search to compute the conjugate of each entity using graph-matrix calculus, and use the adjacency information to create the output data structure in linear time.
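As a hedged complement to the graph-based method above, the short Python sketch below approximates a Legendre–Fenchel conjugate numerically, in one dimension only, by maximizing s·x − f(x) over sample points; the Huber function, grid, and slopes are assumptions for illustration and not part of the paper's algorithm.

    # Numerical sketch: f*(s) = max_x [ s*x - f(x) ], approximated on a sample grid.
    import numpy as np

    def numeric_conjugate(f, xs, slopes):
        """Approximate the conjugate f* at each slope by maximizing over the points xs."""
        X = np.asarray(xs)
        return [float(np.max(s * X - f(X))) for s in slopes]

    # Huber function: a simple convex piecewise linear-quadratic example.
    huber = lambda x: np.where(np.abs(x) <= 1, 0.5 * x**2, np.abs(x) - 0.5)
    xs = np.linspace(-5.0, 5.0, 2001)
    print(numeric_conjugate(huber, xs, [-1.0, 0.0, 0.5, 1.0]))  # ~ [0.5, 0.0, 0.125, 0.5]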
Generalization of q-Integral Inequalities for (α, ℏ−m)-Convex Functions and Their Refinements
This article finds q- and h-integral inequalities in implicit form for generalized convex functions. We apply the definition of q−h-integrals to establish some new unified inequalities for a class of (α, ℏ−m)-convex functions. Refinements of these inequalities are given by applying a class of strongly (α, ℏ−m)-convex functions. Several q-integral inequalities for various kinds of convex and strongly convex functions are deduced under specific conditions.
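For orientation, one standard building block behind q-integral inequalities is Jackson's q-integral; the LaTeX below records its usual definition as a reference point, with the caveat that the article's q−h-integral may be defined differently.

    \[
      \int_0^{a} f(t)\, d_q t \;=\; (1-q)\, a \sum_{n=0}^{\infty} q^{n} f\!\left(a q^{n}\right),
      \qquad 0 < q < 1 .
    \]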
The circle method and bounds for L-functions – II: Subconvexity for twists of GL(3) L-functions
Let $\pi$ be an ${\rm SL}(3,\mathbb{Z})$ Hecke–Maass cusp form. Let $\chi=\chi_1\chi_2$ be a Dirichlet character with $\chi_i$ primitive modulo $M_i$. Suppose $M_1$, $M_2$ are primes such that $\sqrt{M_2}\,M^{4\delta} \dots$
Computing the conjugate of convex piecewise linear-quadratic bivariate functions
We present a new algorithm to compute the Legendre–Fenchel conjugate of convex piecewise linear-quadratic (PLQ) bivariate functions. The algorithm stores a function using a (primal) planar arrangement. It then computes the (dual) arrangement associated with the conjugate by looping through vertices, edges, and faces in the primal arrangement and building associated dual vertices, edges, and faces. Using optimal computational geometry data structures, the algorithm has linear worst-case time complexity. We present the algorithm and illustrate it with numerical examples. We proceed to build a toolbox for convex bivariate PLQ functions by implementing the addition and scalar multiplication operations. Finally, we compose these operators to compute classical convex analysis operators such as the Moreau envelope and the proximal average.
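As a rough, hedged illustration of one operator named above, the Python sketch below approximates the Moreau envelope of a convex univariate function by discretizing the infimum; the grid, parameter t, and example function are assumptions, not the paper's arrangement-based toolbox.

    # Sketch: Moreau envelope env_f(x; t) = min_u [ f(u) + (x - u)^2 / (2 t) ],
    # approximated by minimizing over a grid of candidate points u.
    import numpy as np

    def moreau_envelope(f, x, t=1.0, grid=np.linspace(-10.0, 10.0, 4001)):
        return float(np.min(f(grid) + (x - grid) ** 2 / (2.0 * t)))

    f = lambda u: np.abs(u)  # |u| is convex and piecewise linear-quadratic
    print([moreau_envelope(f, x) for x in (-2.0, 0.0, 0.3)])
    # For |u| with t = 1 this is the Huber function: x^2/2 if |x| <= 1, else |x| - 0.5.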
Minimizing finite sums with the stochastic average gradient
We analyze the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values, the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/√k) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(ρ^k) for ρ < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. This extends our earlier work (Le Roux et al., Adv Neural Inf Process Syst, 2012), which only led to a faster rate for well-conditioned strongly convex problems. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
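The following Python sketch illustrates the basic SAG idea described above (keep the most recent gradient of each term and step along their average) on a toy least-squares sum; the data, step size, and variable names are assumptions for illustration.

    # Hedged SAG sketch on f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100, 5
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    w = np.zeros(d)
    grad_table = np.zeros((n, d))   # most recent gradient of each term
    grad_sum = np.zeros(d)          # running sum of the stored gradients
    step = 1.0 / (4.0 * np.max(np.sum(A**2, axis=1)))  # conservative assumed step size

    for _ in range(20000):
        i = rng.integers(n)
        g_i = (A[i] @ w - b[i]) * A[i]     # gradient of term i at the current iterate
        grad_sum += g_i - grad_table[i]    # refresh the memory for term i
        grad_table[i] = g_i
        w -= step * grad_sum / n           # step along the average of the stored gradients

    print(0.5 * np.mean((A @ w - b) ** 2))  # final average loss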
Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method and its accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
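A minimal Python sketch of the random-partial-update idea described above, applied to a toy least-squares objective with one coordinate updated per iteration; the data, coordinate step sizes, and names are assumptions made for the example.

    # Hedged sketch of randomized coordinate descent on f(w) = 0.5 * ||A w - b||^2.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    w = np.zeros(d)
    L = np.sum(A**2, axis=0)   # coordinate-wise Lipschitz constants of the gradient
    r = A @ w - b              # residual, maintained incrementally

    for _ in range(5000):
        j = rng.integers(d)            # pick one coordinate at random
        g_j = A[:, j] @ r              # partial derivative with respect to w_j
        delta = -g_j / L[j]
        w[j] += delta
        r += delta * A[:, j]           # cheap residual update, no full recomputation

    print(0.5 * np.linalg.norm(A @ w - b) ** 2)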