Catalogue Search | MBRL
Explore the vast range of titles available.
31,981 result(s) for "Mathematics of Computing"
Exact values for three domination-like problems in circular and infinite grid graphs of small height
by Preissmann, Myriam; Moncel, Julien; Darlay, Julien
in Computer Science [cs] / Discrete Mathematics [cs.DM], ACM G.2.2: Graph Theory, ACM G.2.2.0: Graph Algorithms
2019
In this paper we study three domination-like problems, namely identifying codes, locating-dominating codes, and locating-total-dominating codes. We are interested in finding the minimum cardinality of such codes in circular and infinite grid graphs of given height. We provide alternate proofs for already known results, as well as new results. These were obtained by a computer search based on a generic framework that we developed earlier for the search of a minimum labeling satisfying a pseudo-d-local property in rotagraphs.
Journal Article
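The three code variants in this abstract are easiest to grasp on the smallest circular grid, the cycle C_n. Below is a minimal brute-force sketch (ours, not the authors' rotagraph framework, which scales to grids of larger height) that finds a minimum identifying code: a vertex subset whose closed-neighborhood intersections give every vertex a nonempty, distinct signature.

```python
from itertools import combinations

def closed_neighborhood(v, n):
    # Closed neighborhood of v in the cycle C_n, i.e. a circular grid of height 1.
    return frozenset({(v - 1) % n, v, (v + 1) % n})

def is_identifying_code(code, n):
    # code identifies C_n if every vertex's signature N[v] & code is
    # nonempty and no two vertices share the same signature.
    signatures = [closed_neighborhood(v, n) & code for v in range(n)]
    return all(signatures) and len(set(signatures)) == n

def minimum_identifying_code(n):
    # Exhaustive search over subsets by increasing size (exponential;
    # only viable for tiny n).
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if is_identifying_code(frozenset(subset), n):
                return set(subset)
    return None

print(minimum_identifying_code(7))  # a minimum identifying code of the 7-cycle
```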
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
by Ryan, Sarah M.; Gade, Dinakar; Wets, Roger J.-B.
in Algorithms, Calculus of Variations and Optimal Control; Optimization, Combinatorics
2016
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. We report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
Journal Article
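The key fact here, that a valid lower bound is available in any PHA iteration from the scenario subproblems, can be seen on a toy problem. The sketch below is ours: a two-scenario simple-recourse program with an integer first stage, where any multiplier vector whose probability-weighted sum is zero yields a Lagrangian lower bound. In the actual algorithm these multipliers are the dual prices maintained by the PHA; here we simply plug in valid values.

```python
# Toy two-stage, two-scenario stochastic integer program:
#   min_x  c*x + E_s[ q_s * y_s ],  y_s >= demand_s - x,  x, y_s integer.
# Each scenario subproblem carries a multiplier w_s on its first-stage copy.

probs = [0.5, 0.5]          # scenario probabilities
c = 1.0                     # first-stage unit cost
demand = [3, 8]             # scenario demands
q = [2.0, 2.0]              # per-unit recourse costs

def scenario_value(s, w_s):
    # Exact minimization of the penalized scenario subproblem by enumeration.
    best = float("inf")
    for x in range(0, 11):
        y = max(demand[s] - x, 0)
        best = min(best, c * x + q[s] * y + w_s * x)
    return best

def ph_lower_bound(w):
    # Lagrangian bound from the scenario subproblems; validity requires the
    # probability-weighted multipliers to sum to zero.
    assert abs(sum(p * w_s for p, w_s in zip(probs, w))) < 1e-9
    return sum(p * scenario_value(s, w_s)
               for s, (p, w_s) in enumerate(zip(probs, w)))

print(ph_lower_bound([0.0, 0.0]))    # wait-and-see bound (5.5; optimum is 8)
print(ph_lower_bound([0.5, -0.5]))   # another valid bound from nonzero duals
```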
Compressing branch-and-bound trees
by Xavier, Álinson S.; Muñoz, Gonzalo; Paat, Joseph
in Algorithms, Analysis, Calculus of Variations and Optimal Control; Optimization
2025
A branch-and-bound (BB) tree certifies a dual bound on the value of an integer program. In this work, we introduce the tree compression problem (TCP): given a BB tree T that certifies a dual bound, can we obtain a smaller tree with the same (or stronger) bound by either (1) applying a different disjunction at some node in T or (2) removing leaves from T? We believe such post-hoc analysis of BB trees may assist in identifying helpful general disjunctions in BB algorithms. We initiate our study by considering the computational complexity and limitations of TCP. We then conduct experiments to evaluate the compressibility of realistic branch-and-bound trees generated by commonly used branching strategies, using both an exact and a heuristic compression algorithm.
Journal Article
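To make operation (2) concrete, here is a toy sketch (our illustration, not the paper's exact or heuristic algorithm) of leaf removal for a minimization instance. It relies on the standard fact that a node's own relaxation bound is valid for its whole subtree, so a subtree can be collapsed into its root whenever that bound already certifies the target.

```python
# A BB tree for minimization certifies the dual bound min over its leaves'
# LP bounds. Collapsing a subtree into its root keeps the certificate valid
# (possibly weaker), so we collapse wherever the root's own bound suffices.

class Node:
    def __init__(self, bound, children=()):
        self.bound = bound          # LP relaxation bound at this node
        self.children = list(children)

def certified_bound(node):
    if not node.children:
        return node.bound
    return min(certified_bound(c) for c in node.children)

def compress(node, target):
    # Post-order: compress subtrees first, then try collapsing this node.
    for c in node.children:
        compress(c, target)
    if node.children and node.bound >= target:
        node.children = []          # this node's bound alone certifies target

# Root bound 1.0; the left subtree can be collapsed into its root (bound 5.2).
root = Node(1.0, [Node(5.2, [Node(5.5), Node(6.0)]), Node(5.1)])
compress(root, target=5.0)
print(certified_bound(root))        # 5.1: still certifies >= 5.0, fewer nodes
```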
Distributionally robust polynomial chance-constraints under mixture ambiguity sets
2021
Given $X\subset\mathbb{R}^n$, $\varepsilon\in(0,1)$, and a parametrized family of probability distributions $(\mu_a)_{a\in A}$ on $\Omega\subset\mathbb{R}^p$, we consider the feasible set $X^*_\varepsilon\subset X$ associated with the distributionally robust chance constraint $X^*_\varepsilon=\{x\in X:\operatorname{Prob}_\mu[f(x,\omega)>0]>1-\varepsilon,\ \forall\mu\in M_a\}$, where $M_a$ is the set of all possible mixtures of the distributions $\mu_a$, $a\in A$. For instance, and typically, $M_a$ is the set of all mixtures of Gaussian distributions on $\mathbb{R}$ with mean and standard deviation $a=(a,\sigma)$ in some compact set $A\subset\mathbb{R}^2$. We provide a sequence of inner approximations $X^d_\varepsilon=\{x\in X: w_d(x)<\varepsilon\}$, $d\in\mathbb{N}$, where $w_d$ is a polynomial of degree $d$ whose vector of coefficients is an optimal solution of a semidefinite program whose size increases with the degree $d$. We also obtain the strong and highly desirable asymptotic guarantee that $\lambda(X^*_\varepsilon\setminus X^d_\varepsilon)\to 0$ as $d$ increases, where $\lambda$ is the Lebesgue measure on $X$. The same results are obtained for the more intricate case of distributionally robust "joint" chance constraints. The price to pay for this strong asymptotic guarantee is the scalability of the numerical scheme, which so far limits it to problems of modest dimension.
Journal Article
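For reference, "the set of all possible mixtures of the distributions $\mu_a$" admits a standard formalization, written out below; the notation $\varphi\in\mathcal{P}(A)$ for a mixing distribution is ours, inferred from the abstract rather than quoted from the paper.

```latex
% Each probability measure (mixing distribution) phi on the parameter set A
% induces a mixture of the family (mu_a); M_a collects all of them.
\[
  M_a \;=\; \Bigl\{\, \mu \;:\; \mu(B) \,=\, \int_A \mu_a(B)\, d\varphi(a)
  \ \text{ for all measurable } B \subset \Omega,\quad
  \varphi \in \mathcal{P}(A) \,\Bigr\}.
\]
```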
Numerical optimization for symmetric tensor decomposition
by Kolda, Tamara G.
in Applied mathematics, Calculus of Variations and Optimal Control; Optimization, Combinatorics
2015
We consider the problem of decomposing a real-valued symmetric tensor as the sum of outer products of real-valued vectors. Algebraic methods exist for computing complex-valued decompositions of symmetric tensors, but here we focus on real-valued decompositions, both unconstrained and nonnegative, for problems with low-rank structure. We discuss when solutions exist and how to formulate the mathematical program. Numerical results show the properties of the proposed formulations (including one that ignores symmetry) on a set of test problems and illustrate that these straightforward formulations can be effective even though the problem is nonconvex.
Journal Article
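As a concrete instance of "formulating the mathematical program", the sketch below (ours; the paper develops more careful formulations and analytic gradients) fits a rank-r real symmetric decomposition by minimizing the squared residual with a general-purpose optimizer. Being nonconvex, it can stall in local minima, echoing the abstract's caveat.

```python
import numpy as np
from scipy.optimize import minimize

def build(lams, V):
    # Assemble sum_i lambda_i * v_i (outer) v_i (outer) v_i via einsum.
    return np.einsum("i,ia,ib,ic->abc", lams, V, V, V)

def make_objective(T, r, n):
    # Squared residual of a rank-r symmetric model against the target tensor.
    def f(z):
        lams, V = z[:r], z[r:].reshape(r, n)
        return np.sum((build(lams, V) - T) ** 2)
    return f

rng = np.random.default_rng(0)
n, r = 4, 2
V_true = rng.standard_normal((r, n))
T = build(np.ones(r), V_true)          # synthetic rank-2 symmetric 3-way tensor

z0 = rng.standard_normal(r + r * n)    # stacked [lambdas, vec(V)] start point
res = minimize(make_objective(T, r, n), z0, method="L-BFGS-B")
print(res.fun)                          # near 0 when the fit succeeds
```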
Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
by Esfahani, Peyman Mohajerin; Kuhn, Daniel
in Economic models, Global optimization, Monte Carlo simulation
2018
We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs—in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification.
Journal Article
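The construction described here has a compact standard form, reproduced below from well-known statements of this paper's setting; the loss $\ell$ and feasible set $X$ are generic placeholders.

```latex
% Decisions are evaluated against the worst distribution in a Wasserstein
% ball of radius epsilon around the empirical distribution of N samples:
\[
  \inf_{x \in X}\ \sup_{\mathbb{Q}\,:\,W(\mathbb{Q},\,\widehat{\mathbb{P}}_N)\,\le\,\varepsilon}
  \mathbb{E}^{\mathbb{Q}}\!\left[\ell(x,\xi)\right],
  \qquad
  \widehat{\mathbb{P}}_N \,=\, \frac{1}{N}\sum_{i=1}^{N}\delta_{\widehat{\xi}_i}.
\]
```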
Linear convergence of first order methods for non-strongly convex optimization
by Necoara, I.; Nesterov, Yu.; Glineur, F.
in Continuity (mathematics), Convergence, Convex analysis
2019
The standard assumption for proving linear convergence of first order methods for smooth convex optimization is the strong convexity of the objective function, an assumption which does not hold for many practical applications. In this paper, we derive linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e., problems involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition. In particular, in the case of smooth constrained convex optimization, we provide several relaxations of the strong convexity conditions and prove that they are sufficient for obtaining linear convergence for several first order methods, such as projected gradient, fast gradient, and feasible descent methods. We also provide examples of functional classes that satisfy our proposed relaxations of strong convexity conditions. Finally, we show that the proposed relaxed strong convexity conditions cover important applications, ranging from solving linear systems and linear programming to dual formulations of linearly constrained convex problems.
Journal Article
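One representative relaxation of the kind the abstract refers to (our choice of example; the paper studies a family of such conditions) is quadratic growth: the objective need only grow quadratically with the distance to the solution set, rather than being strongly convex everywhere.

```latex
% Quadratic growth / error-bound-type condition: kappa > 0, X^* the set of
% minimizers, f^* the optimal value of the constrained problem.
\[
  f(x) - f^* \;\ge\; \frac{\kappa}{2}\,\mathrm{dist}(x, X^*)^2
  \qquad \text{for all feasible } x.
\]
```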
Golden ratio algorithms for variational inequalities
2020
The paper presents a fully adaptive algorithm for monotone variational inequalities. In each iteration the method uses two previous iterates for an approximation of the local Lipschitz constant without running a linesearch. Thus, every iteration of the method requires only one evaluation of a monotone operator F and a proximal mapping g. The operator F need not be Lipschitz continuous, which also makes the algorithm interesting in the area of composite minimization. The method exhibits an ergodic O(1 / k) convergence rate and R-linear rate under an error bound condition. We discuss possible applications of the method to fixed point problems as well as its different generalizations.
Journal Article
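For orientation, here is the fixed-step variant of the golden ratio algorithm on a classic hard case for plain forward-backward splitting: a rotation operator (monotone and Lipschitz but not strongly monotone) with a box constraint. The fully adaptive step size is the paper's actual contribution; this sketch uses the admissible fixed step λ ≤ φ/(2L).

```python
import numpy as np

# Monotone VI: find x* with <F(x*), y - x*> + g(y) - g(x*) >= 0 for all y.
# F is a rotation (monotone, 1-Lipschitz, not strongly monotone);
# g is the indicator of a box, whose prox is a clip.

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
prox_g = lambda z: np.clip(z, -1.0, 1.0)

phi = (1 + np.sqrt(5)) / 2              # the golden ratio
L = 1.0                                 # Lipschitz constant of F
lam = phi / (2 * L)                     # admissible fixed step size

x = np.array([1.0, 0.7])
x_bar = x.copy()
for _ in range(2000):
    x_bar = ((phi - 1) * x + x_bar) / phi   # golden-ratio averaging of iterates
    x = prox_g(x_bar - lam * F(x))          # one F evaluation, one prox per step

print(x)                                 # approaches the solution (0, 0)
```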
Coordinate descent algorithms
by Wright, Stephen J.
in Algorithms, Calculus of Variations and Optimal Control; Optimization, Combinatorics
2015
Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of the coordinate descent approach, together with variants and extensions and their convergence properties, mostly with reference to convex objectives. We pay particular attention to a certain problem structure that arises frequently in machine learning applications, showing that efficient implementations of accelerated coordinate descent algorithms are possible for problems of this type. We also present some parallel variants and discuss their convergence properties under several models of parallel execution.
Journal Article
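The textbook instance of this approach (our illustration; the paper surveys far more, including accelerated and parallel variants) is cyclic coordinate descent for the lasso, where each one-dimensional subproblem is solved exactly by soft-thresholding.

```python
import numpy as np

# Cyclic coordinate descent for the lasso: 0.5*||Ax - b||^2 + alpha*||x||_1.

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_cd(A, b, alpha, sweeps=200):
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)       # per-coordinate curvature ||A_j||^2
    r = b - A @ x                       # running residual
    for _ in range(sweeps):
        for j in range(n):
            r += A[:, j] * x[j]         # remove coordinate j's contribution
            rho = A[:, j] @ r           # partial residual correlation
            x[j] = soft_threshold(rho, alpha) / col_sq[j]
            r -= A[:, j] * x[j]         # put the updated contribution back
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(lasso_cd(A, b, alpha=1.0), 3))  # sparse, close to x_true
```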
Lower bounds for finding stationary points I
2020
We prove lower bounds on the complexity of finding ϵ-stationary points (points x such that ‖∇f(x)‖ ≤ ϵ) of smooth, high-dimensional, and potentially non-convex functions f. We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of f at a query point x. We show that for any (potentially randomized) algorithm A, there exists a function f with Lipschitz pth order derivatives such that A requires at least ϵ^{-(p+1)/p} queries to find an ϵ-stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton’s method, and generalized pth order regularization are worst-case optimal within their natural function classes.
Journal Article
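Instantiating the bound at the two most common smoothness levels recovers familiar rates, which is the sense in which the named methods are worst-case optimal:

```latex
% p = 1 (Lipschitz gradient): epsilon^{-2} queries, matched by gradient
% descent; p = 2 (Lipschitz Hessian): epsilon^{-3/2}, matched by
% cubic-regularized Newton.
\[
  p=1:\ \epsilon^{-2}, \qquad
  p=2:\ \epsilon^{-3/2}, \qquad
  \text{general } p:\ \epsilon^{-(p+1)/p}.
\]
```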