Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
6,372 result(s) for "linear convergence"
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
2018
The proximal gradient algorithm for minimizing the sum of a smooth and nonsmooth convex function often converges linearly even without strong convexity. One common reason is that a multiple of the step length at each iteration may linearly bound the “error”—the distance to the solution set. We explain the observed linear convergence intuitively by proving the equivalence of such an error bound to a natural quadratic growth condition. Our approach generalizes to linear and quadratic convergence analysis for proximal methods (of Gauss-Newton type) for minimizing compositions of nonsmooth functions with smooth mappings. We observe incidentally that short step-lengths in the algorithm indicate near-stationarity, suggesting a reliable termination criterion.
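The proximal gradient step described in this abstract can be sketched on a tiny one-dimensional lasso-type problem. This is a hypothetical illustration, not the authors' analysis: f(x) = 0.5*(x - 3)^2 is the smooth term, g(x) = lam*|x| the nonsmooth term, and the step length is used as the termination criterion, as the abstract suggests.

```python
# Proximal gradient for min_x 0.5*(x - 3)**2 + lam*|x|
# (hypothetical 1D example: f smooth, g = lam*|.| nonsmooth).
# The prox of lam*|.| with step t is soft-thresholding by t*lam.

def soft_threshold(v, tau):
    """Proximal operator of tau*|.|."""
    if v > tau:
        return v - tau
    if v < -tau:
        return v + tau
    return 0.0

def prox_grad(x0, lam=1.0, t=0.5, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        grad = x - 3.0  # gradient of the smooth part 0.5*(x - 3)**2
        x_new = soft_threshold(x - t * grad, t * lam)
        # A short step length indicates near-stationarity, as the
        # abstract notes, so it serves as a termination criterion.
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(prox_grad(10.0))  # approaches soft_threshold(3, 1) = 2
```

On this instance the iteration map contracts the error by a factor of 1/2 per step, so the linear convergence discussed in the abstract is visible directly in the iterates.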
Journal Article
On the linear convergence of the alternating direction method of multipliers
by
Luo, Zhi-Quan
,
Hong, Mingyi
in
Algorithms
,
Calculus of Variations and Optimal Control; Optimization
,
Combinatorics
2017
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth ℓ1 regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
Journal Article
On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers
by
Deng, Wei
,
Yin, Wotao
in
Algorithms
,
Computational Mathematics and Numerical Analysis
,
Convergence
2016
The formulation min_{x,y} f(x) + g(y), subject to Ax + By = b, where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning, either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous gradient. On this kind of problem, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of f/g-decoupled subproblems. However, its effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1/k) and O(1/k^2) were recently established in the literature, though the O(1/k) rates do not require strong convexity. This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B. The result applies to various generalizations of ADM that allow the subproblems to be solved faster and less exactly in certain manners. The derived rate of convergence also provides some theoretical guidance for optimizing the ADM parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of ADM generalizations.
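The two-block formulation min f(x) + g(y) subject to Ax + By = b can be illustrated on a tiny scalar instance (a hypothetical example, not drawn from the paper): f(x) = 0.5*(x - 1)^2 is strongly convex with Lipschitz gradient, g(y) = 0.5*|y|, and the constraint is x - y = 0, so the classic ADM iteration applies directly.

```python
# Classic two-block ADM/ADMM (scaled dual form) for
#   min 0.5*(x - 1)**2 + 0.5*|y|   subject to  x - y = 0,
# a hypothetical scalar instance of min f(x) + g(y) s.t. Ax + By = b.

def soft_threshold(v, tau):
    """Proximal operator of tau*|.|."""
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

def admm(rho=1.0, n_iter=200):
    x, y, u = 0.0, 0.0, 0.0   # u is the scaled dual variable
    for _ in range(n_iter):
        # x-subproblem: minimize f(x) + (rho/2)*(x - y + u)**2
        x = (1.0 + rho * (y - u)) / (1.0 + rho)
        # y-subproblem: minimize g(y) + (rho/2)*(x - y + u)**2
        y = soft_threshold(x + u, 0.5 / rho)
        # dual ascent on the constraint residual x - y
        u += x - y
    return x, y

x, y = admm()
print(x, y)  # both approach 0.5, the minimizer of 0.5*(x-1)**2 + 0.5*|x|
```

On this instance the primal error halves each iteration, a simple concrete case of the linear rate the paper establishes under strong convexity of one function.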
Journal Article
Subgradient Extragradient Method with Double Inertial Steps for Variational Inequalities
by
Iyiola, Olaniyi S.
,
Yao, Yonghong
,
Shehu, Yekini
in
Algorithms
,
Computational Mathematics and Numerical Analysis
,
Convergence
2022
In this paper, we obtain successively weak, strong, and linear convergence analyses of the sequence of iterates generated by our proposed subgradient extragradient method with double inertial extrapolation steps and self-adaptive step sizes, for solving variational inequalities whose cost operator is pseudo-monotone and Lipschitz continuous in real Hilbert spaces. Our proposed method combines double inertial extrapolation steps, a relaxation step, and the subgradient extragradient method, with the aim of increasing the speed of convergence of many available subgradient extragradient methods with inertia for solving variational inequalities. Several versions of subgradient extragradient methods with an inertial extrapolation step serve as special cases of our proposed method, and the inertia in our method is more relaxed, chosen in [0, 1]. Numerical implementations show that our method is efficient and implementable, and illustrate the benefits gained when subgradient extragradient methods with double inertial extrapolation steps are used for variational inequalities instead of the methods with a single inertial extrapolation step available in the literature.
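The prediction-correction structure underlying this family of methods is the classical (Korpelevich) extragradient step. The sketch below shows that basic step on a hypothetical one-dimensional instance; it deliberately omits the paper's double inertial extrapolation and self-adaptive step sizes.

```python
# Classical (Korpelevich) extragradient method for a variational
# inequality VI(F, C): find x* in C with F(x*)*(x - x*) >= 0
# for all x in C.  Hypothetical instance: C = [0, 2], F(x) = x - 1
# (monotone and 1-Lipschitz), with solution x* = 1.

def project(v):
    """Projection onto C = [0, 2]."""
    return min(max(v, 0.0), 2.0)

def F(x):
    return x - 1.0

def extragradient(x0, tau=0.5, n_iter=100):
    x = x0
    for _ in range(n_iter):
        y = project(x - tau * F(x))   # prediction step
        x = project(x - tau * F(y))   # correction step
    return x

print(extragradient(2.0))  # approaches the solution x* = 1
```

For interior iterates here the error contracts by the factor 1 - tau + tau^2 = 0.75 per iteration, a small example of the linear convergence regime the abstract refers to.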
Journal Article
Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Programming
2018
In this paper, we aim to prove the linear rate convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a mild calmness condition, which holds automatically for convex composite piecewise linear-quadratic programming, we establish the global Q-linear rate of convergence for a general semi-proximal ADMM with the dual step-length taken in (0, (1+√5)/2). This semi-proximal ADMM, which covers the classic one, has the advantage of resolving the potential nonsolvability issue of the subproblems in the classic ADMM and possesses the ability to handle multi-block cases efficiently. We demonstrate the usefulness of the obtained results when applied to two- and multi-block convex quadratic (semidefinite) programming.
Journal Article
An inexact ADMM for separable nonconvex and nonsmooth optimization
2025
An inexact alternating direction method of multipliers (I-ADMM) with an expansion linesearch step was developed for solving a family of separable minimization problems subject to linear constraints, where the objective function is the sum of a smooth but possibly nonconvex function and a possibly nonsmooth nonconvex function. Global convergence and a linear convergence rate of the I-ADMM were established under proper conditions, while an inexact relative error criterion was used for solving the subproblems. In addition, a unified proximal gradient (UPG) method with momentum acceleration was proposed for solving the smooth but possibly nonconvex subproblem. This UPG method guarantees global convergence and automatically reduces to an optimal accelerated gradient method when the smooth function in the objective is convex. Our numerical experiments on solving nonconvex quadratic programming problems and sparse optimization problems from statistical learning show that the proposed I-ADMM is very effective compared with other state-of-the-art algorithms in the literature.
Journal Article
Linear convergence of the randomized sparse Kaczmarz method
2019
The randomized version of the Kaczmarz method for the solution of consistent linear systems is known to converge linearly in expectation. Even in the possibly inconsistent case, when only noisy data is given, the iterates are expected to reach an error threshold on the order of the noise level at the same rate as in the noiseless case. In this work we show that the same also holds for the iterates of the recently proposed randomized sparse Kaczmarz method for recovery of sparse solutions. Furthermore, we consider the more general setting of convex feasibility problems and their solution by the method of randomized Bregman projections. This is motivated by the observation that, similarly to the Kaczmarz method, the sparse Kaczmarz method can also be interpreted as an iterative Bregman projection method for solving a convex feasibility problem. We obtain expected sublinear rates for Bregman projections with respect to a general strongly convex function. Moreover, even linear rates are expected for Bregman projections with respect to smooth or piecewise linear-quadratic functions, and also for the regularized nuclear norm, which is used in the area of low-rank matrix problems.
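The randomized Kaczmarz iteration discussed in this abstract is short enough to sketch on a small consistent system. This is a hypothetical example, not the paper's sparse variant: rows are sampled with probability proportional to their squared norms, the standard scheme for which linear convergence in expectation is known.

```python
import random

# Randomized Kaczmarz for a small consistent system Ax = b.
# Each step projects the iterate onto the hyperplane of one
# randomly chosen equation; sampling weights are the squared
# row norms, as in the standard randomized method.

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 4.0]          # consistent: the solution is x = (1, 1)

def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
    rng = random.Random(seed)
    row_sq = [sum(a * a for a in row) for row in A]
    x = [0.0] * len(A[0])
    for _ in range(n_iter):
        i = rng.choices(range(len(A)), weights=row_sq)[0]
        residual = b[i] - sum(a * xi for a, xi in zip(A[i], x))
        step = residual / row_sq[i]
        # orthogonal projection onto {z : A[i] . z = b[i]}
        x = [xi + step * a for xi, a in zip(x, A[i])]
    return x

print(randomized_kaczmarz(A, b))  # approaches [1.0, 1.0]
```

Since the system is consistent, every projection is non-expansive toward the solution set, and the iterates contract linearly in expectation, matching the noiseless rate the abstract describes.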
Journal Article
Asymptotically Compatible Schemes and Applications to Robust Discretization of Nonlocal Models
2014
Many problems in nature, being characterized by a parameter, are of interest both with a fixed parameter value and with the parameter approaching an asymptotic limit. Numerical schemes that are convergent in both regimes offer robust discretizations, which can be highly desirable in practice. The asymptotically compatible schemes studied in this paper meet such objectives for a class of parametrized problems. An abstract mathematical framework is established rigorously here together with applications to the numerical solution of both nonlocal models and their local limits. In particular, the framework can be applied to nonlocal diffusion models and a general state-based peridynamic system parametrized by the horizon radius. Recent findings have exposed the risks associated with some discretizations of nonlocal models when the horizon radius is proportional to the discretization parameter. Thus, it is desirable to develop asymptotically compatible schemes for such models so as to offer robust numerical discretizations to problems involving nonlocal interactions on multiple scales. This work provides new insight in this regard through a careful analysis of related conforming finite element discretizations and the finding is valid under minimal regularity assumptions on exact solutions. It reveals that as long as the finite element space contains continuous piecewise linear functions, the Galerkin finite element approximation is always asymptotically compatible. For piecewise constant finite element, whenever applicable, it is shown that a correct local limit solution can also be obtained as long as the discretization (mesh) parameter decreases faster than the modeling (horizon) parameter does. These results can be used to guide future computational studies of nonlocal problems.
Journal Article