Catalogue Search | MBRL
Explore the vast range of titles available.
20,198 results for "Multipliers"
The joint graphical lasso for inverse covariance estimation across multiple classes
by Wang, Pei; Witten, Daniela M.; Danaher, Patrick
in Algorithms, Alternating directions method of multipliers, Analysis of covariance
2014
We consider the problem of estimating multiple related Gaussian graphical models from a high dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes to estimate multiple graphical models that share certain characteristics, such as the locations or weights of non‐zero edges. Our approach is based on maximizing a penalized log‐likelihood. We employ generalized fused lasso or group lasso penalties and implement a fast alternating directions method of multipliers algorithm to solve the corresponding convex optimization problems. The performance of the method proposed is illustrated through simulated and real data examples.
Journal Article
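As context for the entry above: a minimal sketch of the fused-lasso form of the joint graphical lasso objective, assuming the standard penalized log-likelihood described in the abstract. The function name and scaling conventions are illustrative, not the authors' reference implementation.

```python
import numpy as np

def jgl_objective(thetas, emp_covs, n_obs, lam1, lam2):
    """Penalized log-likelihood of the joint graphical lasso (maximized).

    thetas   : K precision-matrix estimates (p x p)
    emp_covs : K empirical covariance matrices S_k
    n_obs    : K sample sizes n_k
    lam1/lam2: sparsity and fused-similarity penalty weights
    """
    loglik = sum(n * (np.linalg.slogdet(th)[1] - np.trace(S @ th))
                 for th, S, n in zip(thetas, emp_covs, n_obs))
    off = lambda m: m - np.diag(np.diag(m))            # off-diagonal part
    sparsity = lam1 * sum(np.abs(off(th)).sum() for th in thetas)
    fusion = lam2 * sum(np.abs(thetas[k] - thetas[l]).sum()
                        for k in range(len(thetas))
                        for l in range(k + 1, len(thetas)))
    return loglik - sparsity - fusion
```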
On the linear convergence of the alternating direction method of multipliers
by Luo, Zhi-Quan; Hong, Mingyi
in Algorithms, Calculus of Variations and Optimal Control; Optimization, Combinatorics
2017
We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables, even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied, for example, when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth ℓ1 regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.
Journal Article
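The abstract above closes with LASSO as an application; a minimal sketch of the standard two-block ADMM for the LASSO problem follows, in scaled dual form with a fixed penalty rho (names and defaults are illustrative assumptions):

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Two-block ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u: scaled dual
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    for _ in range(n_iter):
        # x-update: ridge-type linear solve via the cached factorization
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: soft-thresholding, the prox operator of lam*||.||_1
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z                                  # dual ascent step
    return z
```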
The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
by Yuan, Xiaoming; He, Bingsheng; Chen, Caihua
in Calculus of Variations and Optimal Control; Optimization, Combinatorics, Convergence
2016
The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved when two blocks of variables are alternately updated. It is strongly desirable and practically valuable to extend the ADMM directly to the case of a multi-block convex minimization problem whose objective function is the sum of more than two separable convex functions. However, whether this extension converges has long been open: neither an affirmative convergence proof nor an example showing divergence was known in the literature. In this paper we give a negative answer to this long-standing open question: the direct extension of ADMM is not necessarily convergent. We present a sufficient condition to ensure the convergence of the direct extension of ADMM, and give an example to show its divergence.
Journal Article
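To make the divergence phenomenon above concrete, here is a small numerical sketch of the direct (Gauss-Seidel) three-block extension applied to a nonsingular 3x3 linear system with f1 = f2 = f3 = 0 and b = 0, of the kind used as a counterexample; the matrix, starting point, and parameter choices are illustrative assumptions, not a verbatim reproduction of the paper's example.

```python
import numpy as np

# Columns A1, A2, A3 of a nonsingular matrix; the unique solution is x = 0.
A = np.array([[1., 1., 1.],
              [1., 1., 2.],
              [1., 2., 2.]])
cols = [A[:, [i]] for i in range(3)]        # 3x1 blocks

beta = 1.0
x = [np.ones((1, 1)) for _ in range(3)]     # scalar blocks, nonzero start
lam = np.zeros((3, 1))

for _ in range(100):
    for i in range(3):                      # block-wise minimization, f_i = 0
        rest = sum(cols[j] @ x[j] for j in range(3) if j != i)
        Ai = cols[i]
        x[i] = (Ai.T @ (lam / beta - rest)) / (Ai.T @ Ai)
    resid = sum(cols[j] @ x[j] for j in range(3))
    lam = lam - beta * resid                # multiplier update
print(np.linalg.norm(resid))                # typically grows, not vanishes
```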
A Survey on Some Recent Developments of Alternating Direction Method of Multipliers
2022
Recently, the alternating direction method of multipliers (ADMM) has attracted much attention from various fields, and many variants have been tailored for different models. Moreover, its theoretical study, such as rates of convergence and extensions to nonconvex problems, has also made much progress. In this paper, we give a survey of some recent developments of ADMM and its variants.
Journal Article
Sobolev, Besov and Triebel-Lizorkin Spaces on Quantum Tori
2018
This paper gives a systematic study of Sobolev, Besov and Triebel-Lizorkin spaces on a noncommutative d-torus \mathbb{T}^d_\theta (with \theta a skew-symmetric real d \times d matrix). These spaces share many properties with their classical counterparts. The authors prove, among other basic properties, the lifting theorem for all these spaces and a Poincaré-type inequality for Sobolev spaces.
The Augmented Lagrangian Method as a Framework for Stabilised Methods in Computational Mechanics
by Hansbo, Peter; Burman, Erik; Larson, Mats G.
in Approximation, Augmented Lagrange multiplier methods, Augmented Lagrangian methods
2023
In this paper we will present a review of recent advances in the application of the augmented Lagrange multiplier method as a general approach for generating multiplier-free stabilised methods. The augmented Lagrangian method consists of a standard Lagrange multiplier method augmented by a penalty term penalising the constraint equations, and is well known as the basis for iterative algorithms for constrained optimisation problems. Its use as a stabilisation method in computational mechanics has, however, only recently been appreciated. We first show how the method generates Galerkin/Least Squares type schemes for equality constraints and then how it can be extended to develop new stabilised methods for inequality constraints. Applications to several different problems in computational mechanics are given.
Journal Article
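A minimal sketch of the augmented Lagrangian idea the entry above reviews: a Lagrange multiplier term plus a penalty on the constraint, with a multiplier update after each inner solve. The toy equality-constrained quadratic problem and all names here are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def augmented_lagrangian(c, a, b, gamma=10.0, n_iter=30):
    """Minimize 0.5*||u - c||^2 subject to a^T u = b via the augmented
    Lagrangian L(u, lam) = 0.5*||u - c||^2 + lam*(a@u - b)
                           + 0.5*gamma*(a@u - b)**2."""
    lam = 0.0
    for _ in range(n_iter):
        # inner solve in closed form: (I + gamma*a a^T) u = c - lam*a + gamma*b*a
        M = np.eye(len(c)) + gamma * np.outer(a, a)
        u = np.linalg.solve(M, c - lam * a + gamma * b * a)
        lam += gamma * (a @ u - b)          # multiplier (Uzawa) update
    return u, lam
```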
On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers
by Deng, Wei; Yin, Wotao
in Algorithms, Computational Mathematics and Numerical Analysis, Convergence
2016
The formulation min_{x,y} f(x) + g(y), subject to Ax + By = b, where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning, either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous gradient. On this kind of problem, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of f/g-decoupled subproblems. However, its effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1/k) and O(1/k^2) were recently established in the literature, though the O(1/k) rates do not require strong convexity. This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B. The result applies to various generalizations of ADM that allow the subproblems to be solved faster and less exactly in certain manners. The derived rate of convergence also provides some theoretical guidance for optimizing the ADM parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of ADM generalizations.
Journal Article
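A minimal sketch of the ADM/ADMM iteration for the formulation in the entry above, written in scaled dual form; the subproblem oracles argmin_f and argmin_g are assumed to be supplied by the caller, and their signatures are illustrative.

```python
import numpy as np

def admm(argmin_f, argmin_g, A, B, b, rho=1.0, n_iter=100):
    """Scaled-form ADMM for min f(x) + g(y) s.t. Ax + By = b.

    argmin_f(v) must return argmin_x f(x) + (rho/2)*||Ax + v||^2 and
    argmin_g(v) must return argmin_y g(y) + (rho/2)*||By + v||^2;
    both oracles are problem-specific."""
    x, y, u = np.zeros(A.shape[1]), np.zeros(B.shape[1]), np.zeros(len(b))
    for _ in range(n_iter):
        x = argmin_f(B @ y - b + u)    # minimize over x with y, u fixed
        y = argmin_g(A @ x - b + u)    # minimize over y with the new x
        u = u + A @ x + B @ y - b      # scaled multiplier update
    return x, y
```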
Implementation of Novel 2x2 Vedic Multiplier using QCA Technology
2023
Advantages such as high operating speed, scalability, and lower power consumption make QCA technology more feasible than modern CMOS technology. QCA technology uses electrons' Coulombic interaction and polarization to represent the binary values 0 and 1. The present paper proposes a novel XOR gate and a half adder design and uses them to implement a new 2x2 Vedic multiplier in QCA technology. A 2x2 Vedic multiplier multiplies two inputs, of two bits each, using the Urdhva-Tiryakbhyam Vedic sutra. The proposed circuit has a reduced cell count and quantum cost compared to co-planar Vedic multipliers available in the literature. QCADesigner 2.0.3 is used for the simulation and verification of all three proposed circuits.
Journal Article
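As a gate-level illustration of the Urdhva-Tiryakbhyam scheme the entry above realizes in QCA: four AND partial products combined by two half adders (XOR plus AND) yield the 4-bit product. This is a software model only, not the paper's QCA layout, and the names are illustrative.

```python
def half_adder(a, b):
    return a ^ b, a & b                    # (sum, carry)

def vedic_2x2(a1, a0, b1, b0):
    """Multiply two 2-bit numbers (a1 a0) x (b1 b0) -> bits p3..p0."""
    p0 = a0 & b0                           # vertical product of low bits
    p1, c1 = half_adder(a1 & b0, a0 & b1)  # crosswise products
    p2, p3 = half_adder(a1 & b1, c1)       # high vertical product + carry
    return p3, p2, p1, p0

# usage: verify against ordinary integer multiplication
for a in range(4):
    for b in range(4):
        p3, p2, p1, p0 = vedic_2x2(a >> 1, a & 1, b >> 1, b & 1)
        assert p3 * 8 + p2 * 4 + p1 * 2 + p0 == a * b
```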
Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
2014
This paper presents two classes of decomposition algorithms based on the proximal method of multipliers (PMM) introduced in the mid-1970s by Rockafellar for convex minimization. We first show that the PMM framework is at the root of many past and recent decomposition schemes suggested in the literature, allowing for an elementary analysis of these methods through a unified scheme. We then prove various sublinear global convergence rate results for the two classes of PMM-based decomposition algorithms for function values and constraint violations. Furthermore, under a mild assumption on the problem's data we derive rate of convergence results in terms of the original primal function values for both classes. As a by-product of our analysis we also obtain convergence of the sequences produced by the two algorithm classes to optimal primal-dual solutions.
Journal Article
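A minimal sketch of the basic PMM iteration on an assumed equality-constrained quadratic toy problem (not one of the paper's decomposition schemes): each step minimizes the augmented Lagrangian plus a proximal term (1/(2c))*||x - x_k||^2, then updates the multiplier. The closed-form inner solve relies on the quadratic structure.

```python
import numpy as np

def pmm(Q, q, A, b, c=1.0, n_iter=100):
    """PMM for min 0.5*x^T Q x - q^T x  s.t.  A x = b (toy problem)."""
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    M = Q + c * A.T @ A + np.eye(n) / c    # inner problem stays quadratic
    for _ in range(n_iter):
        # proximal augmented-Lagrangian minimization in closed form
        rhs = q - A.T @ lam + c * A.T @ b + x / c
        x = np.linalg.solve(M, rhs)
        lam = lam + c * (A @ x - b)        # multiplier update
    return x, lam
```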
Error Reduction Method of an RSFQ Approximate Multiplier using Double Operations
2025
This study proposes a method to reduce the error associated with approximate multiplication. The method considers a previously proposed approximate multiplier for RSFQ circuits that produces an n-bit integer result for an n-bit operation. The proposed method applies the multiplier's approximate multiplication twice to generate a 2n-bit result for an n-bit operation, combining it with simple operations, such as bit-order reversal and decrements, to produce the result. This paper discusses the implementation of the proposed method in RSFQ circuits. The evaluation results demonstrate that 70% of the multiplication results for 4-bit operations are correct, and the average error is reduced compared with the original approximate multiplication.
Journal Article
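The RSFQ circuit itself is not reproduced here; as a generic stand-in, this sketch shows what an n-bit-result approximate multiplier looks like and how an exactness rate over all 4-bit operand pairs can be counted. The truncation rule is an assumption for illustration, not the paper's design.

```python
def approx_mul(a, b, n=4):
    # keep only the top n bits of the exact 2n-bit product
    return ((a * b) >> n) << n

# count how many of the 256 4-bit products the approximation gets exact
exact = sum(approx_mul(a, b) == a * b
            for a in range(16) for b in range(16))
print(f"{exact}/256 products exact under truncation")
```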