Catalogue Search | MBRL
Explore the vast range of titles available.
90 results for "Goldfarb, Donald"
Robust Low-Rank Tensor Recovery: Models and Algorithms
2014
Robust tensor recovery plays an instrumental role in robustifying tensor decompositions for multilinear data analysis against outliers, gross corruptions, and missing values and has a diverse array of applications. In this paper, we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust principal component analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery.
Journal Article
Fixed point and Bregman iterative methods for matrix rank minimization
2011
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of $10^{-5}$ in about 3 min by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
Journal Article
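The fixed-point shrinkage idea behind this line of work can be sketched in a few lines of NumPy. This is a minimal soft-impute-style illustration, not the authors' FPCA code (which adds continuation over a decreasing regularization parameter and an approximate SVD); the function names and parameter values below are assumptions for the sketch:

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: shrink each singular value by tau."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fixed_point_completion(M_obs, mask, mu=0.2, step=1.0, iters=500):
    """Fixed-point iteration for mu*||X||_* + 0.5*||P_Omega(X) - M_obs||_F^2:
    a gradient step on the data-fit term followed by singular value shrinkage."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        grad = mask * (X - M_obs)          # gradient of the quadratic fit term
        X = svt(X - step * grad, mu * step)
    return X
```

On small low-rank instances the shrinkage step gradually fills in the unobserved entries; the continuation and approximate-SVD machinery described in the abstract is what lets the full FPCA algorithm scale to very large matrices.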
Fast alternating linearization methods for minimizing the sum of two convex functions
by Ma, Shiqian; Scheinberg, Katya; Goldfarb, Donald
in Algorithms; Calculus of Variations and Optimal Control; Optimization; Combinatorics
2013
We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009) where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
Journal Article
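The alternating linearization scheme the abstract describes can be sketched generically: each half-step linearizes one function at the current point and takes a proximal step on the other. A minimal sketch, assuming both proximal maps are available in closed form (the function names and the choice of step parameter are illustrative, not the paper's exact method):

```python
import numpy as np

def alt_linearization(grad_f, grad_g, prox_f, prox_g, x0, mu=0.5, iters=200):
    """Gauss-Seidel alternating linearization for min f(x) + g(x):
    linearize g at x and take a proximal step on f, then linearize f
    at the new point and take a proximal step on g."""
    x = x0.copy()
    for _ in range(iters):
        y = prox_f(x - mu * grad_g(x), mu)   # g linearized, prox step on f
        x = prox_g(y - mu * grad_f(y), mu)   # f linearized, prox step on g
    return x
```

For example, with f(x) = 0.5||x - a||^2 and g(x) = 0.5||x - b||^2 (both smooth with Lipschitz gradients, as the basic method requires), the iterates converge to the minimizer (a + b)/2.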
An Iterative Regularization Method for Total Variation-Based Image Restoration
2005
We introduce a new iterative regularization procedure for inverse problems based on the use of Bregman distances, with particular focus on problems arising in image processing. We are motivated by the problem of restoring noisy and blurry images via variational methods by using total variation regularization. We obtain rigorous convergence results and effective stopping criteria for the general procedure. The numerical results for denoising appear to give significant improvement over standard models, and preliminary results for deblurring/denoising are very encouraging.
Journal Article
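The iterative regularization procedure described above can be illustrated on a toy 1-D problem, with an l1 regularizer and identity forward operator standing in for the paper's total variation and blur setting (all names and parameters here are assumptions for the illustration):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bregman_denoise(f, lam=0.5, outer=10):
    """Bregman iteration sketch: each outer step solves a regularized
    denoising subproblem (here, soft-thresholding) and then adds the
    residual back to the data before the next solve."""
    fk = f.copy()
    for _ in range(outer):
        u = soft(fk, lam)
        fk = fk + (f - u)   # "add back" the residual (Bregman update)
    return u
```

A single shrinkage step would return a biased value of 1.5 for an entry of size 2; the add-back steps restore the full value, mirroring the improvement over standard one-shot regularization that the abstract reports for denoising.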
Parametric Maximum Flow Algorithms for Fast Total Variation Minimization
2009
This report studies the global minimization of anisotropically discretized total variation (TV) energies with an $L^p$ (in particular, $L^1$ and $L^2$) fidelity term using parametric maximum flow algorithms to minimize $s$-$t$ cut representations of these energies. The TV/$L^2$ model, also known as the Rudin-Osher-Fatemi (ROF) model, is suitable for restoring images contaminated by Gaussian noise, while the TV/$L^1$ model is able to remove impulsive noise from grayscale images and perform multiscale decompositions of them. Preliminary numerical results on large-scale two-dimensional CT and three-dimensional brain MR images are presented to illustrate the effectiveness of these approaches.
Journal Article
Second-order Cone Programming Methods for Total Variation-Based Image Restoration
2005
In this paper we present optimization algorithms for image restoration based on the total variation (TV) minimization framework of Rudin, Osher, and Fatemi (ROF). Our approach formulates TV minimization as a second-order cone program which is then solved by interior-point algorithms that are efficient both in practice (using nested dissection and domain decomposition) and in theory (i.e., they obtain solutions in polynomial time). In addition to the original ROF minimization model, we show how to apply our approach to other TV models, including ones that are not solvable by PDE-based methods. Numerical results on a varied set of images are presented to illustrate the effectiveness of our approach.
Journal Article
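For intuition about the ROF objective being minimized, it can also be attacked by plain gradient descent on a smoothed TV energy. This stand-in is not the paper's second-order cone programming approach; it is a first-order sketch with assumed smoothing, step, and boundary-handling choices:

```python
import numpy as np

def tv_smooth_denoise(f, lam=0.1, eps=1e-2, step=0.02, iters=300):
    """Gradient descent on a smoothed ROF-type energy for a 2-D image f:
    sum sqrt(|grad u|^2 + eps) + (lam/2) * ||u - f||^2."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)      # smoothed gradient norm
        px, py = ux / mag, uy / mag
        # divergence of (px, py), periodic boundary for brevity:
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * (lam * (u - f) - div)
    return u
```

Smoothing with eps avoids the nondifferentiability of TV but blurs small-scale structure; the interior-point approach in the paper solves the exact (nonsmooth) model instead, which is part of its appeal.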
The Total Variation Regularized $L^1$ Model for Multiscale Decomposition
by Osher, Stanley; Yin, Wotao; Goldfarb, Donald
in Applied mathematics; Banach spaces; Decomposition
2007
This paper studies the total variation regularization with an $L^1$ fidelity term (TV-$L^1$) model for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems. Using this result we show that the TV-$L^1$ model is able to separate image features according to their scales, where the scale is analytically defined by the $G$-value. A number of other properties including the geometric and morphological invariance of the TV-$L^1$ model are also proved and their applications discussed.
Journal Article
A Curvilinear Search Method for p-Harmonic Flows on Spheres
2009
The problem of finding $p$-harmonic flows arises in a wide range of applications including color image (chromaticity) denoising, micromagnetics, liquid crystal theory, and directional diffusion. In this paper, we propose an innovative curvilinear search method for minimizing $p$-harmonic energies over spheres. Starting from a flow (map) on the unit sphere, our method searches along a curve that lies on the sphere in a manner similar to that of a standard inexact line search descent method. We show that our method is globally convergent if the step length satisfies the Armijo-Wolfe conditions. Computational tests are presented to demonstrate the efficiency of the proposed method and a variant of it that uses Barzilai-Borwein steps.
Journal Article
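The idea of searching along the sphere itself can be illustrated with a simpler retraction-based scheme: project the gradient onto the tangent space, take a step, and renormalize back onto the sphere. This is a hypothetical stand-in for illustration, not the paper's curvilinear search with Armijo-Wolfe step lengths or its Barzilai-Borwein variant:

```python
import numpy as np

def sphere_gradient_descent(grad, x0, step=0.2, iters=200):
    """Minimize a function over the unit sphere by projected gradient
    steps followed by renormalization (a simple retraction)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = grad(x)
        g_tan = g - (g @ x) * x          # project gradient onto tangent space
        x = x - step * g_tan
        x = x / np.linalg.norm(x)        # retract back onto the sphere
    return x
```

For instance, minimizing the linear energy -c.x over the unit sphere drives the iterates to c/||c||, the constrained minimizer.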
Fast Multiple-Splitting Algorithms for Convex Optimization
2012
We present in this paper two different classes of general multiple-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized can be written as the sum of $K$ convex functions, each of which has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O((K-1)L/\epsilon)$, where $L$ is an upper bound on all of the Lipschitz constants. The algorithms in the second class are accelerated versions of those in the first class, where the complexity result is improved to $O(\sqrt{(K-1)L/\epsilon})$ while the computational effort required at each iteration is almost unchanged. To the best of our knowledge, the complexity results presented in this paper are the first ones of this type that have been given for splitting and alternating direction-type methods. Moreover, all algorithms proposed in this paper are parallelizable, which makes them particularly attractive for solving certain large-scale problems.
Journal Article
A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation
by Wen, Zaiwen; Zhang, Yin; Yin, Wotao
in Algorithms; Computational mathematics; Iterative methods
2010
The authors propose a fast algorithm for solving the ... minimization problem ... for recovering sparse solutions to an underdetermined system of linear equations ... . The algorithm is divided into two stages that are performed repeatedly. In the first stage, a first-order iterative "shrinkage" method yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ... to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to ... . This code exhibits state-of-the-art performance in terms of both its speed and its ability to recover sparse signals. (ProQuest: ... denotes formulae/symbols omitted.)
Journal Article
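The two-stage structure the abstract describes, shrinkage to estimate a support followed by a smooth subspace problem with the signs fixed, can be sketched as follows. This is an illustrative reconstruction under assumed parameters, not the FPC_AS code, and it omits the continuation (homotopy) loop over decreasing regularization values:

```python
import numpy as np

def shrink_then_subspace(A, b, mu, iters=5):
    """Stage 1: iterative shrinkage steps estimate the support of x.
    Stage 2: with support and signs fixed, the remaining problem is a
    small smooth quadratic, solved directly here via normal equations."""
    n = A.shape[1]
    x = np.zeros(n)
    tau = 1.0 / np.linalg.norm(A, 2) ** 2           # step from spectral norm
    for _ in range(iters):                          # stage 1: shrinkage steps
        v = x - tau * (A.T @ (A @ x - b))
        x = np.sign(v) * np.maximum(np.abs(v) - tau * mu, 0.0)
    support = np.flatnonzero(x)                     # estimated nonzero set
    if support.size:                                # stage 2: subspace solve
        signs = np.sign(x[support])
        As = A[:, support]
        # minimize 0.5*||As z - b||^2 + mu * signs.z, linear in z on the support
        z = np.linalg.solve(As.T @ As, As.T @ b - mu * signs)
        x = np.zeros(n)
        x[support] = z
    return x
```

The subspace solve is exact here for small problems; the real code handles large instances where the reduced quadratic must itself be solved iteratively.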