Catalogue Search | MBRL
46,115 result(s) for "Matrix (mathematics)"
Log-gases and random matrices
2010
Random matrix theory, in both its applications and its theory, has evolved rapidly over the past fifteen years. Log-Gases and Random Matrices gives a comprehensive account of these developments, emphasizing log-gases as a physical picture and heuristic, as well as covering topics such as beta ensembles and Jack polynomials.
Positive Definite Matrices
2009, 2007
This book represents the first synthesis of the considerable body of new research into positive definite matrices. These matrices play the same role in noncommutative analysis as positive real numbers do in classical analysis. They have theoretical and computational uses across a broad spectrum of disciplines, including calculus, electrical engineering, statistics, physics, numerical analysis, quantum information theory, and geometry. Through detailed explanations and an authoritative and inspiring writing style, Rajendra Bhatia carefully develops general techniques that have wide applications in the study of such matrices. Bhatia introduces several key topics in functional analysis, operator theory, harmonic analysis, and differential geometry--all built around the central theme of positive definite matrices. He discusses positive and completely positive linear maps, and presents major theorems with simple and direct proofs. He examines matrix means and their applications, and shows how to use positive definite functions to derive operator inequalities that he and others proved in recent years. He guides the reader through the differential geometry of the manifold of positive definite matrices, and explains recent work on the geometric mean of several matrices. Positive Definite Matrices is an informative and useful reference book for mathematicians and other researchers and practitioners. The numerous exercises and notes at the end of each chapter also make it the ideal textbook for graduate-level courses.
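As a small concrete instance of the matrix means Bhatia studies, the sketch below computes the geometric mean A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2} of two positive definite matrices. This is plain NumPy, not material from the book; the random test matrices are assumptions.

```python
# Sketch: geometric mean of two symmetric positive definite (SPD) matrices,
#   A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2},
# one of the matrix means the book discusses. Not the book's own code.
import numpy as np

def spd_sqrt(M):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Geometric mean A # B of SPD matrices A and B."""
    As = spd_sqrt(A)
    Asi = np.linalg.inv(As)
    return As @ spd_sqrt(Asi @ B @ Asi) @ As

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
A = X @ X.T + 3 * np.eye(3)              # random SPD test matrices (assumed)
Y = rng.standard_normal((3, 3))
B = Y @ Y.T + 3 * np.eye(3)

G = geometric_mean(A, B)
print(np.linalg.eigvalsh(G))             # all eigenvalues positive
print(np.allclose(geometric_mean(A, A), A))   # sanity check: A # A == A
```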
Matrices, Moments and Quadrature with Applications
by Golub, Gene H.; Meurant, Gérard
in Algorithm; Basis (linear algebra); Biconjugate gradient method
2009, 2010
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part. Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization. This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
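A hedged sketch of the book's central mechanism, assuming the standard Lanczos/Gauss-quadrature connection: k Lanczos steps started from u produce a tridiagonal T_k, and u^T f(A) u is estimated by evaluating f on T_k. Plain NumPy; the test matrix, f = exp, and k are arbitrary choices, and no breakdown handling is included.

```python
# Sketch of the Golub-Meurant idea: estimate the bilinear form u^T f(A) u
# by running k Lanczos steps from u and evaluating f on the small
# tridiagonal T_k (a Gauss quadrature rule for the spectral measure).
import numpy as np

def lanczos(A, u, k):
    """k Lanczos steps with full reorthogonalization; returns tridiagonal T_k."""
    n = len(u)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = u / np.linalg.norm(u)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def bilinear_estimate(A, u, f, k=12):
    """Gauss-quadrature estimate of u^T f(A) u from the Lanczos tridiagonal."""
    theta, V = np.linalg.eigh(lanczos(A, u, k))
    return (u @ u) * np.sum(f(theta) * V[0, :] ** 2)   # ||u||^2 * e1^T f(T_k) e1

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 200))
A = X @ X.T / 200 + np.eye(200)                        # SPD test matrix (assumed)
u = rng.standard_normal(200)
w, V = np.linalg.eigh(A)
print(bilinear_estimate(A, u, np.exp),                 # quadrature estimate
      (V.T @ u) @ (np.exp(w) * (V.T @ u)))             # dense reference value
```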
Matrix Completions, Moments, and Sums of Hermitian Squares
2011
Intensive research in matrix completions, moments, and sums of Hermitian squares has yielded a multitude of results in recent decades. This book provides a comprehensive account of this quickly developing area of mathematics and applications and gives complete proofs of many recently solved problems. With MATLAB codes and more than 200 exercises, the book is ideal for a special topics course for graduate or advanced undergraduate students in mathematics or engineering, and will also be a valuable resource for researchers. Often driven by questions from signal processing, control theory, and quantum information, the subject of this book has inspired mathematicians from many subdisciplines, including linear algebra, operator theory, measure theory, and complex function theory. In turn, the applications are being pursued by researchers in areas such as electrical engineering, computer science, and physics. The book is self-contained, has many examples, and for the most part requires only a basic background in undergraduate mathematics, primarily linear algebra and some complex analysis. The book also includes an extensive discussion of the literature, with close to 600 references from books and journals from a wide variety of disciplines.
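A minimal sketch of the kind of completion problem the book treats: given a partial symmetric matrix, decide whether the free entries can be filled so the result is positive semidefinite. The formulation below uses CVXPY as a convenient convex-optimization front end (an assumption; the book's own codes are MATLAB), with an arbitrary 3x3 partial correlation matrix.

```python
# Sketch: positive semidefinite (PSD) completion of a partial symmetric
# matrix, posed as a convex feasibility problem. Illustration only.
import cvxpy as cp
import numpy as np

n = 3
X = cp.Variable((n, n), symmetric=True)
# Specified entries: unit diagonal, X[0,1] = X[1,2] = 0.9; X[0,2] is free.
constraints = [X >> 0,                    # the completion must be PSD
               cp.diag(X) == 1,
               X[0, 1] == 0.9,
               X[1, 2] == 0.9]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status)                        # 'optimal' means a completion exists
print(np.round(X.value, 3))
```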
Optimization algorithms on matrix manifolds
2008
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
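A minimal sketch of the project-then-retract pattern the book develops, on the simplest matrix manifold, the unit sphere: Riemannian steepest descent for the Rayleigh quotient x^T A x, whose minimizer is an eigenvector for the smallest eigenvalue. Plain NumPy; the step size and test matrix are assumptions, not the book's own examples.

```python
# Sketch: Riemannian steepest descent on the unit sphere S^{n-1},
# minimizing f(x) = x^T A x. Each step projects the Euclidean gradient
# onto the tangent space, moves along it, and retracts to the sphere.
import numpy as np

def rayleigh_descent(A, x0, steps=2000, lr=0.02):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        egrad = 2 * A @ x                  # Euclidean gradient of f
        rgrad = egrad - (x @ egrad) * x    # project onto tangent space at x
        x = x - lr * rgrad                 # step in the tangent direction
        x /= np.linalg.norm(x)             # retraction: back to the sphere
    return x

rng = np.random.default_rng(2)
n = 30
E = rng.standard_normal((n, n)) * 0.05
A = np.diag(np.linspace(1.0, 10.0, n)) + (E + E.T) / 2   # assumed test matrix
x = rayleigh_descent(A, rng.standard_normal(n))
print(x @ A @ x, np.linalg.eigvalsh(A)[0])   # the two values should nearly agree
```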
Parallel Multi-Block ADMM with o(1 / k) Convergence
by Lai, Ming-Jun; Deng, Wei; Peng, Zhimin
in Algorithms; Computational Mathematics and Numerical Analysis; Convergence
2017
This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints:

$$\min\; f_1(x_1) + \cdots + f_N(x_N) \quad \text{subject to} \quad A_1 x_1 + \cdots + A_N x_N = c, \quad x_1 \in X_1, \ldots, x_N \in X_N,$$

where $N \ge 2$, the $f_i$ are convex functions, the $A_i$ are matrices, and the $X_i$ are feasible sets for the variables $x_i$. Our algorithm extends the alternating direction method of multipliers (ADMM): it decomposes the original problem into $N$ smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the $N$-block Jacobi fashion and preserve convergence in the following two cases: (i) the matrices $A_i$ are mutually near-orthogonal and have full column rank, or (ii) proximal terms are added to the $N$ subproblems (without any assumption on the matrices $A_i$). In the latter case, certain proximal terms can let the subproblems be solved in more flexible and efficient ways. We show that $\|x^{k+1} - x^k\|_M^2$ converges at a rate of $o(1/k)$, where $M$ is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for case (ii) above) on Amazon EC2 and tested it on basis pursuit problems with over 300 GB of distributed data. This is the first time that the successful solution of a compressive sensing problem at such a scale has been reported.
Journal Article
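The Jacobi-style update the abstract describes is easy to sketch. The toy below is plain NumPy, not the authors' implementation: the block sizes, penalty rho, and proximal weights tau are assumptions, and f_i(x_i) = 0.5*||x_i||^2 is chosen so every subproblem has a closed form. All N blocks are updated in parallel from the same iterate, with the proximal terms of case (ii); the deliberately conservative tau values also show why the paper's parameter tuning matters in practice.

```python
# Sketch of N-block Jacobi proximal ADMM for
#   min sum_i 0.5*||x_i||^2  s.t.  sum_i A_i x_i = c.
import numpy as np

rng = np.random.default_rng(3)
N, m, d = 3, 8, 4                         # N blocks; each A_i is m x d (assumed)
A = [rng.standard_normal((m, d)) for _ in range(N)]
c = rng.standard_normal(m)

rho, gamma = 1.0, 1.0                     # penalty and dual step (assumptions)
# Conservative proximal weights, in the spirit of the paper's sufficient
# condition tau_i > rho*(N-1)*||A_i||^2 for gamma = 1.
tau = [1.2 * rho * (N - 1) * np.linalg.norm(Ai, 2) ** 2 for Ai in A]
x = [np.zeros(d) for _ in range(N)]
lam = np.zeros(m)

for k in range(2000):
    Ax = sum(Ai @ xi for Ai, xi in zip(A, x))
    new_x = []
    for i in range(N):                    # Jacobi: every block uses the old iterate
        s = Ax - A[i] @ x[i] - c + lam / rho
        H = (1 + tau[i]) * np.eye(d) + rho * A[i].T @ A[i]
        new_x.append(np.linalg.solve(H, tau[i] * x[i] - rho * A[i].T @ s))
    x = new_x
    Ax = sum(Ai @ xi for Ai, xi in zip(A, x))
    lam = lam + gamma * rho * (Ax - c)    # dual update
print(np.linalg.norm(Ax - c))             # primal residual, should be near zero
```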
A fast higher-order ADI scheme for two-dimensional Riesz space-fractional diffusion equations
2025
A fast fourth-order ADI scheme is proposed to solve 2D Riesz space-fractional diffusion equations (RSFDEs). This scheme involves a sequence of symmetric positive definite (SPD) Toeplitz linear systems, thus allowing the fast sine transform to be employed for computational efficiency. Furthermore, those discretized SPD systems are solved using the preconditioned conjugate gradient (PCG) method. Our theoretical analysis demonstrates that the preconditioned matrix can be expressed as the sum of an identity matrix, a small-norm matrix, and a low-rank matrix. Numerical results verify the method’s efficacy.
Journal Article
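As a rough illustration of the structure this paper exploits, the sketch below solves an SPD Toeplitz system by conjugate gradients, with each matrix-vector product done in O(n log n) via circulant embedding and the FFT. The Kac-Murdock-Szegő-style test column is an assumption, and the paper's actual scheme adds a fast-sine-transform-based preconditioner; this unpreconditioned sketch shows only the fast-matvec idea.

```python
# Sketch: CG on an SPD Toeplitz system with O(n log n) matvecs.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1024
t = 0.5 ** np.arange(n)                    # first column of an SPD Toeplitz matrix
c = np.concatenate([t, [0.0], t[:0:-1]])   # circulant embedding of size 2n
fc = np.fft.fft(c)

def toeplitz_matvec(x):
    """Compute T @ x in O(n log n) through the circulant embedding."""
    xp = np.concatenate([x, np.zeros(n)])
    return np.real(np.fft.ifft(fc * np.fft.fft(xp)))[:n]

T = LinearOperator((n, n), matvec=toeplitz_matvec, dtype=np.float64)
b = np.ones(n)
sol, info = cg(T, b)                       # unpreconditioned CG
print(info, np.linalg.norm(toeplitz_matvec(sol) - b))
```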
Tensor-tensor algebra for optimal representation and compression of multiway data
by Avron, Haim; Newman, Elizabeth; Horesh, Lior
in Applied Mathematics; Compressibility; Compression
2021
With the advent of machine learning and its overarching pervasiveness, it is imperative to devise ways to represent large datasets efficiently while distilling intrinsic features necessary for subsequent analysis. The primary workhorse used in data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. A primary goal in this study is to show that high-dimensional datasets are more compressible when treated as tensors (i.e., multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product construct and its generalizations. We begin by proving Eckart–Young optimality results for families of tensor-SVDs under two different truncation strategies. Since such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: Does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is positive, as proven by showing that a tensor-tensor representation of an equal-dimensional spanning space can be superior to its matrix counterpart. We then use these optimality results to investigate how the compressed representation provided by the truncated tensor SVD is related both theoretically and empirically to its two closest tensor-based analogs, the truncated higher-order SVD and the truncated tensor-train SVD.
Journal Article
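A minimal sketch of the t-SVD truncation the paper analyzes, assuming the standard FFT-based t-product: transform along the third mode, truncate the matrix SVD of every frontal slice in the transform domain, and transform back. Plain NumPy; the array sizes and target ranks are arbitrary assumptions.

```python
# Sketch: truncated tensor-SVD (t-SVD) under the FFT-based t-product.
import numpy as np

def tsvd_truncate(X, r):
    """Rank-r t-SVD approximation of a 3-way array X (n1 x n2 x n3)."""
    Xf = np.fft.fft(X, axis=2)             # move to the transform domain
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):            # truncate each frontal slice's SVD
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(Yf, axis=2))

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 30, 10))
for r in (2, 5, 10, 20):
    err = np.linalg.norm(tsvd_truncate(X, r) - X) / np.linalg.norm(X)
    print(r, round(err, 3))                # relative error shrinks as r grows
```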
Mixed precision algorithms in numerical linear algebra
2022
Today’s floating-point arithmetic landscape is broader than ever. While scientific computing has traditionally used single precision and double precision floating-point arithmetics, half precision is increasingly available in hardware and quadruple precision is supported in software. Lower precision arithmetic brings increased speed and reduced communication and energy costs, but it produces results of correspondingly low accuracy. Higher precisions are more expensive but can potentially provide great benefits, even if used sparingly. A variety of mixed precision algorithms have been developed that combine the superior performance of lower precisions with the better accuracy of higher precisions. Some of these algorithms aim to provide results of the same quality as algorithms running in a fixed precision but at a much lower cost; others use a little higher precision to improve the accuracy of an algorithm. This survey treats a broad range of mixed precision algorithms in numerical linear algebra, both direct and iterative, for problems including matrix multiplication, matrix factorization, linear systems, least squares, eigenvalue decomposition and singular value decomposition. We identify key algorithmic ideas, such as iterative refinement, adapting the precision to the data, and exploiting mixed precision block fused multiply–add operations. We also describe the possible performance benefits and explain what is known about the numerical stability of the algorithms. This survey should be useful to a wide community of researchers and practitioners who wish to develop or benefit from mixed precision numerical linear algebra algorithms.
Journal Article
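A minimal sketch of the survey's canonical example, LU-based iterative refinement: factorize once in float32 (fast, low accuracy), then correct the solution with float64 residuals. NumPy/SciPy; the test matrix is an assumption, built to be well conditioned so refinement converges in a few steps.

```python
# Sketch: mixed precision iterative refinement with an f32 factorization.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(5)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned by construction
b = rng.standard_normal(n)

lu32 = lu_factor(A.astype(np.float32))            # factorize once, in low precision
x = lu_solve(lu32, b.astype(np.float32)).astype(np.float64)

for it in range(5):
    r = b - A @ x                                  # residual in high precision
    d = lu_solve(lu32, r.astype(np.float32))       # cheap correction via f32 factors
    x = x + d.astype(np.float64)
    print(it, np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```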
IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems
by Hansen, Per Christian; Gazzola, Silvia; Nagy, James G.
in Algebra; Algorithms; Computer Science
2019
This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR TOOLS, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated in the form of a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem’s coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to all of the various iterative methods requires only this matrix and the right-hand side vector; if the method uses any special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through the use of an optional input structure, the user can also have full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications. Numerical examples illustrate the various algorithms and test problems available in this package.
Journal Article
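IR TOOLS itself is a MATLAB package; as a language-neutral illustration of the semi-convergence its iterative solvers exploit, the Python sketch below (the 1D Gaussian deblurring setup, noise level, and iteration counts are all assumptions) runs LSQR with increasing iteration limits. The error to the true signal typically dips and then rises as later iterates start fitting the noise, so the stopping index plays the role of the regularization parameter.

```python
# Sketch: regularization by early stopping (semi-convergence) on a small
# ill-posed 1D deblurring problem, using SciPy's LSQR.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import lsqr

n = 200
s = np.linspace(-1, 1, n)
A = toeplitz(np.exp(-(s - s[0]) ** 2 / 0.01))     # 1D Gaussian blur (ill-posed)
A /= A.sum(axis=1, keepdims=True)
x_true = (np.abs(s) < 0.5).astype(float)          # piecewise-constant test signal
rng = np.random.default_rng(6)
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # blurred, noisy data

for iters in (5, 20, 50, 200, 1000):
    x = lsqr(A, b, atol=0, btol=0, conlim=1e16, iter_lim=iters)[0]
    print(iters, np.linalg.norm(x - x_true))      # typically dips, then rises
```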