48,568 result(s) for "Matrices (mathematics)"
Log-Gases and Random Matrices (LMS-34)
Random matrix theory, both as an application and as a theory, has evolved rapidly over the past fifteen years. Log-Gases and Random Matrices gives a comprehensive account of these developments, emphasizing log-gases as a physical picture and heuristic, as well as covering topics such as beta ensembles and Jack polynomials. Peter Forrester presents an encyclopedic development of log-gases and random matrices viewed as examples of integrable or exactly solvable systems. Forrester develops not only the application and theory of Gaussian and circular ensembles of classical random matrix theory, but also of the Laguerre and Jacobi ensembles and their beta extensions. Prominence is given to the computation of a multitude of Jacobians; determinantal point processes and orthogonal polynomials of one variable; the Selberg integral, Jack polynomials, and generalized hypergeometric functions; Painlevé transcendents; macroscopic electrostatics and asymptotic formulas; nonintersecting paths and models in statistical mechanics; and applications of random matrix theory. This is the first textbook development of both nonsymmetric and symmetric Jack polynomial theory, as well as the connection between Selberg integral theory and beta ensembles. The author provides hundreds of guided exercises and linked topics, making Log-Gases and Random Matrices an indispensable reference work, as well as a learning resource for all students and researchers in the field.
Matrices, Moments and Quadrature with Applications
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part. Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization. This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
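The central algorithmic idea described above, estimating a bilinear form u^T f(A) u by running a few Lanczos steps and evaluating f on the resulting small tridiagonal matrix (which implicitly defines a Gauss quadrature rule), can be sketched as follows. The test matrix, vector, and the choice f = log are illustrative assumptions, not taken from the book:

```python
import numpy as np

def lanczos(A, u, k):
    """Run k Lanczos steps with start vector u; return the k x k tridiagonal T."""
    n = len(u)
    alphas, betas = [], []
    q_prev = np.zeros(n)
    q = u / np.linalg.norm(u)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)          # SPD test matrix (illustrative)
u = rng.standard_normal(50)

T = lanczos(A, u, 8)
evals, V = np.linalg.eigh(T)           # Gauss nodes/weights are encoded in T
# Quadrature estimate of u^T log(A) u: ||u||^2 * e_1^T log(T) e_1
estimate = (u @ u) * (V[0] @ (np.log(evals) * V[0]))

# Exact bilinear form for comparison, via a full eigendecomposition
w, Q = np.linalg.eigh(A)
exact = u @ (Q @ (np.log(w) * (Q.T @ u)))
```

Eight Lanczos steps already give an eight-node Gauss rule, which is typically very accurate for smooth functions of well-conditioned matrices.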
Positive Definite Matrices
This book represents the first synthesis of the considerable body of new research into positive definite matrices. These matrices play the same role in noncommutative analysis as positive real numbers do in classical analysis. They have theoretical and computational uses across a broad spectrum of disciplines, including calculus, electrical engineering, statistics, physics, numerical analysis, quantum information theory, and geometry. Through detailed explanations and an authoritative and inspiring writing style, Rajendra Bhatia carefully develops general techniques that have wide applications in the study of such matrices. Bhatia introduces several key topics in functional analysis, operator theory, harmonic analysis, and differential geometry--all built around the central theme of positive definite matrices. He discusses positive and completely positive linear maps, and presents major theorems with simple and direct proofs. He examines matrix means and their applications, and shows how to use positive definite functions to derive operator inequalities that he and others proved in recent years. He guides the reader through the differential geometry of the manifold of positive definite matrices, and explains recent work on the geometric mean of several matrices. Positive Definite Matrices is an informative and useful reference book for mathematicians and other researchers and practitioners. The numerous exercises and notes at the end of each chapter also make it the ideal textbook for graduate-level courses.
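The matrix geometric mean mentioned above has a standard closed form for two positive definite matrices, A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. A minimal sketch, where the diagonal test matrices are illustrative choices (for commuting A and B the mean reduces to (AB)^{1/2}):

```python
import numpy as np

def sqrtm_spd(M):
    """Principal square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = sqrtm_spd(A)
    Asi = np.linalg.inv(As)
    return As @ sqrtm_spd(Asi @ B @ Asi) @ As

A = np.array([[4.0, 0.0], [0.0, 9.0]])
B = np.eye(2)
G = geometric_mean(A, B)   # A and B commute here, so G = (A B)^{1/2} = diag(2, 3)
```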
Matrix Completions, Moments, and Sums of Hermitian Squares
Intensive research in matrix completions, moments, and sums of Hermitian squares has yielded a multitude of results in recent decades. This book provides a comprehensive account of this quickly developing area of mathematics and applications and gives complete proofs of many recently solved problems. With MATLAB codes and more than 200 exercises, the book is ideal for a special topics course for graduate or advanced undergraduate students in mathematics or engineering, and will also be a valuable resource for researchers. Often driven by questions from signal processing, control theory, and quantum information, the subject of this book has inspired mathematicians from many subdisciplines, including linear algebra, operator theory, measure theory, and complex function theory. In turn, the applications are being pursued by researchers in areas such as electrical engineering, computer science, and physics. The book is self-contained, has many examples, and for the most part requires only a basic background in undergraduate mathematics, primarily linear algebra and some complex analysis. The book also includes an extensive discussion of the literature, with close to 600 references from books and journals from a wide variety of disciplines.
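As a toy instance of the completion problems the book treats, one can fill in the unspecified entry of a partially specified symmetric matrix so that the result is positive definite; for banded patterns like the one below, a classical result says a completion exists whenever every fully specified principal submatrix is positive definite. The 3x3 example and the grid search are illustrative, not from the book:

```python
import numpy as np

def completion_det(x):
    """Fill the unspecified (1,3)/(3,1) entries with x; return the matrix and its determinant."""
    M = np.array([[2.0, 1.0, x],
                  [1.0, 2.0, 1.0],
                  [x,   1.0, 2.0]])
    return M, np.linalg.det(M)

# Scan candidates and keep the determinant-maximizing completion.
# Here det = 4 + 2x - 2x^2, so the maximizer is x = 0.5; the max-det completion
# is the one whose inverse has a zero in the unspecified position.
best_x = max(np.linspace(-2, 2, 401), key=lambda x: completion_det(x)[1])
M, _ = completion_det(best_x)
eigs = np.linalg.eigvalsh(M)           # all positive: a valid PSD completion
```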
Parallel Multi-Block ADMM with o(1 / k) Convergence
This paper introduces a parallel and distributed algorithm for solving a minimization problem with linear constraints: minimize f_1(x_1) + ⋯ + f_N(x_N) subject to A_1 x_1 + ⋯ + A_N x_N = c and x_i ∈ X_i for i = 1, …, N, where N ≥ 2, the f_i are convex functions, the A_i are matrices, and the X_i are feasible sets for the variables x_i. Our algorithm extends the alternating direction method of multipliers (ADMM): it decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in two cases: (i) the matrices A_i are mutually near-orthogonal and have full column rank, or (ii) proximal terms are added to the N subproblems (without any assumption on the matrices A_i). In the latter case, certain proximal terms can let each subproblem be solved in more flexible and efficient ways. We show that ‖x^{k+1} − x^k‖_M^2 converges at a rate of o(1/k), where M is a symmetric positive semi-definite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning them that substantially accelerates our algorithm in practice. We implemented our algorithm (for case (ii) above) on Amazon EC2 and tested it on basis pursuit problems with over 300 GB of distributed data. This is the first reported solution of a compressive sensing problem at such a large scale.
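A minimal sketch of the proximal Jacobi variant (case (ii)) on a tiny quadratic program with f_i(x_i) = ½‖x_i‖², where each subproblem has a closed-form solution. The problem data, ρ, γ, and the proximal coefficients τ_i are illustrative choices, not the paper's tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, d = 2, 3, 2
A = [rng.standard_normal((m, d)) for _ in range(N)]   # constraint blocks A_i
c = rng.standard_normal(m)
rho, gamma = 1.0, 1.0
# Proximal coefficients, chosen comfortably above rho * ||A_i||^2
tau = [2 * N * rho * np.linalg.norm(Ai, 2) ** 2 for Ai in A]

x = [np.zeros(d) for _ in range(N)]
lam = np.zeros(m)
for _ in range(5000):
    Ax = sum(A[i] @ x[i] for i in range(N))
    x_new = []
    for i in range(N):
        r = Ax - A[i] @ x[i] - c                      # other blocks held fixed (Jacobi)
        # argmin 0.5||xi||^2 + lam^T Ai xi + rho/2 ||Ai xi + r||^2 + tau_i/2 ||xi - xi_k||^2
        H = (1 + tau[i]) * np.eye(d) + rho * A[i].T @ A[i]
        g = tau[i] * x[i] - A[i].T @ (lam + rho * r)
        x_new.append(np.linalg.solve(H, g))
    x = x_new                                          # all blocks updated in parallel
    lam = lam + gamma * rho * (sum(A[i] @ x[i] for i in range(N)) - c)

# The optimum of this quadratic program is the minimum-norm solution of the
# stacked linear system, which lstsq returns for a full-row-rank system.
x_star = np.linalg.lstsq(np.hstack(A), c, rcond=None)[0]
```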
Optimization Algorithms on Matrix Manifolds
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction--illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary to algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
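The generalization of steepest descent to a manifold can be illustrated on the unit sphere: maximize the Rayleigh quotient x^T A x by projecting the Euclidean gradient onto the tangent space at x and retracting back onto the sphere by normalization. The matrix and step size below are illustrative assumptions; the largest eigenvalue of this tridiagonal matrix is 2 + √2:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])        # eigenvalues: 2 - sqrt(2), 2, 2 + sqrt(2)

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
x /= np.linalg.norm(x)                 # start on the unit sphere

step = 0.05
for _ in range(2000):
    egrad = 2 * A @ x                  # Euclidean gradient of f(x) = x^T A x
    rgrad = egrad - (x @ egrad) * x    # project onto the tangent space at x
    x += step * rgrad                  # ascend along the Riemannian gradient
    x /= np.linalg.norm(x)             # retraction back onto the sphere

rayleigh = x @ A @ x                   # converges to the largest eigenvalue
```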
A fast higher-order ADI scheme for two-dimensional Riesz space-fractional diffusion equations
A fast fourth-order ADI scheme is proposed to solve 2D Riesz space-fractional diffusion equations (RSFDEs). This scheme involves a sequence of symmetric positive definite (SPD) Toeplitz linear systems, thus allowing the fast sine transform to be employed for computational efficiency. Furthermore, those discretized SPD systems are solved using the preconditioned conjugate gradient (PCG) method. Our theoretical analysis demonstrates that the preconditioned matrix can be expressed as the sum of an identity matrix, a small-norm matrix, and a low-rank matrix. Numerical results verify the method’s efficacy.
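The Toeplitz-plus-preconditioning idea can be illustrated generically: solve an SPD Toeplitz system by preconditioned conjugate gradients with a Strang circulant preconditioner applied via the FFT. The coefficients below are illustrative (chosen so the generating symbol stays positive, guaranteeing an SPD matrix) and are not the paper's discretization:

```python
import numpy as np

n = 64
t = np.zeros(n)
t[:4] = [3.0, -1.0, -0.25, -0.1]              # first column; symbol >= 0.3 > 0, so SPD
T = t[np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])]

# Strang circulant preconditioner: copy the central band of T into a circulant.
c = t.copy()
c[n // 2 + 1:] = t[1:n // 2][::-1]
ceig = np.fft.fft(c).real                     # circulant eigenvalues (all positive here)

def prec(v):
    """Apply C^{-1} v in O(n log n) via the FFT."""
    return np.fft.ifft(np.fft.fft(v) / ceig).real

b = np.ones(n)
x = np.zeros(n)
r = b - T @ x
z = prec(r)
p = z.copy()
for _ in range(100):                          # preconditioned conjugate gradient
    Tp = T @ p
    alpha = (r @ z) / (p @ Tp)
    x += alpha * p
    r_new = r - alpha * Tp
    if np.linalg.norm(r_new) < 1e-12:
        break
    z_new = prec(r_new)
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new
```

Because the preconditioned spectrum clusters near 1, PCG converges in far fewer iterations than the matrix dimension.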
A Power Method for Computing the Dominant Eigenvalue of a Dual Quaternion Hermitian Matrix
In this paper, we first study projections onto the set of unit dual quaternions and the set of dual quaternion vectors with unit norms. We then propose a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix. For a strict dominant eigenvalue, we show that the sequence generated by the power method converges linearly to the dominant eigenvalue and its corresponding eigenvector. For a general dominant eigenvalue, we establish linear convergence of the standard part of the dominant eigenvalue. Building on these results, we reformulate the simultaneous localization and mapping problem as a rank-one dual quaternion completion problem. A two-block coordinate descent method is proposed to solve this problem. One block has a closed-form solution; the other is the best rank-one approximation problem for a dual quaternion Hermitian matrix, which can be computed by the power method. Numerical experiments demonstrate the efficiency of the proposed power method.
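The power method itself is standard. A sketch for an ordinary complex Hermitian matrix (dual quaternion arithmetic is not available in NumPy, so a constructed known spectrum stands in for the paper's setting); the Rayleigh quotient converges to the eigenvalue of largest magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Q, _ = np.linalg.qr(M)                       # random unitary
d = np.array([5.0, 2.0, 1.0, -1.0, -3.0])    # known spectrum; dominant eigenvalue = 5
A = Q @ np.diag(d) @ Q.conj().T              # Hermitian matrix with eigenvalues d

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x /= np.linalg.norm(x)
lam = 0.0
for _ in range(200):
    y = A @ x
    lam = np.real(x.conj() @ y)              # Rayleigh quotient estimate
    x = y / np.linalg.norm(y)                # normalize the iterate

# lam approximates the dominant eigenvalue, here 5
```

Convergence is linear with rate given by the ratio of the two largest eigenvalue magnitudes (here 3/5), mirroring the linear-convergence statement in the abstract.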
Tensor-tensor algebra for optimal representation and compression of multiway data
With the advent of machine learning and its overarching pervasiveness, it is imperative to devise ways to represent large datasets efficiently while distilling intrinsic features necessary for subsequent analysis. The primary workhorse used in data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. A primary goal in this study is to show that high-dimensional datasets are more compressible when treated as tensors (i.e., multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product construct and its generalizations. We begin by proving Eckart–Young optimality results for families of tensor-SVDs under two different truncation strategies. Since such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: Does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is positive, as proven by showing that a tensor-tensor representation of an equal dimensional spanning space can be superior to its matrix counterpart. We then use these optimality results to investigate how the compressed representation provided by the truncated tensor SVD is related both theoretically and empirically to its two closest tensor-based analogs, the truncated high-order SVD and the truncated tensor-train SVD.
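The truncated t-SVD under the standard t-product (frontal-slice SVDs after an FFT along the third mode) can be sketched as follows; the tensor and truncation rank are illustrative. The Eckart–Young-type property shows up directly: the squared approximation error equals the sum of the discarded Fourier-domain singular values squared, divided by the third dimension:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 6, 4))        # third-order tensor (illustrative data)

Ahat = np.fft.fft(A, axis=2)              # the t-product diagonalizes in the Fourier domain
k = 2                                     # truncation rank
Bhat = np.empty_like(Ahat)
discarded = 0.0
for i in range(A.shape[2]):
    U, s, Vh = np.linalg.svd(Ahat[:, :, i], full_matrices=False)
    Bhat[:, :, i] = (U[:, :k] * s[:k]) @ Vh[:k, :]   # best rank-k slice approximation
    discarded += np.sum(s[k:] ** 2)
B = np.fft.ifft(Bhat, axis=2).real        # truncated t-SVD approximation of A

err = np.linalg.norm(A - B)
# By Parseval, err^2 = discarded / n3 -- the t-SVD analog of Eckart-Young
```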