30 result(s) for "Rayleigh Quotient Iteration"
Optimization algorithms on matrix manifolds
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters, and then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
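The simplest instance of the manifold-optimization setting this book treats is maximizing the Rayleigh quotient over the unit sphere. The sketch below (not taken from the book; the matrix and step size are illustrative assumptions) shows Riemannian gradient ascent with renormalization serving as the retraction back onto the manifold:

```python
# Hedged sketch of manifold optimization in its simplest form: maximizing
# the Rayleigh quotient x^T A x over the unit sphere. The matrix A and the
# step size are illustrative assumptions, not values from the book.
import numpy as np

def rayleigh_ascent(A, x0, step=0.1, iters=500):
    """Riemannian gradient ascent for max x^T A x on the unit sphere."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        # Euclidean gradient of x^T A x projected onto the tangent space at x.
        g = 2 * (A @ x - (x @ A @ x) * x)
        x = x + step * g
        x = x / np.linalg.norm(x)  # retraction: renormalize back to the sphere
    return x

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
x = rayleigh_ascent(A, np.array([0.3, 1.0]))
# x^T A x now approximates the largest eigenvalue of A.
```

The projection of the gradient and the renormalization step are exactly the two ingredients (tangent-space gradient, retraction) that the book's geometric chapters formalize for general manifolds.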
Inverse, Shifted Inverse, and Rayleigh Quotient Iteration as Newton's Method
The ℓ₂-normalized inverse, shifted inverse, and Rayleigh quotient iterations are classic algorithms for approximating an eigenvector of a symmetric matrix. This work establishes rigorously that each iterate produced by one of these three algorithms can be viewed as a Newton's method iterate followed by a normalization. The equivalences given here are not meant to suggest changes to the implementations of the classic eigenvalue algorithms; rather, they add further understanding to the formal structure of these iterations, and they explain their good behavior despite the possible need to solve systems with nearly singular coefficient matrices. A historical development of these eigenvalue algorithms is presented. Combining our equivalences with traditional Newton's method theory helps explain why normalized Newton's method, inverse iteration, and shifted inverse iteration are only linearly convergent, rather than quadratically convergent as would be expected, and why a new linear system need not be solved at each iteration. We also explain why our normalized Newton's method equivalent of Rayleigh quotient iteration is cubically convergent, and not just quadratically convergent as would be expected.
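The classic iteration discussed in this abstract can be sketched in a few lines. The matrix, starting vector, and tolerance below are illustrative assumptions; the key point is the structure of each step — solve a shifted system with the current Rayleigh quotient as shift, then normalize:

```python
# Minimal sketch of Rayleigh quotient iteration (RQI) for a symmetric matrix.
# The test matrix, starting vector, and tolerance are illustrative assumptions.
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    """Approximate an eigenpair of symmetric A starting from x0."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        mu = x @ A @ x  # Rayleigh quotient: the current shift
        try:
            # The shifted matrix may be nearly singular near convergence;
            # as the abstract notes, the method behaves well regardless.
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break  # exactly singular shift: x is already an eigenvector
        x = y / np.linalg.norm(y)  # the normalization step
        if np.linalg.norm(A @ x - (x @ A @ x) * x) < tol:
            break
    return x @ A @ x, x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 1.0, 1.0]))
```

In practice the cubic convergence the abstract refers to shows up as the eigenvalue residual collapsing within a handful of iterations.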
On Rayleigh Quotient Iteration for the Dual Quaternion Hermitian Eigenvalue Problem
Eigenvalue theory for dual quaternion Hermitian matrices is significant in multi-agent formation control. In this paper, we study the use of Rayleigh quotient iteration (RQI) for solving the right eigenpairs of dual quaternion Hermitian matrices. Combined with the dual representation, the RQI algorithm can effectively compute an eigenvalue along with the associated eigenvector of a dual quaternion Hermitian matrix. Furthermore, by utilizing the minimal residual property of the Rayleigh quotient, a convergence analysis of the Rayleigh quotient iteration is derived. Numerical examples illustrate the high accuracy and low CPU time cost of the proposed Rayleigh quotient iteration compared with the power method for solving the dual quaternion Hermitian eigenvalue problem.
Mixed precision Rayleigh quotient iteration for total least squares problems
With the recent emergence of mixed precision hardware, there has been a renewed interest in its use for solving numerical linear algebra problems fast and accurately. The solution of total least squares problems, i.e., solving min_{E,r} ‖[E, r]‖_F subject to (A + E)x = b + r, arises in numerous applications. Solving this problem requires finding the smallest singular value and corresponding right singular vector of [A, b], which is challenging when A is large and sparse. An efficient algorithm for this case due to Björck et al. (SIAM J. Matrix Anal. Appl. 22(2), 413–429, 2000), called RQI-PCGTLS, is based on Rayleigh quotient iteration coupled with the preconditioned conjugate gradient method. We develop a mixed precision variant of this algorithm, RQI-PCGTLS-MP, in which up to three different precisions can be used. We assume that the lowest precision is used in the computation of the preconditioner and give theoretical constraints on how this precision must be chosen to ensure stability. In contrast to standard least squares, for total least squares the resulting constraint depends not only on the matrix A but also on the right-hand side b. We perform a number of numerical experiments on model total least squares problems used in the literature, which demonstrate that our algorithm can attain the same accuracy as RQI-PCGTLS, albeit with a potential convergence delay due to the use of low precision. Performance modeling shows that the mixed precision approach can achieve up to a 4× speedup depending on the size of the matrix and the number of Rayleigh quotient iterations performed.
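The total least squares problem in this abstract has a well-known closed-form characterization via the smallest right singular vector of the augmented matrix [A, b]. The sketch below demonstrates that characterization directly with a dense SVD (this is not the RQI-PCGTLS algorithm itself, which targets the large sparse case; the data are illustrative assumptions):

```python
# Hedged sketch of the total least squares (TLS) characterization used above:
# the TLS solution comes from the right singular vector of [A, b] associated
# with the smallest singular value. Dense SVD stand-in, NOT RQI-PCGTLS;
# the synthetic data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)  # noisy right-hand side

# Smallest right singular vector of the augmented matrix [A, b].
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]

# If v = c * [x; -1], then x_tls = -v[:n] / v[n] recovers x (assumes v[3] != 0).
x_tls = -v[:3] / v[3]
```

RQI-PCGTLS approximates this same smallest singular pair iteratively, which is what makes Rayleigh quotient iteration the natural engine for large sparse TLS problems.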
The Geometry of Algorithms with Orthogonality Constraints
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that provide a top level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
Efficient initials for computing maximal eigenpair
This paper introduces efficient initials for a well-known algorithm (an inverse iteration) for computing the maximal eigenpair of a class of real matrices. The initials not only avoid the collapse of the algorithm but are also unexpectedly efficient. They are based on our analytic estimates of the maximal eigenvalue and a mimic of its eigenvector, accumulated over many years of study of the speed of stochastic stability. In parallel, the same problem for computing the next-to-maximal eigenpair is also studied.
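The "well-known algorithm" named in this abstract, inverse iteration with a fixed shift, can be sketched as follows. The shift choice and test matrix here are illustrative assumptions (not the paper's analytic initials): a shift just above the largest Gershgorin bound keeps the shifted matrix nonsingular while targeting the top of the spectrum:

```python
# Hedged sketch of fixed-shift inverse iteration for the maximal eigenpair.
# The shift heuristic and the matrix are illustrative assumptions, not the
# paper's efficient initials.
import numpy as np

def inverse_iteration(A, x0, shift, tol=1e-10, max_iter=200):
    """Approximate the eigenpair of symmetric A nearest the given shift."""
    n = A.shape[0]
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x
    for _ in range(max_iter):
        y = np.linalg.solve(A - shift * np.eye(n), x)
        x = y / np.linalg.norm(y)
        lam = x @ A @ x
        if np.linalg.norm(A @ x - lam * x) < tol:
            break
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
# Shift slightly above the largest Gershgorin row bound, so the eigenvalue
# nearest the shift is the maximal one and A - shift*I stays nonsingular.
shift = max(A[i, i] + sum(abs(A[i, j]) for j in range(2) if j != i)
            for i in range(2)) + 1e-3
lam_max, v_max = inverse_iteration(A, np.ones(2), shift)
```

Since a fixed shift gives only linear convergence, the quality of the starting vector and shift matters a great deal, which is exactly the gap the paper's initials address.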
Rayleigh Quotient Methods for Estimating Common Roots of Noisy Univariate Polynomials
We consider the problem of approximately solving a system of univariate polynomials that has one or more common roots and whose coefficients are corrupted by noise. The goal is to estimate the underlying common roots from the noisy system; symbolic algebra methods are not suitable for this task. New Rayleigh quotient methods are proposed and evaluated for estimating the common roots. Using tensor algebra, reasonable starting values for the Rayleigh quotient methods can be computed. The new methods are compared to Gauss–Newton, to solving an eigenvalue problem obtained from the generalized Sylvester matrix, and to finding a cluster among the roots of all polynomials. A simulation study shows that Gauss–Newton and a new Rayleigh quotient method perform best, where the latter is more accurate when roots other than the true common roots are close together.
Controlling Inner Iterations in the Jacobi–Davidson Method
The Jacobi–Davidson method is an eigenvalue solver which uses an inner-outer scheme. In the outer iteration one tries to approximate an eigenpair, while in the inner iteration a linear system has to be solved, often iteratively, with the ultimate goal of making progress in the outer loop. In this paper we prove a relation between the residual norm of the inner linear system and the residual norm of the eigenvalue problem, and we show that the latter may be estimated inexpensively during the inner iterations. On this basis, we propose a stopping strategy for the inner iterations to maximally exploit the strengths of the method. These results extend previous results obtained for the special case of Hermitian eigenproblems with the conjugate gradient or the symmetric QMR method as inner solver. The present analysis applies to both standard and generalized eigenproblems, does not require symmetry, and is compatible with most iterative methods for the inner systems. It can also be extended to other types of inner-outer eigenvalue solvers, such as inexact inverse iteration or inexact Rayleigh quotient iteration. The effectiveness of our approach is illustrated by a few numerical experiments, including the comparison of a standard Jacobi–Davidson code with the same code enhanced by our stopping strategy.