68 result(s) for "Hallman, Eric"
Error Estimates for Least-Squares Problems
In this thesis we consider error estimates for a family of iterative algorithms for solving the least-squares problem min_x ||Ax - b||_2 based on the Golub-Kahan-Lanczos bidiagonalization process. Given a lower bound on the smallest singular value of A, we show how to compute upper bounds on the Euclidean error ||x_k - x*||_2 as well as the backward error. In the case of the Euclidean error, we demonstrate that our bound is sharp given the information we use about the matrix A.

We also present two new algorithms aimed at minimizing our error estimates. The first, LS-Craig, is constructed to minimize our estimate of ||x_k - x*||_2 at every iteration. The second, LSMB, minimizes an objective function closely related to our backward error estimate. We find that at each step the iterate x_k produced by LS-Craig is a convex combination of those produced by LSQR (which minimizes ||r_k||_2 = ||b - Ax_k||_2 over a Krylov subspace) and Craig's method. The iterates produced by LSMB, in turn, are a convex combination of those produced by LSQR and LSMR (which minimizes ||A^T r_k||_2 over the same subspace). Experiments on test cases from the University of Florida Sparse Matrix Collection show that in practice LS-Craig and LSMB perform at least as well as the existing algorithms, although never better by more than a small margin. This suggests that LS-Craig could replace LSQR when stopping rules are based on the Euclidean error, and that LSMB could replace both LSQR and LSMR when stopping rules are based on the backward error.

Finally, we extend the definition of the backward error to the problem min_X ||AX - B||_F, where B may have any number of columns. We derive a formula for the backward error and propose two estimates that experiments suggest are highly reliable. We prove that the estimates are reliable in a few special cases, although the general problem remains open.
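The Golub-Kahan-Lanczos bidiagonalization that underlies LSQR, LSMR, and Craig's method can be sketched in a few lines. The following is a minimal NumPy illustration of the standard recurrence only (not the thesis's error estimators or new algorithms), without the reorthogonalization a production implementation would need; the function name `golub_kahan` is ours:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan-Lanczos bidiagonalization started from b.

    Returns U (m x (k+1)) and V (n x k) with orthonormal columns and a
    lower-bidiagonal B ((k+1) x k) satisfying A @ V = U @ B.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(v)          # alpha_j
        V[:, j] = v / B[j, j]
        # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)      # beta_{j+1}
        U[:, j + 1] = u / B[j + 1, j]
    return U, V, B
```

Each of the Krylov least-squares solvers mentioned above is obtained by solving a small subproblem involving the bidiagonal matrix B at every step.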
Two Variations on the XTrace Algorithm
This paper studies two potential modifications of XTrace (Epperly et al., SIMAX 45(1):1-23, 2024), a randomized algorithm for estimating the trace of a matrix. The first is a variance reduction step that averages the output of XTrace over right-multiplications of the test vectors by random orthogonal matrices. The second is to form a low-rank approximation to the matrix using the whole Krylov space produced by the test vectors, rather than the output of a single power iteration as is used by XTrace. Experiments on synthetic data show that the first modification offers only slight benefits in practice, while the second can lead to significant improvements depending on the spectrum of the matrix.
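For context, XTrace builds on the basic Girard-Hutchinson estimator, which averages quadratic forms g^T A g over random test vectors g (since E[g^T A g] = tr(A) for g with identity covariance). The following is a minimal sketch of that baseline only, not of XTrace itself; the function name `hutchinson_trace` is ours:

```python
import numpy as np

def hutchinson_trace(A, num_samples, rng):
    """Girard-Hutchinson estimate of tr(A): average g^T A g over
    Gaussian test vectors g, using E[g^T A g] = tr(A)."""
    n = A.shape[0]
    G = rng.standard_normal((n, num_samples))   # test vectors as columns
    return np.mean(np.sum(G * (A @ G), axis=0)) # diag of G^T A G, averaged
```

XTrace improves on this baseline by combining it with a low-rank approximation built from the same test vectors; the paper's two modifications alter how that approximation is formed.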
Extremal bounds for Gaussian trace estimation
This work derives extremal tail bounds for the Gaussian trace estimator applied to a real symmetric matrix. We define a partial ordering on the eigenvalues, so that when a matrix has greater spectrum under this ordering, its estimator will have worse tail bounds. This is done for two families of matrices: positive semidefinite matrices with bounded effective rank, and indefinite matrices with bounded 2-norm and fixed Frobenius norm. In each case, the tail region is defined rigorously and is constant for a given family.
Krylov-aware stochastic trace estimation
We introduce an algorithm for estimating the trace of a matrix function f(A) using implicit products with a symmetric matrix A. Existing methods for implicit trace estimation of a matrix function tend to treat matrix-vector products with f(A) as a black box to be computed by a Krylov subspace method. Like other recent algorithms for implicit trace estimation, our approach is based on a combination of deflation and stochastic trace estimation. However, we take a closer look at how products with f(A) are integrated into these approaches, which enables several efficiencies not present in previously studied methods. In particular, we describe a Krylov subspace method for computing a low-rank approximation of a matrix function by a computationally efficient projection onto a Krylov subspace.
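The "black-box" pattern the abstract contrasts with is exemplified by stochastic Lanczos quadrature, where each quadratic form g^T f(A) g is approximated from the eigendecomposition of a small tridiagonal matrix T_k produced by the Lanczos recurrence. The following is a minimal sketch of that standard technique, not of the paper's Krylov-aware algorithm; the function names `lanczos` and `slq_trace` are ours, and no reorthogonalization is performed:

```python
import numpy as np

def lanczos(A, q0, k):
    """k steps of the Lanczos recurrence on symmetric A; returns the
    diagonals (alpha, beta) of the k x k tridiagonal matrix T_k."""
    n = q0.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w = w - beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta

def slq_trace(A, f, k, num_probes, rng):
    """Estimate tr(f(A)) by averaging ||g||^2 * e1^T f(T_k) e1 over
    Gaussian probe vectors g (stochastic Lanczos quadrature)."""
    total = 0.0
    for _ in range(num_probes):
        g = rng.standard_normal(A.shape[0])
        a, b = lanczos(A, g, k)
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        evals, evecs = np.linalg.eigh(T)
        # Gauss quadrature: g^T f(A) g ~ ||g||^2 * sum_i evecs[0,i]^2 f(lam_i)
        total += (g @ g) * np.sum(evecs[0, :] ** 2 * f(evals))
    return total / num_probes
```

Treating each probe this way discards the Krylov information after a single quadrature evaluation; the paper's point is that reusing it more carefully yields further efficiencies.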
A Refined Probabilistic Error Bound for Sums
This paper considers a probabilistic model for floating-point computation in which the roundoff errors are represented by bounded random variables with mean zero. Using this model, a probabilistic bound is derived for the forward error of the computed sum of n real numbers. This work improves upon existing probabilistic bounds by holding to all orders, and as a result provides informative bounds for larger problem sizes.