97 result(s) for "Frommer, Andreas"
A deflated conjugate gradient method for multiple right hand sides and multiple shifts
We consider the task of computing solutions of linear systems that only differ by a shift with the identity matrix as well as linear systems with several different right-hand sides. In the past, Krylov subspace methods have been developed which exploit either the need for solutions to multiple right-hand sides (e.g. deflation type methods and block methods) or multiple shifts (e.g. shifted CG) with some success. In this paper we present a block Krylov subspace method which, based on a block Lanczos process, exploits both features, shifts and multiple right-hand sides, at once. Such situations arise, for example, in lattice quantum chromodynamics (QCD) simulations within the Rational Hybrid Monte Carlo (RHMC) algorithm. We present numerical evidence that our method is superior to applying other iterative methods to each of the systems individually and, in typical situations, to shifted or block Krylov subspace methods.
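The structural fact such solvers exploit is that Krylov subspaces are shift invariant: K_m(A, b) = K_m(A + sigma*I, b), so a single basis can serve all shifted systems. Below is a minimal NumPy sketch of plain shifted Lanczos/FOM, not the deflated block method of the paper: one Lanczos basis is built for A and reused for several shifts. The matrix, shifts, and subspace size are illustrative.

```python
import numpy as np

def lanczos(A, b, m):
    """Symmetric Lanczos: orthonormal basis V of K_m(A, b) and the
    tridiagonal projection T = V^T A V."""
    n = len(b)
    V = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m + 1)
    beta[0] = np.linalg.norm(b)
    V[:, 0] = b / beta[0]
    for j in range(m):
        w = A @ V[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # full reorthogonalization (demo size only)
        beta[j + 1] = np.linalg.norm(w)
        V[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    return V[:, :m], T, beta[0]

# One Lanczos basis for A serves every shifted system (A + sigma*I) x = b,
# because K_m(A, b) = K_m(A + sigma*I, b).
rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((200, 200)))[0]
A = Q @ np.diag(np.linspace(1.0, 10.0, 200)) @ Q.T      # SPD test matrix
b = rng.standard_normal(200)

V, T, beta0 = lanczos(A, b, m=60)
e1 = np.eye(T.shape[0])[:, 0]
for sigma in (0.0, 0.5, 2.0):
    y = np.linalg.solve(T + sigma * np.eye(T.shape[0]), beta0 * e1)
    x = V @ y                                            # shifted FOM approximation
    res = np.linalg.norm(b - (A @ x + sigma * x))
    print(f"sigma = {sigma:3.1f}   residual = {res:.2e}")
```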
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and improves reliability by using the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
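One way to picture the waveform-relaxation approach mentioned above: for a toy linear rate network du/dt = -u + W u + I_ext, each sweep integrates every unit over the whole communication interval using the coupling input recorded during the previous sweep, so trajectories are exchanged once per sweep rather than once per time step. The NumPy sketch below is a Jacobi-type waveform relaxation unrelated to the actual simulator code base; network size, coupling strength, and step sizes are illustrative.

```python
import numpy as np

def waveform_relaxation(W, I_ext, u0, T=1.0, dt=1e-3, sweeps=20, tol=1e-10):
    """Jacobi waveform relaxation for the linear rate model du/dt = -u + W u + I_ext.
    Each sweep integrates all units over [0, T] with forward Euler, using the
    coupling term W @ u evaluated on the trajectory from the previous sweep."""
    steps = int(round(T / dt))
    traj = np.tile(u0, (steps + 1, 1))          # initial guess: constant trajectory
    for sweep in range(sweeps):
        coupling = traj @ W.T                    # coupling input from previous sweep only
        new = np.empty_like(traj)
        new[0] = u0
        for k in range(steps):
            new[k + 1] = new[k] + dt * (-new[k] + coupling[k] + I_ext)
        diff = np.max(np.abs(new - traj))
        traj = new
        if diff < tol:
            break
    return traj, sweep + 1, diff

# Small random network with weak coupling, so the iteration converges quickly.
rng = np.random.default_rng(1)
n = 50
W = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
I_ext = rng.standard_normal(n)
u0 = np.zeros(n)

traj, sweeps_used, diff = waveform_relaxation(W, I_ext, u0)
print(f"converged after {sweeps_used} sweeps, last update {diff:.1e}")
```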
Verified Computation of Square Roots of a Matrix
We present methods to compute verified square roots of a square matrix A. Given an approximation X to the square root, obtained by a classical floating point algorithm, we use interval arithmetic to find an interval matrix which is guaranteed to contain the error of X. Our approach is based on the Krawczyk method, which we modify in two different ways in such a manner that the computational complexity for an n x n matrix is reduced to n^3. The methods are based on the spectral decomposition or, in the case that the eigenvector matrix is ill conditioned, on a similarity transformation to block diagonal form. Numerical experiments prove that our methods are computationally efficient and that they yield narrow enclosures provided X is a good approximation. This is particularly true for symmetric matrices, since their eigenvector matrix is perfectly conditioned.
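For context, the floating-point approximation X that such a verification starts from can come from a classical iteration such as Denman-Beavers; a minimal sketch follows. The final residual printout is only a heuristic floating-point check, not the guaranteed interval enclosure that the paper obtains via the Krawczyk method. The test matrix and iteration counts are illustrative.

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=25, tol=1e-14):
    """Denman-Beavers iteration: Y_k -> A^(1/2), Z_k -> A^(-1/2)
    (valid when A has no eigenvalues on the closed negative real axis)."""
    Y, Z = A.astype(float), np.eye(A.shape[0])
    for _ in range(iters):
        # both updates use the previous (Y, Z) pair
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y @ Y - A, 'fro') <= tol * np.linalg.norm(A, 'fro'):
            break
    return Y

# Symmetric positive definite test matrix: its eigenvector matrix is perfectly
# conditioned, the favorable case highlighted in the abstract.
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)

X = sqrtm_denman_beavers(A)
# Heuristic floating-point residual check -- NOT a verified error bound.
print("||X @ X - A||_F =", np.linalg.norm(X @ X - A, 'fro'))
```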
Efficient and Stable Arnoldi Restarts for Matrix Functions Based on Quadrature
When using the Arnoldi method for approximating $f(A)\mathbf{b}$, the action of a matrix function on a vector, the maximum number of iterations that can be performed is often limited by the storage requirements of the full Arnoldi basis. As a remedy, different restarting algorithms have been proposed in the literature, none of which has been universally applicable, efficient, and stable at the same time. We utilize an integral representation for the error of the iterates in the Arnoldi method which then allows us to develop an efficient quadrature-based restarting algorithm suitable for a large class of functions, including the so-called Stieltjes functions and the exponential function. Our method is applicable for functions of Hermitian and non-Hermitian matrices, requires no a priori spectral information, and runs with essentially constant computational work per restart cycle. We comment on the relation of this new restarting approach to other existing algorithms and illustrate its efficiency and numerical stability by various numerical experiments.
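The base approximation that such restart schemes build on is the plain (unrestarted) Arnoldi approximation f(A)b ≈ ||b|| V_m f(H_m) e_1. A minimal NumPy/SciPy sketch for f = exp is below; it omits the quadrature-based restarting that is the paper's actual contribution, and the test matrix and subspace size are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, b, m):
    """Arnoldi process: orthonormal basis V_m of K_m(A, b) and the
    upper Hessenberg matrix H_m = V_m^* A V_m."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Approximate exp(A) b without ever forming exp(A).
rng = np.random.default_rng(3)
n = 400
A = -np.diag(np.linspace(0.1, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

m = 40
V, H = arnoldi(A, b, m)
approx = np.linalg.norm(b) * (V @ expm(H)[:, 0])   # ||b|| V_m exp(H_m) e_1
exact = expm(A) @ b                                 # dense reference, small n only
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```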
Block Krylov Subspace Methods for Functions of Matrices II: Modified Block FOM
We analyze an expansion of the generalized block Krylov subspace framework of [Electron. Trans. Numer. Anal., 47 (2017), pp. 100--126]. This expansion allows the use of low-rank modifications of the matrix projected onto the block Krylov subspace and contains, as special cases, the block GMRES method and the new block Radau--Arnoldi method. Within this general setting, we present results that extend the interpolation property from the nonblock case to a matrix polynomial interpolation property for the block case, and we relate the eigenvalues of the projected matrix to the latent roots of these matrix polynomials. Some error bounds for these modified block FOM methods for solving linear systems are presented. We then show how cospatial residuals can be preserved in the case of families of shifted linear block systems. This result is used to derive computationally practical restarted algorithms for block Krylov approximations that compute the action of a matrix function on a set of several vectors simultaneously. Finally, we prove some error bounds and present numerical results showing that two modifications of FOM, the block harmonic and the block Radau--Arnoldi methods for matrix functions, can significantly improve the convergence behavior.
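As background, the plain (unmodified, unrestarted) block FOM idea underlying these methods is: build a block Krylov basis with a block Arnoldi process and solve the projected block system, so all right-hand sides share one search space. The sketch below does not include the low-rank modifications, the Radau-Arnoldi variant, or the restarting analyzed in the paper; sizes and names are illustrative.

```python
import numpy as np

def block_arnoldi(A, B, m):
    """Block Arnoldi: orthonormal basis V of the block Krylov space
    span{B, AB, ..., A^(m-1) B} and the block Hessenberg matrix H = V^T A V."""
    n, s = B.shape
    V = np.zeros((n, (m + 1) * s))
    H = np.zeros(((m + 1) * s, m * s))
    V[:, :s], R0 = np.linalg.qr(B)
    for j in range(m):
        W = A @ V[:, j * s:(j + 1) * s]
        for i in range(j + 1):                        # block modified Gram-Schmidt
            Vi = V[:, i * s:(i + 1) * s]
            Hij = Vi.T @ W
            H[i * s:(i + 1) * s, j * s:(j + 1) * s] = Hij
            W = W - Vi @ Hij
        Q, Rj = np.linalg.qr(W)                        # assumes no rank deficiency (demo only)
        V[:, (j + 1) * s:(j + 2) * s] = Q
        H[(j + 1) * s:(j + 2) * s, j * s:(j + 1) * s] = Rj
    return V[:, :m * s], H[:m * s, :m * s], R0

# Plain block FOM for A X = B with s right-hand sides:
# X_m = V_m Y with H_m Y = E_1 R0, where E_1 is the first block column of I.
rng = np.random.default_rng(4)
n, s, m = 300, 4, 40
A = np.diag(np.linspace(1.0, 20.0, n)) + 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, s))

V, H, R0 = block_arnoldi(A, B, m)
E1R0 = np.zeros((m * s, s))
E1R0[:s, :] = R0
X = V @ np.linalg.solve(H, E1R0)
print("max column residual:", np.linalg.norm(B - A @ X, axis=0).max())
```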
An Algebraic Convergence Theory for Restricted Additive Schwarz Methods Using Weighted Max Norms
Convergence results for the restricted additive Schwarz (RAS) method of Cai and Sarkis [SIAM J. Sci. Comput., 21 (1999), pp. 792-797] for the solution of linear systems of the form Ax = b are provided using an algebraic view of additive Schwarz methods and the theory of multisplittings. The linear systems studied are usually discretizations of partial differential equations in two or three dimensions. It is shown that in the case of A symmetric positive definite, the projections defined by the methods are not orthogonal with respect to the inner product defined by A, and therefore the standard analysis cannot be used here. The convergence results presented are for the class of M-matrices (and more generally for H-matrices) using weighted max norms. Comparisons between different versions of the RAS method are given in terms of these norms. A comparison theorem with respect to the classical additive Schwarz method makes it possible to obtain, indirectly, quantitative results on rates of convergence which cannot otherwise be obtained from the theory. Several RAS variants are considered, including new ones and two-level schemes.
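A minimal picture of one RAS sweep on a 1D Poisson problem, which is an M-matrix and hence lies in the setting covered by the convergence theory above: local solves use the overlapping subdomains, but corrections are written back only on a non-overlapping partition, which is what distinguishes RAS from classical additive Schwarz. The subdomain layout, overlap, and iteration counts below are illustrative.

```python
import numpy as np

def ras_solve(A, b, subdomains, owned, iters=400, tol=1e-10):
    """Stationary restricted additive Schwarz iteration
        x <- x + sum_i  R~_i^T (R_i A R_i^T)^(-1) R_i (b - A x),
    where R_i restricts to the overlapping subdomain subdomains[i] and the
    restricted prolongation R~_i^T writes back only the unknowns in owned[i]."""
    x = np.zeros_like(b)
    for k in range(iters):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        update = np.zeros_like(b)
        for ovl, own in zip(subdomains, owned):
            local = np.linalg.solve(A[np.ix_(ovl, ovl)], r[ovl])
            keep = np.isin(ovl, own)                   # restricted prolongation
            update[np.asarray(ovl)[keep]] += local[keep]
        x = x + update
    return x, k

# 1D Poisson matrix: an irreducibly diagonally dominant M-matrix.
n = 120
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Four disjoint "owned" blocks of 30 unknowns, each extended by an overlap of 4.
owned = [list(range(i * 30, (i + 1) * 30)) for i in range(4)]
subdomains = [list(range(max(0, d[0] - 4), min(n, d[-1] + 1 + 4))) for d in owned]

x, its = ras_solve(A, b, subdomains, owned)
print("iterations:", its, " final residual:", np.linalg.norm(b - A @ x))
```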