49,311 result(s) for "Iterative method"
LSRN: A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_x ‖Ax − b‖₂, where A ∈ ℝ^(m×n) with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.
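The pipeline the abstract describes (random normal projection, a small SVD, then a Krylov solver on the right-preconditioned system) can be sketched for the strongly overdetermined case. This is an illustrative NumPy/SciPy sketch of the idea, not the authors' parallel implementation; the function name `lsrn_overdetermined` is hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_overdetermined(A, b, gamma=2.0, seed=None):
    """Illustrative LSRN-style solve for m >> n (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = int(np.ceil(gamma * n))              # sketch size = ceil(gamma * min(m, n))
    G = rng.standard_normal((s, m))          # random normal projection (parallel-friendly)
    _, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)  # small s x n SVD
    N = Vt.T / sigma                         # right preconditioner N = V * Sigma^{-1}
    y = lsqr(A @ N, b)[0]                    # preconditioned system is well-conditioned
    return N @ y

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(500)
x = lsrn_overdetermined(A, b, seed=1)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_ref, atol=1e-4))
```

Because the preconditioned matrix A·N has a small, predictable condition number, LSQR needs only a modest, predictable number of iterations, which is the point of the method.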
LSMR: An Iterative Algorithm for Sparse Least-Squares Problems
An iterative method LSMR is presented for solving linear systems $Ax=b$ and least-squares problems $\min \|Ax-b\|_2$, with $A$ being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation $A^T\! Ax = A^T\! b$, so that the quantities $\|A^T\! r_k\|$ are monotonically decreasing (where $r_k = b - Ax_k$ is the residual for the current iterate $x_k$). We observe in practice that $\|r_k\|$ also decreases monotonically, so that compared to LSQR (for which only $\|r_k\|$ is monotonic) it is safer to terminate LSMR early. We also report some experiments with reorthogonalization.
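SciPy ships an implementation of this algorithm as `scipy.sparse.linalg.lsmr`; a minimal usage example on a random sparse least-squares problem (the problem data here is arbitrary, chosen only for illustration):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
A = sparse_random(200, 50, density=0.05, format="csr", random_state=0)
b = rng.standard_normal(200)

# istop == 1: Ax = b solved to tolerance; istop == 2: least-squares solution found
x, istop, itn, normr, normar = lsmr(A, b, maxiter=500)[:5]
print(istop, itn, normar)
```

The returned `normar` is exactly the monotonically decreasing quantity $\|A^T r_k\|$ that the abstract highlights, which is why it is a safe early-termination criterion.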
Randomized Extended Kaczmarz for Solving Least Squares
We present a randomized iterative algorithm that exponentially converges in the mean square to the minimum $\ell_2$-norm least squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of nonzero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method that was analyzed by Strohmer and Vershynin.
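A compact sketch of the randomized extended Kaczmarz idea: a column step drives an auxiliary vector $z$ toward the part of $b$ outside the range of $A$, and a standard randomized Kaczmarz row step then solves the resulting consistent system. This assumes dense NumPy arrays; the function name and iteration budget are illustrative, not from the paper.

```python
import numpy as np

def randomized_extended_kaczmarz(A, b, iters=30000, seed=0):
    """Illustrative REK sketch: converges in expectation to the
    minimum l2-norm least-squares solution of A x = b."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.einsum("ij,ij->i", A, A)     # squared row norms
    col_sq = np.einsum("ij,ij->j", A, A)     # squared column norms
    rows = rng.choice(m, size=iters, p=row_sq / row_sq.sum())
    cols = rng.choice(n, size=iters, p=col_sq / col_sq.sum())
    x, z = np.zeros(n), b.astype(float).copy()
    for i, j in zip(rows, cols):
        # column step: remove the component of z along column j,
        # so z tends to the part of b orthogonal to range(A)
        z -= (A[:, j] @ z / col_sq[j]) * A[:, j]
        # row step: Kaczmarz projection against the corrected system A x = b - z
        x += ((b[i] - z[i] - A[i] @ x) / row_sq[i]) * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 10))
b = rng.standard_normal(60)                  # generally inconsistent
x = randomized_extended_kaczmarz(A, b)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_ls))
```

Rows and columns are sampled with probability proportional to their squared norms, as in the Strohmer-Vershynin analysis the abstract cites.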
A Strong Convergence Theorem for Solving Pseudo-monotone Variational Inequalities Using Projection Methods
Several iterative methods have been proposed in the literature for solving variational inequalities in Hilbert or Banach spaces, where the underlying operator A is monotone and Lipschitz continuous. However, very few methods are known for solving variational inequalities when the Lipschitz continuity of A is dispensed with. In this article, we introduce a projection-type algorithm for finding a common solution of the variational inequality and fixed point problem in a reflexive Banach space, where A is pseudo-monotone and not necessarily Lipschitz continuous. We also present an application of our result to approximating the solution of a pseudo-monotone equilibrium problem in a reflexive Banach space. Finally, we present some numerical examples to illustrate the performance of our method and compare it with a related method from the literature.
A derivative-free iterative method for nonlinear monotone equations with convex constraints
In this paper, based on the projection strategy, we propose a derivative-free iterative method for large-scale nonlinear monotone equations with convex constraints, which generates a sufficient descent direction at each iteration. Owing to its low storage requirement and derivative-free nature, the proposed method can be used to solve large-scale non-smooth problems. The global convergence of the proposed method is proved under the Lipschitz continuity assumption. Moreover, if the local error bound condition holds, the proposed method is shown to be linearly convergent. Preliminary numerical comparisons show that the proposed method is efficient and promising.
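The projection strategy this abstract refers to can be illustrated with a hyperplane-projection scheme in the style of Solodov and Svaiter: take a trial step, then project the iterate onto the hyperplane separating it from the solution set, then onto the constraint set. This sketch uses the plain direction d = -F(x) rather than the paper's sufficient-descent direction, so it shows the structure, not the proposed method itself.

```python
import numpy as np

def hyperplane_projection(F, x0, proj=lambda v: v, sigma=1e-4, beta=0.5,
                          tol=1e-9, max_iter=500):
    """Derivative-free hyperplane-projection sketch for monotone F(x) = 0
    on a convex set (proj = projection onto the set; identity = unconstrained)."""
    x = proj(np.asarray(x0, float))
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                                    # derivative-free search direction
        t = 1.0
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= beta                              # backtracking line search
        z = x + t * d                              # trial point
        Fz = F(z)
        # project x onto the hyperplane {v : Fz @ (v - z) = 0}, then onto C
        x = proj(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
    return x

F = lambda v: 2 * v + np.sin(v)                    # strictly monotone; root at 0
x = hyperplane_projection(F, np.ones(5))
print(np.linalg.norm(F(x)))
```

Monotonicity of F is what guarantees that the hyperplane through z separates the current iterate from every solution, so each projection step cannot move away from the solution set.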
Iterative Methods for Solving a System of Linear Equations in a Bipolar Fuzzy Environment
We develop solution procedures for the bipolar fuzzy linear system of equations (BFLSEs) using several iterative methods, namely the Richardson, extrapolated Richardson (ER), Jacobi, Jacobi over-relaxation (JOR), Gauss–Seidel (GS), extrapolated Gauss–Seidel (EGS), and successive over-relaxation (SOR) methods. Moreover, we discuss the convergence properties of these iterative methods. To show the validity of these methods, an example with a known exact solution is presented. The numerical computation shows that the SOR method with ω = 1.25 is more accurate than the other iterative methods.
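For reference, the SOR sweep on an ordinary (crisp) linear system looks as follows; the bipolar fuzzy variant applies the same update to each component system. A minimal sketch (test matrix chosen only for illustration; ω = 1 recovers Gauss–Seidel):

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=1000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update using already-updated components, relaxed by omega
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# diagonally dominant (hence SOR-convergent) test system
A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([2.0, 4, 10])
x = sor(A, b)
print(np.allclose(A @ x, b))
```

For symmetric positive definite systems, SOR converges for any 0 < ω < 2, and a well-chosen ω > 1 (such as the 1.25 used in the abstract) typically accelerates convergence over plain Gauss–Seidel.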
New Estimation Method of an Error for J Iteration
The major aim of this article is to show how to estimate direct errors for the J iteration method. Direct error estimation for iteration processes has been investigated in various journals. We also illustrate that the error in the J iteration process can be controlled. Furthermore, we demonstrate the convergence of the J iteration using distinct initial values.
A new multi-step method for solving nonlinear systems with high efficiency indices
Solving nonlinear problems stands as a pivotal domain in scientific exploration. This study introduces a novel method comprising basic and multi-step components. The proposed iterative method has a convergence order of 2m + 1, where m ≥ 2 is the number of steps. Since our proposed method requires only one Fréchet derivative evaluation and its inversion, it has a higher efficiency index than previous methods. To comprehensively evaluate the method's performance in efficiency, accuracy, and attraction-basin behavior, numerical tests are presented. Furthermore, we applied the proposed method to solve renowned equations such as Hammerstein's integral equation and Burgers' equation after transforming them into nonlinear systems.
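The "one Fréchet derivative evaluation and its inversion" structure is typical of frozen-Jacobian multi-step schemes: one Newton step, then m − 1 corrector steps that reuse the same factorized Jacobian. The sketch below illustrates that structure on a small system; it is a generic variant, not the paper's exact scheme, whose corrector weights achieve order 2m + 1.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def multistep_frozen_newton(F, J, x0, m=3, tol=1e-12, max_outer=50):
    """Generic multi-step cycle: one Jacobian evaluation and one LU
    factorization, then m - 1 corrector steps with the Jacobian frozen."""
    x = np.asarray(x0, float)
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        lu = lu_factor(J(x))          # the single derivative evaluation + inversion
        y = x - lu_solve(lu, Fx)      # base Newton step
        for _ in range(m - 1):        # multi-step part: Jacobian stays frozen
            y = y - lu_solve(lu, F(y))
        x = y
    return x

# small nonlinear system with root (1, 2): x0^2 + x1 = 3, x0 + x1^2 = 5
F = lambda v: np.array([v[0]**2 + v[1] - 3, v[0] + v[1]**2 - 5])
J = lambda v: np.array([[2 * v[0], 1.0], [1.0, 2 * v[1]]])
x = multistep_frozen_newton(F, J, np.array([0.5, 1.5]))
print(np.round(x, 6))
```

Reusing one LU factorization across all m steps is what raises the efficiency index: extra convergence order is bought with cheap triangular solves rather than new Jacobian evaluations.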
Fast Implementation of Generalized Koebe’s Iterative Method
Let G be a given bounded multiply connected domain of connectivity m+1 bounded by smooth Jordan curves. Koebe’s iterative method is a classical method for computing the conformal mapping from the domain G onto a bounded multiply connected circular domain obtained by removing m disks from the unit disk. Koebe’s method has been generalized to compute the conformal mapping from the domain G onto a bounded multiply connected circular domain obtained by removing m−1 disks from a circular ring. A fast numerical implementation of the generalized Koebe’s iterative method is presented in this paper. The proposed method is based on using the boundary integral equation with the generalized Neumann kernel. Several numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method.
Chaos in a Cancer Model via Fractional Derivatives with Exponential Decay and Mittag-Leffler Law
In this paper, a three-dimensional cancer model was considered using the Caputo-Fabrizio-Caputo derivative and the new fractional derivative with Mittag-Leffler kernel in the Liouville-Caputo sense. Special solutions were obtained using an iterative scheme via the Laplace transform, the Sumudu-Picard integration method, and the Adams-Moulton rule. We studied the existence and uniqueness of the solutions. Novel chaotic attractors with total order less than three are obtained.