48,536 result(s) for "Iterative methods"
Efficient preconditioning strategies for accelerating GMRES in block-structured nonlinear systems for image deblurring
We propose an efficient preconditioning strategy to accelerate the convergence of Krylov subspace methods, specifically for solving complex nonlinear systems with a five-by-five block structure, commonly found in cell-centered finite difference discretizations for image deblurring using mean curvature techniques. Our method introduces two innovative preconditioned matrices, analyzed spectrally to show a favorable eigenvalue distribution that accelerates convergence of the Generalized Minimal Residual (GMRES) method. This technique significantly improves image quality, as measured by peak signal-to-noise ratio (PSNR), and converges faster than unpreconditioned GMRES, requiring little CPU time and few iterations for excellent deblurring performance. The eigenvalues of the preconditioned matrices cluster around 1, indicating a beneficial spectral distribution. The source code is available at https://github.com/shahbaz1982/Precondition-Matrix.
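As a rough illustration of the mechanism this abstract describes, the sketch below shows how a preconditioner is supplied to GMRES in SciPy. The toy tridiagonal system and the ILU-based preconditioner are stand-ins chosen for brevity; they are not the paper's five-by-five block system or its proposed preconditioners.

```python
# Sketch: passing a preconditioner M ~ A^{-1} to SciPy's GMRES and
# counting iterations with and without it. Toy problem, not the paper's.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
A = sp.diags([-1.0, 2.001, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a stand-in preconditioner
ilu = spla.spilu(A, drop_tol=1e-5)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

counts = {"plain": 0, "prec": 0}
def counter(key):
    def cb(res_norm):           # called once per inner iteration
        counts[key] += 1
    return cb

spla.gmres(A, b, callback=counter("plain"), callback_type="pr_norm")
spla.gmres(A, b, M=M, callback=counter("prec"), callback_type="pr_norm")
print(counts)  # the preconditioned solve should need far fewer iterations
```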
LSRN: A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to $\min_x \|Ax - b\|_2$, where $A \in \mathbb{R}^{m \times n}$ with $m \gg n$ or $m \ll n$, and where $A$ may be rank-deficient. Tikhonov regularization may also be included. Since $A$ is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when $A$ is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size $\lceil \gamma \min(m,n) \rceil \times \min(m,n)$, where $\gamma$ is moderately larger than 1, e.g., $\gamma = 2$. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.
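The strategy lends itself to a compact sketch. The Python code below illustrates the overdetermined case ($m \gg n$) following the recipe in the abstract: Gaussian sketch, SVD of the sketch, right preconditioner, then LSQR. It is an illustration of the idea, not the reference implementation, which also handles rank deficiency and parallel execution.

```python
# Hedged sketch of the LSRN idea for m >> n: sketch A with a random
# normal projection, build a right preconditioner from the sketch's SVD,
# then solve the well-conditioned system with LSQR.
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

rng = np.random.default_rng(0)
m, n, gamma = 20000, 100, 2.0
A = rng.standard_normal((m, n)) * rng.uniform(0.01, 1.0, n)  # badly scaled columns
b = rng.standard_normal(m)

s = int(np.ceil(gamma * n))
G = rng.standard_normal((s, m))              # random normal projection
_, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
N = Vt.T / sigma                             # right preconditioner: A @ N well-conditioned

AN = LinearOperator((m, n), matvec=lambda y: A @ (N @ y),
                    rmatvec=lambda z: N.T @ (A.T @ z))
y = lsqr(AN, b)[0]
x = N @ y                                    # least-squares solution (full-rank case)
```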
Fixed point and Bregman iterative methods for matrix rank minimization
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, such an approach is computationally expensive to solve when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of $10^{-5}$ in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms.
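A minimal sketch of the underlying fixed-point iteration follows, assuming the usual singular-value shrinkage step for nuclear-norm-regularized matrix completion; FPCA's continuation schedule and approximate SVD are omitted, and the mask density and threshold below are arbitrary toy choices.

```python
# Toy fixed-point (singular-value shrinkage) iteration for nuclear-norm
# matrix completion, in the spirit of the paper; not FPCA itself.
import numpy as np

rng = np.random.default_rng(1)
n, r = 100, 5
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target
mask = rng.random((n, n)) < 0.4                                # observed entries

tau, step = 2.0, 1.0
X = np.zeros((n, n))
for _ in range(300):
    # gradient step on the data-fit term, restricted to observed entries
    G = X - step * mask * (X - M)
    # shrinkage step: soft-threshold the singular values by tau * step
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    X = (U * np.maximum(s - tau * step, 0.0)) @ Vt

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```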
Randomized Extended Kaczmarz for Solving Least Squares
We present a randomized iterative algorithm that converges exponentially in the mean square to the minimum $\ell_2$-norm least squares solution of a given linear system of equations. The expected number of arithmetic operations required to obtain an estimate of given accuracy is proportional to the squared condition number of the system multiplied by the number of nonzero entries of the input matrix. The proposed algorithm is an extension of the randomized Kaczmarz method that was analyzed by Strohmer and Vershynin.
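The iteration is short enough to sketch. The Python version below follows the standard description of randomized extended Kaczmarz: a column step that removes the component of $b$ outside the range of $A$, plus a Kaczmarz row step toward the corrected system. The test problem and iteration count are arbitrary.

```python
# Hedged sketch of randomized extended Kaczmarz for least squares.
import numpy as np

def rek(A, b, iters=50000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_p = np.linalg.norm(A, axis=1) ** 2; row_p /= row_p.sum()
    col_p = np.linalg.norm(A, axis=0) ** 2; col_p /= col_p.sum()
    x, z = np.zeros(n), b.copy()
    for _ in range(iters):
        # column step: drive z toward the component of b outside range(A)
        j = rng.choice(n, p=col_p)
        z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
        # row step: Kaczmarz projection onto {x : a_i x = b_i - z_i}
        i = rng.choice(m, p=row_p)
        x += (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

A = np.random.default_rng(2).standard_normal((300, 50))
b = np.random.default_rng(3).standard_normal(300)   # generally inconsistent
x = rek(A, b)
print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))  # should be small
```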
LSMR: An Iterative Algorithm for Sparse Least-Squares Problems
An iterative method LSMR is presented for solving linear systems $Ax=b$ and least-squares problems $\min \|Ax-b\|_2$, with $A$ being sparse or a fast linear operator. LSMR is based on the Golub-Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation $A^T\!Ax = A^T\!b$, so that the quantities $\|A^T r_k\|$ are monotonically decreasing (where $r_k = b - Ax_k$ is the residual for the current iterate $x_k$). We observe in practice that $\|r_k\|$ also decreases monotonically, so that compared to LSQR (for which only $\|r_k\|$ is monotonic) it is safer to terminate LSMR early. We also report some experiments with reorthogonalization.
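SciPy ships an implementation of this algorithm as scipy.sparse.linalg.lsmr, so a minimal usage example is easy to give; the problem size and tolerances below are arbitrary.

```python
# Minimal LSMR usage on a sparse least-squares problem; A could
# equivalently be a matrix-free LinearOperator.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
A = sp.random(2000, 300, density=0.01, random_state=0, format="csr")
b = rng.standard_normal(2000)

x, istop, itn, normr, normar = lsmr(A, b, atol=1e-10, btol=1e-10)[:5]
print(itn, normr)   # iterations used and final residual norm ||b - Ax||
```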
Exact Solutions of Nonlinear Partial Differential Equations via the New Double Integral Transform Combined with Iterative Method
This article demonstrates how the new double Laplace–Sumudu transform (DLST) is successfully implemented in combination with an iterative method to obtain exact solutions of nonlinear partial differential equations (NLPDEs) under specified conditions. The nonlinear terms of these equations are handled by a successive iterative procedure. The proposed technique has the advantage of generating exact solutions, and it is easy to apply analytically to the given problems. In addition, theorems establishing the main properties of the DLST are proved. Examples are given to demonstrate the usability and effectiveness of the method. The results show that the presented method holds promise for solving other types of NLPDEs.
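For orientation, the standard single-variable Laplace and Sumudu transforms are recalled below, together with one commonly used form of their double combination. This is a hedged recollection of the usual conventions; the paper's precise DLST definition should be taken from the article itself.

```latex
% Standard single transforms (Laplace in x, Sumudu in t):
\mathcal{L}_x[f](p) = \int_0^\infty e^{-px} f(x)\,dx,
\qquad
\mathbb{S}_t[f](u) = \frac{1}{u}\int_0^\infty e^{-t/u} f(t)\,dt
% One commonly used form of the double Laplace--Sumudu transform:
\mathcal{L}_x \mathbb{S}_t\,[f(x,t)](p,u)
  = \frac{1}{u}\int_0^\infty\!\!\int_0^\infty e^{-(px + t/u)}\, f(x,t)\,dt\,dx
```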
Machine Learning Approach to Quadratic Programming-Based Microwave Imaging for Breast Cancer Detection
In this work, a novel technique is proposed that combines the Born iterative method, based on a quadratic programming approach, with convolutional neural networks to solve the ill-posed inverse problem arising from the microwave imaging formulation in breast cancer detection. The aim is to accurately recover the permittivity of breast phantoms, which are typically strong dielectric scatterers, from the measured scattering data. Several tests were carried out, using a circular imaging configuration and breast models, to evaluate the performance of the proposed scheme. They show that the use of convolutional neural networks considerably reduces the reconstruction time, with an accuracy that exceeds 90% in all the performed validations.
IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems
This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR Tools, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated as a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem's coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to each of the iterative methods requires only this matrix and the right-hand side vector; if the method uses any special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through the use of an optional input structure, the user can have full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications. Numerical examples illustrate the various algorithms and test problems available in this package.
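IR Tools itself is MATLAB software; as a Python analogue of the interface idea described above (coefficient matrix available as an explicit matrix, a function handle, or an object), SciPy's iterative solvers likewise accept either a sparse matrix or a matrix-free LinearOperator in the same call.

```python
# Same solver call with an explicit sparse matrix and with a
# function-handle-style operator; an analogue of IR Tools' flexibility,
# not IR Tools itself.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lsqr

A = sp.random(1000, 400, density=0.01, random_state=0, format="csr")
b = np.ones(1000)

x_mat = lsqr(A, b)[0]                       # matrix given explicitly

A_op = LinearOperator(A.shape, matvec=lambda v: A @ v,
                      rmatvec=lambda w: A.T @ w)
x_op = lsqr(A_op, b)[0]                     # same call, matrix-free operator

print(np.allclose(x_mat, x_op, atol=1e-8))
```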
A Strong Convergence Theorem for Solving Pseudo-monotone Variational Inequalities Using Projection Methods
Several iterative methods have been proposed in the literature for solving variational inequalities in Hilbert or Banach spaces when the underlying operator A is monotone and Lipschitz continuous. However, very few methods are known for solving variational inequalities when the Lipschitz continuity of A is dispensed with. In this article, we introduce a projection-type algorithm for finding a common solution of the variational inequality and fixed point problem in a reflexive Banach space, where A is pseudo-monotone and not necessarily Lipschitz continuous. We also present an application of our result to approximating the solution of a pseudo-monotone equilibrium problem in a reflexive Banach space. Finally, we present some numerical examples to illustrate the performance of our method and to compare it with related methods in the literature.
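For readers unfamiliar with projection-type methods, the classical Korpelevich extragradient iteration in Euclidean space is sketched below as a baseline. Note that it still assumes a Lipschitz operator and a step size bounded by its constant, which is precisely the assumption the paper's Banach-space algorithm relaxes.

```python
# Classical extragradient method for VI(A, C): find x in C with
# <A(x), y - x> >= 0 for all y in C. Toy monotone affine operator,
# box constraint set; not the paper's algorithm.
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)                 # projection onto C = [lo, hi]^n

def extragradient(A_op, x0, lam, iters=1000):
    x = x0.copy()
    for _ in range(iters):
        y = project_box(x - lam * A_op(x))    # predictor step
        x = project_box(x - lam * A_op(y))    # corrector step
    return x

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 20))
M = B - B.T + 0.1 * np.eye(20)                # monotone: symmetric part is 0.1*I
q = rng.standard_normal(20)
lam = 0.5 / np.linalg.norm(M, 2)              # step below 1/L (L = Lipschitz constant)
x = extragradient(lambda v: M @ v + q, np.zeros(20), lam)
print(np.linalg.norm(project_box(x - lam * (M @ x + q)) - x))  # ~0 at a solution
```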
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
The complex dynamical analysis of the parametric fourth-order Kim iterative family is carried out on quadratic polynomials, together with the MATLAB codes generated to draw the fractal images needed to complete the study. The parameter spaces associated with the free critical points are analyzed, showing the stable (and unstable) regions where the selection of the parameter provides excellent schemes (or dreadful ones).
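The drawing technique is simple to reproduce. The sketch below renders a dynamical plane in Python by iterating a method over a grid of complex seeds and coloring each point by the root its orbit reaches; Newton's method on $z^2-1$ is used as a stand-in, since Kim's parametric iteration function is not reproduced here (the paper's own codes are in MATLAB).

```python
# Dynamical plane: iterate from a grid of complex starting points and
# color by the root reached. Newton on z^2 - 1 as an illustrative stand-in.
import numpy as np
import matplotlib.pyplot as plt

n, iters = 600, 40
re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
z = re + 1j * im
for _ in range(iters):
    z = z - (z**2 - 1) / (2 * z)          # Newton step for z^2 - 1

# +1 where the orbit reached root 1, -1 for root -1, 0 if undecided
plane = np.where(np.abs(z - 1) < 1e-6, 1,
                 np.where(np.abs(z + 1) < 1e-6, -1, 0))
plt.imshow(plane, extent=(-2, 2, -2, 2), cmap="coolwarm")
plt.title("Dynamical plane: Newton's method on $z^2-1$")
plt.show()
```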