36,205 result(s) for "Regularization"
Finite temperature gluon propagator in Landau gauge: non-zero Matsubara frequencies and spectral densities
We report on the lattice computation of the Landau gauge gluon propagator at finite temperature, including the non-zero Matsubara frequencies. Moreover, the corresponding Källén-Lehmann spectral density is computed, using a Tikhonov regularisation together with the Morozov discrepancy principle. Implications for gluon confinement are also discussed.
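The pipeline named here, Tikhonov regularisation with the regularization weight fixed by the Morozov discrepancy principle, is generic enough to sketch. The Python toy below is only an illustration under invented assumptions: the smoothing kernel K, the problem size, and the noise level delta are stand-ins, not the authors' lattice setup.

```python
# Hedged sketch: Tikhonov regularization with Morozov's discrepancy principle
# for a generic linear inverse problem K @ rho = g (rho standing in for a
# spectral density). Kernel, sizes, and noise level are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
K = np.exp(-20 * np.abs(np.subtract.outer(t, t))) / n   # smoothing kernel (assumed)
rho_true = np.exp(-((t - 0.4) ** 2) / 0.01)             # ground truth to recover
delta = 1e-3                                            # assumed noise level ||e||
g = K @ rho_true + (delta / np.sqrt(n)) * rng.standard_normal(n)

def tikhonov(lam):
    # Minimize ||K rho - g||^2 + lam * ||rho||^2 via the normal equations.
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

# Morozov: choose lam so the residual matches the noise level, ||K rho - g|| ~ delta.
lo, hi = 1e-14, 1e2
for _ in range(80):
    lam = np.sqrt(lo * hi)                              # bisection on a log scale
    r = np.linalg.norm(K @ tikhonov(lam) - g)
    lo, hi = (lam, hi) if r < delta else (lo, lam)      # residual grows with lam
rho_hat = tikhonov(lam)
```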
3-D inversion of magnetic data based on the L1–L2 norm regularization
Magnetic inversion is a popular method for obtaining information about subsurface structure. However, many conventional methods suffer from a serious problem: the linear equations to be solved are ill-posed and under-determined, so the uniqueness of the solution is not guaranteed. As a result, several different models fit the observed magnetic data with the same accuracy. To reduce the non-uniqueness of the model, conventional studies introduced regularization methods based on the quadratic solution norm. However, these regularization methods impose a certain level of smoothness, so the resulting model is likely to be blurred. To obtain a focused magnetic model, I introduce L1 norm regularization. As is widely known, L1 norm regularization promotes sparseness of the model, so the resulting model is expected to be constructed only with the features truly required to reconstruct the data, yielding a simple and focused model. However, using L1 norm regularization alone produces an excessively concentrated model, owing to the nature of L1 norm regularization and a lack of linear independence among the magnetic equations. To overcome this problem, I use a combination of L1 and L2 norm regularization. To choose a feasible regularization parameter, I introduce a selection method based on the L-curve criterion with the mixing ratio of the L1 and L2 norm regularization held fixed. This inversion method is applied to real magnetic anomaly data observed on Hokkaido Island, northern Japan, and reveals the subsurface magnetic structure of the area.
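A rough Python analogue of the mixed L1-L2 idea, an elastic-net style inversion scanned along an L-curve at a fixed mixing ratio, is sketched below; the forward operator G, the sparse model, and the l1_ratio=0.5 mixing are illustrative assumptions, not the paper's inversion code.

```python
# Hedged sketch: combined L1-L2 (elastic-net) regularized inversion of
# d = G m + noise, tracing an L-curve at a fixed L1/L2 mixing ratio.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
G = rng.standard_normal((100, 300))              # under-determined forward operator
m_true = np.zeros(300)
m_true[[30, 150, 220]] = [1.0, -0.7, 0.5]        # sparse "magnetic" model (invented)
d = G @ m_true + 0.01 * rng.standard_normal(100)

curve = []
for alpha in np.logspace(-4, 0, 25):             # scan the regularization parameter
    fit = ElasticNet(alpha=alpha, l1_ratio=0.5, max_iter=50_000).fit(G, d)
    misfit = np.linalg.norm(G @ fit.coef_ - d)   # data-misfit axis of the L-curve
    penalty = np.linalg.norm(fit.coef_, 1)       # model-norm axis of the L-curve
    curve.append((misfit, penalty, alpha))
# The corner (maximum curvature) of the (misfit, penalty) curve suggests a
# feasible alpha, mirroring the L-curve criterion described in the abstract.
```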
Lower bounds for finding stationary points I
We prove lower bounds on the complexity of finding ϵ-stationary points (points x such that ‖∇f(x)‖≤ϵ) of smooth, high-dimensional, and potentially non-convex functions f. We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of f at a query point x. We show that for any (potentially randomized) algorithm A, there exists a function f with Lipschitz pth order derivatives such that A requires at least ϵ^{-(p+1)/p} queries to find an ϵ-stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton’s method, and generalized pth order regularization are worst-case optimal within their natural function classes.
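For concreteness, the rate specializes as follows (a worked restatement of the abstract's bound):

```latex
% p = 1 (Lipschitz gradient): matched by gradient descent;
% p = 2 (Lipschitz Hessian):  matched by cubic-regularized Newton.
\[
  p = 1:\ \Omega\bigl(\epsilon^{-2}\bigr), \qquad
  p = 2:\ \Omega\bigl(\epsilon^{-3/2}\bigr), \qquad
  \text{general } p:\ \Omega\bigl(\epsilon^{-(p+1)/p}\bigr).
\]
```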
IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems
This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR TOOLS, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated as a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem’s coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to any of the iterative methods requires only this matrix and the right-hand side vector; if the method uses special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through an optional input structure, the user can take full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications. Numerical examples illustrate the various algorithms and test problems available in the package.
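IR TOOLS itself is MATLAB; as a language-neutral illustration of the first solver class mentioned, regularization by semi-convergence of the iterations, here is a minimal Python sketch of Landweber iteration stopped by the discrepancy principle. The blur operator, noise level, and stopping factor are assumptions, and this is not the package's API.

```python
# Hedged sketch: "regularization by semi-convergence" -- run a convergent
# iteration on an ill-posed system A x = b and stop early; the iteration
# count plays the role of the regularization parameter.
import numpy as np

rng = np.random.default_rng(2)
n = 120
t = np.linspace(0, 1, n)
A = np.exp(-np.subtract.outer(t, t) ** 2 / 0.001)   # severely smoothing blur
A /= A.sum(axis=1, keepdims=True)                   # (invented test operator)
x_true = np.sin(3 * np.pi * t)
delta = 1e-2                                        # assumed noise level ||e||
b = A @ x_true + (delta / np.sqrt(n)) * rng.standard_normal(n)

omega = 1.0 / np.linalg.norm(A, 2) ** 2             # Landweber step size
x = np.zeros(n)
for k in range(10_000):
    r = b - A @ x
    if np.linalg.norm(r) <= 1.05 * delta:           # discrepancy principle:
        break                                       # stop before fitting the noise
    x += omega * A.T @ r
```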
Newton-type methods for non-convex optimization under inexact Hessian information
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexities for achieving ε-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation condition offers a range of advantages compared with prior work and allows for direct construction of the approximate Hessian with a priori guarantees through various techniques, including randomized sampling methods. In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and adaptive cubic regularization methods.
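A minimal sketch of one ingredient, a single cubic-regularization step with a uniformly sub-sampled Hessian on a finite sum, is given below; the least-squares objective, sample sizes, and the regularization weight σ are illustrative assumptions rather than the paper's algorithm.

```python
# Hedged sketch: one cubic-regularization step where the Hessian of a finite
# sum f(w) = (1/n) * sum_i f_i(w) is approximated by uniform sub-sampling.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, d = 500, 10
A = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def grad(w):                        # exact full gradient (least-squares sum)
    return A.T @ (A @ w - y) / n

def hess_sub(w, m=50):              # inexact Hessian from m uniformly
    idx = rng.choice(n, size=m, replace=False)      # sampled terms
    return A[idx].T @ A[idx] / m

w = np.zeros(d)
sigma = 1.0                         # cubic regularization weight (assumed)
g, H = grad(w), hess_sub(w)
cubic_model = lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s) ** 3
s = minimize(cubic_model, np.zeros(d)).x            # approximate sub-problem solve
w = w + s                           # accept the step (adaptive logic omitted)
```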
A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers
High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices, and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both ℓ₂-error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases.
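The generic estimator the framework analyzes can be written compactly (notation assumed for illustration; ℛ is the decomposable regularizer):

```latex
% Loss L_n measures data fit; the regularizer R encodes the assumed
% low-dimensional structure; lambda_n is the regularization weight.
\[
  \hat{\theta} \in \arg\min_{\theta \in \mathbb{R}^{p}}
  \Bigl\{ \mathcal{L}_n(\theta; Z_1^{n}) + \lambda_n\, \mathcal{R}(\theta) \Bigr\},
  \qquad \text{e.g.}\ \mathcal{R}(\theta) = \lVert \theta \rVert_1
  \ \text{(sparse vectors / Lasso)}.
\]
```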
Constrained and unconstrained deep image prior optimization models with automatic regularization
Deep Image Prior (DIP) is currently among the most efficient unsupervised deep-learning-based methods for ill-posed inverse problems in imaging. This framework relies on the implicit regularization provided by representing images as the output of generative Convolutional Neural Network (CNN) architectures. So far, DIP has been shown to be an effective approach when combined with classical and novel regularizers. Unfortunately, to obtain appropriate solutions, all the models proposed up to now require an accurate estimate of the regularization parameter. To overcome this difficulty, we consider a locally adapted regularized unconstrained model whose local regularization parameters are automatically estimated for additively separable regularizers. Moreover, we propose a novel constrained formulation, in analogy to Morozov’s discrepancy principle, which enables the application of a broader range of regularizers. Both the unconstrained and the constrained models are solved via the proximal gradient descent-ascent method. Numerical results demonstrate the robustness of the proposed models with respect to image content, noise levels and hyperparameters, on both denoising and deblurring of simulated as well as real natural and medical images.
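A bare-bones DIP loop in PyTorch may help fix ideas; this sketch shows only the implicit-regularization mechanism (fit a CNN output to the noisy image from a fixed random input and stop early), not the paper's constrained or automatically regularized models, and the tiny architecture and iteration budget are placeholders.

```python
# Hedged sketch: plain Deep Image Prior denoising. The CNN and its early
# stopped fit to the noisy image provide the implicit regularization.
import torch
import torch.nn as nn

y = torch.rand(1, 1, 64, 64)                 # noisy image (placeholder data)
z = torch.randn(1, 8, 64, 64)                # fixed random network input

f = nn.Sequential(                           # tiny stand-in for a generative CNN
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for it in range(500):                        # early stopping acts as regularization
    opt.zero_grad()
    loss = ((f(z) - y) ** 2).mean()          # plain data fit; no explicit penalty
    loss.backward()
    opt.step()
x_hat = f(z).detach()                        # restored image estimate
```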
Some point and singular potentials in one dimension: Standard arguments and new advances
We review some methods for obtaining self-adjoint realizations of one-dimensional Hamiltonians with contact interactions, such as Dirac deltas or their first derivatives, or singular interactions, such as the one-dimensional Coulomb interaction. We fix these self-adjoint realizations either by matching conditions at the points supporting the interaction or carrying the singularity, by a distributional method applied directly to the Schrödinger equation, or by a regularization process.
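As a standard example of fixing a realization by matching conditions: for the Dirac delta interaction H = −d²/dx² + α δ(x) (in units with ℏ²/2m = 1), the self-adjoint realization is specified by

```latex
\[
  \psi(0^{+}) = \psi(0^{-}), \qquad
  \psi'(0^{+}) - \psi'(0^{-}) = \alpha\,\psi(0),
\]
% the wave function is continuous, while its derivative jumps by
% \alpha\,\psi(0) across the point supporting the interaction.
```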
Microbe–disease associations prediction by graph regularized non-negative matrix factorization with L2,1-norm regularization terms
Microbes are involved in a wide range of biological processes and are closely associated with disease. Inferring potential disease-associated microbes as the biomarkers or drug targets may help prevent, diagnose and treat complex human diseases. However, biological experiments are time-consuming and expensive. In this study, we introduced a new method called iPALM-GLMF, which modelled microbe–disease association prediction as a problem of non-negative matrix factorization with graph dual regularization terms and L2,1-norm regularization terms. The graph dual regularization terms were used to capture potential features in the microbe and disease space, and the L2,1-norm regularization terms were used to ensure the sparsity of the feature matrices obtained from the non-negative matrix factorization and to improve the interpretability. To solve the model, iPALM-GLMF used a non-negative double singular value decomposition to initialize the matrix factorization and adopted an inertial Proximal Alternating Linear Minimization iterative process to obtain the final matrix factorization results. As a result, iPALM-GLMF performed better than other existing methods in leave-one-out cross-validation and fivefold cross-validation. In addition, case studies of different diseases demonstrated that iPALM-GLMF could effectively predict potential microbial-disease associations. iPALM-GLMF is publicly available at https://github.com/LiangzheZhang/iPALM‐GLMF.
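The overall shape of the optimization problem reads roughly as follows (symbols assumed for illustration: X the microbe-disease association matrix, U and V the non-negative factors, L_U and L_V graph Laplacians):

```latex
% Graph dual regularization captures the microbe/disease space structure;
% the L_{2,1} terms promote row-sparsity of the factors.
\[
  \min_{U \ge 0,\; V \ge 0}\;
  \lVert X - U V^{\top} \rVert_F^{2}
  + \lambda_1 \operatorname{tr}\!\bigl(U^{\top} L_U\, U\bigr)
  + \lambda_2 \operatorname{tr}\!\bigl(V^{\top} L_V\, V\bigr)
  + \beta \bigl( \lVert U \rVert_{2,1} + \lVert V \rVert_{2,1} \bigr).
\]
```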
MINIMAX ESTIMATION OF SMOOTH OPTIMAL TRANSPORT MAPS
Brenier’s theorem is a cornerstone of optimal transport that guarantees the existence of an optimal transport map T between two probability distributions P and Q over ℝ^d under certain regularity conditions. The main goal of this work is to establish the minimax estimation rates for such a transport map from data sampled from P and Q under additional smoothness assumptions on T. To achieve this goal, we develop an estimator based on the minimization of an empirical version of the semidual optimal transport problem, restricted to truncated wavelet expansions. This estimator is shown to achieve near minimax optimality using new stability arguments for the semidual and a complementary minimax lower bound. Furthermore, we provide numerical experiments on synthetic data supporting our theoretical findings and highlighting the practical benefits of smoothness regularization. These are the first minimax estimation rates for transport maps in general dimension.
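In outline, for the quadratic cost the semidual problem behind the estimator is the following (f* denotes the convex conjugate; this is a standard restatement, with the minimization restricted to truncated wavelet expansions as the abstract describes):

```latex
% Brenier: the optimal map is the gradient of a convex potential f_0 that
% minimizes the semidual objective; the estimator minimizes its empirical
% version over a truncated wavelet class.
\[
  f_0 \in \arg\min_{f\ \text{convex}}
  \int f \, dP + \int f^{*} \, dQ,
  \qquad T_0 = \nabla f_0 .
\]
```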