Search Results

1,144 results for "Tikhonov"
Efficient Nonparametric Bayesian Inference For X-Ray Transforms
We consider the statistical inverse problem of recovering a function f : M → ℝ, where M is a smooth compact Riemannian manifold with boundary, from measurements of general X-ray transforms Iₐ(f) of f, corrupted by additive Gaussian noise. For M equal to the unit disk with “flat” geometry and a = 0 this reduces to the standard Radon transform, but our general setting allows for anisotropic media M and can further model local “attenuation” effects, both highly relevant in practical imaging problems such as SPECT tomography. We study a nonparametric Bayesian inference method based on standard Gaussian process priors for f. The posterior reconstruction of f corresponds to a Tikhonov regulariser with a reproducing kernel Hilbert space norm penalty that does not require the calculation of the singular value decomposition of the forward operator Iₐ. We prove Bernstein–von Mises theorems for a large family of one-dimensional linear functionals of f, and they entail that posterior-based inferences such as credible sets are valid and optimal from a frequentist point of view. In particular we derive the asymptotic distribution of smooth linear functionals of the Tikhonov regulariser, which attains the semiparametric information lower bound. The proofs rely on an invertibility result for the “Fisher information” operator Iₐ*Iₐ between suitable function spaces, a result of independent interest that relies on techniques from microlocal analysis. We illustrate the performance of the proposed method via simulations in various settings.
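The correspondence invoked in this abstract, between the posterior mean under a Gaussian process prior and a Tikhonov regulariser, can be written schematically as follows (a sketch only: the scaling constants and the notation 𝓗 for the RKHS of the prior are our assumptions, not the paper's):

```latex
\hat f \;=\; \operatorname*{arg\,min}_{f \in \mathcal H}
  \left\{ \frac{1}{2\sigma^{2}}\, \bigl\| I_a(f) - Y \bigr\|_{L^{2}}^{2}
          \;+\; \frac{1}{2}\, \| f \|_{\mathcal H}^{2} \right\}
```

Because the penalty is an RKHS norm, the estimator can be computed by convex optimisation directly, which is why no singular value decomposition of the forward operator is needed.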
The Tikhonov regularization for vector equilibrium problems
We consider vector equilibrium problems in real Banach spaces and study their regularized problems. Based on cone continuity and generalized convexity properties of vector-valued mappings, we propose general conditions that guarantee the existence of solutions to such problems in both the monotone and nonmonotone cases. First, our study shows that every Tikhonov trajectory converges to a solution of the original problem. We then establish the equivalence between the solvability of the problem and the boundedness of any Tikhonov trajectory. Finally, the convergence of the Tikhonov trajectory to the least-norm solution of the original problem is discussed.
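For orientation, the scalar prototype of this construction reads as follows; the paper's vector-valued setting replaces the inequality by a cone ordering, and the particular regularizing bifunction g shown here is a conventional choice, not necessarily the paper's:

```latex
\text{EP: find } \bar x \in K \text{ such that } f(\bar x, y) \ge 0 \quad \forall\, y \in K,
\qquad
f_{\varepsilon}(x, y) \;=\; f(x, y) + \varepsilon\, g(x, y),
\quad \text{e.g. } g(x, y) = \langle x,\; y - x \rangle .
```

The solutions x_ε of the regularized problems trace out the Tikhonov trajectory, and the abstract's final claim concerns the limit of x_ε as ε ↓ 0.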
Change Point Detection for Process Data Analytics Applied to a Multiphase Flow Facility
Change point detection is becoming increasingly important because it can support data analysis by labelling the data in an unsupervised manner. In the context of process data analytics, change points in the time series of process variables may carry important information about the process operation. For example, in a batch process, change points can correspond to the operations and phases defined by the batch recipe, so identifying them helps label the time series data. Various unsupervised algorithms have been developed for change point detection, including the optimisation approach, which minimises a cost function with certain penalties to search for the change points. Another is the Bayesian approach, which uses Bayesian statistics to calculate the posterior probability of a specific sample being a change point. This paper investigates how the two approaches can be applied to process data analytics. In addition, a new type of cost function using Tikhonov regularisation is proposed for the optimisation approach, to reduce spurious change points caused by randomness in the data. The novelty lies in using regularisation-based cost functions to handle the ill-posedness introduced by noisy data. The results demonstrate that change point detection is useful for process data analytics because change points can produce data segments corresponding to different operating modes or varying conditions, which is useful for other machine learning tasks.
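To make the optimisation approach concrete, here is a minimal sketch in Python: an optimal-partitioning dynamic programme that minimises a sum of per-segment costs plus a fixed penalty per change point. The segment cost used here, a ridge-penalised (Tikhonov-style) linear trend fit, is our illustrative stand-in; the paper's actual regularised cost function is not specified in the abstract.

```python
import numpy as np

def segment_cost(y, lam=1.0):
    """Cost of fitting one segment with a ridge-penalised linear trend.
    Illustrative Tikhonov-style cost, not the paper's exact form."""
    n = len(y)
    X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])  # intercept, slope
    beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)    # ridge solution
    r = y - X @ beta
    return float(r @ r + lam * beta @ beta)

def optimal_partition(y, pen=10.0, min_len=5):
    """Optimal partitioning: minimise sum of segment costs + pen per change point."""
    n = len(y)
    F = np.full(n + 1, np.inf)
    F[0] = -pen                      # cancels the penalty charged to the first segment
    prev = np.zeros(n + 1, dtype=int)
    for end in range(min_len, n + 1):
        for start in range(0, end - min_len + 1):
            c = F[start] + pen + segment_cost(y[start:end])
            if c < F[end]:
                F[end], prev[end] = c, start
    cps, i = [], n                   # backtrack the optimal segmentation
    while i > 0:
        cps.append(prev[i])
        i = prev[i]
    return sorted(cps)[1:]           # drop the leading zero

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(3.0, 0.5, 100)])
print(optimal_partition(y))          # expect one change point near index 100
```

The penalty `pen` plays the usual role of trading off sensitivity against spurious detections: a larger value suppresses change points driven by noise alone.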
Optimized Process Parameters for a Reproducible Distribution of Relaxation Times Analysis of Electrochemical Systems
The distribution of relaxation times (DRT) analysis offers a model-free approach for a detailed investigation of electrochemical impedance spectra. Typically, the calculation of the distribution function is an ill-posed problem requiring regularization methods whose results are strongly parameter-dependent. Before statements can be made about measurement data, a process parameter study is crucial for analyzing the impact of the individual parameters on the distribution function. The optimal regularization parameter is determined together with the number of discrete time constants. Furthermore, the regularization term is investigated with respect to its mathematical background. It is revealed that the algorithm, its handling of constraints, and the optimization function significantly determine the result of the DRT calculation. With optimized parameters, detailed information on the investigated system can be obtained. As an example of a complex impedance spectrum, a commercial Nickel-Manganese-Cobalt-Oxide (NMC) lithium-ion pouch cell is investigated. The DRT allows investigation of the state-of-charge (SOC) dependency of the charge transfer reactions, the solid electrolyte interphase (SEI), and the solid-state diffusion of both anode and cathode. For the quantification of the individual polarization contributions, a peak analysis algorithm based on Gaussian distribution curves is presented and applied.
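As an illustration of the parameter dependence the abstract describes, the following sketch computes a DRT by zeroth-order Tikhonov regularization with a non-negativity constraint, on an assumed synthetic spectrum consisting of a single RC element. The time-constant grid, the value of `lam`, and the inclusion of an ohmic resistance R0 as an extra unknown are all assumptions; the paper's optimized parameters and constraint handling are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical synthetic spectrum: one RC element (R1, tau1) plus an ohmic R0.
freqs = np.logspace(-1, 5, 60)
omega = 2 * np.pi * freqs
R0, R1, tau1 = 0.1, 1.0, 1e-3
Z = R0 + R1 / (1 + 1j * omega * tau1)

# Grid of relaxation times and the DRT kernel matrix.
taus = np.logspace(-6, 2, 81)
dlnt = np.log(taus[1] / taus[0])
K = 1.0 / (1.0 + 1j * np.outer(omega, taus)) * dlnt    # (n_freq, n_tau)

# Unknowns: [gamma(tau_1..tau_m), R0]; fit real and imaginary parts jointly.
A = np.vstack([np.hstack([K.real, np.ones((len(omega), 1))]),
               np.hstack([K.imag, np.zeros((len(omega), 1))])])
b = np.concatenate([Z.real, Z.imag])

# Zeroth-order Tikhonov via an augmented system, solved with non-negativity.
lam = 1e-3
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
b_aug = np.concatenate([b, np.zeros(A.shape[1])])
gamma, _ = nnls(A_aug, b_aug)

print("recovered R0 ~", gamma[-1])
print("DRT peak near tau =", taus[np.argmax(gamma[:-1])])
```

Rerunning the fit while varying `lam` over a few decades shows the behaviour the paper studies: too little regularization splits the single RC peak into noise-driven spikes, too much smears it out.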
A new regularization method for dynamic load identification
Dynamic forces are very important boundary conditions in practical engineering applications such as structural strength analysis, health monitoring and fault diagnosis, and vibration isolation. In many applications, however, it is very difficult to directly measure the dynamic load acting on a structure. Traditional indirect inverse analysis techniques have therefore been developed to identify loads from measured responses. These load identification problems are complex and inherently ill-posed, and regularization methods can deal with this kind of problem. However, most regularization methods have only been applied to purely mathematical numerical examples rather than practical engineering problems, and they need to be improved to suppress the interference of noise in engineering measurements. To address these issues, a new regularization method is presented in this article to solve the underlying minimization problem, and it is applied to reconstructing multi-source dynamic loads on the frame structure of a hydrogenerator from its steady-state responses. Numerical simulations of the inverse analysis show that the proposed method is more effective and accurate than the well-known Tikhonov regularization method. The proposed regularization method is powerful in solving dynamic load identification problems.
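The abstract does not specify the new method itself, so as background here is a minimal sketch of the classical Tikhonov baseline it is compared against: reconstructing a force history from a measured response by regularized deconvolution of the convolution (Duhamel) relation y = Hf. The single-degree-of-freedom system, the load history, the noise level, and `lam` are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical SDOF system: impulse response h(t) = exp(-zeta*wn*t) sin(wd*t)/(m*wd).
dt, n = 1e-3, 400
t = np.arange(n) * dt
m, wn, zeta = 1.0, 50.0, 0.05
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

# Discrete convolution y = H f: lower-triangular Toeplitz of the impulse response.
H = toeplitz(h, np.zeros(n)) * dt
f_true = np.sin(2 * np.pi * 5 * t) * (t < 0.2)      # assumed load history
y = H @ f_true + 1e-6 * np.random.default_rng(1).standard_normal(n)

# Classical Tikhonov estimate: f = (H^T H + lam I)^{-1} H^T y.
lam = 1e-10
f_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
print("relative error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```

The ill-posedness is visible if `lam` is set to zero: the normal equations become numerically singular and the reconstructed load is dominated by amplified noise.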
LSRN: A Parallel Iterative Solver for Strongly Over- or Underdetermined Systems
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to minₓ ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.
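A serial, dense-matrix sketch of LSRN's strongly overdetermined case (m ≫ n) is below: random normal projection, an SVD-based preconditioner, then LSQR on the preconditioned operator. This follows the algorithm as the abstract describes it, but the parallel execution, the Chebyshev semi-iterative option, and the Tikhonov extension are omitted, and the tolerances and test sizes are arbitrary.

```python
import numpy as np
from scipy.sparse.linalg import lsqr, LinearOperator

def lsrn_sketch(A, b, gamma=2.0, seed=0):
    """Minimal LSRN-style solve for m >> n:
    1) random normal projection, 2) SVD-based preconditioner, 3) LSQR."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    s = int(np.ceil(gamma * n))
    G = rng.standard_normal((s, m))        # random normal projection
    A_tilde = G @ A                        # s x n sketch of A
    U, sig, Vt = np.linalg.svd(A_tilde, full_matrices=False)
    r = int(np.sum(sig > sig[0] * 1e-12))  # numerical rank
    N = Vt[:r].T / sig[:r]                 # preconditioner N = V_r * Sigma_r^{-1}
    AN = LinearOperator((m, r),
                        matvec=lambda y: A @ (N @ y),
                        rmatvec=lambda z: N.T @ (A.T @ z))
    y = lsqr(AN, b, atol=1e-10, btol=1e-10)[0]
    return N @ y                           # min-length solution of the original problem

m, n = 2000, 50
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = A @ x + 1e-8 * rng.standard_normal(m)
print(np.linalg.norm(lsrn_sketch(A, b) - x))   # should be tiny
```

The point of the construction is that A·N is provably well-conditioned regardless of how badly conditioned A is, so the LSQR iteration count becomes predictable, which is what the abstract's concentration result guarantees.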
Regularized ensemble Kalman inversion for robust and efficient gravity data modeling to identify mineral and ore deposits
Modeling mineral and ore bodies from gravity anomalies remains challenging in geophysical exploration due to the ill-posed and non-unique nature of the inverse problem, particularly under conditions of noisy or sparse data. Established inversion methods, including local optimization and metaheuristic algorithms, often require extensive parameter tuning and may yield unstable or poorly constrained solutions. This study proposes a regularized ensemble Kalman inversion (EKI) framework enhanced by Tikhonov regularization to improve numerical stability and mitigate sensitivity to ensemble degeneracy, thereby enabling efficient uncertainty quantification through ensemble statistics. Controlled numerical experiments show that a sufficiently large ensemble combined with moderate regularization achieves an optimal balance between convergence stability and model resolution. Benchmarking against established metaheuristic algorithms (PSO, VFSA, and BA) suggests superior computational efficiency and stable convergence. Inversion of synthetic and real gravity data (chromite, Pb-Zn, sulphide, and Cu-Au deposits) suggests that the regularized EKI yields stable results that are geologically consistent with prior interpretations and drilling data. These results highlight the regularized EKI framework as a robust and efficient tool for mitigating mining risks and supporting strategic decision-making in mineral exploration.
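A minimal sketch of one way to combine EKI with Tikhonov regularization is shown below, in the spirit of so-called Tikhonov-EKI, where the data vector is augmented with a zero observation of the parameters. Whether this matches the paper's exact scheme is an assumption; the toy linear, gravity-like smoothing forward model, the ensemble size, and `lam` are all illustrative.

```python
import numpy as np

def regularized_eki(G, y, Gamma, U, lam=1e-2, n_iter=20, seed=0):
    """EKI with a Tikhonov-style augmentation (assumed scheme, not the paper's):
    the data are extended with a zero observation of the parameters, which
    shrinks the ensemble toward zero with weight lam."""
    rng = np.random.default_rng(seed)
    n_par, n_ens = U.shape
    y_aug = np.concatenate([y, np.zeros(n_par)])
    S = np.block([[Gamma, np.zeros((len(y), n_par))],
                  [np.zeros((n_par, len(y))), np.eye(n_par) / lam]])
    for _ in range(n_iter):
        W = np.vstack([np.column_stack([G(U[:, j]) for j in range(n_ens)]), U])
        dU = U - U.mean(1, keepdims=True)
        dW = W - W.mean(1, keepdims=True)
        Cuw = dU @ dW.T / (n_ens - 1)
        Cww = dW @ dW.T / (n_ens - 1)
        K = np.linalg.solve(Cww + S, Cuw.T).T            # Kalman-type gain
        pert = rng.multivariate_normal(np.zeros(len(y_aug)), S, n_ens).T
        U = U + K @ (y_aug[:, None] + pert - W)          # perturbed-observation update
    return U

# Toy linear "gravity-like" problem: smoothing kernel A, buried dense "body".
rng = np.random.default_rng(1)
A = np.exp(-0.5 * (np.subtract.outer(np.arange(20), np.arange(40)) / 3.0) ** 2)
u_true = np.zeros(40); u_true[15:25] = 1.0
y = A @ u_true + 0.01 * rng.standard_normal(20)
U0 = rng.standard_normal((40, 60))                       # 60-member initial ensemble
U = regularized_eki(lambda u: A @ u, y, 0.01**2 * np.eye(20), U0)
print("posterior-mean misfit:", np.linalg.norm(U.mean(1) - u_true))
```

The ensemble spread at the end provides the cheap uncertainty quantification the abstract refers to: no gradients of the forward model are ever needed, only forward evaluations.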
Influence of the Tikhonov Regularization Parameter on the Accuracy of the Inverse Problem in Electrocardiography
The electrocardiogram (ECG) is the standard method in clinical practice to non-invasively analyze the electrical activity of the heart using electrodes placed on the body's surface. The ECG can provide a cardiologist with relevant information to assess the condition of the heart and the possible presence of cardiac pathology. Nonetheless, the global view of the heart's electrical activity given by the ECG cannot provide fully detailed and localized information about abnormal electrical propagation patterns and the corresponding substrates on the surface of the heart. Electrocardiographic imaging, also known as the inverse problem in electrocardiography, aims to overcome these limitations by non-invasively reconstructing the heart surface potentials, starting from the corresponding body surface potentials and the geometry of the torso and the heart. This problem is ill-posed, and regularization techniques are needed to achieve a stable and accurate solution. The standard approach is to use zero-order Tikhonov regularization and the L-curve approach to choose the optimal value of the regularization parameter. However, different methods have been proposed for computing the optimal value of the regularization parameter, and regardless of the estimation method used, the result may still be over-regularized or under-regularized. In order to gain a better understanding of the effects of the choice of regularization parameter value, in this study we first focused on the regularization parameter itself and investigated its influence on the accuracy of the reconstruction of heart surface potentials, assessing the reconstruction accuracy with high-precision simultaneous heart and torso recordings from four dogs. For this, we analyzed a sufficiently large range of parameter values. Secondly, we evaluated the performance of five different methods for the estimation of the regularization parameter, also in view of the results of the first analysis. Thirdly, we investigated the effect of using a fixed value of the regularization parameter across all reconstructed beats. Accuracy was measured in terms of the quality of reconstruction of the heart surface potentials and the estimation of activation and recovery times, compared with ground-truth recordings from the experimental dog data. Results show that values of the regularization parameter in the range 0.01–0.03 provide the best accuracy, and that the three best-performing estimation methods (L-Curve, Zero-Crossing, and CRESO) give values in this range. Moreover, a fixed value of the regularization parameter could achieve performance very similar to the beat-specific parameter values calculated by the different estimation methods. These findings are relevant as they suggest that regularization parameter estimation methods may provide an accurate reconstruction of heart surface potentials only for specific ranges of regularization parameter values, and that using a fixed value of the regularization parameter may represent a valid alternative, especially when computational efficiency or consistency across time is required.
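To illustrate the parameter sweep at the heart of this study, the following sketch computes zero-order Tikhonov solutions over a grid of regularization parameters via a single SVD (the filter-factor form), returning the residual and solution norms from which an L-curve is drawn. The transfer matrix here is a synthetic ill-conditioned stand-in, not a torso-heart geometry; the oracle selection in the last lines is for illustration only, and the L-Curve, Zero-Crossing, and CRESO estimators themselves are not implemented.

```python
import numpy as np

def tikhonov_sweep(A, b, lambdas):
    """Zero-order Tikhonov: x(lam) = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    computed for all lambdas from one SVD via filter factors. Returns the
    solutions plus the residual/solution norms used to plot an L-curve."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    sols, res_norm, sol_norm = [], [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        sols.append(x)
        res_norm.append(np.linalg.norm(A @ x - b))
        sol_norm.append(np.linalg.norm(x))
    return np.array(sols), np.array(res_norm), np.array(sol_norm)

# Hypothetical ill-conditioned transfer matrix standing in for torso-to-heart mapping.
rng = np.random.default_rng(2)
A = rng.standard_normal((120, 80)) @ np.diag(0.9 ** np.arange(80)) @ rng.standard_normal((80, 80))
x_true = np.sin(np.linspace(0, 3 * np.pi, 80))
b = A @ x_true + 0.05 * rng.standard_normal(120)

lambdas = np.logspace(-4, 1, 40)
sols, rn, sn = tikhonov_sweep(A, b, lambdas)
best = np.argmin(np.linalg.norm(sols - x_true, axis=1))   # oracle pick, for illustration
print("oracle lambda:", lambdas[best])                    # an L-curve corner should lie nearby
```

Plotting log(sn) against log(rn) gives the L-curve; parameter-choice methods such as those evaluated in the study differ mainly in how they locate its corner, which is exactly where over- and under-regularization trade off.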