Catalogue Search | MBRL
Explore the vast range of titles available.
27,614 result(s) for "Iterative algorithms"
Information-Theoretic Generalization Bounds for Meta-Learning and Applications
2021
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separate within-task training and test sets, like model-agnostic meta-learning (MAML), or joint within-task training and test sets, like Reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and the corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
Journal Article
Gradient-based and least-squares-based iterative algorithms for Hammerstein systems using the hierarchical identification principle
2013
This study derives a least-squares-based iterative algorithm and a gradient-based iterative algorithm for Hammerstein systems using the decomposition-based hierarchical identification principle. The simulation results confirm that the two proposed algorithms give satisfactory identification accuracy and that the least-squares-based iterative algorithm has a faster convergence rate than the gradient-based iterative algorithm.
Journal Article
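The decomposition-based hierarchical idea in the abstract above can be sketched as two alternating linear least-squares sub-problems. Everything below is a made-up toy, not the authors' system: a static nonlinearity f(u) = c1·u + c2·u² followed by a two-tap linear block, with noise-free data and a scale normalization c1 = 1 to fix the bilinear ambiguity.

```python
import numpy as np

# Toy Hammerstein system (hypothetical coefficients): the output is bilinear
# in (b, c), so we decompose it into two linear LS problems and alternate.
rng = np.random.default_rng(0)
N = 200
u = rng.standard_normal(N)
b_true, c_true = np.array([2.0, -1.0]), np.array([1.0, 0.5])

f = lambda c, x: c[0] * x + c[1] * x**2            # static nonlinearity
y = b_true[0] * f(c_true, u[1:]) + b_true[1] * f(c_true, u[:-1])

c = np.array([1.0, 0.2])                           # rough initial guess for c
for _ in range(200):
    # Sub-problem 1: with c fixed, y is linear in the linear-block taps b.
    Phi_b = np.column_stack([f(c, u[1:]), f(c, u[:-1])])
    b, *_ = np.linalg.lstsq(Phi_b, y, rcond=None)
    # Sub-problem 2: with b fixed, y is linear in the nonlinearity weights c.
    Phi_c = np.column_stack([b[0] * u[1:] + b[1] * u[:-1],
                             b[0] * u[1:]**2 + b[1] * u[:-1]**2])
    c, *_ = np.linalg.lstsq(Phi_c, y, rcond=None)
    b, c = b * c[0], c / c[0]                      # normalize so that c1 = 1

y_hat = b[0] * f(c, u[1:]) + b[1] * f(c, u[:-1])
residual = np.linalg.norm(y_hat - y)
```

With noise-free data the alternating iteration drives the residual to zero and, thanks to the normalization, recovers the parameters up to numerical precision.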
Hierarchical Principle-Based Iterative Parameter Estimation Algorithm for Dual-Frequency Signals
by Liu, Siyu; Ding, Feng; Xu, Ling
in Algorithms; Computational efficiency; Iterative algorithms
2019
In this paper, we consider the parameter estimation problem of dual-frequency signals disturbed by stochastic noise. The signal model is a highly nonlinear function with respect to the frequencies and phases, and the gradient method cannot obtain accurate parameter estimates. Based on the Newton search, we derive an iterative algorithm for estimating all parameters, including the unknown amplitudes, frequencies, and phases. Furthermore, by using parameter decomposition, a hierarchical least-squares and gradient-based iterative algorithm is proposed to improve the computational efficiency. A gradient-based iterative algorithm is given for comparison. Numerical examples are provided to demonstrate the validity of the proposed algorithms.
Journal Article
Modified and accelerated relaxed gradient-based iterative algorithms for the complex conjugate and transpose matrix equations
2024
In this paper, by applying the updated technique to the relaxed gradient-based iterative algorithm proposed by Wang et al. (J. Appl. Math. Comput. 67, 317–341, 2021), we develop the modified relaxed gradient-based iterative algorithm for the complex conjugate and transpose Sylvester matrix equations. Compared with the relaxed gradient-based iterative algorithm, the modified algorithm makes full use of the latest information to compute the next iterate and leads to a faster convergence rate. Furthermore, in order to reduce the computational cost of each iteration of the relaxed gradient-based iterative algorithm, by replacing the coefficient matrices with their diagonal parts, we construct the accelerated relaxed gradient-based iterative algorithm. By utilizing the properties of the real representation of a complex matrix, matrix norms, and techniques of inequalities, we prove that the two proposed iterative algorithms are convergent under proper restrictions. The quasi-optimal convergence factor of the accelerated relaxed gradient-based iterative algorithm is also derived. Some numerical examples are given to show the effectiveness and superiority of the proposed algorithms. Lastly, the application of the proposed algorithms to time-varying linear systems is presented.
Journal Article
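The basic gradient-based iterative (GI) template underlying this family of methods can be illustrated on a much simpler relative of the paper's equations: the ordinary real Sylvester equation AX + XB = C. The matrices, step size, and iteration count below are made up; the update is plain steepest descent on the squared Frobenius residual, without the relaxation or acceleration refinements of the paper.

```python
import numpy as np

# Made-up real Sylvester problem A X + X B = C with a known solution.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[1.0, 0.0], [1.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
C = A @ X_true + X_true @ B

X = np.zeros_like(C)
mu = 0.01                                  # step size, small enough to converge
for _ in range(20000):
    R = C - A @ X - X @ B                  # current residual
    # Gradient of 0.5 * ||A X + X B - C||_F^2 is -(A^T R + R B^T).
    X = X + mu * (A.T @ R + R @ B.T)
```

The iteration converges whenever the step size is below 2 divided by the largest squared singular value of the Sylvester operator; the relaxed and accelerated variants in the paper are designed to improve exactly this convergence factor.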
A new directional stability transformation method of chaos control for first order reliability analysis
by Yang, Dixiong; Meng, Zeng; Li, Gang
in Algorithms; Bifurcations; Computational Mathematics and Numerical Analysis
2017
The HL-RF iterative algorithm of the first order reliability method (FORM) is widely applied to evaluate the reliability index in structural reliability analysis and reliability-based design optimization. However, it sometimes suffers from non-convergence problems, such as bifurcation, periodic oscillation, and chaos, for nonlinear limit state functions. This paper derives the formulation of the Lyapunov exponents for the HL-RF iterative algorithm in order to identify these complicated numerical instability phenomena of discrete chaotic dynamic systems. Moreover, the essential cause of the low efficiency of the stability transformation method (STM) for convergence control of FORM is revealed. Then, a novel method, the directional stability transformation method (DSTM), is proposed as a chaos feedback control approach to reduce the number of function evaluations of the original STM. The efficiency and convergence of different reliability evaluation methods, including the HL-RF algorithm, STM, and DSTM, are analyzed and compared through several numerical examples. The results indicate that the proposed DSTM is versatile, efficient, and robust, and that the bifurcation, periodic oscillation, and chaos of FORM are controlled effectively.
Journal Article
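For reference, the classical HL-RF iteration discussed above is only a few lines in standard normal space. The limit state function below is a made-up mildly nonlinear example whose design point is (0, 3), giving a reliability index beta = 3; it is well inside HL-RF's convergence region, so none of the paper's stabilization machinery is needed here.

```python
import numpy as np

# Hypothetical limit state in standard normal space: g(u) = 0.1*u1^2 - u2 + 3.
def g(u):
    return 0.1 * u[0]**2 - u[1] + 3.0

def grad_g(u):
    return np.array([0.2 * u[0], -1.0])

u = np.array([1.0, 1.0])                   # arbitrary starting point
for _ in range(100):
    a = grad_g(u)
    # HL-RF update: project onto the linearized limit state surface.
    u = ((a @ u - g(u)) / (a @ a)) * a

beta = np.linalg.norm(u)                   # reliability index at convergence
```

For strongly nonlinear limit states this same map can oscillate or behave chaotically, which is precisely the failure mode the paper's Lyapunov-exponent analysis and DSTM control address.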
A shift-splitting hierarchical identification iterative algorithm for solving the matrix equation AX−X‾B=C
2025
Based on the Hermitian and skew-Hermitian splitting of the coefficient matrices, we present a shift-splitting hierarchical identification (SSHI) iterative algorithm for solving the matrix equation AX−X‾B=C. For any initial value, the suggested method converges to the exact solution under certain conditions. Three numerical examples are presented to demonstrate the effectiveness of the SSHI iterative method and to compare it with the Jacobi-gradient iterative (JGI) algorithm (Bayoumi in Appl. Math. Inf. Sci. (2021)) and the gradient iterative (GI) algorithm.
Journal Article
Finite Element Analysis for the Stationary Navier–Stokes Equations with Mixed Boundary Conditions
2026
This paper studies the stationary incompressible Navier-Stokes equations with mixed boundary conditions using a velocity-pressure finite element formulation. We first establish a variational framework and prove existence of solutions under suitable regularity assumptions, followed by a Galerkin discretization with error estimates. Three iterative algorithms (the Stokes, Newton, and Oseen schemes) are then analyzed, with stability conditions and error bounds derived for each. Numerical experiments confirm the theoretical results: all methods achieve second-order convergence for velocity and pressure. Among the three schemes, the Newton iteration is the most efficient in terms of computational time, while the Oseen iteration exhibits the strongest robustness with respect to decreasing viscosity coefficients.
Journal Article
Modified Iteration Algorithm for Solving Absolute Value Equations
2025
A new iterative algorithm is proposed in this study for the efficient solution of the absolute value equation (AVE). Through an equivalent transformation, the AVE is restructured into a two-by-two block nonlinear equation, which leads to the development of the new iteration method. The convergence characteristics and optimal parameters of this approach are analyzed, and new convergence conditions, differing from previous findings, are introduced. Numerical experiments demonstrate that the proposed strategy is both feasible and effective.
Journal Article
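A minimal fixed-point sketch for the absolute value equation Ax − |x| = b referenced above (not the paper's block reformulation): when the smallest singular value of A exceeds 1, the Picard iteration x ← A⁻¹(|x| + b) is a contraction and converges from any starting point. The matrix and right-hand side below are made up.

```python
import numpy as np

# Made-up AVE instance with a known solution; the smallest singular value of
# A is about 2.83 > 1, so the simple fixed-point iteration is a contraction.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true - np.abs(x_true)

x = np.zeros(2)
for _ in range(80):
    x = np.linalg.solve(A, np.abs(x) + b)  # Picard step x <- A^{-1}(|x| + b)
```

The contraction factor is 1 over the smallest singular value of A, so convergence degrades as that value approaches 1; methods like the one in the abstract aim to do better in such harder regimes.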
New algorithms for trace-ratio problem with application to high-dimension and large-sample data dimensionality reduction
2024
Learning large-scale data sets with high dimensionality is a central concern in research areas including machine learning, visual recognition, and information retrieval. In many practical applications, such as image, video, audio, and text processing, we have to deal with high-dimension and large-sample data problems. The trace-ratio problem is a key problem in feature extraction and dimensionality reduction for circumventing the high-dimensional space. However, it has long been believed that this problem has no closed-form solution, and one has to solve it by using time-consuming inner-outer iterative algorithms. Therefore, efficient algorithms for high-dimension and large-sample trace-ratio problems are still lacking, especially for dense data. In this work, we present a closed-form solution for the trace-ratio problem and propose two algorithms to solve it. Based on the formula and the randomized singular value decomposition, we first propose a randomized algorithm for solving high-dimension and large-sample dense trace-ratio problems. For high-dimension and large-sample sparse trace-ratio problems, we then propose an algorithm based on the closed-form solution and the solution of some consistent under-determined linear systems. Theoretical results are established to show the rationality and efficiency of the proposed methods. Numerical experiments are performed on real-world data sets and illustrate the superiority of the proposed algorithms over many state-of-the-art algorithms for high-dimension and large-sample dimensionality reduction problems.
Journal Article
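The inner-outer iteration that the abstract contrasts its closed-form approach with can be sketched as a self-consistent eigenvalue loop: maximize trace(VᵀAV)/trace(VᵀBV) over orthonormal V by repeatedly taking V as the top eigenvectors of A − ρB and updating ρ to the resulting trace ratio. The matrices below are made-up small symmetric examples (B positive definite), so the cost of the repeated eigendecompositions, the paper's actual concern at scale, is invisible here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 2
M = rng.standard_normal((n, n))
A = M + M.T                                # made-up symmetric matrix
L = rng.standard_normal((n, n))
B = L @ L.T + n * np.eye(n)                # made-up symmetric positive definite

rho = 0.0
for _ in range(100):
    vals, vecs = np.linalg.eigh(A - rho * B)
    V = vecs[:, -p:]                       # eigenvectors of the p largest eigenvalues
    rho_new = float(np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V))
    done = abs(rho_new - rho) < 1e-12
    rho = rho_new
    if done:
        break
```

At the optimum the sum of the p largest eigenvalues of A − ρB is zero, which gives a convenient convergence check.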
The Augmented Lagrangian Method as a Framework for Stabilised Methods in Computational Mechanics
by Hansbo, Peter; Burman, Erik; Larson, Mats G.
in Approximation; Augmented lagrange multiplier methods; Augmented Lagrangian methods
2023
In this paper we present a review of recent advances in the application of the augmented Lagrange multiplier method as a general approach for generating multiplier-free stabilised methods. The augmented Lagrangian method consists of a standard Lagrange multiplier method augmented by a penalty term penalising the constraint equations, and is well known as the basis for iterative algorithms for constrained optimisation problems. Its use as a stabilisation method in computational mechanics has, however, only recently been appreciated. We first show how the method generates Galerkin/Least Squares type schemes for equality constraints and then how it can be extended to develop new stabilised methods for inequality constraints. Applications to several different problems in computational mechanics are given.
Journal Article
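The constrained-optimisation iteration that the abstract describes as the method's classical role can be sketched in a few lines. The problem below is made up: minimise (x−2)² + (y−1)² subject to x + y = 1, whose exact solution is (1, 0) with multiplier 2. The inner minimisation uses plain gradient descent on the augmented Lagrangian; the outer loop updates the multiplier.

```python
import numpy as np

def grad_L(v, lam, rho):
    # Gradient of the augmented Lagrangian
    # L(v) = (x-2)^2 + (y-1)^2 + lam*c + (rho/2)*c^2,  c = x + y - 1.
    x, y = v
    c = x + y - 1.0
    return np.array([2 * (x - 2) + lam + rho * c,
                     2 * (y - 1) + lam + rho * c])

v, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):                        # outer loop: multiplier updates
    for _ in range(500):                   # inner loop: minimise L for fixed lam
        v = v - 0.05 * grad_L(v, lam, rho)
    lam = lam + rho * (v[0] + v[1] - 1.0)  # multiplier update lam <- lam + rho*c
```

Unlike a pure penalty method, the multiplier update lets the constraint be satisfied exactly without driving rho to infinity, which is what makes the augmented Lagrangian attractive as a stabilisation framework.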