Catalogue Search | MBRL
Explore the vast range of titles available.
253 result(s) for "QR decomposition"
Accelerating Neural Network Training with FSGQR: A Scalable and High-Performance Alternative to Adam
by Bilski, Jarosław; Xiao, Min; Kowalczyk, Bartosz
in Accuracy; Algorithms; Artificial neural networks
2025
This paper introduces a significant advancement in neural network training algorithms through the development of a Fast Scaled Givens rotations in QR decomposition (FSGQR) method based on the recursive least squares (RLS) method. The algorithm is an optimized variant of existing rotation-based training approaches, distinguished by its complete elimination of scale factors from the calculations while maintaining mathematical precision. Through extensive experimentation across multiple benchmarks, including complex tasks such as MNIST digit recognition and concrete strength prediction, FSGQR demonstrates superior performance compared to the widely used ADAM optimizer and other conventional training methods. The algorithm achieves faster convergence with fewer training epochs while maintaining or improving accuracy. In some tasks, FSGQR completed training in up to five times fewer epochs than the ADAM algorithm, while achieving higher recognition accuracy on the MNIST training set. The paper provides comprehensive mathematical foundations for the optimization, detailed implementation guidelines, and extensive empirical validation across various neural network architectures. The results demonstrate that FSGQR offers a compelling alternative to current deep learning optimization methods, particularly for applications requiring rapid training convergence without sacrificing accuracy. The algorithm's effectiveness is particularly noteworthy in feedforward neural networks with differentiable activation functions, making it a valuable tool for modern machine learning applications.
Journal Article
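The abstract above builds on Givens rotations inside a QR decomposition. As a minimal NumPy sketch of that building block only (not the authors' FSGQR algorithm, whose scaled rotations and RLS integration are specific to the paper), a plain Givens-rotation QR might look like:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via Givens rotations: zero out each subdiagonal
    entry with a 2x2 plane rotation applied to a pair of rows."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):       # zero R[i, j] from the bottom up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])  # rotation in the (i-1, i) plane
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T  # accumulate Q = G1^T ... Gk^T
    return Q, R

A = np.random.default_rng(0).normal(size=(5, 3))
Q, R = givens_qr(A)
```

Each rotation touches only two rows, which is what makes rotation-based schemes attractive for incremental (RLS-style) updates.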
Rank‐revealing QR decomposition applied to damage localization in truss structures
by Hołobut, Paweł; Błachowski, Bartłomiej; Zhong, Yue
in Damage; Damage detection; Damage localization
2017
The purpose of this work is the development of an efficient and highly sensitive damage localization technique for truss structures, based on the rank-revealing QR decomposition (RRQR) of the difference-of-flexibility matrix. The method is an enhancement of existing damage detection techniques that rely on sets of so-called damage locating vectors (DLVs). The advantages of the RRQR-based DLV (RRQR-DLV) method are its lower computational effort and high sensitivity to damage. Compared with the frequently used stochastic DLV (SDLV) method, RRQR-DLV offers higher sensitivity to damage, which has been validated by the presented numerical simulation. The effectiveness of the proposed RRQR-DLV method is also illustrated with an experimental validation based on a laboratory-scale Bailey truss bridge model. The proposed method works under ambient excitation, such as traffic and wind excitation; it is therefore promising for real-time damage monitoring of truss structures. Copyright © 2016 John Wiley & Sons, Ltd.
Journal Article
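The rank-revealing QR decomposition at the core of the abstract above can be sketched with a column-pivoted QR: the decaying diagonal of R exposes the numerical rank, and the rank drop is what DLV-type methods exploit. This is a generic illustration on a synthetic rank-deficient stand-in for the difference-of-flexibility matrix, not the authors' implementation:

```python
import numpy as np

def pivoted_qr(A):
    """Column-pivoted QR via modified Gram-Schmidt: at each step, move
    the remaining column of largest norm to the front, so the diagonal
    of R decays and reveals the numerical rank."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    piv = np.arange(n)
    for k in range(n):
        j = k + int(np.argmax(np.linalg.norm(A[:, k:], axis=0)))
        A[:, [k, j]] = A[:, [j, k]]   # swap working columns,
        R[:, [k, j]] = R[:, [j, k]]   # the computed part of R,
        piv[[k, j]] = piv[[j, k]]     # and the permutation record
        R[k, k] = np.linalg.norm(A[:, k])
        if R[k, k] > 0:
            Q[:, k] = A[:, k] / R[k, k]
            R[k, k + 1:] = Q[:, k] @ A[:, k + 1:]
            A[:, k + 1:] -= np.outer(Q[:, k], R[k, k + 1:])
    return Q, R, piv

# synthetic rank-2 stand-in for a difference-of-flexibility matrix
rng = np.random.default_rng(0)
DF = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 5))
Q, R, piv = pivoted_qr(DF)
diag = np.abs(np.diag(R))
rank = int(np.sum(diag > 1e-10 * diag[0]))
```

The trailing near-zero diagonal entries of R correspond to the (approximate) null-space directions from which damage locating vectors are drawn.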
Stable Evaluation of Gaussian Radial Basis Function Interpolants
2012
We provide a new way to compute and evaluate Gaussian radial basis function interpolants in a stable way with a special focus on small values of the shape parameter, i.e., for "flat" kernels. This work is motivated by the fundamental ideas proposed earlier by Bengt Fornberg and his coworkers. However, following Mercer's theorem, an L_2(\mathbb{R}^d, \rho)-orthonormal expansion of the Gaussian kernel allows us to come up with an algorithm that is simpler than the one proposed by Fornberg, Larsson, and Flyer and that is applicable in arbitrary space dimensions d. In addition to obtaining an accurate approximation of the radial basis function interpolant (using many terms in the series expansion of the kernel), we also propose and investigate a highly accurate least-squares approximation based on early truncation of the kernel expansion.
Journal Article
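A quick way to see why the "flat" (small shape parameter) limit demands a stable evaluation scheme is to watch the condition number of the direct Gaussian kernel matrix blow up as the shape parameter shrinks. This sketch shows only the failure mode of the direct approach, not the paper's Mercer-expansion algorithm:

```python
import numpy as np

def gaussian_kernel_matrix(x, eps):
    """Direct Gaussian RBF interpolation matrix K[i, j] = exp(-(eps*(x_i - x_j))^2)."""
    d = x[:, None] - x[None, :]
    return np.exp(-((eps * d) ** 2))

x = np.linspace(0.0, 1.0, 10)
# condition numbers for a shrinking shape parameter (flatter and flatter kernels)
conds = [np.linalg.cond(gaussian_kernel_matrix(x, eps)) for eps in (10.0, 3.0, 1.0)]
```

As eps decreases the interpolant itself stays well behaved, but the direct linear system becomes numerically unsolvable, which is exactly the regime the series-expansion approach targets.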
Efficient Implementations of the Generalized Lasso Dual Path Algorithm
2016
We consider efficient implementations of the generalized lasso dual path algorithm given by Tibshirani and Taylor in 2011. We first describe a generic approach that covers any penalty matrix D and any (full column rank) matrix X of predictor variables. We then describe fast implementations for the special cases of trend filtering problems, fused lasso problems, and sparse fused lasso problems, both with X = I and a general matrix X. These specialized implementations offer a considerable improvement over the generic implementation, both in terms of numerical stability and efficiency of the solution path computation. These algorithms are all available for use in the genlasso R package, which can be found in the CRAN repository.
Journal Article
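The path algorithm above works on the box-constrained dual of the generalized lasso. As a toy illustration of that dual formulation (a fixed-lambda projected-gradient solve in NumPy, not the path algorithm, with D assumed to be the 1D fused lasso first-difference matrix):

```python
import numpy as np

def fused_lasso_dual(y, lam, n_iter=5000):
    """Solve the 1D fused lasso via its box-constrained dual:
        min_u 0.5*||y - D^T u||^2  s.t.  ||u||_inf <= lam,
    where D is the first-difference matrix; the primal fit is
    beta = y - D^T u (projected gradient descent)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first differences
    u = np.zeros(n - 1)
    step = 0.25                      # 1/L with L = ||D D^T||_2 <= 4
    for _ in range(n_iter):
        grad = D @ (D.T @ u - y)
        u = np.clip(u - step * grad, -lam, lam)  # project onto the box
    return y - D.T @ u

y = np.r_[np.zeros(10), np.ones(10)] + 0.01 * np.sin(np.arange(20))
# lam far above lam_max: the box never binds and the fit fuses completely
beta = fused_lasso_dual(y, lam=1e6)
```

For lambda above the critical value, the dual box constraint is inactive and the fit collapses to the grand mean, the fully fused solution; the path algorithm traces how this solution changes as lambda decreases.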
Local Levenberg-Marquardt Algorithm for Learning Feedforward Neural Networks
by Bilski, Jarosław; Zurada, Jacek M.; Kowalczyk, Bartosz
in Algorithms; Complexity; feed-forward neural network
2020
This paper presents a local modification of the Levenberg-Marquardt (LM) algorithm. First, the mathematical basics of the classic LM method are shown. The classic LM algorithm is very efficient for learning small neural networks, but for bigger networks its computational complexity grows so significantly that the method becomes practically inefficient. In order to overcome this limitation, a local modification of the LM algorithm is introduced in this paper. The main goal is to develop a more computationally efficient variant of the LM method by using local computation. The introduced modification has been tested on function approximation and classification benchmarks, and the obtained results have been compared to the performance of the classic LM method. The paper shows that the local modification significantly improves the algorithm's performance for bigger networks. Several possible directions for future work are suggested.
Journal Article
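For reference, the classic (global) LM update that the paper modifies solves a damped Gauss-Newton system per step, p <- p + (J^T J + mu I)^{-1} J^T r, with the damping mu adapted to the outcome of each step. A minimal sketch on a made-up curve-fitting problem (generic LM, not the paper's local variant):

```python
import numpy as np

def levenberg_marquardt(f, jac, p0, y, n_iter=50, mu=1e-2):
    """Classic LM: damped Gauss-Newton steps with simple mu adaptation
    (halve on an accepted step, double on a rejected one)."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y - f(p)                         # current residual
        J = jac(p)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), J.T @ r)
        p_new = p + step
        if np.sum((y - f(p_new)) ** 2) < np.sum(r ** 2):
            p, mu = p_new, mu * 0.5          # accept, trust the quadratic model more
        else:
            mu *= 2.0                        # reject, fall back toward gradient descent
    return p

# fit y = a*exp(b*x) to noiseless synthetic data (targets a=2, b=-1, assumed here)
x = np.linspace(0.0, 2.0, 30)
y = 2.0 * np.exp(-1.0 * x)
f = lambda p: p[0] * np.exp(p[1] * x)
jac = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
p_hat = levenberg_marquardt(f, jac, [1.0, 0.0], y)
```

In neural network training the Jacobian J is taken over all weights, which is exactly the term whose size the paper's local modification attacks.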
An Efficient Randomized Algorithm for Computing the Approximate Tucker Decomposition
2021
By combining the thin QR decomposition and the subsampled randomized Fourier transform (SRFT), we obtain an efficient randomized algorithm for computing the approximate Tucker decomposition with a given target multilinear rank. We also combine this randomized algorithm with the power iteration technique to improve its efficiency. Using results on the singular values of the product of orthonormal matrices with the Kronecker product of SRFT matrices, we obtain error bounds for these two algorithms. Finally, the efficiency of the algorithms is illustrated by several numerical examples.
Journal Article
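The core of the abstract above, a thin QR applied to a randomized sketch of each mode unfolding, can be sketched as follows. A Gaussian test matrix stands in for the SRFT, the tensor is a synthetic 3-way example, and the power iteration refinement is omitted:

```python
import numpy as np

def randomized_range(A, k, oversample=5, seed=0):
    """Randomized range finder: sketch A with a Gaussian test matrix
    (a stand-in for the paper's SRFT) and orthonormalize via thin QR."""
    rng = np.random.default_rng(seed)
    Omega = rng.normal(size=(A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # thin QR of the sketched columns
    return Q[:, :k]

def randomized_tucker(T, ranks):
    """Approximate Tucker decomposition of a 3-way tensor: one randomized
    factor matrix per mode, then core G = T x_1 Q1^T x_2 Q2^T x_3 Q3^T."""
    Qs = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        Qs.append(randomized_range(unfold, r))
    G = np.einsum('abc,ai,bj,ck->ijk', T, *Qs)
    return G, Qs

# synthetic tensor with exact multilinear rank (2, 2, 2)
rng = np.random.default_rng(1)
core = rng.normal(size=(2, 2, 2))
T = np.einsum('ijk,ia,jb,kc->abc', core, rng.normal(size=(2, 6)),
              rng.normal(size=(2, 7)), rng.normal(size=(2, 8)))
G, Qs = randomized_tucker(T, (2, 2, 2))
T_hat = np.einsum('ijk,ai,bj,ck->abc', G, *Qs)
```

Because the test tensor has exact multilinear rank (2, 2, 2), the randomized factors capture each mode's range and the reconstruction is exact up to rounding.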
Estimation of a Partial Linear Model with Instrumental Variables for Longitudinal Data
by Liu, Ruiping; Sheng, Lili; Kang, Fangyuan
in Asymptotic methods; Longitudinal Data; Normality
2023
A partial linear model with instrumental variables was developed for longitudinal data. In the partially linear model, the explanatory variable is endogenous, i.e., correlated with the error term. The endogenous variable was expressed in terms of an instrumental variable and an error term, and was estimated from the instrumental variable by the least squares method. B-spline regression combined with QR decomposition was used to approximate the nonparametric function. For the estimation of the parametric components, the quadratic inference function and the secant method were applied. Under some conditions, the estimator is consistent and asymptotically normal. Simulations were conducted to examine the finite-sample behavior of the estimator.
Journal Article
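The QR-based least squares step used in the abstract above, solving min ||y - Xb|| through X = QR rather than the normal equations X^T X b = X^T y, might look like the sketch below; a truncated-power basis serves as a simple stand-in for an actual B-spline basis:

```python
import numpy as np

def qr_least_squares(X, y):
    """Solve min ||y - X b||_2 via thin QR: X = Q R, so b = R^{-1} Q^T y.
    Avoids squaring the condition number, unlike the normal equations."""
    Q, R = np.linalg.qr(X)
    return np.linalg.solve(R, Q.T @ y)

# quadratic spline-like design: truncated-power basis with a knot at 0.5
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x, x**2,
                     np.clip(x - 0.5, 0.0, None)**2])
b_true = np.array([1.0, -2.0, 3.0, 4.0])
y = X @ b_true
b_hat = qr_least_squares(X, y)
```

Since cond(X^T X) = cond(X)^2, the QR route matters precisely for the near-collinear bases that spline designs often produce.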
GPS Phase Integer Ambiguity Resolution Based on Eliminating Coordinate Parameters and Ant Colony Algorithm
2025
Correctly fixing the integer ambiguity of GNSS is the key to realizing high-precision GNSS positioning. When the float ambiguity solution is computed epoch by epoch from the double-difference model, common integer ambiguity resolution methods must also estimate the coordinate parameters from a limited number of GNSS phase observations. This increases the ill-posedness of the double-difference equations, so the success rate of fixing the integer ambiguity is low. Therefore, a new integer ambiguity resolution method based on eliminating coordinate parameters and the ant colony algorithm is proposed in this paper. The method eliminates the coordinate parameters from the observation equations using a QR decomposition transformation and estimates only the ambiguity parameters with a Kalman filter. Once the Kalman filter has obtained the float ambiguity solution, decorrelation is carried out based on continuous Cholesky decomposition, and the optimal integer ambiguity solution is searched for using the ant colony algorithm. Two sets of static and dynamic GPS experimental data are used to verify the method, and the results are compared with conventional least-squares and LAMBDA methods. The results show that the new method has a good decorrelation effect and can correctly and effectively resolve the integer ambiguity.
Journal Article
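The coordinate-elimination step described above has a standard linear-algebra skeleton: for observations y = A x + B z + e with nuisance (coordinate) parameters x and ambiguity parameters z, a full QR of A = [Q1 Q2][R; 0] gives Q2 with Q2^T A = 0, so premultiplying by Q2^T removes x from the model. This is a generic sketch with synthetic matrices, not the paper's GNSS observation model:

```python
import numpy as np

def eliminate_parameters(y, A, B):
    """Eliminate x from y = A x + B z + e: with the full QR A = [Q1 Q2][R; 0],
    Q2^T A = 0, so Q2^T y = (Q2^T B) z + Q2^T e involves only z."""
    Q, _ = np.linalg.qr(A, mode='complete')
    Q2 = Q[:, A.shape[1]:]           # orthonormal basis of the left null space of A
    return Q2.T @ y, Q2.T @ B

# synthetic stand-ins: 8 observations, 3 coordinate params, 2 ambiguities
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 3))
B = rng.normal(size=(8, 2))
x_true = rng.normal(size=3)
z_true = np.array([5.0, -3.0])
y = A @ x_true + B @ z_true          # noiseless for the illustration
y_red, B_red = eliminate_parameters(y, A, B)
z_hat = np.linalg.lstsq(B_red, y_red, rcond=None)[0]
```

The reduced system has only the ambiguity parameters left, which is what allows the paper to run the Kalman filter on the ambiguities alone.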