Catalogue Search | MBRL
Explore the vast range of titles available.
10,599 result(s) for "Numerical Algebra"
Robust optimisation algorithm for the measurement matrix in compressed sensing
by Liu, Jixin; Zhou, Ying; Sun, Quansen
in (B0260) Optimisation techniques; (B0290H) Linear algebra (numerical analysis); (B6135) Optical, image and video signal processing
2018
The measurement matrix, which plays an important role in compressed sensing, has received considerable attention. However, existing measurement matrices ignore the energy concentration characteristic of natural images in the sparse domain, which can help to improve both sensing efficiency and construction efficiency. Here, the authors propose a simple but efficient measurement matrix based on the Hadamard matrix, named the Hadamard-diagonal matrix (HDM). In HDM, energy conservation in the sparse domain is maximised. In addition, since reconstruction performance can be further improved by decreasing the mutual coherence of the measurement matrix, an effective optimisation strategy is adopted to reduce the mutual coherence for better reconstruction quality. The authors conduct several experiments to evaluate the performance of HDM and the effectiveness of the optimisation algorithm. The experimental results show that HDM performs better than other popular measurement matrices, and that the optimisation algorithm improves the performance not only of HDM but also of the other popular measurement matrices.
Journal Article
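The abstract does not give the HDM construction itself, but the general workflow it builds on — form a Hadamard-based sensing matrix, subsample it, and score it by mutual coherence — can be sketched as follows. This is a generic illustration, not the authors' HDM; the random row subsampling and the `mutual_coherence` helper are assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Sense a sparse signal with m randomly chosen rows of a Hadamard matrix.
n, m = 64, 16
rng = np.random.default_rng(0)
Phi = hadamard(n)[rng.choice(n, size=m, replace=False)] / np.sqrt(m)
x = np.zeros(n)
x[[3, 40]] = [1.0, -2.0]        # a 2-sparse signal in the canonical basis
y = Phi @ x                     # m << n compressed measurements
```

A lower mutual coherence of `Phi` generally means better sparse-reconstruction guarantees, which is why the paper optimises it.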
Optimization algorithms on matrix manifolds
2008
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
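As a toy instance of the steepest-descent-on-manifolds idea the book develops, here is a minimal sketch of Riemannian gradient descent on the unit sphere, minimizing the Rayleigh quotient x^T A x to find the eigenvector of the smallest eigenvalue. The step size and the retraction-by-normalization are illustrative choices, not the book's notation.

```python
import numpy as np

def sphere_gradient_descent(A, x0, steps=500, lr=0.1):
    """Riemannian steepest descent for f(x) = x^T A x on the unit sphere:
    project the Euclidean gradient onto the tangent space at x, take a
    step, then retract back to the sphere by renormalizing."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        egrad = 2.0 * A @ x                  # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x      # tangent-space projection
        x = x - lr * rgrad                   # descent step in the tangent space
        x = x / np.linalg.norm(x)            # retraction onto the sphere
    return x

A = np.diag([4.0, 2.0, 1.0])
x = sphere_gradient_descent(A, np.array([1.0, 1.0, 1.0]))
# x converges (up to sign) to the eigenvector of the smallest eigenvalue
```

The projection and retraction are the two manifold-specific ingredients; the rest is ordinary steepest descent, which is exactly the book's point.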
Tensor Decompositions and Applications
2009
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
Journal Article
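A CP decomposition expresses a tensor as a weighted sum of rank-one terms a_r ∘ b_r ∘ c_r; the following sketch assembles such a tensor with NumPy to make that structure concrete. The `cp_tensor` helper and the rank-2 example are illustrative, not taken from the survey.

```python
import numpy as np

def cp_tensor(factors, weights):
    """Assemble a third-order tensor as a weighted sum of rank-one terms
    a_r (outer) b_r (outer) c_r -- the form a CP decomposition recovers."""
    A, B, C = factors
    T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r, w in enumerate(weights):
        T += w * np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
    return T

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((4, 2))
T = cp_tensor((A, B, C), weights=[1.0, 0.5])   # a rank-2 tensor
# every mode-1 unfolding of T has rank at most 2
```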
Rayleigh Quotient Methods for Estimating Common Roots of Noisy Univariate Polynomials
2019
We consider the problem of approximately solving a system of univariate polynomials that share one or more common roots and whose coefficients are corrupted by noise. The goal is to estimate the underlying common roots from the noisy system; symbolic algebra methods are not suitable for this. New Rayleigh quotient methods are proposed and evaluated for estimating the common roots. Using tensor algebra, reasonable starting values for the Rayleigh quotient methods can be computed. The new methods are compared to Gauss–Newton, to solving an eigenvalue problem obtained from the generalized Sylvester matrix, and to finding a cluster among the roots of all polynomials. A simulation study shows that Gauss–Newton and a new Rayleigh quotient method perform best, with the latter more accurate when roots other than the true common roots lie close together.
Journal Article
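The paper's Rayleigh quotient methods are tailored to noisy polynomial systems, but the classic Rayleigh quotient iteration for a symmetric eigenproblem — the building block such methods adapt — can be sketched as follows. This is a textbook illustration, not the paper's algorithm.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, iters=10):
    """Classic Rayleigh quotient iteration for a symmetric matrix:
    alternate the Rayleigh quotient eigenvalue estimate with a shifted
    inverse-iteration solve; cubically convergent near an eigenpair."""
    x = x0 / np.linalg.norm(x0)
    mu = x @ A @ x                            # Rayleigh quotient
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break                             # shift hit an exact eigenvalue
        x = y / np.linalg.norm(y)
        mu = x @ A @ x
    return mu, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
mu, x = rayleigh_quotient_iteration(A, np.array([1.0, 0.0]))
# (mu, x) is an eigenpair of A to high accuracy
```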
Unified left eigenvector (ULEV) for blind source separation
by Danesh, Mohammad; Naghsh, Erfan; Beheshti, Soosan
in Correlation analysis; Datasets; Eigenvalues
2022
A joint analysis method is proposed for source separation from multiple datasets. In this method, the sources with the greatest impact on the multiple datasets are identified and then sequentially separated. The method utilizes the structure of the singular value decomposition through a novel approach that extracts only one unified left eigenvector. The Lagrangian multipliers are determined in two steps. In the first step, a projection procedure onto optimal subspaces provides dimension reduction through singular value decomposition. In the second step, the number of main sources is derived automatically by minimizing the mean square error between the desired noiseless eigenvalues and the estimated eigenvalues of the observations. The results show that the proposed unified left eigenvector (ULEV) method achieves the highest source-separation accuracy compared with some of the most popular approaches, including ICA, jICA, MCCA and jICA+MCCA.
Journal Article
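The ULEV algorithm itself is not reproduced in the abstract, but the underlying idea of extracting one dominant left singular direction shared by several stacked datasets can be illustrated generically. The column-stacking and the toy signal model below are assumptions, not the paper's method.

```python
import numpy as np

def dominant_left_vector(datasets):
    """Stack several datasets that share the same row space (e.g. the same
    channels) and return the dominant left singular vector of the joint
    matrix: one direction with the greatest impact across all datasets."""
    X = np.hstack(datasets)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, 0], s[0]

# Toy model: both datasets are driven by one strong source along e1.
rng = np.random.default_rng(2)
u_true = np.array([3.0, 0.0, 0.0, 0.0])
datasets = [np.outer(u_true, rng.standard_normal(50))
            + 0.01 * rng.standard_normal((4, 50)) for _ in range(2)]
u, s0 = dominant_left_vector(datasets)        # u is close to +/- e1
```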
THE SOLUTION PATH OF THE GENERALIZED LASSO
2011
We present a path algorithm for the generalized lasso problem. This problem penalizes the ℓ1 norm of a matrix D times the coefficient vector, and has a wide range of applications, dictated by the choice of D. Our algorithm is based on solving the dual of the generalized lasso, which greatly facilitates computation of the path. For D = I (the usual lasso), we draw a connection between our approach and the well-known LARS algorithm. For an arbitrary D, we derive an unbiased estimate of the degrees of freedom of the generalized lasso fit. This estimate turns out to be quite intuitive in many applications.
Journal Article
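For D = I the generalized lasso reduces to the usual lasso, which can be solved by proximal gradient descent (ISTA) with elementwise soft-thresholding. This is a standard method shown for context, not the paper's dual path algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, steps=2000):
    """Proximal gradient (ISTA) for the usual lasso:
    minimize 0.5 * ||y - X b||_2^2 + lam * ||b||_1."""
    L = np.linalg.norm(X, 2) ** 2             # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        b = soft_threshold(b + X.T @ (y - X @ b) / L, lam / L)
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
b_true = np.zeros(10)
b_true[[0, 4]] = [2.0, -1.5]                  # sparse ground truth
y = X @ b_true                                # noiseless observations
b = lasso_ista(X, y, lam=0.1)                 # recovers b_true up to small bias
```

The path algorithm of the paper instead tracks the full solution as lam varies; ISTA solves the problem at one fixed lam.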
Normalized stochastic gradient descent learning of general complex‐valued models
2021
The stochastic gradient descent (SGD) method is one of the most prominent first‐order iterative optimisation algorithms, enabling linear adaptive filters as well as general nonlinear learning schemes. It is applicable to a wide range of objective functions, while featuring low computational costs for online operation. However, without a suitable step‐size normalisation, the convergence and tracking behaviour of the stochastic gradient descent method might be degraded in practical applications. In this letter, a novel general normalisation approach is provided for the learning of (non‐)holomorphic models with multiple independent parameter sets. The advantages of the proposed method are demonstrated by means of a specific widely‐linear estimation example.
Journal Article
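A concrete instance of step-size normalisation for complex-valued adaptive filtering is the classical normalized LMS (NLMS) filter, sketched below for noiseless system identification. The signal model and parameters are illustrative; this is the textbook NLMS, not the letter's proposed scheme for (non-)holomorphic models.

```python
import numpy as np

def complex_nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS for a complex-valued FIR filter: each step is scaled
    by the instantaneous input energy, a simple instance of step-size
    normalisation."""
    w = np.zeros(order, dtype=complex)
    errors = []
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor, most recent sample first
        e = d[n] - np.vdot(w, u)              # a-priori error (output is w^H u)
        w = w + mu * np.conj(e) * u / (np.vdot(u, u).real + eps)
        errors.append(abs(e))
    return w, np.array(errors)

# Identify an unknown complex FIR channel h from noiseless input/output data.
rng = np.random.default_rng(4)
h = np.array([0.5 + 0.5j, -0.2j, 0.1 + 0.0j, 0.3j])
x = rng.standard_normal(500) + 1j * rng.standard_normal(500)
d = np.convolve(x, h)[:len(x)]                # desired signal: x filtered by h
w, errs = complex_nlms(x, d)                  # converged w approximates conj(h)
```

Dividing by the regressor energy makes the effective step size scale-invariant, which is exactly the convergence/tracking issue the letter's normalisation addresses in a more general setting.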
Multi‐scale audio super resolution via deep pyramid wavelet convolutional neural network
by Luo, Dongqi; Si, Binqiang; Zhu, Jihong
in Approximation; Artificial neural networks; Communication
2021
In this letter, a pyramid wavelet convolutional neural network for audio super resolution is presented. Since the audio signal is non‐stationary, previous convolutional neural network based approaches may fail to capture the details: these methods usually focus on the global approximation error and thus produce over-smoothed results. To cope with this issue, it is suggested to predict the wavelet coefficients of the audio signal and to reconstruct the signal from these coefficients stage by stage. The prediction errors of the wavelet coefficients are included in the loss function to force the model to capture the detail components. Experimental results show that the approach, trained on the public VCTK dataset, achieves more appealing results than state‐of‐the‐art methods.
Journal Article
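The letter predicts wavelet coefficients band by band; the mechanics of a one-level wavelet analysis/synthesis pair can be shown with the Haar wavelet. The Haar choice is illustrative — the letter's wavelet and network are not specified in the abstract.

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar wavelet transform: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    """Perfect reconstruction of the signal from its two coefficient bands."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

sig = np.array([4.0, 2.0, 5.0, 5.0])
a, d = haar_analysis(sig)           # smooth trend and fine detail
rec = haar_synthesis(a, d)          # equals sig exactly
```

Penalising errors in `d` as well as in `a`, as the letter does for its wavelet bands, keeps the model from reconstructing only the smooth trend.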