Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
1,139 result(s) for "Acceleration of convergence"
Convergence Acceleration Algorithm via an Equation Related to the Lattice Boussinesq Equation
by He, Yi; Sun, Jian-Qing; Hu, Xing-Biao
in Acceleration; Acceleration of convergence; Algorithms
2011
The molecule solution of an equation related to the lattice Boussinesq equation is derived with the help of determinantal identities. It is shown that, for certain sequences, this equation can be used as a numerical convergence acceleration algorithm. Numerical examples with applications of this algorithm are presented.
Journal Article
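The accelerator in the entry above is derived from the lattice Boussinesq equation and is not reproduced here. As generic context for what a sequence-transformation accelerator does, the following is a minimal sketch of the classical Aitken Δ² process applied to the partial sums of a slowly convergent series; it is an illustration of the idea of convergence acceleration, not the paper's algorithm.

```python
import numpy as np

def aitken_delta2(s):
    """Aitken's Delta^2 transformation of a sequence of iterates/partial sums.

    Classical accelerator shown only for context; the paper derives a different
    accelerator from an equation related to the lattice Boussinesq equation.
    """
    s = np.asarray(s, dtype=float)
    d1 = s[1:-1] - s[:-2]                  # first differences s_{n+1} - s_n
    d2 = s[2:] - 2 * s[1:-1] + s[:-2]      # second differences
    return s[:-2] - d1 ** 2 / d2           # accelerated sequence t_n

# Example: partial sums of the slowly convergent series log(2) = sum (-1)^{k+1}/k.
partial = np.cumsum([(-1) ** (k + 1) / k for k in range(1, 20)])
print(partial[-1], aitken_delta2(partial)[-1], np.log(2))
```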
A Constrained ℓ1 Minimization Approach to Sparse Precision Matrix Estimation
by Cai, Tony; Liu, Weidong; Luo, Xi
in Acceleration of convergence; Analytical estimating; Applications
2011
This article proposes a constrained ℓ1 minimization method for estimating a sparse inverse covariance matrix based on a sample of n iid p-variate random variables. The resulting estimator is shown to have a number of desirable properties. In particular, the rate of convergence between the estimator and the true s-sparse precision matrix under the spectral norm is ... when the population distribution has either exponential-type tails or polynomial-type tails. We present convergence rates under the elementwise ℓ∞ norm and Frobenius norm. In addition, we consider graphical model selection. The procedure is easily implemented by linear programming. Numerical performance of the estimator is investigated using both simulated and real data. In particular, the procedure is applied to analyze a breast cancer dataset and is found to perform favorably compared with existing methods.
Journal Article
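The constrained ℓ1 program described in the entry above (minimize the ℓ1 norm of each column of the precision-matrix estimate subject to an ℓ∞ constraint on S·b − e_j) can be written down directly with a generic convex solver. A minimal sketch assuming cvxpy is available; the tuning parameter lam and the final symmetrization are illustrative assumptions, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def constrained_l1_column(S, j, lam):
    """Column j of the precision-matrix estimate via constrained l1 minimization:
    minimize ||b||_1  subject to  ||S @ b - e_j||_inf <= lam,
    where S is the sample covariance and lam is a tuning parameter.
    """
    p = S.shape[0]
    e = np.zeros(p)
    e[j] = 1.0
    b = cp.Variable(p)
    cp.Problem(cp.Minimize(cp.norm1(b)),
               [cp.norm_inf(S @ b - e) <= lam]).solve()
    return b.value

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))              # n = 200 samples, p = 10 variables
S = np.cov(X, rowvar=False)
Omega = np.column_stack([constrained_l1_column(S, j, lam=0.1) for j in range(10)])
Omega = (Omega + Omega.T) / 2                    # simple symmetrization for illustration;
                                                 # the paper uses its own symmetrization step
```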
Adaptive Thresholding for Sparse Covariance Matrix Estimation
by Cai, Tony; Liu, Weidong
in Acceleration of convergence; Analysis of covariance; Analytical estimating
2011
In this article we consider estimation of sparse covariance matrices and propose a thresholding procedure that is adaptive to the variability of individual entries. The estimators are fully data-driven and demonstrate excellent performance both theoretically and numerically. It is shown that the estimators adaptively achieve the optimal rate of convergence over a large class of sparse covariance matrices under the spectral norm. In contrast, the commonly used universal thresholding estimators are shown to be suboptimal over the same parameter spaces. Support recovery is discussed as well. The adaptive thresholding estimators are easy to implement. The numerical performance of the estimators is studied using both simulated and real data. Simulation results demonstrate that the adaptive thresholding estimators uniformly outperform the universal thresholding estimators. The method is also illustrated in an analysis on a dataset from a small round blue-cell tumor microarray experiment. A supplement to this article presenting additional technical proofs is available online.
Journal Article
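A minimal sketch of entry-adaptive thresholding in the spirit described above: each entry of the sample covariance receives its own threshold, scaled by an estimate of that entry's variability. The threshold constant delta and the use of hard thresholding are illustrative assumptions rather than the paper's exact tuning.

```python
import numpy as np

def adaptive_threshold_cov(X, delta=2.0):
    """Entry-adaptive hard thresholding of the sample covariance matrix.

    Entry sigma_ij is kept only if it exceeds a threshold proportional to
    sqrt(theta_ij * log p / n), where theta_ij estimates the variance of that
    sample-covariance entry.  Constants and tuning are assumptions.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                               # sample covariance
    # theta_ij = mean_k [ (X_ki - mean_i)(X_kj - mean_j) - sigma_ij ]^2
    prods = Xc[:, :, None] * Xc[:, None, :]         # shape (n, p, p)
    theta = ((prods - S) ** 2).mean(axis=0)
    lam = delta * np.sqrt(theta * np.log(p) / n)    # entrywise thresholds
    S_hat = S * (np.abs(S) >= lam)
    np.fill_diagonal(S_hat, np.diag(S))             # leave the diagonal untouched
    return S_hat
```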
NUCLEAR-NORM PENALIZATION AND OPTIMAL RATES FOR NOISY LOW-RANK MATRIX COMPLETION
2011
This paper deals with the trace regression model where n entries or linear combinations of entries of an unknown m₁ × m₂ matrix A₀ corrupted by noise are observed. We propose a new nuclear-norm penalized estimator of A₀ and establish a general sharp oracle inequality for this estimator for arbitrary values of n, m₁, m₂ under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form, and we prove that it satisfies oracle inequalities with faster rates of convergence than in previous works. They are valid, in particular, in the high-dimensional setting m₁ m₂ ≫ n. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix A₀, a nonminimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of A₀ with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by A₀, and the aim is to find the best trace regression model approximating the data. As a by-product, we show that, under the restricted eigenvalue condition, the usual vector Lasso estimator satisfies a sharp oracle inequality (i.e., an oracle inequality with leading constant 1).
Journal Article
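The "simple explicit form" that nuclear-norm penalization takes in the matrix completion setting amounts to soft-thresholding singular values. A minimal generic sketch: the sampling-rate rescaling and threshold level below are rough illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def svd_soft_threshold(Y, lam):
    """Soft-threshold the singular values of Y by lam.

    This is the proximal operator of the nuclear norm; with an orthogonal-like
    design, nuclear-norm penalized least squares reduces to this form.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

# Toy matrix-completion-style use: zero-fill unobserved entries, rescale, shrink.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))   # rank-2 truth
mask = rng.random(A0.shape) < 0.3                                  # ~30% observed
Y = np.where(mask, A0 + 0.1 * rng.standard_normal(A0.shape), 0.0)
A_hat = svd_soft_threshold(Y / 0.3, lam=1.0)   # heuristic rescaling by sampling rate
```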
OPTIMAL RATES OF CONVERGENCE FOR COVARIANCE MATRIX ESTIMATION
2010
The covariance matrix plays a central role in multivariate statistical analysis. Significant advances have been made recently on developing both theory and methodology for estimating large covariance matrices. However, a minimax theory has yet to be developed. In this paper we establish the optimal rates of convergence for estimating the covariance matrix under both the operator norm and the Frobenius norm. It is shown that optimal procedures under the two norms are different and consequently matrix estimation under the operator norm is fundamentally different from vector estimation. The minimax upper bound is obtained by constructing a special class of tapering estimators and by studying their risk properties. A key step in obtaining the optimal rate of convergence is the derivation of the minimax lower bound. The technical analysis requires new ideas that are quite different from those used in the more conventional function/sequence estimation problems.
Journal Article
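As an illustration of the tapering construction mentioned above, the sketch below downweights sample-covariance entries according to their distance from the diagonal: full weight within half the bandwidth, linear decay to zero at the bandwidth. The weight profile and bandwidth choice are assumptions, not the paper's optimal construction.

```python
import numpy as np

def tapering_estimator(X, k):
    """Tapered sample covariance: entry (i, j) is multiplied by a weight that is
    1 for |i - j| <= k/2, decays linearly to 0 at |i - j| = k, and is 0 beyond.
    The bandwidth k is a tuning parameter (an assumption here).
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    dist = np.abs(np.arange(p)[:, None] - np.arange(p)[None, :])
    w = np.clip((k - dist) / (k / 2.0), 0.0, 1.0)   # 1 inside k/2, then linear decay
    return S * w
```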
SPARSISTENCY AND RATES OF CONVERGENCE IN LARGE COVARIANCE MATRIX ESTIMATION
2009
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori in the covariance matrix, its inverse or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order $(s_n \log p_n/n)^{1/2}$, where $s_n$ is the number of nonzero elements, $p_n$ is the size of the covariance matrix and $n$ is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter $\lambda_n$ goes to 0 have been made explicit and compared under different penalties. As a result, for the $L_1$-penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: $s_n'=O(p_n)$ at most, among $O(p_n^2)$ parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where $s_n'$ is the number of nonzero off-diagonal elements. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
Journal Article
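For the $L_1$-penalty case discussed above (the SCAD and hard-thresholding penalties have no equally standard off-the-shelf implementation), the penalized-likelihood precision-matrix estimate can be computed with scikit-learn's graphical lasso. A minimal sketch; the regularization level alpha is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# L1-penalized Gaussian likelihood:
#   minimize_Omega  tr(S Omega) - log det Omega + alpha * ||Omega||_1
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 15))          # n = 200 samples, p = 15 variables
model = GraphicalLasso(alpha=0.1).fit(X)    # alpha is an illustrative choice
Omega_hat = model.precision_                # sparse precision-matrix estimate
Sigma_hat = model.covariance_
```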
On the Doubling Algorithm for a (Shifted) Nonsymmetric Algebraic Riccati Equation
by Iannazzo, Bruno; Guo, Chun-Hua; Meini, Beatrice
in Acceleration of convergence; Algebra; Algorithms
2007
Nonsymmetric algebraic Riccati equations for which the four coefficient matrices form an irreducible $M$-matrix $M$ are considered. The emphasis is on the case where $M$ is an irreducible singular $M$-matrix, which arises in the study of Markov models. The doubling algorithm is considered for finding the minimal nonnegative solution, the one of practical interest. The algorithm has been recently studied by others for the case where $M$ is a nonsingular $M$-matrix. A shift technique is proposed to transform the original Riccati equation into a new Riccati equation for which the four coefficient matrices form a nonsingular matrix. The convergence of the doubling algorithm is accelerated when it is applied to the shifted Riccati equation.
Journal Article
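The doubling iteration referred to above can be sketched as follows for the equation XCX − XD − AX + B = 0. This is a minimal sketch assuming one standard initialization from the literature, with the parameter gamma taken no smaller than the largest diagonal entry of A and D; the shift technique proposed in the paper is not applied, so it illustrates the unshifted algorithm only.

```python
import numpy as np

def doubling_nare(A, B, C, D, gamma, tol=1e-12, max_iter=50):
    """Doubling iteration for the nonsymmetric algebraic Riccati equation
    X C X - X D - A X + B = 0, returning an approximation to the minimal
    nonnegative solution.  Initialization assumed from a standard parametrization;
    the paper's shift technique is not included.
    """
    m, n = B.shape                       # A: m x m, D: n x n, B: m x n, C: n x m
    Im, In = np.eye(m), np.eye(n)
    Ag, Dg = A + gamma * Im, D + gamma * In
    W = Ag - B @ np.linalg.solve(Dg, C)             # W = A_g - B D_g^{-1} C
    V = Dg - C @ np.linalg.solve(Ag, B)             # V = D_g - C A_g^{-1} B
    E = In - 2 * gamma * np.linalg.inv(V)
    F = Im - 2 * gamma * np.linalg.inv(W)
    G = 2 * gamma * np.linalg.solve(Dg, C) @ np.linalg.inv(W)
    H = 2 * gamma * np.linalg.solve(W, B) @ np.linalg.inv(Dg)
    for _ in range(max_iter):
        M1 = np.linalg.inv(In - G @ H)
        M2 = np.linalg.inv(Im - H @ G)
        E_new = E @ M1 @ E
        F_new = F @ M2 @ F
        G_new = G + E @ M1 @ G @ F
        H_new = H + F @ M2 @ H @ E
        if np.linalg.norm(H_new - H, 1) <= tol * np.linalg.norm(H_new, 1):
            return H_new
        E, F, G, H = E_new, F_new, G_new, H_new
    return H
```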
UNIFORM CONVERGENCE RATES FOR NONPARAMETRIC REGRESSION AND PRINCIPAL COMPONENT ANALYSIS IN FUNCTIONAL/LONGITUDINAL DATA
2010
We consider nonparametric estimation of the mean and covariance functions for functional/longitudinal data. Strong uniform convergence rates are developed for estimators that are local-linear smoothers. Our results are obtained in a unified framework in which the number of observations within each curve/cluster can be of any rate relative to the sample size. We show that the convergence rates for the procedures depend on both the number of sample curves and the number of observations on each curve. For sparse functional data, these rates are equivalent to the optimal rates in nonparametric regression. For dense functional data, root-n rates of convergence can be achieved with proper choices of bandwidths. We further derive almost sure rates of convergence for principal component analysis using the estimated covariance function. The results are illustrated with simulation studies.
Journal Article
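As context for the local-linear smoothers analyzed above, here is a minimal one-dimensional sketch of local-linear mean estimation on pooled longitudinal observations. The Gaussian kernel and bandwidth are illustrative assumptions; the covariance-surface smoother in the paper is the two-dimensional analogue.

```python
import numpy as np

def local_linear(t, x, y, h):
    """Local-linear estimate of E[y | x] at points t with bandwidth h.

    At each t0, fit a weighted least-squares line to (x, y) with Gaussian
    kernel weights centered at t0 and return its value at t0 (the intercept).
    """
    t = np.atleast_1d(t).astype(float)
    fit = np.empty_like(t)
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((x - t0) / h) ** 2)        # kernel weights
        Xd = np.column_stack([np.ones_like(x), x - t0])
        WX = Xd * w[:, None]
        beta = np.linalg.solve(Xd.T @ WX, WX.T @ y)   # weighted least squares
        fit[i] = beta[0]
    return fit

# Pooled observations: concatenate (time, response) pairs across curves/clusters.
rng = np.random.default_rng(3)
times = rng.uniform(0, 1, 500)
resp = np.sin(2 * np.pi * times) + 0.3 * rng.standard_normal(500)
mu_hat = local_linear(np.linspace(0, 1, 50), times, resp, h=0.08)
```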
Convergence Rates for Greedy Algorithms in Reduced Basis Methods
by Cohen, Albert; Dahmen, Wolfgang; Wojtaszczyk, Przemyslaw
in Acceleration of convergence; Algorithms; Approximation
2011
The reduced basis method was introduced for the accurate online evaluation of solutions to a parameter-dependent family of elliptic PDEs. Abstractly, it can be viewed as determining a "good" n-dimensional space H^n to be used in approximating the elements of a compact set F in a Hilbert space H. One by now popular computational approach is to find H^n through a greedy strategy. It is natural to compare the approximation performance of the H^n generated by this strategy with that of the Kolmogorov widths d^n(F), since the latter gives the smallest error that can be achieved by subspaces of fixed dimension n. The first such comparisons show that the approximation error ... obtained by the greedy strategy satisfies ... In this paper, various improvements of this result will be given. The exact greedy algorithm is not always computationally feasible, and a commonly used computationally friendly variant can be formulated as a "weak greedy algorithm." The results of this paper are established for this version as well. (ProQuest: ... denotes formulae/symbols omitted.)
Journal Article
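A minimal finite-dimensional sketch of the (weak) greedy selection described above: at each step, pick a training element whose best-approximation error in the current subspace is within a factor gamma of the worst error, and add its residual direction to the basis. The training set and the Euclidean norm below are illustrative stand-ins for the compact set F and the Hilbert-space norm.

```python
import numpy as np

def weak_greedy(F, n_max, gamma=1.0):
    """Weak greedy selection of an n_max-dimensional subspace from the columns of F.

    At each step, accept any candidate whose projection error is at least
    gamma times the worst error (gamma = 1 recovers the exact greedy algorithm).
    """
    d, m = F.shape
    Q = np.zeros((d, 0))                       # orthonormal basis of current space
    chosen = []
    for _ in range(n_max):
        residuals = F - Q @ (Q.T @ F)          # projection errors of all candidates
        errs = np.linalg.norm(residuals, axis=0)
        worst = errs.max()
        if worst == 0:
            break
        j = int(np.argmax(errs >= gamma * worst))   # first candidate within factor gamma
        chosen.append(j)
        Q = np.column_stack([Q, residuals[:, j] / errs[j]])  # Gram-Schmidt step
    return Q, chosen
```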
Square-root lasso: pivotal recovery of sparse signals via conic programming
by Belloni, A.; Chernozhukov, V.; Wang, L.
in Acceleration of convergence; Algorithms; Applications
2011
We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than n, but only s regressors are significant. The method is a modification of the lasso, called the square-root lasso. The method is pivotal in that it neither relies on the knowledge of the standard deviation σ nor does it need to pre-estimate σ. Moreover, the method does not rely on normality or sub-Gaussianity of noise. It achieves near-oracle performance, attaining the convergence rate σ{(s/n) log p}^{1/2} in the prediction norm, and thus matching the performance of the lasso with known σ. These performance results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions. We formulate the square-root lasso as a solution to a convex conic programming problem, which allows us to implement the estimator using efficient algorithmic methods, such as interior-point and first-order methods.
Journal Article
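The square-root lasso objective, ||y − Xβ||₂/√n + λ||β||₁, is directly expressible to a conic-programming solver because the ℓ2 term is not squared. A minimal sketch with cvxpy; the value of λ below is an arbitrary illustrative choice, not the pivotal choice derived in the paper.

```python
import numpy as np
import cvxpy as cp

def sqrt_lasso(X, y, lam):
    """Square-root lasso:  minimize ||y - X b||_2 / sqrt(n) + lam * ||b||_1.

    The unsquared l2 term is what makes the tuning parameter pivotal with
    respect to the noise level sigma.  lam here is an arbitrary choice.
    """
    n, p = X.shape
    b = cp.Variable(p)
    obj = cp.norm(y - X @ b, 2) / np.sqrt(n) + lam * cp.norm1(b)
    cp.Problem(cp.Minimize(obj)).solve()
    return b.value

rng = np.random.default_rng(4)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                                   # s significant regressors
y = X @ beta + 0.5 * rng.standard_normal(n)
beta_hat = sqrt_lasso(X, y, lam=1.1 * np.sqrt(np.log(p) / n))   # illustrative lambda
```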