Catalogue Search | MBRL
19 result(s) for "consistent selection procedure"
Consistent Moment Selection Procedures for Generalized Method of Moments Estimation
This paper considers a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect. The paper introduces several procedures for consistently selecting the correct moment conditions. The procedures also can consistently determine whether there is a sufficient number of correct moment conditions to identify the unknown parameters of interest. The paper specifies moment selection criteria that are GMM analogues of the widely used BIC and AIC model selection criteria. (The latter is not consistent.) The paper also considers downward and upward testing procedures. All of the moment selection procedures discussed in this paper are based on the minimized values of the GMM criterion function for different vectors of moment conditions. The procedures are applicable in time-series and cross-sectional contexts. Application of the results of the paper to instrumental variables estimation problems yields consistent procedures for selecting instrumental variables.
Journal Article
SCAD-Penalized Regression in High-Dimensional Partially Linear Models
2009
We consider the problem of simultaneous variable selection and estimation in partially linear models with a divergent number of covariates in the linear part, under the assumption that the vector of regression coefficients is sparse. We apply the SCAD penalty to achieve sparsity in the linear part and use polynomial splines to estimate the nonparametric component. Under reasonable conditions, it is shown that consistency in terms of variable selection and estimation can be achieved simultaneously for the linear and nonparametric components. Furthermore, the SCAD-penalized estimators of the nonzero coefficients are shown to have the asymptotic oracle property, in the sense that they are asymptotically normal with the same means and covariances they would have if the zero coefficients were known in advance. The finite sample behavior of the SCAD-penalized estimators is evaluated with simulation and illustrated with a data set.
Journal Article
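The SCAD penalty used in the paper above has a simple closed form: linear near zero, a quadratic blend in the middle, and constant beyond a·λ, which is what keeps large coefficients unshrunk. A minimal sketch (a = 3.7 is the tuning constant conventionally recommended by Fan and Li; λ is the regularization parameter):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty, evaluated elementwise on |theta|.

    Linear (lam * |t|) for |t| <= lam, a quadratic blend on
    (lam, a*lam], and constant (a + 1) * lam**2 / 2 beyond a*lam.
    """
    t = np.abs(np.asarray(theta, dtype=float))
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    out = np.empty_like(t)
    out[small] = lam * t[small]
    out[mid] = -(t[mid] ** 2 - 2 * a * lam * t[mid] + lam ** 2) / (2 * (a - 1))
    out[~(small | mid)] = (a + 1) * lam ** 2 / 2
    return out

# With lam = 1: the penalty grows linearly to 1 at |t| = 1,
# then flattens out at (a + 1) / 2 = 2.35 for |t| >= 3.7.
print(scad_penalty([0.0, 0.5, 1.0, 10.0], lam=1.0))
```

The flat tail is what distinguishes SCAD from the lasso penalty λ|t|, whose shrinkage of large coefficients grows without bound.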
Consistency of Cross Validation for Comparing Regression Procedures
2007
Theoretical developments on cross validation (CV) have mainly focused on selecting one among a list of finite-dimensional models (e.g., subset or order selection in linear regression) or selecting a smoothing parameter (e.g., bandwidth for kernel smoothing). However, little is known about consistency of cross validation when applied to compare between parametric and nonparametric methods or within nonparametric methods. We show that under some conditions, with an appropriate choice of data splitting ratio, cross validation is consistent in the sense of selecting the better procedure with probability approaching 1. Our results reveal interesting behavior of cross validation. When comparing two models (procedures) converging at the same nonparametric rate, in contrast to the parametric case, it turns out that the proportion of data used for evaluation in CV does not need to be dominating in size. Furthermore, it can even be of a smaller order than the proportion for estimation while not affecting the consistency property.
Journal Article
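The data-splitting idea in the abstract above can be illustrated with a toy comparison between a parametric and a nonparametric fit. This is a hypothetical sketch, not the authors' construction: the simulated data, the Gaussian-kernel bandwidth h, and the evaluation fraction are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a linear truth, so the parametric fit should win.
n = 200
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)

def fit_linear(xt, yt):
    # Least-squares line; returns a prediction function.
    b, a = np.polyfit(xt, yt, 1)
    return lambda xe: b * xe + a

def fit_kernel(xt, yt, h=0.2):
    # Nadaraya-Watson kernel regression with a Gaussian kernel.
    def predict(xe):
        w = np.exp(-0.5 * ((xe[:, None] - xt[None, :]) / h) ** 2)
        return (w @ yt) / w.sum(axis=1)
    return predict

def cv_compare(x, y, eval_frac, n_splits=50):
    """Fraction of random splits on which the linear fit beats the
    kernel fit on held-out squared error.  eval_frac is the share of
    the data reserved for evaluation on each split."""
    n = len(x)
    n_eval = int(eval_frac * n)
    wins = 0
    for _ in range(n_splits):
        idx = rng.permutation(n)
        ev, tr = idx[:n_eval], idx[n_eval:]
        lin = fit_linear(x[tr], y[tr])
        ker = fit_kernel(x[tr], y[tr])
        err_lin = np.mean((y[ev] - lin(x[ev])) ** 2)
        err_ker = np.mean((y[ev] - ker(x[ev])) ** 2)
        wins += err_lin < err_ker
    return wins / n_splits

print(cv_compare(x, y, eval_frac=0.5))
```

Varying `eval_frac` shows the role of the splitting ratio the paper analyzes: consistency of the comparison hinges on how the evaluation share scales with n, not on a single fixed split.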
Variable Selection for Partially Linear Models With Measurement Errors
2009
This article focuses on variable selection for partially linear models when the covariates are measured with additive errors. We propose two classes of variable selection procedures, penalized least squares and penalized quantile regression, using the nonconvex penalized principle. The first procedure corrects the bias in the loss function caused by the measurement error by applying the so-called correction-for-attenuation approach, whereas the second procedure corrects the bias by using orthogonal regression. The sampling properties for the two procedures are investigated. The rate of convergence and the asymptotic normality of the resulting estimates are established. We further demonstrate that, with proper choices of the penalty functions and the regularization parameter, the resulting estimates perform asymptotically as well as an oracle procedure as proposed by Fan and Li. Choice of smoothing parameters is also discussed. Finite sample performance of the proposed variable selection procedures is assessed by Monte Carlo simulation studies. We further illustrate the proposed procedures by an application.
Journal Article
On the Asymptotic Validity of Fully Sequential Selection Procedures for Steady-State Simulation
by
Nelson, Barry L
,
Kim, Seong-Hee
in
Analysis
,
Applied sciences
,
Asymptotic efficiencies (Statistics)
2006
We present fully sequential procedures for steady-state simulation that are designed to select the best of a finite number of simulated systems when "best" is defined by the largest or smallest long-run average performance. We also provide a framework for establishing the asymptotic validity of such procedures and prove the validity of our procedures. An example based on the M/M/1 queue is given.
Journal Article
Variable Selection for Panel Count Data via Non-Concave Penalized Estimating Function
by
TONG, XINGWEI
,
SUN, LIUQUAN
,
SUN, JIANGUO
in
Coefficients
,
Consistent estimators
,
estimating function
2009
Variable selection is an important issue in all regression analyses, and in this paper we discuss it in the context of regression analysis of panel count data. Panel count data often occur in long-term studies that concern the occurrence rate of a recurrent event, and their analysis has recently attracted a great deal of attention. However, there does not seem to exist any established approach for variable selection with respect to panel count data. For this problem, we adopt the idea behind the non-concave penalized likelihood approach and develop a non-concave penalized estimating function approach. The proposed methodology selects variables and estimates regression coefficients simultaneously, and an algorithm is presented for this process. We show that the proposed procedure performs as well as the oracle procedure in that it yields estimates as if the correct submodel were known. Simulation studies are conducted to assess the performance of the proposed approach and suggest that it works well in practical situations. An illustrative example from a cancer study is provided.
Journal Article
Estimating Models with Sample Selection Bias: A Survey
1998
This paper surveys the available methods for estimating models with sample selection bias. I initially examine the fully parameterized model proposed by Heckman (1979) before investigating departures in two directions. First, I consider the relaxation of distributional assumptions. In doing so I present the available semi-parametric procedures. Second, I investigate the ability to tackle different selection rules generating the selection bias. Finally, I discuss how the estimation procedures applied in the cross-sectional case can be extended to panel data.
Journal Article
Bootstrap Model Selection
1996
In a regression problem, typically there are p explanatory variables possibly related to a response variable, and we wish to select a subset of the p explanatory variables to fit a model between these variables and the response. A bootstrap variable/model selection procedure is to select the subset of variables by minimizing bootstrap estimates of the prediction error, where the bootstrap estimates are constructed based on a data set of size n. Although the bootstrap estimates have good properties, this bootstrap selection procedure is inconsistent in the sense that the probability of selecting the optimal subset of variables does not converge to 1 as n → ∞. This inconsistency can be rectified by modifying the sampling method used in drawing bootstrap observations. For bootstrapping pairs (response, explanatory variable), it is found that instead of drawing n bootstrap observations (a customary bootstrap sampling plan), far fewer bootstrap observations should be sampled: The bootstrap selection procedure becomes consistent if we draw m bootstrap observations with m → ∞ and m/n → 0. For bootstrapping residuals, we modify the bootstrap sampling procedure by increasing the variability among the bootstrap observations. The consistency of the modified bootstrap selection procedures is established in various situations, including linear models, nonlinear models, generalized linear models, and autoregressive time series. The choice of the bootstrap sample size m and some computational issues are also discussed. Some empirical results are presented.
Journal Article
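The m-out-of-n fix described above can be sketched in a few lines. This is a simplified stand-in, not Shao's exact construction: the simulated data, the list of candidate subsets, and the error estimate (bootstrap OLS fits scored against the original sample) are all illustrative assumptions; the key ingredient is drawing m ≪ n bootstrap observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated linear data: only the first of three candidate predictors matters.
n = 300
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] + rng.normal(0, 1.0, n)

# Hypothetical candidate subsets of predictor indices.
SUBSETS = [(0,), (0, 1), (0, 1, 2), (1,), (1, 2)]

def prediction_error(subset, Xb, yb, X, y):
    # Fit OLS on the bootstrap draw, score it on the full sample.
    coef, *_ = np.linalg.lstsq(Xb[:, subset], yb, rcond=None)
    return np.mean((y - X[:, subset] @ coef) ** 2)

def bootstrap_select(X, y, m, n_boot=100):
    """Pick the subset minimizing the averaged m-out-of-n bootstrap
    estimate of prediction error (m observations per draw, not n)."""
    n = len(y)
    scores = np.zeros(len(SUBSETS))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)  # draw m, not n, observations
        for k, s in enumerate(SUBSETS):
            scores[k] += prediction_error(list(s), X[idx], y[idx], X, y)
    return SUBSETS[int(np.argmin(scores))]

# m of a smaller order than n, e.g. m ~ n**(2/3), per the m/n -> 0 condition.
print(bootstrap_select(X, y, m=int(n ** (2 / 3))))
```

With m = n, overfitting on the bootstrap draws makes larger subsets look too good too often; shrinking m inflates the variance penalty of unnecessary coefficients enough for the minimal correct subset to win.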
Two-stage model selection procedures in partially linear regression
by
Bunea, Florentina
,
Wegkamp, Marten H.
in
Adaptive minimax estimation
,
Cauchy Schwarz inequality
,
consistent covariate selection
2004
The authors propose a two-stage estimation procedure for the partially linear model Y = f0(T) + X′β0 + ε. They show how to consistently estimate the location of the nonzero components of β0. Their approach turns out to be compatible with minimax adaptive estimation of f0 over Besov balls in the case of penalized least squares. Their proofs are based on a new type of oracle inequality.
Journal Article
Nonparametric density estimation from data with a mixture of Berkson and classical errors
2007
The author considers density estimation from contaminated data where the measurement errors come from two very different sources. A first error, of Berkson type, is incurred before the experiment: the variable X of interest is unobservable and only a surrogate can be measured. A second error, of classical type, is incurred after the experiment: the surrogate can only be observed with measurement error. The author develops two nonparametric estimators of the density of X, valid whenever Berkson, classical or a mixture of both errors are present. Rates of convergence of the estimators are derived and a fully data-driven procedure is proposed. Finite sample performance is investigated via simulations and on a real data example.
Journal Article