23 result(s) for "finite-sample properties"
Distribution‐free prediction bands for non‐parametric regression
We study distribution‐free, non‐parametric prediction bands with a focus on their finite sample behaviour. First we investigate and develop different notions of finite sample coverage guarantees. Then we give a new prediction band by combining the idea of ‘conformal prediction’ with non‐parametric conditional density estimation. The proposed estimator, called COPS (conformal optimized prediction set), always has a finite sample guarantee. Under regularity conditions the estimator converges to an oracle band at a minimax optimal rate. A fast approximation algorithm and a data‐driven method for selecting the bandwidth are developed. The method is illustrated in simulated and real data examples.
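
The split-conformal construction the abstract builds on is short to state concretely. The sketch below is a minimal Python illustration of a generic split-conformal band around a kernel smoother; the Gaussian kernel, the bandwidth, and the simulated data are illustrative assumptions, and this is not the COPS estimator itself, which additionally uses conditional density estimation.

```python
import numpy as np

def split_conformal_band(x, y, x_new, alpha=0.1, h=0.3, seed=0):
    """Split-conformal prediction band around a simple kernel smoother (sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    train, calib = idx[: n // 2], idx[n // 2:]

    def smoother(x_fit, y_fit, x_eval):
        # Nadaraya-Watson estimate with a Gaussian kernel (illustrative choice).
        w = np.exp(-0.5 * ((x_eval[:, None] - x_fit[None, :]) / h) ** 2)
        return (w @ y_fit) / w.sum(axis=1)

    # Conformity scores: absolute residuals on the held-out calibration half.
    scores = np.abs(y[calib] - smoother(x[train], y[train], x[calib]))
    # Finite-sample quantile giving at least 1 - alpha marginal coverage.
    k = int(np.ceil((len(calib) + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, len(calib)) - 1]

    mu = smoother(x[train], y[train], x_new)
    return mu - q, mu + q

# Toy usage on simulated data.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 500)
y = np.sin(2 * x) + 0.3 * rng.standard_normal(500)
lo, hi = split_conformal_band(x, y, np.linspace(-2, 2, 50))
```
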
The Generative Adversarial Approach: A Cautionary Tale of Finite Samples
Given the relevance and wide use of the Generative Adversarial (GA) methodology, this paper studies its finite-sample properties from both statistical and numerical perspectives in order to better understand its benefits and pitfalls. We set up a simple and ideal “controlled experiment” in which the input data are an i.i.d. Gaussian series whose mean is to be learned, and the discriminant and generator lie in the same distributional family rather than being neural networks (NNs), as in the popular GAN. We show that, even with the ideal discriminant, the classical GA methodology delivers a biased estimator while producing multiple local optima that confuse numerical methods. The situation worsens when the discriminator is in the correct parametric family but is not the oracle, leading to the absence of a saddle point. To improve the quality of the estimators within the GA method, we propose an alternative loss function, the alternative GA method, which leads to a unique saddle point with better statistical properties. Our findings are intended to start a conversation on the potential pitfalls of GA and GAN methods. In this spirit, the ideas presented here should be explored in other distributional cases and will be extended to the actual use of an NN for the discriminator and generator.
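
As an illustration of the kind of controlled experiment the abstract describes, the sketch below runs naive alternating gradient steps on the classical adversarial objective for a Gaussian location model with a logistic discriminator. The step sizes, the discriminator form, and the update scheme are illustrative assumptions rather than the paper's exact design, and the crude dynamics may oscillate, which is in the spirit of the numerical difficulties the paper highlights.

```python
import numpy as np

def ga_gaussian_mean(x, steps=5000, lr=0.05, seed=0):
    """Toy adversarial estimation of a Gaussian mean (sketch).

    Generator: g(z) = theta + z with z ~ N(0, 1).
    Discriminator: sigmoid(a + b * x), a simple parametric score.
    Plain alternating gradient steps on the classical GAN objective.
    """
    rng = np.random.default_rng(seed)
    theta, a, b = 0.0, 0.0, 1.0
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    for _ in range(steps):
        fake = theta + rng.standard_normal(len(x))
        # Gradient ascent step for the discriminator parameters (a, b).
        d_real, d_fake = sigmoid(a + b * x), sigmoid(a + b * fake)
        a += lr * (np.mean(1 - d_real) - np.mean(d_fake))
        b += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * fake))
        # Gradient descent step for the generator parameter theta.
        theta += lr * b * np.mean(sigmoid(a + b * fake))
    return theta

# Usage: compare the adversarial estimate with the sample mean to gauge bias.
rng = np.random.default_rng(2)
x = rng.normal(1.5, 1.0, size=500)
theta_hat = ga_gaussian_mean(x)
```
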
Optimal Weight Choice for Frequentist Model Average Estimators
There has been increasing interest recently in model averaging within the frequentist paradigm. The main benefit of model averaging over model selection is that it incorporates rather than ignores the uncertainty inherent in the model selection process. One of the most important, yet challenging, aspects of model averaging is how to optimally combine estimates from different models. In this work, we suggest a procedure of weight choice for frequentist model average estimators that exhibits optimality properties with respect to the estimator's mean squared error (MSE). As a basis for demonstrating our idea, we consider averaging over a sequence of linear regression models. Building on this base, we develop a model weighting mechanism that involves minimizing the trace of an unbiased estimator of the model average estimator's MSE. We further obtain results that reflect the finite sample as well as asymptotic optimality of the proposed mechanism. A Monte Carlo study based on simulated and real data evaluates and compares the finite sample properties of this mechanism with those of existing methods. The extension of the proposed weight selection scheme to general likelihood models is also considered. This article has supplementary material online.
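
A concrete and widely used instance of frequentist weight choice is minimising a Mallows-type unbiased risk estimate over the weight simplex. The sketch below does this for nested linear models; it is a stand-in for the trace-of-MSE criterion of the article, not the authors' exact procedure, and the nested-model setup, the variance estimate from the largest candidate model, and the optimiser are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def model_average_fit(y, X, model_sizes):
    """Average nested least-squares fits with weights chosen by minimising a
    Mallows-type unbiased estimate of the averaged predictor's risk."""
    n = len(y)
    fits = []
    for k in model_sizes:
        Xk = X[:, :k]
        beta_k = np.linalg.lstsq(Xk, y, rcond=None)[0]
        fits.append(Xk @ beta_k)
    F = np.column_stack(fits)                  # n x m matrix of fitted values
    ks = np.array(model_sizes, dtype=float)    # parameters per candidate model
    # Error variance estimated from the largest candidate model.
    sigma2 = np.sum((y - F[:, -1]) ** 2) / (n - model_sizes[-1])

    def criterion(w):                          # residual SS + 2 * sigma2 * k(w)
        resid = y - F @ w
        return resid @ resid + 2.0 * sigma2 * (ks @ w)

    m = len(model_sizes)
    res = minimize(criterion, np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},))
    return res.x, F @ res.x                    # weights and averaged fit
```
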
Radius matching on the propensity score with bias adjustment: tuning parameters and finite sample behaviour
Using a simulation design based on empirical data, a recent study by Huber et al. (J Econom 175:1–21, 2013) finds that distance-weighted radius matching with bias adjustment, as proposed in Lechner et al. (J Eur Econ Assoc 9:742–784, 2011), is competitive among a broad range of propensity score-based estimators used to correct for mean differences due to observable covariates. In this companion paper, we further investigate the finite sample behaviour of radius matching with respect to various tuning parameters. The results are intended to help the practitioner choose suitable values of these parameters when using this method, which has been implemented in the software packages GAUSS, STATA and R.
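
For readers unfamiliar with this estimator family, the sketch below shows plain distance-weighted radius matching on an estimated propensity score for the average treatment effect on the treated. The logit propensity model, the triangular weights, and the fixed radius are illustrative tuning choices, and the bias-adjustment step of the original estimator is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def radius_matching_att(y, d, X, radius=0.05):
    """Distance-weighted radius matching on the propensity score (ATT sketch)."""
    # Propensity score from a simple logit of treatment d on covariates X.
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated = np.where(d == 1)[0]
    controls = np.where(d == 0)[0]

    effects = []
    for i in treated:
        dist = np.abs(ps[controls] - ps[i])
        inside = dist < radius
        if not inside.any():
            continue                     # drop treated units with no match in the radius
        w = radius - dist[inside]        # triangular distance weights
        w /= w.sum()
        effects.append(y[i] - w @ y[controls[inside]])
    return float(np.mean(effects))
```
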
Maximum Pseudo-Likelihood Estimation of Copula Models and Moments of Order Statistics
It has been shown that, despite being consistent and in some cases efficient, maximum pseudo-likelihood (MPL) estimation for copula models overestimates the level of dependence, especially for small samples with a low level of dependence. This is especially relevant in finance and insurance applications where data are scarce. We show that the canonical MPL method uses the mean of order statistics, and we propose to use the median or the mode instead. We show that the proposed MPL estimators are consistent and asymptotically normal. In a simulation study, we compare the finite sample performance of the proposed estimators with that of the original MPL estimator and of the inversion-method estimators based on Kendall’s tau and Spearman’s rho. In our results, the modified MPL estimators, especially the one based on the mode of the order statistics, have better finite sample performance in terms of both bias and mean squared error. An application to general insurance data shows that the level of dependence estimated between different products can vary substantially with the estimation method used.
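
The canonical MPL step the abstract starts from is brief to write down: replace the margins by rescaled ranks (pseudo-observations) and maximise the copula log-likelihood in its dependence parameter. The sketch below does this for a one-parameter Gaussian copula; the copula family and the optimiser are illustrative choices, and the median/mode modification proposed in the paper is not implemented.

```python
import numpy as np
from scipy.stats import norm, rankdata
from scipy.optimize import minimize_scalar

def gaussian_copula_mpl(x, y):
    """Maximum pseudo-likelihood estimate of a Gaussian copula correlation (sketch)."""
    n = len(x)
    # Pseudo-observations: ranks rescaled by (n + 1) to stay inside (0, 1).
    u, v = rankdata(x) / (n + 1), rankdata(y) / (n + 1)
    z, w = norm.ppf(u), norm.ppf(v)

    def neg_loglik(rho):
        r2 = rho ** 2
        # Log of the bivariate Gaussian copula density, summed over the sample.
        return -np.sum(-0.5 * np.log(1 - r2)
                       + (2 * rho * z * w - r2 * (z ** 2 + w ** 2)) / (2 * (1 - r2)))

    res = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
    return res.x

# Usage: estimate dependence from correlated Gaussian data.
rng = np.random.default_rng(3)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
rho_hat = gaussian_copula_mpl(data[:, 0], data[:, 1])
```
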
Finite-Sample Theory and Bias Correction of Maximum Likelihood Estimators in the EGARCH Model
We derive analytical expressions for bias approximations of the maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of the EGARCH(1,1) parameters, which enable us to correct the bias of all the estimators. The bias-correction mechanism is constructed under two methods that are described analytically. We also evaluate the residual bootstrap estimator as a measure of performance. Monte Carlo simulations indicate that, for the given sets of parameter values, the bias corrections work satisfactorily for all parameters. The proposed full-step estimator performs better than the classical one and is also faster than the bootstrap. The results can also be used to formulate the approximate Edgeworth distribution of the estimators.
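
The analytical bias expressions are model-specific, but the bootstrap benchmark the abstract mentions follows a generic pattern, sketched below. The `estimate` and `simulate` callables are hypothetical placeholders for an EGARCH(1,1) (quasi-)ML fit and the corresponding simulator, which are not implemented here.

```python
import numpy as np

def bootstrap_bias_correct(y, estimate, simulate, B=200, seed=0):
    """First-order parametric-bootstrap bias correction of an estimator (sketch).

    estimate(y)             -> parameter vector fitted to the sample y
    simulate(theta, n, rng) -> synthetic sample of length n from the fitted model
    Both callables are placeholders (e.g. an EGARCH(1,1) QML fit and simulator).
    """
    rng = np.random.default_rng(seed)
    theta_hat = np.asarray(estimate(y), dtype=float)
    boot = np.array([estimate(simulate(theta_hat, len(y), rng))
                     for _ in range(B)])
    bias = boot.mean(axis=0) - theta_hat        # estimated finite-sample bias
    return theta_hat - bias                     # i.e. 2*theta_hat - mean(boot estimates)
```
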
Information Theory Estimators for the First-Order Spatial Autoregressive Model
Information theoretic estimators for the first-order spatial autoregressive model are introduced, their small sample properties are investigated, and the estimators are applied empirically. Monte Carlo experiments are used to compare the finite sample performance of more traditional spatial estimators with that of three information theoretic estimators: maximum empirical likelihood, maximum empirical exponential likelihood, and maximum log-Euclidean likelihood. The information theoretic estimators are found to be robust to the selected specifications of spatial autocorrelation and may dominate the traditional estimators in the finite sample situations analyzed, with the exception of the quasi-maximum likelihood estimator, which competes reasonably well. The information theoretic estimators are illustrated via an application to hedonic housing pricing.
Improvement in finite-sample properties of GMM-based Wald tests
GMM-based Wald tests tend to overreject in small samples, mainly because of inaccurate estimation of the weighting matrix. This article proposes applying a shrinkage method to address this problem. Using a possibly misspecified factor model, the shrinkage method can provide a good estimator of the weighting matrix and hence improve the finite-sample performance of GMM-based Wald tests.
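
A minimal version of the idea, under simplifying assumptions: shrink the sample covariance of the moment conditions toward a structured target before inverting it for the Wald statistic. The diagonal target and fixed shrinkage intensity below stand in for the factor-model target and data-driven intensity used in the article.

```python
import numpy as np

def shrunk_weighting_matrix(g, delta=0.5):
    """Shrinkage estimator of the inverse covariance of moment conditions (sketch).

    g is a T x m array of sample moment conditions; delta in [0, 1] is the
    shrinkage intensity toward a diagonal target (an illustrative choice).
    """
    g = g - g.mean(axis=0)
    S = g.T @ g / g.shape[0]          # sample covariance, often ill-conditioned
    target = np.diag(np.diag(S))      # structured, well-conditioned target
    return np.linalg.inv(delta * target + (1.0 - delta) * S)
```

The Wald statistic is then formed in the usual way, with this matrix in place of the inverse of the raw sample covariance of the moments.
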
Sargan's Instrumental Variables Estimation and the Generalized Method of Moments
This article surveys J. D. Sargan's work on instrumental variables (IV) estimation and its connections with the generalized method of moments (GMM). First the modeling context in which Sargan motivated IV estimation is presented. Then the theory of IV estimation as developed by Sargan is discussed. His approach to efficiency, his minimax estimator, tests of overidentification and underidentification, and his later work on the finite-sample properties of IV estimators are reviewed. Next, his approach to modeling IV equations with serial correlation is discussed and compared with the GMM approach. Finally, Sargan's results for nonlinear-in-parameters IV models are described.
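
As a pointer to the machinery the survey reviews, the sketch below computes the two-stage least-squares estimator and Sargan's overidentification statistic under homoskedasticity; the variable names and the chi-squared reference distribution with q - k degrees of freedom follow the classical textbook setup.

```python
import numpy as np
from scipy.stats import chi2

def two_sls_with_sargan(y, X, Z):
    """Two-stage least squares with Sargan's overidentification statistic (sketch).

    X: n x k regressors (treated as endogenous); Z: n x q instruments, q > k.
    """
    n, k = X.shape
    q = Z.shape[1]
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instruments
    beta = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)
    u = y - X @ beta                                 # 2SLS residuals
    # Sargan statistic: quadratic form of residuals in the instrument space.
    sargan = (u @ Pz @ u) / (u @ u / n)
    p_value = chi2.sf(sargan, df=q - k)
    return beta, sargan, p_value
```
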