Catalogue Search | MBRL
4,813 result(s) for "Bootstrap method"
BOOTSTRAP LONG MEMORY PROCESSES IN THE FREQUENCY DOMAIN
2021
The aim of the paper is to describe a bootstrap that, unlike the sieve bootstrap, is valid under either long memory (LM) or short memory (SM) dependence. One reason for the failure of the sieve bootstrap in our context is that, under LM dependence, it may not capture the true covariance structure of the original data. We also describe and examine the validity of the bootstrap scheme for the least squares estimator of the parameter in a regression model and for model specification. The motivation for the latter example comes from the observation that the asymptotic distribution of the test is intractable.
Journal Article
More reliable inference for the dissimilarity index of segregation
by Windmeijer, Frank; Allen, Rebecca; Burgess, Simon
in Adjustment; Bootstrap mechanism; Bootstrap method
2015
The most widely used measure of segregation is the so-called dissimilarity index. It is now well understood that this measure also reflects randomness in the allocation of individuals to units (i.e. it measures deviations from evenness, not deviations from randomness). This leads to potentially large values of the segregation index when unit sizes and/or minority proportions are small, even if there is no underlying systematic segregation. Our response to this is to produce adjustments to the index, based on an underlying statistical model. We specify the assignment problem in a very general way, with differences in conditional assignment probabilities underlying the resulting segregation. From this, we derive a likelihood ratio test for the presence of any systematic segregation, and bias adjustments to the dissimilarity index. We further develop the asymptotic distribution theory for testing hypotheses concerning the magnitude of the segregation index and show that the use of bootstrap methods can improve the size and power properties of test procedures considerably. We illustrate these methods by comparing dissimilarity indices across school districts in England to measure social segregation.
Journal Article
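The dissimilarity index discussed in the abstract above has a standard closed form, D = ½ Σᵢ |mᵢ/M − nᵢ/N|, summing over units i with group counts mᵢ and nᵢ. A minimal sketch (the unit counts below are made-up illustrative numbers, not data from the paper):

```python
# Dissimilarity index of segregation: D = 0.5 * sum_i |m_i/M - n_i/N|,
# where group_a[i] and group_b[i] count the two groups in unit i.
def dissimilarity_index(group_a, group_b):
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(
        abs(a / total_a - b / total_b)
        for a, b in zip(group_a, group_b)
    )

# Perfectly even allocation gives D = 0; complete separation gives D = 1.
even = dissimilarity_index([10, 20, 30], [1, 2, 3])   # → 0.0
split = dissimilarity_index([50, 0], [0, 50])         # → 1.0
```

The paper's point is that sampling noise alone inflates D above zero in small units; the bias adjustments and bootstrap tests it develops address exactly that gap, which this raw formula ignores.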
Standard Errors of IRT Parameter Scale Transformation Coefficients: Comparison of Bootstrap Method, Delta Method, and Multiple Imputation Method
2019
The present study evaluated the multiple imputation method, a procedure similar to the one suggested by Li and Lissitz (2004), and compared its performance with that of the bootstrap method and the delta method in obtaining standard errors for the estimates of the parameter scale transformation coefficients in item response theory (IRT) equating, in the context of the common-item nonequivalent groups design. Two different estimation procedures for the variance-covariance matrix of the IRT item parameter estimates, used in both the delta method and the multiple imputation method, were considered: empirical cross-product (XPD) and supplemented expectation maximization (SEM). The results of the analyses with simulated and real data indicate that the multiple imputation method produced results very similar to those of the bootstrap method and the delta method in most conditions. The differences between the estimated standard errors obtained by the methods using the XPD matrices and the SEM matrices were very small when the sample size was reasonably large. When the sample size was small, the methods using the XPD matrices appeared to yield a slight upward bias for the standard errors of the IRT parameter scale transformation coefficients.
Journal Article
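The bootstrap baseline that the study above compares against follows a generic recipe: resample the data with replacement, re-estimate, and take the standard deviation of the replicates. A minimal sketch, assuming a generic estimator (a sample mean stands in for the IRT transformation coefficients, which are not reproduced here):

```python
# Nonparametric (case-resampling) bootstrap standard error of an estimator.
import random
import statistics

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Standard error of stat(data) via resampling with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = [
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

sample = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9]
se = bootstrap_se(sample, statistics.mean)
```

The delta method replaces the resampling loop with an analytic variance-covariance matrix (XPD or SEM in the paper); the loop above is the computational alternative it is benchmarked against.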
Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error
by Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter
in Applications; Bootstrap method; Bootstrap methods
2011
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this article we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Journal Article
INFERENCE ON COUNTERFACTUAL DISTRIBUTIONS
by Melly, Blaise; Fernández-Val, Iván; Chernozhukov, Victor
in Analytical estimating; Bootstrap mechanism; Bootstrap method
2013
Counterfactual distributions are important ingredients for policy analysis and decomposition analysis in empirical economics. In this article, we develop modeling and inference tools for counterfactual distributions based on regression methods. The counterfactual scenarios that we consider consist of ceteris paribus changes in either the distribution of covariates related to the outcome of interest or the conditional distribution of the outcome given covariates. For either of these scenarios, we derive joint functional central limit theorems and bootstrap validity results for regression-based estimators of the status quo and counterfactual outcome distributions. These results allow us to construct simultaneous confidence sets for function-valued effects of the counterfactual changes, including the effects on the entire distribution and quantile functions of the outcome as well as on related functionals. These confidence sets can be used to test functional hypotheses such as no-effect, positive effect, or stochastic dominance. Our theory applies to general counterfactual changes and covers the main regression methods including classical, quantile, duration, and distribution regressions. We illustrate the results with an empirical application to wage decompositions using data for the United States. As a part of developing the main results, we introduce distribution regression as a comprehensive and flexible tool for modeling and estimating the entire conditional distribution. We show that distribution regression encompasses the Cox duration regression and represents a useful alternative to quantile regression. We establish functional central limit theorems and bootstrap validity results for the empirical distribution regression process and various related functionals.
Journal Article
Is there really any Contagion among Major Equity and Securitized Real Estate Markets? Analysis from a New Perspective
2018
This study examines contagion across the general equity and securitized real estate markets of China, Hong Kong and the US during the Chinese financial crisis. This is the first study to combine the case-resampling bootstrap method with the coskewness and cokurtosis test, so the new method works well on data with a non-normal distribution or non-constant variance. Additional channels of contagion may also be detected, reflecting a more precise pattern of contagion. In contrast to the result of Hatemi-J and Hacker (Applied Financial Economics Letters, 1(6), 343-347, 2005), we find that the case-resampling bootstrap method diminishes the overall effect of contagion. In particular, no additional channels of contagion can be found when the case-resampling bootstrap method is applied to the coskewness test, but when it is applied to the cokurtosis test, additional channels of contagion are detected. Furthermore, the overall effect of contagion is greater on the general equity markets than on the securitized real estate markets. This study has useful implications for investors, regulators and policy makers.
Journal Article
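The case-resampling bootstrap named in the abstract above resamples whole rows (observation vectors) rather than individual columns, so the joint dependence between series is preserved. A hedged sketch of just that resampling step (the coskewness/cokurtosis test statistics from the paper are not reproduced; the returns are made-up numbers):

```python
# Case resampling: draw entire cases (rows) with replacement, keeping the
# pairing of the two return series intact. This is what makes the scheme
# robust to non-normality and non-constant variance.
import random

def case_resample(pairs, rng):
    """One bootstrap sample of whole cases (rows), same size as the input."""
    n = len(pairs)
    return [pairs[rng.randrange(n)] for _ in range(n)]

rng = random.Random(42)
returns = [(0.01, 0.02), (-0.03, -0.01), (0.00, 0.01), (0.02, 0.03)]
boot = case_resample(returns, rng)
```

In a full test, each bootstrap sample would be fed to the coskewness or cokurtosis statistic to build its null distribution.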
New Algorithms and Methods to Estimate Maximum-Likelihood Phylogenies: Assessing the Performance of PhyML 3.0
2010
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Journal Article
CENTRAL LIMIT THEOREMS AND BOOTSTRAP IN HIGH DIMENSIONS
by Kato, Kengo; Chetverikov, Denis; Chernozhukov, Victor
in Approximation; Bootstrap method; Convex analysis
2017
This paper derives central limit and bootstrap theorems for probabilities that sums of centered high-dimensional random vectors hit hyperrectangles and sparsely convex sets. Specifically, we derive Gaussian and bootstrap approximations for probabilities $\mathrm{P}(n^{-1/2}\sum_{i=1}^{n} X_i \in A)$ where $X_1, \ldots, X_n$ are independent random vectors in $\mathbb{R}^p$ and $A$ is a hyperrectangle, or more generally, a sparsely convex set, and show that the approximation error converges to zero even if $p = p_n \to \infty$ as $n \to \infty$ and $p \gg n$; in particular, $p$ can be as large as $O(e^{Cn^c})$ for some constants $c, C > 0$. The result holds uniformly over all hyperrectangles, or more generally, sparsely convex sets, and does not require any restriction on the correlation structure among coordinates of $X_i$. Sparsely convex sets are sets that can be represented as intersections of many convex sets whose indicator functions depend only on a small subset of their arguments, with hyperrectangles being a special case.
Journal Article
Estimation and Accuracy After Model Selection
2014
Classical statistical theory ignores model selection in assessing estimation accuracy. Here we consider bootstrap methods for computing standard errors and confidence intervals that take model selection into account. The methodology involves bagging, also known as bootstrap smoothing, to tame the erratic discontinuities of selection-based estimators. A useful new formula for the accuracy of bagging then provides standard errors for the smoothed estimators. Two examples, nonparametric and parametric, are carried through in detail: a regression model where the choice of degree (linear, quadratic, cubic, ...) is determined by the Cₚ criterion and a Lasso-based estimation problem.
Journal Article
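The bagging (bootstrap smoothing) idea in the abstract above replaces a selection-based estimator, which jumps discontinuously as the data vary, with its average over bootstrap resamples. A toy sketch, assuming a made-up hard-threshold rule standing in for model selection (the paper's Cₚ and Lasso examples are not reproduced):

```python
# Bootstrap smoothing (bagging): average a discontinuous, selection-based
# estimator over bootstrap resamples to obtain a smoother estimator.
import random
import statistics

def hard_threshold_mean(data):
    """Toy selection rule: report the mean only if it clears a threshold."""
    m = statistics.mean(data)
    return m if abs(m) > 0.5 else 0.0

def bagged(data, estimator, n_boot=500, seed=1):
    rng = random.Random(seed)
    n = len(data)
    reps = [estimator([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.mean(reps)

data = [0.4, 0.7, 0.2, 0.9, 0.5, 0.6]
smooth = bagged(data, hard_threshold_mean)
```

Near the threshold, `hard_threshold_mean` flips between 0 and the mean, while `smooth` varies continuously with the data; the paper's contribution is a formula for the standard error of that smoothed estimator.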
Statistical Inference with PLSc Using Bootstrap Confidence Intervals
by Rönkkö, Mikko; Aguirre-Urreta, Miguel I.
in Bootstrap method; Confidence intervals; Estimation bias
2018
Partial least squares (PLS) is one of the most popular statistical techniques in use in the Information Systems field. When applied to data originating from a common factor model, as is often the case in the discipline, PLS will produce biased estimates. A recent development, consistent PLS (PLSc), has been introduced to correct for this bias. In addition, the common practice in PLS of comparing the ratio of an estimate to its standard error to a t distribution for the purposes of statistical inference has also been challenged. We contribute to the practice of research in the IS discipline by providing evidence of the value of employing bootstrap confidence intervals in conjunction with PLSc, which is a more appropriate alternative than PLS for many of the research scenarios that are of interest to the field. Such evidence is sorely needed before a complete approach to the estimation of SEM that relies on both PLSc and bootstrap CIs can be widely adopted. We also provide recommendations for researchers on the use of confidence intervals with PLSc.
Journal Article
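The bootstrap confidence intervals advocated in the abstract above are typically built by the generic percentile construction: sort the bootstrap replicates and read off the α/2 and 1 − α/2 quantiles. A minimal sketch (PLSc itself is not implemented; a simple mean stands in for the model estimate):

```python
# Percentile bootstrap confidence interval for a generic estimator.
import random
import statistics

def percentile_ci(data, stat, alpha=0.05, n_boot=2000, seed=0):
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [3.2, 2.8, 3.5, 3.0, 2.9, 3.3, 3.1, 2.7]
lo, hi = percentile_ci(sample, statistics.mean)
```

Unlike the t-ratio test the paper challenges, this interval makes no symmetry or normality assumption about the sampling distribution of the estimate.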