16 results for "Finite sample correction"
Sample Size Determination for GEE Analyses of Stepped Wedge Cluster Randomized Trials
In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator.
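
The block exchangeable structure described here is easy to write down and inspect. Below is a minimal sketch (not from the paper; a0, a1, a2 for the within-period, inter-period, and within-individual correlations are illustrative notation) that builds the within-cluster correlation matrix for a closed cohort and confirms it has only a few distinct eigenvalues:

```python
import numpy as np

def block_exchangeable(T, N, a0, a1, a2):
    """Within-cluster correlation for a closed cohort: N individuals
    followed over T periods. a0: within-period, a1: inter-period,
    a2: within-individual correlation (illustrative notation)."""
    same = (1 - a0) * np.eye(N) + a0 * np.ones((N, N))    # same-period block
    diff = (a2 - a1) * np.eye(N) + a1 * np.ones((N, N))   # cross-period block
    return np.kron(np.eye(T), same - diff) + np.kron(np.ones((T, T)), diff)

# setting a2 == a1 recovers the cross-sectional special case noted above
R = block_exchangeable(T=4, N=10, a0=0.05, a1=0.03, a2=0.4)
# only a handful of distinct eigenvalues, which is what makes
# closed-form power calculations tractable
print(sorted(set(np.round(np.linalg.eigvalsh(R), 6))))
```
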
Improved Lagrange multiplier tests in spatial autoregressions
For testing lack of correlation against spatial autoregressive alternatives, Lagrange multiplier tests enjoy their usual computational advantages, but the first-order χ² asymptotic approximation to critical values can be poor in small samples. We develop refined tests for lack of spatial error correlation in regressions, based on Edgeworth expansion. In Monte Carlo simulations, these tests, and bootstrap tests, generally significantly outperform χ²-based tests.
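
For context, a textbook form of the LM statistic for spatial error correlation (a Burridge-type formula, not the paper's refined version) can be compared against both the first-order χ²(1) critical value and a parametric-bootstrap critical value; the Edgeworth refinement itself is not reproduced here:

```python
import numpy as np
from scipy import stats

def lm_error_stat(y, X, W):
    """Textbook LM statistic for spatial error correlation in an OLS
    regression: (e'We / s2)^2 / tr(W'W + WW)."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    s2 = e @ e / n
    return (e @ W @ e / s2) ** 2 / np.trace(W @ W + W.T @ W)

rng = np.random.default_rng(0)
n = 20                                    # deliberately small sample
W = np.zeros((n, n))
for i in range(n):                        # row-standardized circular neighbours
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
sigma = np.std(y - X @ beta, ddof=X.shape[1])
print("LM:", lm_error_stat(y, X, W), "chi2(1) 5% cv:", stats.chi2.ppf(0.95, df=1))

# parametric bootstrap of the null, in the spirit of the bootstrap
# tests the abstract benchmarks against
boot = [lm_error_stat(X @ beta + sigma * rng.normal(size=n), X, W)
        for _ in range(999)]
print("bootstrap 5% cv:", np.quantile(boot, 0.95))
```
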
Disability and Employment
Measurement error in health and disability status has been widely accepted as a central problem in social science research. Long-standing debates about the prevalence of disability, the role of health in labor market outcomes, and the influence of federal disability policy on declining employment rates have all emphasized issues regarding the reliability of self-reported disability. In addition to random error, inaccuracy in survey datasets may be produced by a host of economic, social, and psychological factors that can lead respondents to misreport work capacity. We develop a nonparametric foundation for assessing how assumptions on the reporting error process affect inferences on the employment gap between the disabled and nondisabled. Rather than imposing the strong assumptions required to obtain point identification, we derive sets of bounds that formalize the identifying power of primitive nonparametric assumptions that appear to share broad consensus in the literature. Within this framework, we introduce a finite-sample correction for the analog estimator of the monotone instrumental variable (MIV) bound. Our empirical results suggest that conclusions derived from conventional latent variable reporting error models may be driven largely by ad hoc distributional and functional form restrictions. We also find that under relatively weak nonparametric assumptions, nonworkers appear to systematically overreport disability.
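
The finite-sample issue with the analog MIV estimator is that a maximum of noisy cell-level bounds is biased upward. A hedged sketch of a bootstrap-style correction in that spirit (illustrative only; the paper's exact correction may differ):

```python
import numpy as np

def miv_lower(cell_means):
    """Analog MIV lower bound: a running maximum over instrument
    values v <= u of the cell-level lower bounds."""
    return np.maximum.accumulate(cell_means)

def corrected_miv(samples, n_boot=2000, seed=0):
    """Bootstrap-style finite-sample correction: the max of noisy cell
    means is biased upward, so subtract a bootstrap estimate of that
    bias (illustrative sketch, not the paper's exact recipe)."""
    rng = np.random.default_rng(seed)
    est = miv_lower(np.array([s.mean() for s in samples]))
    boot = np.empty((n_boot, len(samples)))
    for b in range(n_boot):
        boot[b] = miv_lower(np.array(
            [rng.choice(s, size=len(s)).mean() for s in samples]))
    return est - (boot.mean(axis=0) - est)   # estimate minus estimated bias

rng = np.random.default_rng(1)
cells = [rng.binomial(1, p, 30).astype(float) for p in (0.30, 0.35, 0.40)]
print(miv_lower(np.array([c.mean() for c in cells])), corrected_miv(cells))
```
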
Refining mortality estimates in shark demographic analyses
Leslie matrix models are an important analysis tool in conservation biology, applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances an estimate of λ is available but the vital rates are poorly understood, and these can be solved for using an inverse matrix approach. However, such approaches are rarely attempted because they require information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. The approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. These outputs were then applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods had previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. Updated distributions of Z and λ were produced for the GBR population, and obvious errors in the demographic parameters for the PNG population were corrected. This approach allows the inverse matrix method to be applied more broadly to situations where information on population structure is lacking.
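
A minimal sketch of the SIR idea for the inverse matrix problem (toy priors and a toy observed λ; the paper's inputs are species-specific):

```python
import numpy as np

def leslie_lambda(fecundity, survival):
    """Finite rate of population growth: dominant eigenvalue of the
    Leslie matrix built from age-specific fecundity and survival."""
    k = len(fecundity)
    A = np.zeros((k, k))
    A[0] = fecundity                                  # top row
    A[np.arange(1, k), np.arange(k - 1)] = survival   # sub-diagonal
    return np.max(np.abs(np.linalg.eigvals(A)))

rng = np.random.default_rng(0)
n = 20_000
lam_obs, lam_sd = 1.02, 0.03                 # toy "observed" lambda
surv = rng.uniform(0.5, 0.95, size=(n, 2))   # candidate survival rates
fec = rng.uniform(0.0, 2.0, size=(n, 3))     # candidate fecundities
lams = np.array([leslie_lambda(f, s) for f, s in zip(fec, surv)])
w = np.exp(-0.5 * ((lams - lam_obs) / lam_sd) ** 2)   # importance weights
keep = rng.choice(n, size=5_000, p=w / w.sum())       # resampling step
print(np.quantile(lams[keep], [0.025, 0.5, 0.975]))
```
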
Two-stage cluster samples with ranked set sampling designs
This paper draws statistical inference for population characteristics using two-stage cluster samples. Cluster samples in each stage are constructed using ranked set sampling (RSS), probability-proportional-to-size sampling, or simple random sampling (SRS) designs. Each RSS design is implemented both with and without replacement. The paper constructs design-unbiased estimators for the population mean, the population total, and their variances. The efficiency improvement of all sampling designs over the SRS design is investigated. It is shown that the efficiency of the estimators depends on the intra-cluster correlation coefficient and on the choice of sampling designs in stage I and stage II. The paper also constructs an approximate confidence interval for the population mean (total). For a fixed cost, the optimal sample sizes for stage I and stage II are obtained by maximizing the information content of the sample. The proposed sampling designs and estimators are applied to the California School District Study and the Ohio Corn Production Data.
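
A minimal illustration of the basic RSS draw that underlies these designs (perfect ranking is assumed; the paper's two-stage and PPS variants are not reproduced):

```python
import numpy as np

def ranked_set_sample(pop, set_size, cycles, rng):
    """Balanced RSS: each cycle draws set_size sets of set_size units,
    ranks each set, and keeps the i-th order statistic from the i-th
    set. Ranking is assumed to be perfect here."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            s = rng.choice(pop, size=set_size, replace=False)
            out.append(np.sort(s)[i])
    return np.array(out)

rng = np.random.default_rng(0)
pop = rng.lognormal(3.0, 0.5, size=10_000)
rss = ranked_set_sample(pop, set_size=4, cycles=25)   # n = 100
srs = rng.choice(pop, size=100, replace=False)
print(rss.mean(), srs.mean(), pop.mean())  # RSS mean is typically less variable
```
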
Some Useful Moment Results in Sampling Problems
We consider the standard sampling problem involving a finite population of N objects and a sample of n objects taken from this population using simple random sampling without replacement. We consider the relationship between the moments of the sampled and unsampled parts and show how these are related to the population moments. We derive expectation, variance, and covariance results for the various quantities under consideration and use these to obtain standard sampling results, with an extension to variance estimation with a "finite population correction." This clarifies and extends standard results in sampling theory for the estimation of the mean and variance of a population.
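
The headline result alluded to here is the classical variance of the sample mean under SRS without replacement, Var(ȳ) = (1 − n/N)S²/n, where (1 − n/N) is the finite population correction. A short sketch:

```python
import numpy as np

def srswor_mean(sample, N):
    """Mean and its estimated variance under simple random sampling
    without replacement; (1 - n/N) is the finite population correction."""
    n = len(sample)
    s2 = sample.var(ddof=1)              # unbiased for the population S^2
    return sample.mean(), (1 - n / N) * s2 / n

rng = np.random.default_rng(0)
pop = rng.normal(50, 10, size=500)
sample = rng.choice(pop, size=100, replace=False)
print(srswor_mean(sample, N=500))        # fpc shrinks the naive s2/n by 20%
```
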
Finite-Sample Theory and Bias Correction of Maximum Likelihood Estimators in the EGARCH Model
We derive analytical expressions of bias approximations for the maximum likelihood (ML) and quasi-maximum likelihood (QML) estimators of the EGARCH(1,1) parameters, which enable us to correct the bias of all the estimators. The bias-correction mechanism is constructed under two analytically described methods. We also evaluate a residual bootstrapped estimator as a measure of performance. Monte Carlo simulations indicate that, for given sets of parameter values, the bias corrections work satisfactorily for all parameters. The proposed full-step estimator performs better than the classical one and is also faster than the bootstrap. The results can also be used to formulate the approximate Edgeworth distribution of the estimators.
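
The paper's analytic corrections are EGARCH-specific, but the general recipe of subtracting an estimated bias, here via a residual bootstrap, can be shown on a simpler, well-known biased estimator, the AR(1) coefficient:

```python
import numpy as np

def ar1_ols(x):
    """OLS estimate of an AR(1) coefficient; biased toward zero in small
    samples, a convenient stand-in for the EGARCH bias problem."""
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

def bootstrap_bias_corrected(x, n_boot=999, seed=0):
    """Residual-bootstrap bias correction: theta_bc = 2*theta_hat - mean(theta*)."""
    rng = np.random.default_rng(seed)
    n, phi = len(x), ar1_ols(x)
    resid = x[1:] - phi * x[:-1]
    boots = np.empty(n_boot)
    for b in range(n_boot):
        e = rng.choice(resid, size=n)    # resample residuals with replacement
        xb = np.empty(n)
        xb[0] = x[0]
        for t in range(1, n):
            xb[t] = phi * xb[t - 1] + e[t]
        boots[b] = ar1_ols(xb)
    return 2 * phi - boots.mean()        # subtract the estimated bias

rng = np.random.default_rng(1)
x = np.zeros(100)
for t in range(1, 100):
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(ar1_ols(x), bootstrap_bias_corrected(x))
```
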
A Jackknife Variance Estimator for Unistage Stratified Samples with Unequal Probabilities
Existing jackknife variance estimators used with sample surveys can seriously overestimate the true variance under unistage stratified sampling without replacement with unequal probabilities. A novel jackknife variance estimator is proposed which is as numerically simple as existing jackknife variance estimators. Under certain regularity conditions, the proposed variance estimator is consistent under stratified sampling without replacement with unequal probabilities. The high entropy regularity condition necessary for consistency is shown to hold for the Rao-Sampford design. An empirical study of three unequal probability sampling designs supports our findings.
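
For reference, the classical delete-one jackknife for a stratified estimator, the form that can overestimate the variance under without-replacement unequal-probability sampling, looks like this (stratified SRS and a population total assumed for the sketch; the paper's novel estimator is not reproduced):

```python
import numpy as np

def strat_total(strata, N):
    """Estimated population total under stratified SRS: sum of N_h * ybar_h."""
    return sum(Nh * y.mean() for y, Nh in zip(strata, N))

def jackknife_variance(strata, N):
    """Classical delete-one jackknife for the stratified total:
    v = sum_h (n_h - 1)/n_h * sum_i (theta_(hi) - theta)^2."""
    theta = strat_total(strata, N)
    v = 0.0
    for h, y in enumerate(strata):
        nh = len(y)
        reps = np.array([
            strat_total([np.delete(yk, i) if k == h else yk
                         for k, yk in enumerate(strata)], N)
            for i in range(nh)])
        v += (nh - 1) / nh * ((reps - theta) ** 2).sum()
    return v

rng = np.random.default_rng(0)
strata = [rng.normal(10, 2, 5), rng.normal(20, 4, 8)]
print(jackknife_variance(strata, N=[50, 120]))
```
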
Uncertainty due to finite resolution measurements
We investigate the influence of finite resolution on measurement uncertainty from the perspective of the Guide to the Expression of Uncertainty in Measurement (GUM). Finite resolution in a measurement that is perturbed by Gaussian noise yields a distribution of results that strongly depends on the location of the true value relative to the resolution increment. We show that there is no simple expression relating the standard deviation of the distribution of measurement results to the associated uncertainty at a specified level of confidence. There is, however, an analytic relation between the mean value and the standard deviation of the measurement distribution. We further investigate the conflict between the GUM and ISO 14253-2 regarding the method of evaluating the standard uncertainty due to finite resolution and show that, on average, the GUM method is superior, but still approximate.
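
The location dependence described here is easy to reproduce: add Gaussian noise to a true value, round to the instrument resolution, and watch the spread of displayed values change with where the true value sits within the increment. A short simulation (toy resolution and noise level):

```python
import numpy as np

rng = np.random.default_rng(0)
resolution, noise_sd = 1.0, 0.25

def readings(true_value, n=100_000):
    """Gaussian noise followed by rounding to the nearest increment."""
    raw = true_value + rng.normal(0.0, noise_sd, size=n)
    return resolution * np.round(raw / resolution)

# the spread of displayed values depends strongly on where the true
# value sits relative to the resolution increment
for true in (10.0, 10.25, 10.5):
    r = readings(true)
    print(true, r.mean(), r.std())
```
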
A Corrected Plug-in Method for Quantile Interval Construction Through a Transformed Regression
We propose a corrected plug-in method for constructing confidence intervals for the conditional quantiles of an original response variable through a transformed regression with heteroscedastic errors. The interval is easy to compute. Factors affecting the magnitude of the correction are examined analytically through the special case of Box-Cox regression. Monte Carlo simulations show that the new method works well in general and is superior to the commonly used delta method and the quantile regression method. An empirical application is presented.
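
The uncorrected plug-in estimator such a method starts from can be sketched directly: fit the regression on the transformed (here Box-Cox) scale, take the Gaussian quantile there, and back-transform through the monotone transformation. The correction itself is the paper's contribution and is not reproduced:

```python
import numpy as np
from scipy import stats

def boxcox(y, lam):
    return (y ** lam - 1) / lam if lam != 0 else np.log(y)

def inv_boxcox(z, lam):
    return (lam * z + 1) ** (1 / lam) if lam != 0 else np.exp(z)

def plugin_quantile(x0, X, y, lam, tau):
    """Naive plug-in conditional tau-quantile: OLS on the Box-Cox scale,
    Gaussian quantile there, monotone back-transform. Uncorrected."""
    z = boxcox(y, lam)
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    sigma = np.sqrt(((z - X @ beta) ** 2).sum() / (len(z) - X.shape[1]))
    return inv_boxcox(x0 @ beta + stats.norm.ppf(tau) * sigma, lam)

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 3, n)
X = np.column_stack([np.ones(n), x])
y = inv_boxcox(0.5 + 0.8 * x + rng.normal(0, 0.2, n), lam=0.5)
print(plugin_quantile(np.array([1.0, 2.0]), X, y, lam=0.5, tau=0.9))
```
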