17 results for "one-way random effects model"
Introducing Robust Statistics in the Uncertainty Quantification of Nuclear Safeguards Measurements
The monitoring of nuclear safeguards measurements consists of verifying the coherence between the operator declarations and the corresponding inspector measurements on the same nuclear items. Significant deviations may be present in the data as a consequence of problems with the operator and/or inspector measurement systems; however, they could also be the result of data falsification. In both cases, quantitative analyses and statistical outcomes may be negatively affected by their presence unless robust statistical methods are used. This article investigates the benefits of introducing robust procedures in the nuclear safeguards context. In particular, we introduce a robust estimator of the uncertainty components of the measurement error model. The analysis demonstrates the capacity of robust procedures to limit bias in simulated and empirical settings and to provide sounder statistical outcomes. For these reasons, the introduction of robust procedures may represent a step forward in the ongoing development of reliable uncertainty quantification methods for error variance estimation.
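The abstract does not spell out the estimator. As a rough, generic illustration of robust variance-component estimation in a one-way measurement-error layout, the sketch below contrasts classical ANOVA mean squares with median-absolute-deviation (MAD) scale estimates on data containing one gross outlier; the simulated data, the contamination, and the MAD-based estimator are assumptions for the demo, not the article's procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# One-way random effects error model: d_ij = a_i + e_ij for item i, replicate j,
# with between-item component a_i ~ N(0, s2a) and measurement error e_ij ~ N(0, s2e).
a, n = 20, 4
s2a, s2e = 0.04, 0.01
d = rng.normal(0, np.sqrt(s2a), (a, 1)) + rng.normal(0, np.sqrt(s2e), (a, n))
d[0, 0] += 3.0                        # a single gross outlier (e.g., a recording error)

# Classical ANOVA estimates.
dbar = d.mean(axis=1)
msb = n * np.sum((dbar - d.mean()) ** 2) / (a - 1)
msw = np.sum((d - dbar[:, None]) ** 2) / (a * (n - 1))
s2e_anova = msw
s2a_anova = max(0.0, (msb - msw) / n)

# Crude robust alternative: MAD-based scale estimates (the factor 1.4826 makes the
# MAD consistent for a normal standard deviation); small finite-n centering effects
# are ignored in this sketch.
med = np.median(d, axis=1, keepdims=True)
s_e_rob = 1.4826 * np.median(np.abs(d - med))               # within-item scale
s_b_rob = 1.4826 * np.median(np.abs(med - np.median(med)))  # scale of item centres
s2e_rob = s_e_rob ** 2
s2a_rob = max(0.0, s_b_rob ** 2 - s2e_rob / n)

print(f"ANOVA : s2e = {s2e_anova:.4f}, s2a = {s2a_anova:.4f}")
print(f"robust: s2e = {s2e_rob:.4f},  s2a = {s2a_rob:.4f}")
```

The single contaminated observation inflates the ANOVA estimate of the within-item variance by an order of magnitude, while the MAD-based estimates are essentially unaffected.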
Density Estimation in the Presence of Heteroscedastic Measurement Error
We consider density estimation when the variable of interest is subject to heteroscedastic measurement error. The density is assumed to have a smooth but unknown functional form that we model with a penalized mixture of B-splines. We treat the situation in which multiple mismeasured observations of each variable of interest are available for at least some of the subjects, and the measurement error is assumed to be additive and normal. The measurement error variance function is modeled with a second penalized mixture of B-splines. The article's main contributions are to address the effects of heteroscedastic measurement error effectively, explain the biases caused by ignoring heteroscedasticity, and present an equivalent kernel for a spline-based density estimator. Derivation of the equivalent kernel may be of independent interest. We use small-sigma asymptotics to approximate the biases incurred by assuming that the measurement error is homoscedastic when it actually is heteroscedastic. The biases incurred by misspecifying heteroscedastic measurement error as homoscedastic can be substantial. We fit the model using Bayesian methods and apply it to an example from nutritional epidemiology and a simulation experiment.
Fiducial Intervals for Variance Components in an Unbalanced Two-Component Normal Mixed Linear Model
In this article we propose a new method for constructing confidence intervals for σα², σε², and the intraclass correlation ρ = σα²/(σα² + σε²) in a two-component mixed-effects linear model. This method is based on an extension of R. A. Fisher's fiducial argument. We conducted a simulation study to compare the resulting interval estimates with other competing confidence interval procedures from the literature. Our results demonstrate that the proposed fiducial intervals have satisfactory performance in terms of coverage probability, as well as shorter average confidence interval lengths overall. We also prove that these fiducial intervals have asymptotically exact frequentist coverage probability. The computations for the proposed procedures are illustrated using real data examples.
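A minimal sketch of a fiducial-type (generalized pivotal quantity) interval for the intraclass correlation, restricted to the balanced one-way case where SSB/(σε² + nσα²) and SSW/σε² are independent chi-square quantities; the article's unbalanced construction is not reproduced, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Balanced one-way random effects model: y_ij = mu + a_i + e_ij,
# a_i ~ N(0, s2a), e_ij ~ N(0, s2e), i = 1..a groups, j = 1..n per group.
a, n = 10, 5
mu, s2a, s2e = 10.0, 2.0, 1.0
y = mu + rng.normal(0, np.sqrt(s2a), (a, 1)) + rng.normal(0, np.sqrt(s2e), (a, n))

# ANOVA sums of squares and point estimates.
ybar = y.mean(axis=1)
ssb = n * np.sum((ybar - y.mean()) ** 2)        # between groups, df = a - 1
ssw = np.sum((y - ybar[:, None]) ** 2)          # within groups,  df = a(n - 1)
msb, msw = ssb / (a - 1), ssw / (a * (n - 1))
s2a_hat = max(0.0, (msb - msw) / n)
rho_hat = s2a_hat / (s2a_hat + msw)

# Fiducial / generalized pivotal quantities:
# SSB/(s2e + n*s2a) ~ chi2(a - 1) and SSW/s2e ~ chi2(a(n - 1)), independently.
B = 100_000
u1 = rng.chisquare(a - 1, B)
u2 = rng.chisquare(a * (n - 1), B)
s2e_piv = ssw / u2
s2a_piv = np.maximum(0.0, (ssb / u1 - s2e_piv) / n)
rho_piv = s2a_piv / (s2a_piv + s2e_piv)

lo, hi = np.percentile(rho_piv, [2.5, 97.5])
print(f"point estimate of rho: {rho_hat:.3f}")
print(f"95% fiducial-type interval for rho: ({lo:.3f}, {hi:.3f})")
```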
Confidence distribution inferences in one-way random effects model
In this paper we construct a new kind of confidence interval for the parameters of interest in the one-way random effects model by constructing confidence distributions for them. We first use the method of Singh et al. (Ann Stat 33:159–183, 2005) to derive a combined asymptotic confidence distribution, and then obtain the confidence interval naturally from the properties of the confidence distribution. Simulation results demonstrate that the new confidence intervals perform very well in terms of empirical coverage probability and average interval length. Although we focus on confidence interval estimation, our method can also be used to carry out hypothesis tests about the parameters of interest.
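Not the combined asymptotic construction of the paper; just a minimal reminder, with invented data, of how a confidence distribution yields interval estimates, using the familiar t-based confidence distribution for a normal mean.

```python
import numpy as np
from scipy import stats

x = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.2])   # invented sample
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# t-based confidence distribution for the mean mu:
# H_n(mu) = F_{t, n-1}((mu - xbar) / (s / sqrt(n))).
def cd(mu):
    return stats.t.cdf((mu - xbar) / (s / np.sqrt(n)), df=n - 1)

# A (1 - alpha) confidence interval is read off as the alpha/2 and
# 1 - alpha/2 quantiles of the confidence distribution.
alpha = 0.05
lower = xbar + stats.t.ppf(alpha / 2, n - 1) * s / np.sqrt(n)
upper = xbar + stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)

print(f"CD evaluated at the point estimate: {cd(xbar):.2f} (always 0.5)")
print(f"95% CI read off the confidence distribution: ({lower:.3f}, {upper:.3f})")
```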
Models and Confidence Intervals for True Values in Interlaboratory Trials
We consider the one-way random-effects model with unequal sample sizes and heterogeneous variances. Using the method of generalized confidence intervals, we develop a new confidence interval procedure for the mean. Additionally, we investigate two alternative models based on different sets of assumptions regarding between-group variability and derive generalized confidence interval procedures for the mean. These procedures are applicable to small samples. Statistical simulation is used to demonstrate that the coverage probabilities of these procedures are close enough to the nominal value to be useful in practice. Although the methods are quite general, the procedures are explained against the backdrop of interlaboratory studies.
Nonparametric Bootstrap Confidence Intervals for Variance Components Applied to Interlaboratory Comparisons
Exact confidence intervals for variance components in linear mixed models rely heavily on normal distribution assumptions. If the random effects in the model are not normally distributed, then the true coverage probabilities of these conventional intervals may be erratic. In this paper we examine the performance of nonparametric bootstrap confidence intervals based on restricted maximum likelihood (REML) estimators. Asymptotic theory suggests that these intervals will achieve the nominal coverage value as the sample size increases. Incorporating a small-sample adjustment term in the bootstrap confidence interval construction process improves the performance of these intervals for small to intermediate sample sizes. Simulation studies suggest that the bootstrap standard method (with a transformation) and the bootstrap bias-corrected and accelerated (BCa) method produce confidence intervals that have good coverage probabilities under a variety of distribution assumptions. For an interlaboratory comparison of mercury concentration in oyster tissue, a balanced one-way random effects model is used to quantify the proportion of the variation in mercury concentration that can be attributed to the laboratories. In this application the exact confidence interval using normal distribution theory produces misleading results and inferences based on nonparametric bootstrap procedures are more appropriate.
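The paper's procedures rest on REML estimators, a small-sample adjustment, and the BCa method; the sketch below shows only the basic ingredient, a two-stage nonparametric bootstrap (resample laboratories, then observations within each resampled laboratory) around an ANOVA-type estimate of the proportion of variance attributable to laboratories. The data are simulated, not the oyster-tissue measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated interlaboratory data: `labs` is a list of 1-D arrays, one per laboratory,
# with a non-normal (gamma) within-laboratory error.
n_lab, n_rep = 8, 6
labs = [10.0 + rng.normal(0, 1.0) + rng.gamma(2.0, 0.5, n_rep) for _ in range(n_lab)]

def prop_lab_variance(groups):
    """ANOVA-type estimate of s2_lab / (s2_lab + s2_error) for balanced groups."""
    a, n = len(groups), len(groups[0])
    means = np.array([g.mean() for g in groups])
    msb = n * np.sum((means - means.mean()) ** 2) / (a - 1)
    msw = sum(np.sum((g - g.mean()) ** 2) for g in groups) / (a * (n - 1))
    s2_lab = max(0.0, (msb - msw) / n)
    return s2_lab / (s2_lab + msw)

# Two-stage nonparametric bootstrap: resample labs, then observations within each lab.
B = 2000
boot = np.empty(B)
for b in range(B):
    chosen = rng.integers(0, n_lab, n_lab)
    resampled = [rng.choice(labs[i], size=n_rep, replace=True) for i in chosen]
    boot[b] = prop_lab_variance(resampled)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate: {prop_lab_variance(labs):.3f}, "
      f"95% percentile bootstrap CI: ({lo:.3f}, {hi:.3f})")
```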
Understanding sufficiency in one-way random effects model
In this paper we consider the one-way random effects analysis of variance model and examine the concept of sufficiency while comparing different allocation designs for the observations under the model. We explicitly provide the transformation leading to sufficiency of 'optimal' allocations as characterized in DeGroot (Ann Inst Stat Math 18:13–28, 1966) and Stepniak (Ann Inst Stat Math 34:175–180, 1982).
Improved U-tests for variance components in one-way random effects models
Based on a decomposition of a U-statistic, Nobre, Singer and Silvapulle (In Beyond Parametrics in Interdisciplinary Research, Festschrift to P.K. Sen (2008) 197–210 Institute of Mathematical Statistics) proposed a test for the hypothesis that the within-treatment variance component in a one-way random effects model is null, which is especially useful when very mild assumptions are imposed on the underlying distributions. We consider a bootstrap version of that U-test and evaluate its performance via simulation studies in different scenarios. The bootstrap U-test has better statistical properties than the original test, even in small samples. Furthermore, it is easy to implement and has a low computational cost. For illustrative purposes, we consider two examples with small, unbalanced datasets.
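The U-statistic decomposition itself is not reproduced here. In the same spirit of relying on very mild distributional assumptions, the sketch below runs a generic permutation test of "no group effect" in a small unbalanced one-way layout; the data and the choice of test statistic are illustrative assumptions, not the authors' U-test.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small unbalanced one-way layout: y[i] holds the observations for group i.
y = [np.array([5.1, 4.8, 5.6]),
     np.array([6.2, 6.0, 5.9, 6.4]),
     np.array([5.0, 5.3]),
     np.array([5.8, 6.1, 5.7])]

values = np.concatenate(y)
labels = np.concatenate([np.full(len(g), i) for i, g in enumerate(y)])

def between_group_stat(vals, labs):
    """Weighted between-group sum of squares; large values suggest a nonzero group effect."""
    grand = vals.mean()
    return sum(len(vals[labs == i]) * (vals[labs == i].mean() - grand) ** 2
               for i in np.unique(labs))

obs = between_group_stat(values, labels)

# Permutation null: if the group effect is absent, the labels are exchangeable.
B = 10_000
perm = np.empty(B)
for b in range(B):
    perm[b] = between_group_stat(values, rng.permutation(labels))

p_value = (1 + np.sum(perm >= obs)) / (B + 1)
print(f"permutation p-value: {p_value:.4f}")
```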
Measuring Relative Importance of Sources of Variation Without Using Variance
This article proposes a new parameter, the group or class dominance probability, to measure the relative importance of random effects in one-way random effects models. This parameter is the probability that the group random effect is larger in absolute size than the individual (or error) random effect. The new parameter compares the middle part of the distributions of the two sources of variation, and pays little attention to the tails of the distributions. This is in contrast to the traditional approach of comparing the variances of the random effects, which can be heavily influenced by the tails of the distributions. We suggest parametric and nonparametric estimators of the group dominance probability, and demonstrate the applicability of the ideas using data on blood pressure measurements.
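The article's estimators are not reproduced here; the sketch only illustrates the parameter. Assuming normal random effects A ~ N(0, σα²) and E ~ N(0, σε²), the dominance probability P(|A| > |E|) equals (2/π)·arctan(σα/σε), since A/E is Cauchy with scale σα/σε; the code checks this closed form by Monte Carlo. The normality assumption and the plug-in comment at the end are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_a, sigma_e = 1.5, 1.0   # group (class) and error standard deviations

# Closed form under normality: P(|A| > |E|) = (2/pi) * arctan(sigma_a / sigma_e).
closed_form = 2.0 / np.pi * np.arctan(sigma_a / sigma_e)

# Monte Carlo check of the dominance probability.
B = 1_000_000
A = rng.normal(0.0, sigma_a, B)
E = rng.normal(0.0, sigma_e, B)
mc = np.mean(np.abs(A) > np.abs(E))

print(f"closed form: {closed_form:.4f}, Monte Carlo: {mc:.4f}")

# A parametric plug-in estimator would replace sigma_a and sigma_e with estimates
# from a fitted one-way random effects model (e.g., ANOVA or REML).
```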
A Reminder of the Fallibility of the Wald Statistic
Computer programs often produce a parameter estimate θ̂ and an estimate v̂(θ̂) of its variance. Thus it is easy to compute a Wald statistic (θ̂ - θ0){v̂(θ̂)}^(-1/2) to test the null hypothesis θ = θ0. Hauck and Donner and Vaeth have identified situations in which the Wald statistic has poor power. We consider another example that is not in the classes discussed by those authors. We present data from a balanced one-way random effects analysis of variance (ANOVA) that illustrate the poor power of the Wald statistic compared to the usual F test. In this example the parameter of interest is the variance of the random effect. The power of the Wald test depends on the parameterization used, however, and a whole family of Wald statistics with p values ranging from 0 to 1 can be generated with power transformations of the random effect parameter.
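A hedged simulation sketch of the phenomenon (not the authors' data or exact statistics): it tests H0: σα² = 0 in a balanced one-way random effects ANOVA with the usual F test and with a Wald statistic built from the ANOVA estimator of σα² and a plug-in large-sample variance; the sample sizes and the particular variance formula are assumptions made for the demo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Balanced one-way random effects ANOVA: a groups, n observations per group.
a, n = 15, 4
mu, s2a, s2e = 0.0, 0.5, 1.0        # true between- and within-group variances
y = mu + rng.normal(0, np.sqrt(s2a), (a, 1)) + rng.normal(0, np.sqrt(s2e), (a, n))

ybar = y.mean(axis=1)
msb = n * np.sum((ybar - y.mean()) ** 2) / (a - 1)
msw = np.sum((y - ybar[:, None]) ** 2) / (a * (n - 1))

# Usual F test of H0: s2a = 0.
F = msb / msw
p_F = stats.f.sf(F, a - 1, a * (n - 1))

# Wald test of the same hypothesis: ANOVA estimator of s2a with a plug-in
# large-sample variance, Var(s2a_hat) ~ [2*MSB^2/(a-1) + 2*MSW^2/(a(n-1))] / n^2.
s2a_hat = (msb - msw) / n
se_hat = np.sqrt((2 * msb**2 / (a - 1) + 2 * msw**2 / (a * (n - 1))) / n**2)
z = s2a_hat / se_hat
p_wald = stats.norm.sf(z)            # one-sided, since s2a >= 0

print(f"F test p-value:    {p_F:.4f}")
print(f"Wald test p-value: {p_wald:.4f}")
```

Reparameterizing (for example, testing a power transformation of σα² and applying the delta method to the standard error) changes the Wald p-value, which is the parameterization dependence the abstract warns about.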