Search Results

3 results for "Berkson-type error"
Moment reconstruction and moment-adjusted imputation when exposure is generated by a complex, nonlinear random effects modeling process
For the classical, homoscedastic measurement error model, moment reconstruction (Freedman et al., 2004, 2008) and moment-adjusted imputation (Thomas et al., 2011) are appealing, computationally simple imputation-like methods for general model fitting. Like classical regression calibration, the idea is to replace the unobserved variable subject to measurement error with a proxy that can be used in a variety of analyses. Moment reconstruction and moment-adjusted imputation differ from regression calibration in that they attempt to match multiple features of the latent variable, and also to match some of the latent variable's relationships with the response and additional covariates. In this note, we consider a problem where true exposure is generated by a complex, nonlinear random effects modeling process, and develop analogues of moment reconstruction and moment-adjusted imputation for this case. This general model includes classical measurement errors, Berkson measurement errors, mixtures of Berkson and classical errors, and problems that are not measurement error problems but where the data-generating process for true exposure is a complex, nonlinear random effects modeling process. The methods are illustrated using the National Institutes of Health-AARP Diet and Health Study, where the latent variable is a dietary pattern score called the Healthy Eating Index-2005. We also show how our general model includes methods used in radiation epidemiology as a special case. Simulations are used to illustrate the methods.
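As a concrete illustration of the idea in this abstract, the sketch below simulates the classical, homoscedastic error model with a binary outcome and builds a moment-reconstruction proxy by matching the first two moments of the latent exposure within each outcome stratum. The data-generating values and variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of moment reconstruction (Freedman et al., 2004) under the
# classical, homoscedastic error model W = X + U, U ~ N(0, s2_u), with a
# binary outcome Y. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, s2_u = 5000, 0.5

# Generate true exposure X, binary outcome Y, and error-prone surrogate W.
x = rng.normal(0.0, 1.0, n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.2 + 0.8 * x))))
w = x + rng.normal(0.0, np.sqrt(s2_u), n)

# Moment reconstruction: within each Y stratum, shrink W toward its stratum
# mean so the proxy matches the first two moments of X given Y
# (E[W|Y] = E[X|Y] and Var(X|Y) = Var(W|Y) - s2_u under classical error).
x_mr = np.empty(n)
for level in (0, 1):
    idx = y == level
    m, v_w = w[idx].mean(), w[idx].var(ddof=1)
    g = np.sqrt(max(v_w - s2_u, 0.0) / v_w)
    x_mr[idx] = m + g * (w[idx] - m)

print("Var(W):", w.var(ddof=1).round(3), " Var(X_MR):", x_mr.var(ddof=1).round(3))
# X_MR can now replace the latent exposure in downstream model fitting,
# as with regression calibration, but matching more of its moments.
```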
Bias in the estimation of exposure effects with individual- or group-based exposure assessment
In this paper, we develop models of bias in estimates of exposure–disease associations for epidemiological studies that use group- and individual-based exposure assessments. In a study that uses a group-based exposure assessment, individuals are grouped according to shared attributes, such as job title or work area, and assigned an exposure score, usually the mean of concentration measurements made on samples drawn from the group. We consider bias in the estimation of exposure effects in the context of both linear and logistic regression disease models, with classical measurement error in the exposure model. To understand group-based exposure assessment, we introduce a quasi-Berkson error structure, which can be justified when a moderate number of exposure measurements is available from each group. In the quasi-Berkson error structure, the true value equals the observed one plus error, but the error is not independent of the observed value. The bias in estimates with individual-based assessment depends on all variance components in the exposure model and is smaller when the between-group and between-subject variances are large. In group-based exposure assessment, group means can be assumed to be either fixed or random effects. Under either assumption the behavior of the estimates is similar: regression coefficient estimates are less attenuated when the sample size used to estimate group means is large, the between-subject variability is small, and the spread between group means is large. However, if groups are treated as random effects, bias persists even with a large number of measurements from each group; this does not occur when group effects are treated as fixed. We illustrate these models in analyses of the associations between magnetic field exposure and cancer mortality among electric utility workers, and between carbon black exposure and respiratory symptoms.
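The contrast this abstract draws between individual- and group-based assessment can be seen in a small simulation: classical error in individual measurements attenuates a linear-model slope, while assigning each subject the mean of measurements from their group behaves in a quasi-Berkson way and attenuates far less when many measurements per group are available and the group means are spread out. All variance components below are illustrative assumptions, not values from the paper.

```python
# Sketch contrasting individual- vs group-based exposure assessment in a
# linear disease model. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
G, n_per, beta = 40, 50, 1.0
s2_between, s2_within, s2_meas = 1.0, 1.0, 1.0

mu_g = rng.normal(0, np.sqrt(s2_between), G)            # true group means
x = (mu_g[:, None] + rng.normal(0, np.sqrt(s2_within), (G, n_per))).ravel()
group = np.repeat(np.arange(G), n_per)
y = beta * x + rng.normal(0, 1.0, x.size)               # linear disease model

w = x + rng.normal(0, np.sqrt(s2_meas), x.size)         # classical error

def slope(expo, y):
    return np.cov(expo, y)[0, 1] / np.var(expo, ddof=1)

# Individual-based assessment: classical error attenuates the slope by
# roughly Var(X) / (Var(X) + s2_meas), about 2/3 with these values.
print("individual-based:", round(slope(w, y), 3))

# Group-based assessment: assign each subject the mean of the measured
# values in their group (a quasi-Berkson structure); attenuation shrinks
# as the number of measurements per group grows and the spread between
# group means widens.
w_bar = np.array([w[group == g].mean() for g in range(G)])
print("group-based:     ", round(slope(w_bar[group], y), 3))
```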
Bayesian Method for Improving Logistic Regression Estimates under Group-Based Exposure Assessment with Additive Measurement Errors
Group-based exposure assessment has been widely used in occupational epidemiology. When the sample size used to estimate group means is large, the attenuation in the estimated odds ratio is negligible. However, the bias is proportional to the between-subject variability and is affected by the differences between the true group means. We explore a Bayesian method that adjusts in a natural way for the extra uncertainty in the outcome model associated with using predicted values as exposures. We aim to improve on the naïve estimate by exploiting the properties of the Berkson-type error structure. We consider cases where the group means are far apart and the between-subject variance is large. Simulation results show that our Bayesian measurement error adjustment, applied after group-based exposure assessment, improves estimates of odds ratios when the between-subject variance is large and the group means are far apart.
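A minimal sketch of the kind of adjustment this abstract describes, assuming a Berkson-type structure in which the assigned group mean centers the distribution of true exposure: latent exposures and logistic regression coefficients are sampled jointly by Metropolis-within-Gibbs. The priors, error variance, and step sizes below are illustrative assumptions, not the authors' exact specification.

```python
# Bayesian adjustment sketch for Berkson-type error after group-based
# exposure assessment: true exposure x_i ~ N(w_g(i), tau2) around the
# assigned group mean, logistic outcome model, latent x_i integrated out
# by Metropolis-within-Gibbs. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
expit = lambda t: 1.0 / (1.0 + np.exp(-t))

# --- simulated data with Berkson structure ---
G, n_per, tau2 = 20, 30, 0.5
w_g = rng.normal(0, 1.0, G)                        # assigned group exposures
group = np.repeat(np.arange(G), n_per)
x_true = w_g[group] + rng.normal(0, np.sqrt(tau2), G * n_per)
y = rng.binomial(1, expit(-0.5 + 1.0 * x_true))
n = y.size

def loglik(beta, x):
    eta = beta[0] + beta[1] * x
    return np.sum(y * eta - np.logaddexp(0, eta))  # Bernoulli-logit log-lik

# --- Metropolis-within-Gibbs ---
beta = np.zeros(2)
x = w_g[group].copy()                              # initialize latent exposures
draws = []
for it in range(4000):
    # update beta with a random-walk step (flat prior assumed)
    prop = beta + rng.normal(0, 0.05, 2)
    if np.log(rng.uniform()) < loglik(prop, x) - loglik(beta, x):
        beta = prop
    # update each latent x_i; its prior is N(w_g(i), tau2) under Berkson error
    xp = x + rng.normal(0, 0.3, n)
    eta, etap = beta[0] + beta[1] * x, beta[0] + beta[1] * xp
    logr = (y * etap - np.logaddexp(0, etap)) - (y * eta - np.logaddexp(0, eta))
    logr += ((x - w_g[group])**2 - (xp - w_g[group])**2) / (2 * tau2)
    accept = np.log(rng.uniform(size=n)) < logr
    x = np.where(accept, xp, x)
    if it >= 2000:
        draws.append(beta.copy())

print("posterior mean of slope:", np.mean([d[1] for d in draws]).round(3))
```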