21,236 results for "Measurement errors"
Multivariate Functional Principal Component Analysis for Data Observed on Different (Dimensional) Domains
Existing approaches for multivariate functional principal component analysis are restricted to data on the same one-dimensional interval. The presented approach focuses on multivariate functional data on different domains that may differ in dimension, such as functions and images. The theoretical basis for multivariate functional principal component analysis is given in terms of a Karhunen-Loève Theorem. For the practically relevant case of a finite Karhunen-Loève representation, a relationship between univariate and multivariate functional principal component analysis is established. This offers an estimation strategy to calculate multivariate functional principal components and scores based on their univariate counterparts. For the resulting estimators, asymptotic results are derived. The approach can be extended to finite univariate expansions in general, not necessarily orthonormal, bases. It is also applicable to sparse functional data or data with measurement error. A flexible R implementation is available on CRAN. The new method is shown to be competitive with existing approaches for data observed on a common one-dimensional domain. The motivating application is a neuroimaging study, where the goal is to explore how longitudinal trajectories of a neuropsychological test score covary with FDG-PET brain scans at baseline. Supplementary material, including detailed proofs, additional simulation results, and software, is available online.
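The univariate-to-multivariate estimation strategy this abstract describes can be sketched roughly as follows. This is a toy illustration on simulated data, not the authors' implementation (their R package on CRAN is the authoritative version): run univariate FPCA on each element separately, then eigendecompose the covariance of the stacked univariate scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                      # subjects
t1 = np.linspace(0, 1, 50)   # domain of element 1 (a curve)
t2 = np.linspace(0, 1, 30)   # domain of element 2 (e.g. a flattened image slice)

# simulate correlated functional data on the two domains
z = rng.normal(size=(n, 2))
X1 = np.outer(z[:, 0], np.sin(2 * np.pi * t1)) + 0.1 * rng.normal(size=(n, len(t1)))
X2 = np.outer(z[:, 0] + 0.5 * z[:, 1], np.cos(np.pi * t2)) + 0.1 * rng.normal(size=(n, len(t2)))

def univariate_fpca(X, n_comp):
    """Univariate FPCA via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]   # subject-level scores
    return scores, Vt[:n_comp]            # scores, discretized eigenfunctions

s1, phi1 = univariate_fpca(X1, 3)
s2, phi2 = univariate_fpca(X2, 3)

# multivariate step: eigendecompose the covariance of the stacked univariate scores;
# multivariate eigenfunctions are linear combinations of the univariate ones
Z = np.hstack([s1, s2])
evals, evecs = np.linalg.eigh(np.cov(Z.T))
order = np.argsort(evals)[::-1]
mv_scores = Z @ evecs[:, order]           # multivariate FPC scores
```

The key point of the strategy is that the multivariate problem reduces to a small eigenproblem on the score covariance, regardless of the dimension of each element's domain.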
Using a Probabilistic Model to Assist Merging of Large-Scale Administrative Records
Since most social science research relies on multiple data sources, merging data sets is an essential part of researchers’ workflow. Unfortunately, a unique identifier that unambiguously links records is often unavailable, and data may contain missing and inaccurate information. These problems are especially severe when merging large-scale administrative records. We develop a fast and scalable algorithm to implement a canonical model of probabilistic record linkage that has many advantages over deterministic methods frequently used by social scientists. The proposed methodology efficiently handles millions of observations while accounting for missing data and measurement error, incorporating auxiliary information, and adjusting for uncertainty about merging in post-merge analyses. We conduct comprehensive simulation studies to evaluate the performance of our algorithm in realistic scenarios. We also apply our methodology to merging campaign contribution records, survey data, and nationwide voter files. An open-source software package is available for implementing the proposed methodology.
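The canonical probabilistic linkage model the abstract refers to scores candidate record pairs by agreement weights. A minimal Fellegi–Sunter-style sketch is below; the field names and the m- and u-probabilities (agreement rates among true matches and true non-matches) are invented for illustration, and a real implementation would estimate them from the data.

```python
import math

# illustrative m- and u-probabilities per field (hypothetical values)
m = {"first": 0.95, "last": 0.97, "year": 0.90}   # P(fields agree | true match)
u = {"first": 0.01, "last": 0.005, "year": 0.05}  # P(fields agree | non-match)

def match_weight(rec_a, rec_b):
    """Sum of log2 agreement/disagreement weights over the compared fields."""
    w = 0.0
    for field in m:
        if rec_a[field] == rec_b[field]:
            w += math.log2(m[field] / u[field])
        else:
            w += math.log2((1 - m[field]) / (1 - u[field]))
    return w

a = {"first": "ana", "last": "silva", "year": 1980}
b = {"first": "ana", "last": "silva", "year": 1981}  # likely the same person
c = {"first": "joe", "last": "brown", "year": 1955}  # clearly different

print(match_weight(a, b))  # high score: two strong agreements, one disagreement
print(match_weight(a, c))  # low score: disagreement on every field
```

Pairs whose total weight exceeds a threshold are declared matches; the paper's contribution is making this model scale to millions of records while propagating linkage uncertainty into post-merge analyses.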
Measuring Stability and Change in Personal Culture Using Panel Data
Models of population-wide cultural change tend to invoke one of two broad models of individual change. One approach theorizes people actively updating their beliefs and behaviors in the face of new information. The other argues that, following early socialization experiences, dispositions are stable. We formalize these two models, elaborate empirical implications of each, and derive a simple combined model for comparing them using panel data. We test this model on 183 attitude and behavior items from the 2006 to 2014 rotating panels of the General Social Survey. The pattern of results is complex but more consistent with the settled dispositions model than with the active updating model. Most of the observed change in the GSS appears to be short-term attitude change or measurement error rather than persisting changes. When persistent change occurs, it is somewhat more likely to occur in younger people and for public behaviors and beliefs about high-profile issues than for private attitudes. We argue that we need both models in our theory of cultural evolution but that we need more research on the circumstances under which each is more likely to apply.
Measurement error is often neglected in medical literature: a systematic review
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. Of these 247, 83% did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary.
Firm strategic behavior and the measurement of knowledge flows with patent citations
Research Summary This research addresses firms' use of external knowledge sources to develop patented inventions and explores the validity of patent citations as an indicator of interfirm knowledge flows. By comparing patent citations with primary data reported by the inventors, we uncover systematic measurement errors in patent citations and show that they depend on the firms' patent strategies (e.g., to reduce the risk of imitation or litigation), the source of knowledge employed (e.g., competitors, users), the technology of the underlying invention, and the institutional characteristics of the patent system. Our findings about the role of these factors in external knowledge sourcing and citing propensity highlight the importance of firms' strategic behavior and offer novel insights for the use of patent citations as an indicator of knowledge flows. Managerial Summary Firms' open innovation strategies rely on the sourcing of knowledge from other organizations. Tracing these knowledge flows is difficult, so empirical research on this matter typically tracks them using the citations that patents make to prior art. However, patent citations might also be added for reasons other than the actual transfer of knowledge. We use primary information from a large survey of inventors to assess the accuracy of patent citations as a measure of knowledge flows, and we find evidence of measurement errors that depend on the applicants' patent strategies, the type of knowledge sources used, the filing jurisdiction, and the technology of the underlying invention. We offer insights to evaluate the settings in which patent citations are a reliable measure of knowledge flows.
INFERENCE ON CAUSAL EFFECTS IN A GENERALIZED REGRESSION KINK DESIGN
We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. We characterize a broad class of models in which a sharp "Regression Kink Design" (RKD or RK Design) identifies a readily interpretable treatment-on-the-treated parameter (Florens, Heckman, Meghir, and Vytlacil (2008)). We also introduce a "fuzzy regression kink design" generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement errors in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. Using a kink in the unemployment benefit formula, we apply a fuzzy RKD to empirically estimate the effect of benefit rates on unemployment durations in Austria.
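The sharp RKD estimand is the change in the slope of the outcome at the kink divided by the change in the slope of the policy rule. A stylized simulation (invented data-generating process, global linear fits on each side rather than the local-polynomial estimators used in practice) shows the ratio recovering the true effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
v = rng.uniform(-1, 1, n)               # assignment variable, kink at 0
b = np.where(v < 0, 0.8 * v, 0.2 * v)   # policy rule: slope kinks from 0.8 to 0.2
tau = 1.5                               # true causal effect of the policy b on y
y = tau * b + 0.3 * v + rng.normal(scale=0.2, size=n)

def slope(x, z, side):
    """Linear slope of z on x on one side of the kink (global fit for simplicity)."""
    mask = (x < 0) if side == "left" else (x >= 0)
    return np.polyfit(x[mask], z[mask], 1)[0]

# sharp RKD estimand: kink in the outcome divided by kink in the policy rule
num = slope(v, y, "right") - slope(v, y, "left")
den = slope(v, b, "right") - slope(v, b, "left")
print(num / den)   # should be close to tau
```

The fuzzy generalization in the paper replaces the deterministic kink in `b` with the kink in its conditional expectation, accommodating noncompliance and measurement error in the observed assignment and policy variables.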
Measurement error, fixed effects, and false positives in accounting research
We show theoretically and empirically that measurement error can bias in favor of falsely rejecting a true null hypothesis (i.e., a “false positive”) and that regression models with high-dimensional fixed effects can exacerbate measurement error bias and increase the likelihood of false positives. We replicate inferences from prior work in a setting where we can directly observe the amount of measurement error and show that the combination of measurement error and fixed effects materially inflates coefficients and distorts inferences. We provide researchers with a simple diagnostic tool to assess the possibility that the combination of measurement error and fixed effects might give rise to a false positive, and encourage researchers to triangulate inferences across multiple empirical proxies and multiple fixed effect structures.
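One mechanism behind the abstract's claim can be shown in a few lines: demeaning by fixed effects removes true signal variance but not measurement error variance, so the error share of the remaining variation rises. This hedged sketch (an invented panel, not the paper's replication setting) contrasts pooled OLS with the within estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
units, periods = 500, 5
alpha = rng.normal(scale=2.0, size=(units, 1))              # unit fixed effects
x = alpha + rng.normal(scale=0.5, size=(units, periods))    # true regressor
y = 1.0 * x + rng.normal(scale=0.5, size=(units, periods))  # true slope = 1
x_obs = x + rng.normal(scale=0.5, size=(units, periods))    # error-prone proxy

def ols_slope(xv, yv):
    """Bivariate OLS slope."""
    xv, yv = xv - xv.mean(), yv - yv.mean()
    return (xv * yv).sum() / (xv ** 2).sum()

# pooled OLS: attenuation is mild because the fixed effects contribute true variance
b_pooled = ols_slope(x_obs.ravel(), y.ravel())

# within (fixed-effects) estimator: demeaning by unit strips out the alpha variance,
# so the noise share of the remaining variation rises and attenuation worsens sharply
xw = (x_obs - x_obs.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
b_within = ols_slope(xw, yw)
print(b_pooled, b_within)
```

Here the bias attenuates a nonzero coefficient; the paper's further point is that when the mismeasured proxy is correlated with omitted determinants of the outcome, the same mechanism can inflate coefficients and produce false positives under a true null.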
Sample size recommendations for studies on reliability and measurement error: an online application based on simulation studies
Simulation studies were performed to investigate for which conditions of sample size of patients (n) and number of repeated measurements (k) (e.g., raters) the optimal (i.e., balancing precision and efficiency) estimations of intraclass correlation coefficients (ICCs) and standard errors of measurement (SEMs) can be achieved. Subsequently, we developed an online application that shows the implications for decisions about sample sizes in reliability studies. We simulated scores for repeated measurements of patients, based on different conditions of n, k, the correlation between scores on repeated measurements (r), the variance between patients’ test scores (v), and the presence of systematic differences within k. The performance of the reliability parameters (based on one-way and two-way effects models) was determined by calculating bias, mean squared error (MSE), and the coverage and width of the confidence intervals (CIs). We showed that the gain in precision (i.e., the largest change in MSE) of the ICC and SEM parameters diminishes at larger values of n or k. Next, we showed that the correlation and the presence of systematic differences have the most influence on the MSE values, the coverage, and the CI width. This influence differed between the models. As measurements can be expensive and burdensome for patients and professionals, we recommend using an efficient design, in terms of the sample size and number of repeated measurements, to arrive at precise ICC and SEM estimates. Utilizing these results, a user-friendly online application was developed to decide upon the optimal design, as ‘one size fits all’ does not hold.
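The quantities being simulated above follow from a one-way ANOVA decomposition of repeated measurements. A minimal sketch (invented variance components, one simulated condition rather than the paper's full grid) computes the ICC(1) and SEM that the study's precision criteria are evaluated on:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 3                                       # patients, repeated measurements
true_score = rng.normal(scale=2.0, size=(n, 1))    # between-patient variance = 4
scores = true_score + rng.normal(scale=1.0, size=(n, k))  # error variance = 1

# one-way ANOVA decomposition
grand = scores.mean()
msb = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)            # between patients
msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within

icc = (msb - msw) / (msb + (k - 1) * msw)  # ICC(1,1) under the one-way model
sem = np.sqrt(msw)                          # standard error of measurement
print(icc, sem)                             # population values here: 0.8 and 1.0
```

Repeating this over many draws for each (n, k) condition and summarizing bias, MSE, and CI coverage of `icc` and `sem` reproduces the kind of precision-versus-cost trade-off the online application is built around.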
Functional and Structural Methods With Mixed Measurement Error and Misclassification in Covariates
Covariate measurement imprecision or errors arise frequently in many areas. It is well known that ignoring such errors can substantially degrade the quality of inference or even yield erroneous results. Although in practice both covariates subject to measurement error and covariates subject to misclassification can occur, research attention in the literature has mainly focused on addressing either one of these problems separately. To fill this gap, we develop estimation and inference methods that accommodate both characteristics simultaneously. Specifically, we consider measurement error and misclassification in generalized linear models under the scenario that an external validation study is available, and systematically develop a number of effective functional and structural methods. Our methods can be applied to different situations to meet various objectives.
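The external-validation setup the abstract assumes lends itself to a standard structural correction, regression calibration, sketched below with invented data (this illustrates the general idea of using a validation study, not the authors' specific estimators): the validation sample, where both the true covariate and its proxy are observed, supplies the calibration slope used to de-attenuate the main-study estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# external validation study: both the true covariate x and its proxy w are observed
nv = 300
xv = rng.normal(size=nv)
wv = xv + rng.normal(scale=0.7, size=nv)

# main study: only the proxy w and the outcome y are observed; true slope = 2
nm = 2000
xm = rng.normal(size=nm)
wm = xm + rng.normal(scale=0.7, size=nm)
ym = 2.0 * xm + rng.normal(scale=0.5, size=nm)

def slope(x, y):
    """Bivariate OLS slope."""
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / (x ** 2).sum()

b_naive = slope(wm, ym)   # attenuated towards zero by the measurement error
lam = slope(wv, xv)       # calibration slope of E[x | w] from the validation data
b_rc = b_naive / lam      # regression-calibration corrected estimate
print(b_naive, b_rc)
```

Handling a misclassified discrete covariate alongside `w`, as the paper does, requires replacing the calibration slope with a misclassification (matrix) correction estimated from the same validation study.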
Mind the Gap: Accounting for Measurement Error and Misclassification in Variables Generated via Data Mining
The application of predictive data mining techniques in information systems research has grown in recent years, likely because of their effectiveness and scalability in extracting information from large amounts of data. A number of scholars have sought to combine data mining with traditional econometric analyses. Typically, data mining methods are first used to generate new variables (e.g., text sentiment), which are added into subsequent econometric models as independent regressors. However, because prediction is almost always imperfect, variables generated from the first-stage data mining models inevitably contain measurement error or misclassification. These errors, if ignored, can introduce systematic biases into the second-stage econometric estimations and threaten the validity of statistical inference. In this commentary, we examine the nature of this bias, both analytically and empirically, and show that it can be severe even when data mining models exhibit relatively high performance. We then show that this bias becomes increasingly difficult to anticipate as the functional form of the measurement error or the specification of the econometric model grows more complex. We review several methods for error correction and focus on two simulation-based methods, SIMEX and MC-SIMEX, which can be easily parameterized using standard performance metrics from data mining models, such as error variance or the confusion matrix, and can be applied under a wide range of econometric specifications. Finally, we demonstrate the effectiveness of SIMEX and MC-SIMEX through simulations and by applying the methods to econometric estimations employing variables mined from three real-world data sets related to travel, social networking, and crowdfunding campaign websites. The online appendix is available at https://doi.org/10.1287/isre.2017.0727.
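The SIMEX idea is simple enough to sketch in full: re-estimate the model after adding extra simulated noise at several inflation levels, then extrapolate the trend in the estimates back to the no-error point (lambda = -1). This is a minimal bivariate version with an invented data-generating process, not the paper's implementation; note the quadratic extrapolant reduces, but typically does not fully remove, the attenuation bias.

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma_u = 2000, 0.8                  # sample size, known measurement-error s.d.
x = rng.normal(size=n)
w = x + rng.normal(scale=sigma_u, size=n)    # error-prone covariate (e.g., mined variable)
y = 1.0 * x + rng.normal(scale=0.3, size=n)  # true slope = 1

def slope(xv, yv):
    """Bivariate OLS slope."""
    xv, yv = xv - xv.mean(), yv - yv.mean()
    return (xv * yv).sum() / (xv ** 2).sum()

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    # average over re-simulations with total error variance (1 + lam) * sigma_u^2
    sims = [slope(w + rng.normal(scale=sigma_u * np.sqrt(lam), size=n), y)
            for _ in range(50)]
    est.append(np.mean(sims))

# quadratic extrapolation of the estimates back to lambda = -1 (the no-error point)
coef = np.polyfit(lambdas, est, 2)
b_simex = np.polyval(coef, -1.0)
print(est[0], b_simex)   # naive (attenuated) estimate vs SIMEX-corrected estimate
```

MC-SIMEX applies the same simulate-then-extrapolate logic to misclassified discrete variables, inflating the misclassification via powers of the confusion matrix instead of adding Gaussian noise.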