705 results for "external validity"
Unifying SoTL Methodology: Internal and External Validity
A broad consensus exists that the use of appropriate methods is important in the Scholarship of Teaching and Learning. However, methodological controversies arise around what constitutes acceptable evidence, whether one needs a control group, how generalizable results must be, and other similar issues. Much SoTL work, I argue, asks questions about how much a particular treatment (innovation) caused an effect (student learning), and how the results found in one particular context can be extended outside that context (generalizability). These concepts, known as internal validity and external validity, respectively, provide a common point of departure for much scholarship on teaching and learning. This paper addresses these concepts and demonstrates how they can unite much of what divides us within the methodological realm of SoTL.
Causal inference and the data-fusion problem
We review concepts, principles, and tools that unify current approaches to causal analysis and attend to new challenges presented by big data. In particular, we address the problem of data fusion—piecing together multiple datasets collected under heterogeneous conditions (i.e., different populations, regimes, and sampling methods) to obtain valid answers to queries of interest. The availability of multiple heterogeneous datasets presents new opportunities to big data analysts, because the knowledge that can be acquired from combined data would not be possible from any individual source alone. However, the biases that emerge in heterogeneous environments require new analytical tools. Some of these biases, including confounding, sampling selection, and cross-population biases, have been addressed in isolation, largely in restricted parametric models. We here present a general, nonparametric framework for handling these biases and, ultimately, a theoretical solution to the problem of data fusion in causal inference tasks.
Generalizability of heterogeneous treatment effect estimates across samples
The extent to which survey experiments conducted with nonrepresentative convenience samples are generalizable to target populations depends critically on the degree of treatment effect heterogeneity. Recent inquiries have found a strong correspondence between sample average treatment effects estimated in nationally representative experiments and in replication studies conducted with convenience samples. We consider here two possible explanations: low levels of effect heterogeneity or high levels of effect heterogeneity that are unrelated to selection into the convenience sample. We analyze subgroup conditional average treatment effects using 27 original–replication study pairs (encompassing 101,745 individual survey responses) to assess the extent to which subgroup effect estimates generalize. While there are exceptions, the overwhelming pattern that emerges is one of treatment effect homogeneity, providing a partial explanation for strong correspondence across both unconditional and conditional average treatment effect estimates.
Smallholder farmers and contract farming in developing countries
Poverty is prevalent in the small-farm sector of many developing countries. A large literature suggests that contract farming—a preharvest agreement between farmers and buyers—can facilitate smallholder market participation, improve household welfare, and promote rural development. These findings have influenced the development policy debate, but the external validity of the extant evidence is limited. Available studies typically focus on a single contract scheme or on a small geographical area in one country. We generate evidence that is generalizable beyond a particular contract scheme, crop, or country, using nationally representative survey data from 6 countries. We focus on the implications of contract farming for household income and labor demand, finding that contract farmers obtain higher incomes than their counterparts without contracts only in some countries. Contract farmers in most countries exhibit increased demand for hired labor, which suggests that contract farming stimulates employment, yet we do not find evidence of spillover effects at the community level. Our results challenge the notion that contract farming unambiguously improves welfare. We discuss why our results may diverge from previous findings and propose research designs that yield greater internal and external validity. Implications for policy and research are relevant beyond contract farming.
Is it possible to overcome issues of external validity in preclinical animal research? Why most animal models are bound to fail
Background: The pharmaceutical industry is in the midst of a productivity crisis and rates of translation from bench to bedside are dismal. Patients are being let down by the current system of drug discovery; of the several thousand diseases that affect humans, only a minority have any approved treatments, and many of these cause adverse reactions in humans. A predominant reason for the poor rate of translation from bench to bedside is generally held to be the failure of preclinical animal models to predict clinical efficacy and safety. Attempts to explain this failure have focused on problems of internal validity in preclinical animal studies (e.g. poor study design, lack of measures to control bias). However, there has been less discussion of another key factor that influences translation, namely the external validity of preclinical animal models. Review of problems of external validity: External validity is the extent to which research findings derived in one setting, population or species can be reliably applied to other settings, populations and species. This paper argues that the reliable translation of findings from animals to humans will only occur if preclinical animal studies are both internally and externally valid. We review several key aspects that impact external validity in preclinical animal research, including unrepresentative animal samples, the inability of animal models to mimic the complexity of human conditions, the poor applicability of animal models to clinical settings and animal–human species differences. We suggest that while some problems of external validity can be overcome by improving animal models, the problem of species differences can never be overcome and will always undermine external validity and the reliable translation of preclinical findings to humans. Conclusion: We conclude that preclinical animal models can never be fully valid due to the uncertainties introduced by species differences. We suggest that even if the next several decades were spent improving the internal and external validity of animal models, the clinical relevance of those models would, in the end, improve only to some extent. This is because species differences would continue to make extrapolation from animals to humans unreliable. We suggest that to improve clinical translation and ultimately benefit patients, research should focus instead on human-relevant research methods and technologies.
Basic characteristics and representativeness of the German Disease Analyzer database
Purpose: The aim of this study was to evaluate the representativeness of diagnoses in the Disease Analyzer (DA) database for major chronic diseases (cancer, dementia, diabetes). Materials and methods: DA contains anonymized longitudinal data on drug prescriptions, diagnoses as well as medical and demographic data directly obtained from the computer system of a representative sample of practices throughout Germany. DA contains data from 2,498 practices with 7.8 million patients (2017). The distribution and sex-specific incidence of various cancer subsites among new cancer cases, the age- and sex-specific prevalence of dementia, and the prevalence of diabetes were assessed. National reference data were obtained from official sources. Results: Mean age (43 years) and sex distribution (47% men) of primary care patients in DA were similar to the German population. Among incident cancer cases, there was good agreement between DA data and national data with respect to the various cancer subsites (e.g., breast cancer: DA 17%; reference: 15%). Furthermore, sex distribution was largely similar. The age distribution of prevalent dementia was similar to national reference data, both in men (80–84 years: DA: 26.8%; reference: 27.0%) and in women (80–84 years: DA: 24.6%; reference: 24.1%). Diabetes prevalence in the DA (10.7%) was higher than in claims data from physicians (9.8%) or patients from statutory health insurances (9.9%). Conclusion: There was good agreement of the incidence or prevalence of major chronic diseases in the outpatient DA with German reference data. The higher diabetes prevalence in the DA is due to the increased number of outpatient visits of diabetes patients.
Toward Establishing Internal Validity for Correlated Gene Expression Measures in Imaging Genomics of Functional Networks: Why Distance Corrections and External Face Validity Alone Fall Short. Reply to “Distance Is Not Everything in Imaging Genomics of Functional Networks: Reply to a Commentary on Correlated Gene Expression Supports Synchronous Activity in Brain Networks”
The primary claim of the Richiardi et al. (2015) Science article is that a measure of correlated gene expression, significant strength fraction (SSF), is related to resting state fMRI (rsfMRI) networks. However, there is still debate about this claim and whether spatial proximity, in the form of contiguous clusters, accounts entirely, or only partially, for SSF (Pantazatos and Li, 2017; Richiardi et al., 2017). Here, 13 distributed networks were simulated by combining 34 contiguous clusters randomly placed throughout cortex, with resulting edge distance distributions similar to rsfMRI networks. Cluster size was modulated (6-15 mm radius) to test its influence on SSF false positive rate (SSF-FPR) among the simulated \"noise\" networks. The contribution of rsfMRI networks on SSF-FPR was examined by comparing simulated networks whose clusters were sampled from: (1) all 1,777 cortical tissue samples, (2) all samples, but with non-rsfMRI cluster centers, and (3) only 1,276 non-rsfMRI samples. Results show that SSF-FPR is influenced only by cluster size ( > 0.9, < 0.001), not by rsfMRI samples. Simulations using 14 mm radius clusters most resembled rsfMRI networks. When thresholding at < 10 , the SSF-FPR was 0.47. Genes that maximize SF have high spatial autocorrelation. In conclusion, SSF is unrelated to rsfMRI networks. The main conclusion of Richiardi et al. (2015) is based on a finding that is ∼50% likely to be a false positive, not <0.01% as originally reported in the article (Richiardi et al., 2015). We discuss why distance corrections alone and external face validity are insufficient to establish a trustworthy relationship between correlated gene expression measures and rsfMRI networks, and propose more rigorous approaches to preclude common pitfalls in related studies.
Do individuals have consistent risk preferences across domains? Evidence from the Japanese insurance market
Risk attitude plays an important role in analyzing decision making under uncertainty. It is essential to confirm whether the risk aversion parameter in a certain situation, called a "domain," can be applied to other situations. Using a dataset on hospitalization insurance policies in Japan, this study tests whether individuals' risk preferences remain consistent across domains. Under the assumption of an expected-utility maximizer, we derive a plausible distribution of the degree of risk aversion. We find that the degree of risk aversion is consistent between hospitalization benefits and additional insurance for specific diseases. By contrast, the degree of risk aversion from hospitalization benefits has a negative relationship with that based on a survey question on the self-assessment of general preferences. This result indicates that imputing risk aversion from the literature would markedly distort research results if the characteristics of the domains targeted by previous research and this study differ.
On the External Validity of Social Preference Games: A Systematic Lab-Field Study
We present a lab-field experiment designed to systematically assess the external validity of social preferences elicited in a variety of experimental games. We do this by comparing behavior in the different games with several behaviors elicited in the field and with self-reported behaviors exhibited in the past, using the same sample of participants. Our results show that the experimental social preference games do a poor job explaining both social behaviors in the field and social behaviors from the past. We also include a systematic review and meta-analysis of previous literature on the external validity of social preference games. Data are available at https://doi.org/10.1287/mnsc.2017.2908. This paper was accepted by John List, behavioral economics.
Who is in this study, anyway? Guidelines for a useful Table 1
Epidemiologic and clinical research papers often describe the study sample in the first table. If well-executed, this “Table 1” can illuminate potential threats to internal and external validity. However, little guidance exists on best practices for designing a Table 1, especially for complex study designs and analyses. We aimed to summarize and extend the literature related to reporting descriptive statistics. In consultation with existing guidelines, we synthesized and developed reporting recommendations driven by study design and focused on transparency related to potential threats to internal and external validity. We describe a basic structure for Table 1 and discuss simple modifications in terms of columns, rows, and cells to enhance a reader's ability to judge both internal and external validity. We further highlight several analytic complexities common in epidemiologic research (missing data, sample weights, clustered data, and interaction) and describe possible variations to Table 1 to maintain and add clarity about study validity in light of these issues. We discuss considerations and tradeoffs in Table 1 related to breadth and comprehensiveness vs. parsimony and reader-friendliness. We anticipate that our work will guide authors considering layouts for Table 1, with attention to the reader's perspective.