Search Results

    Filters
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From - To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location
4,313 results for "Experimental replication"
Stepping in the same river twice: replication in biological research
An international team of biologists, philosophers, and historians of science explores the critically important process of replication in biological and biomedical research. Without replication, the trustworthiness of scientific research remains in doubt. Although replication is increasingly recognized as a central problem in many scientific disciplines, repeating the same scientific observations or experiments, or reproducing the same set of analyses from existing data, is remarkably difficult. In this important volume, an international team of biologists, philosophers, and historians of science addresses challenges and solutions for valid replication of research in medicine, ecology, natural history, agriculture, physiology, and computer science. After the introduction to important concepts and historical background, the book offers paired chapters that provide theoretical overviews followed by detailed case studies. These studies range widely in topics, from infectious diseases and environmental monitoring to museum collections, meta-analysis, bioinformatics, and more. The closing chapters explicate and quantify problems in the case studies, and the volume concludes with important recommendations for best practices. -- Provided by publisher.
The Value of Direct Replication
Reproducibility is the cornerstone of science. If an effect is reliable, any competent researcher should be able to obtain it when using the same procedures with adequate statistical power. Two of the articles in this special section question the value of direct replication by other laboratories. In this commentary, I discuss the problematic implications of some of their assumptions and argue that direct replication by multiple laboratories is the only way to verify the reliability of an effect.
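As a back-of-the-envelope illustration of what "adequate statistical power" demands in a direct replication (a sketch of my own, not taken from the commentary; the effect size d = 0.4 is purely an assumption):

    # Sample size per group needed to detect a standardized mean difference
    # (Cohen's d) of 0.4 with 80% power at alpha = .05 in a two-sided t test.
    # Requires statsmodels; the d = 0.4 value is an illustrative assumption.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                              power=0.80, alternative='two-sided')
    print(round(n_per_group))  # about 99 participants per group

Smaller assumed effects push the required sample size up quickly, which is one reason replication attempts often plan for larger samples than the original study.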
Facts Are More Important Than Novelty: Replication in the Education Sciences
Despite increased attention to methodological rigor in education research, the field has focused heavily on experimental design and not on the merit of replicating important results. The present study analyzed the complete publication history of the current top 100 education journals ranked by 5-year impact factor and found that only 0.13% of education articles were replications. Contrary to previous findings in medicine, but similar to psychology, the majority of education replications successfully replicated the original studies. However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. The results emphasize the importance of third-party, direct replications in helping education research improve its ability to shape education policy and practice.
The Alleged Crisis and the Illusion of Exact Replication
There has been increasing criticism of the way psychologists conduct and analyze studies. These critiques as well as failures to replicate several high-profile studies have been used as justification to proclaim a "replication crisis" in psychology. Psychologists are encouraged to conduct more "exact" replications of published studies to assess the reproducibility of psychological research. This article argues that the alleged "crisis of replicability" is primarily due to an epistemological misunderstanding that emphasizes the phenomenon instead of its underlying mechanisms. As a consequence, a replicated phenomenon may not serve as a rigorous test of a theoretical hypothesis because identical operationalizations of variables in studies conducted at different times and with different subject populations might test different theoretical constructs. Therefore, we propose that for meaningful replications, attempts at reinstating the original circumstances are not sufficient. Instead, replicators must ascertain that conditions are realized that reflect the theoretical variable(s) manipulated (and/or measured) in the original study.
Psychological Measurement and the Replication Crisis: Four Sacred Cows
Although there are surely multiple contributors to the replication crisis in psychology, one largely unappreciated source is a neglect of basic principles of measurement. We consider 4 sacred cows-widely shared and rarely questioned assumptions-in psychological measurement that may fuel the replicability crisis by contributing to questionable measurement practices. These 4 sacred cows are: (a) we can safely rely on the name of a measure to infer its content; (b) reliability is not a major concern for laboratory measures; (c) using measures that are difficult to collect obviates the need for large sample sizes; and (d) convergent validity data afford sufficient evidence for construct validity. For items a and d, we provide provisional data from recent psychological journals that support our assertion that such beliefs are prevalent among authors. To enhance the replicability of psychological science, researchers will need to become vigilant against erroneous assumptions regarding both the psychometric properties of their measures and the implications of these psychometric properties for their studies. Public Significance Statement This article outlines four widely held but erroneous measurement assumptions that may adversely affect the accuracy and replicability of psychological findings. The effects of questionable measurement practices stemming from these assumptions are discussed, and new data bearing on the prevalence of these assumptions in academic journals are presented. In addition, this article offers several potential remedies that researchers and journals can implement to improve the measurement of psychological constructs.
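To make the reliability point concrete, here is a small numerical sketch of my own using the classical attenuation formula (the figures are illustrative and not from the article):

    # Spearman's attenuation formula: r_observed = r_true * sqrt(rel_x * rel_y).
    # Unreliable measures shrink the correlation a study can expect to observe,
    # so a design powered for the "true" effect ends up underpowered in practice.
    import math

    r_true = 0.40                # hypothetical true correlation between constructs
    rel_x, rel_y = 0.60, 0.60    # hypothetical reliabilities of the two measures
    r_obs = r_true * math.sqrt(rel_x * rel_y)
    print(f"expected observed correlation: {r_obs:.2f}")  # 0.24, well below 0.40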
What Is Meant by "Replication" and Why Does It Encounter Resistance in Economics?
This paper discusses recent trends in the use of replications in economics. We include the results of recent replication studies that have attempted to identify replication rates within the discipline. These studies generally find that replication rates are relatively low. We then consider obstacles to undertaking replication studies and highlight replication initiatives in psychology and political science, behind which economics appears to lag.
Replications in Psychology Research: How Often Do They Really Occur?
Recent controversies in psychology have spurred conversations about the nature and quality of psychological research. One topic receiving substantial attention is the role of replication in psychological science. Using the complete publication history of the 100 psychology journals with the highest 5-year impact factors, the current article provides an overview of replications in psychological research since 1900. This investigation revealed that roughly 1.6% of all psychology publications used the term replication in text. A more thorough analysis of 500 randomly selected articles revealed that only 68% of articles using the term replication were actual replications, resulting in an overall replication rate of 1.07%. Contrary to previous findings in other fields, this study found that the majority of replications in psychology journals reported similar findings to their original studies (i.e., they were successful replications). However, replications were significantly less likely to be successful when there was no overlap in authorship between the original and replicating articles. Moreover, despite numerous systemic biases, the rate at which replications are being published has increased in recent decades.
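The overall rate quoted above is simply the product of the two proportions; a quick arithmetic check (the small gap from 1.07% reflects rounding in the "roughly 1.6%" figure):

    # Arithmetic behind the reported overall replication rate (illustrative check).
    share_using_term = 0.016   # ~1.6% of publications use the term "replication"
    share_actual = 0.68        # ~68% of those are actual replications
    print(f"{share_using_term * share_actual:.2%}")  # 1.09%, close to the reported 1.07%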
A Proposal to Organize and Promote Replications
We make a two-pronged proposal to (i) strengthen the incentives for replication work and (ii) better organize and draw attention to the replications that are conducted. First we propose that top journals publish short “replication reports.” These reports could summarize novel work replicating an existing high-impact paper, or they could highlight a replication result embedded in a wider-scope published paper. Second, we suggest incentivizing replications with the currency of our profession: citations. Enforcing a norm of citing replication work alongside the original would provide incentives for replications to both authors and journals.
The Purpose and Practice of Exploratory and Confirmatory Factor Analysis in Psychological Research: Decisions for Scale Development and Validation
There are many high-quality resources available which describe best practices in the implementation of both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Yet, partly owing to the complexity of these procedures, confusion persists among psychologists with respect to the implementation of EFA and CFA. Primary among these misunderstandings is the very mathematical distinction between EFA and CFA. The current paper uses a brief example to illustrate the difference between the statistical models underlying EFA and CFA, both of which are particular instantiations of the more general common factor model. Next, important considerations for the implementation of EFA and CFA discussed in this paper include the need to account for the categorical nature of item-level observed variables in factor analyses, the use of factor analysis in studies of the psychometric properties of new tests or questionnaires and previously developed tests, decisions about whether to use EFA or CFA in these contexts, and the importance of replication of factor analytic models in the ongoing pursuit of validation.
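The mathematical distinction the authors emphasize can be sketched in generic common-factor-model notation (my own summary, not the paper's example): both EFA and CFA assume the same underlying model and differ only in which loadings are constrained in advance.

    x = \Lambda f + \varepsilon, \qquad \operatorname{Cov}(x) = \Lambda \Phi \Lambda^{\top} + \Theta

    \Lambda_{\mathrm{EFA}} =
    \begin{pmatrix}
      \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \\ \lambda_{31} & \lambda_{32} \\
      \lambda_{41} & \lambda_{42} \\ \lambda_{51} & \lambda_{52} \\ \lambda_{61} & \lambda_{62}
    \end{pmatrix}
    \qquad
    \Lambda_{\mathrm{CFA}} =
    \begin{pmatrix}
      \lambda_{11} & 0 \\ \lambda_{21} & 0 \\ \lambda_{31} & 0 \\
      0 & \lambda_{42} \\ 0 & \lambda_{52} \\ 0 & \lambda_{62}
    \end{pmatrix}

In EFA every element of \Lambda is estimated (the solution is identified only up to rotation); in CFA the zeros encode the analyst's a priori hypothesis about which items measure which factor, which is what makes the model confirmatory and testable.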
Fostering Transparency and Reproducibility in Psychological Science
Psychological science is hard. This short article focuses on two issues. One has to do with the importance of understanding statistical power and how post hoc data explorations and selective reporting can lead to exaggerated estimates of the size of effects and the strength of relationships (which in turn contribute to replication failures). The other topic is tools research psychologists can use to improve the reproducibility of their procedures and analyses. The article closes with a comment on the deeper challenge of improving the usefulness and testability of theories in psychology. Public Significance Statement Psychologists (and other life scientists) often use statistical analyses of data collected from samples of participants to make inferences about causal and correlational relationships in populations. This article aimed to improve understanding of constraints on the uses of inferential statistical tests and to encourage psychological scientists to strive to make their statistical analyses appropriate, transparent and reproducible. The article closes with a brief comment on the importance of theory development in psychological science.
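To illustrate the first point with a toy simulation (my own sketch, not material from the article; the true effect d = 0.3 and n = 30 per group are assumptions chosen to represent an underpowered study):

    # Selective reporting under low power: if only results with p < .05 are
    # published, the published effect sizes systematically overstate the true d.
    # Requires numpy and scipy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, n, sims = 0.3, 30, 5000
    published = []
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        _, p = stats.ttest_ind(b, a)
        if p < 0.05:  # selective reporting: keep only "significant" studies
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            published.append((b.mean() - a.mean()) / pooled_sd)

    print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
    # Typically prints a mean published d of about 0.6-0.7, roughly double the true effect.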