130 results for "Nosek, Brian A."
What is replication?
Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than previously recognized. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.
The preregistration revolution
Progress in science relies in part on generating hypotheses with existing observations and testing hypotheses with new observations. This distinction between postdiction and prediction is appreciated conceptually but is not respected in practice. Mistaking the generation of postdictions for the testing of predictions reduces the credibility of research findings. However, ordinary biases in human reasoning, such as hindsight bias, make it hard to avoid this mistake. An effective solution is to define the research questions and analysis plan before observing the research outcomes, a process called preregistration. Preregistration distinguishes analyses and outcomes that result from predictions from those that result from postdictions. A variety of practical strategies are available to make the best possible use of preregistration in circumstances that fall short of the ideal application, such as when the data are preexisting. Services are now available for preregistration across all disciplines, facilitating a rapid increase in the practice. Widespread adoption of preregistration will increase the distinctiveness between hypothesis generation and hypothesis testing and will improve the credibility of research findings.
Science becomes trustworthy by constantly questioning itself
What happens when the greatest strengths of science (openness, humility, self-criticism, and self-correction) are exploited for political gain? Scientists affirm the genuine application of those strengths as the source of science's trustworthiness.
Challenges for assessing replicability in preclinical cancer biology
We conducted the Reproducibility Project: Cancer Biology to investigate the replicability of preclinical research in cancer biology. The initial aim of the project was to repeat 193 experiments from 53 high-impact papers, using an approach in which the experimental protocols and plans for data analysis had to be peer reviewed and accepted for publication before experimental work could begin. However, the various barriers and challenges we encountered while designing and conducting the experiments meant that we were only able to repeat 50 experiments from 23 papers. Here we report these barriers and challenges. First, many original papers failed to report key descriptive and inferential statistics: the data needed to compute effect sizes and conduct power analyses were publicly accessible for just 4 of 193 experiments. Moreover, despite contacting the authors of the original papers, we were unable to obtain these data for 68% of the experiments. Second, none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors. While authors were extremely or very helpful for 41% of experiments, they were minimally helpful for 9% of experiments, and not at all helpful (or did not respond to us) for 32% of experiments. Third, once experimental work started, 67% of the peer-reviewed protocols required modifications to complete the research, and just 41% of those modifications could be implemented. Cumulatively, these three factors limited the number of experiments that could be repeated. This experience draws attention to a fundamental concern about replication: it is hard to assess whether reported findings are credible.
A comparative investigation of seven indirect attitude measures
We compared the psychometric qualities of seven indirect attitude measures across three attitude domains (race, politics, and self-esteem) with a large sample (N = 23,413). We compared the measures on internal consistency, sensitivity to known effects, relationships with indirect and direct measures of the same topic, the reliability and validity of single-category attitude measurement, their ability to detect meaningful variance among people with nonextreme attitudes, and their robustness to the exclusion of misbehaving or well-behaving participants. All seven indirect measures correlated with each other and with direct measures of the same topic. These relations were always weak for self-esteem, moderate for race, and strong for politics. This pattern suggests that some of the sources of variation in the reliability and predictive validity of the indirect measures are a function of the concepts rather than the methods. The Implicit Association Test (IAT) and Brief IAT (BIAT) showed the best overall psychometric quality, followed by the Go–No-Go association task, Single-Target IAT (ST-IAT), Affective Misattribution Procedure (AMP), Sorting Paired Features task, and Evaluative Priming. The AMP showed a steep decline in its psychometric qualities when people with extreme attitude scores were removed. Single-category attitude scores computed for the IAT and BIAT showed good relationships with other attitude measures but no evidence of discriminant validity between paired categories. The other measures, especially the AMP and ST-IAT, showed better evidence for discriminant validity. These results inform us about the validity of the measures as attitude assessments, but do not speak to the implicitness of the measured constructs.
Making sense of replications
The first results from the Reproducibility Project: Cancer Biology suggest that there is scope for improving reproducibility in preclinical cancer research. DOI: http://dx.doi.org/10.7554/eLife.23383.001
Implicit and Explicit Anti-Fat Bias among a Large Sample of Medical Doctors by BMI, Race/Ethnicity and Gender
Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT) (N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal range BMI. This large sample of test-takers showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). Both the full sample and the MD sub-sample reported a strong preference for thin people over fat people, that is, a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and quality of care delivered to overweight patients.
Investigating the replicability of preclinical cancer biology
Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary (the replication was either a success or a failure), and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
The Moral Stereotypes of Liberals and Conservatives: Exaggeration of Differences across the Political Spectrum
We investigated the moral stereotypes political liberals and conservatives have of themselves and each other. In reality, liberals endorse the individual-focused moral concerns of compassion and fairness more than conservatives do, and conservatives endorse the group-focused moral concerns of ingroup loyalty, respect for authorities and traditions, and physical/spiritual purity more than liberals do. A sample of 2,212 U.S. participants filled out the Moral Foundations Questionnaire with their own answers, or as a typical liberal or conservative would answer. Across the political spectrum, moral stereotypes about "typical" liberals and conservatives correctly reflected the direction of actual differences in foundation endorsement but exaggerated the magnitude of these differences. Contrary to common theories of stereotyping, the moral stereotypes were not simple underestimations of the political outgroup's morality. Both liberals and conservatives exaggerated the ideological extremity of moral concerns for the ingroup as well as the outgroup. Liberals were least accurate about both groups.
Implicit-Explicit Relations
Mental process and mental experience are not the same thing. The former is the operation of the mind; the latter is the subjective life that emerges from that operation. In social evaluation, implicit and explicit attitudes express this distinction. Although it is clear that they are not the same, how they differ is not. Across content domains, implicit and explicit attitude measures show substantial variability in the strength of correspondence, ranging from near zero to strongly positive. Variation in controllability, intentionality, awareness, or efficiency is thought to differentiate implicit and explicit attitudes. Dual-process theories and empirical evidence for moderating influences of implicit-explicit attitude relations provide a framework for comprehending relations between the operation and the experience of the mind.