5,244 results for "publication bias"
Publication bias in the social sciences since 1959: Application of a regression discontinuity framework
While publication bias has been widely documented in the social sciences, it is unclear whether the problem has worsened over recent decades due to increasing pressure to publish. We provide an in-depth analysis of publication bias over time by creating a unique data set consisting of 12,340 test statistics extracted from 571 papers published between 1959 and 2018 in the Quarterly Journal of Economics. We further develop a new methodology to test for discontinuities at the thresholds of significance. Our findings reveal that, first, contrary to our expectations, publication bias was already present many decades ago, but that, second, bias patterns changed notably over time. In particular, we observe a transition from bias at the 10 percent significance level to bias at the 5 percent level. We conclude that these changes are influenced by increasing computational possibilities as well as changes in the acceptance rates of top scientific journals.
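As a rough illustration of testing for a discontinuity at a significance threshold, the sketch below implements a simple caliper-style check in Python: count the z-statistics falling just below versus just above z = 1.96 and compare the two counts with a binomial test. This is not the paper's regression discontinuity framework; the data, bandwidth, and threshold here are hypothetical.

```python
# Caliper-style check for a discontinuity in the density of z-statistics
# at the 5% significance threshold (z = 1.96). Illustrative only; not the
# paper's exact regression discontinuity methodology.
import numpy as np
from scipy import stats

def caliper_test(z_stats, threshold=1.96, width=0.10):
    """Compare counts of |z| just below vs. just above the threshold.

    Absent publication bias, the density of z-statistics should be smooth,
    so the counts in narrow bands on either side should be roughly equal.
    """
    z = np.abs(np.asarray(z_stats))
    below = np.sum((z >= threshold - width) & (z < threshold))
    above = np.sum((z >= threshold) & (z < threshold + width))
    # Two-sided binomial test: is the share above the threshold != 0.5?
    result = stats.binomtest(int(above), int(above + below), p=0.5)
    return above, below, result.pvalue

# Hypothetical z-statistics with artificial bunching just above 1.96.
rng = np.random.default_rng(0)
z_example = np.concatenate([rng.normal(1.0, 1.0, 2000),
                            rng.uniform(1.96, 2.06, 80)])  # bunching
above, below, p = caliper_test(z_example)
print(f"above={above}, below={below}, binomial p={p:.4f}")
```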
Quantifying Publication Bias in Meta-Analysis
Publication bias is a serious problem in systematic reviews and meta-analyses, which can affect the validity and generalizability of conclusions. Currently, approaches to dealing with publication bias can be divided into two classes: selection models and funnel-plot-based methods. Selection models use weight functions to adjust the overall effect size estimate and are usually employed as sensitivity analyses to assess the potential impact of publication bias. Funnel-plot-based methods include visual examination of a funnel plot, regression and rank tests, and the nonparametric trim-and-fill method. Although these approaches have been widely used in applications, measures for quantifying publication bias are seldom studied in the literature. Such measures can be used as a characteristic of a meta-analysis; they also permit comparisons of publication bias between different meta-analyses. Egger's regression intercept may be considered a candidate measure, but it lacks an intuitive interpretation. This article introduces a new measure, the skewness of the standardized deviates, to quantify publication bias. This measure describes the asymmetry of the distribution of the collected studies. In addition, a new test for publication bias is derived based on the skewness. Large-sample properties of the new measure are studied, and its performance is illustrated using simulations and three case studies.
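A minimal sketch of the skewness idea described above, assuming the standardized deviates are taken from the fixed-effect pooled estimate (the article's exact standardization may differ); all data are hypothetical.

```python
# Sketch: quantify publication bias as the sample skewness of the
# standardized deviates. Deviates are taken here from the fixed-effect
# estimate; the article's exact construction may differ.
import numpy as np

def skewness_of_deviates(y, se):
    """y: study effect estimates; se: their standard errors."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    w = 1.0 / se**2
    mu_hat = np.sum(w * y) / np.sum(w)      # fixed-effect estimate
    d = (y - mu_hat) / se                   # standardized deviates
    m2 = np.mean((d - d.mean())**2)
    m3 = np.mean((d - d.mean())**3)
    return m3 / m2**1.5                     # sample skewness

# In an unbiased meta-analysis the deviates should be roughly symmetric
# (skewness near 0); suppression of small null studies produces asymmetry.
y  = [0.42, 0.38, 0.55, 0.31, 0.60, 0.48]   # hypothetical effects
se = [0.10, 0.12, 0.20, 0.09, 0.25, 0.15]   # hypothetical SEs
print(f"skewness = {skewness_of_deviates(y, se):.3f}")
```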
Empirical Comparison of Publication Bias Tests in Meta-Analysis
Background: Decision makers rely on meta-analytic estimates to trade off benefits and harms. Publication bias impairs the validity and generalizability of such estimates. The performance of various statistical tests for publication bias has largely been compared in simulation studies and has not been systematically evaluated on empirical data. Methods: This study compares seven commonly used publication bias tests (Begg’s rank test, trim-and-fill, and Egger’s, Tang’s, Macaskill’s, Deeks’, and Peters’ regression tests) based on 28,655 meta-analyses available in the Cochrane Library. Results: Egger’s regression test detected publication bias more frequently than other tests (15.7% in meta-analyses of binary outcomes and 13.5% in meta-analyses of non-binary outcomes). The proportion of statistically significant publication bias tests was greater for larger meta-analyses, especially for Begg’s rank test and the trim-and-fill method. The agreement among Tang’s, Macaskill’s, Deeks’, and Peters’ regression tests for binary outcomes was moderately strong (most κ’s were around 0.6). Tang’s and Deeks’ tests had fairly similar performance (κ > 0.9). The agreement among Begg’s rank test, the trim-and-fill method, and Egger’s regression test was weak or moderate (κ < 0.5). Conclusions: Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with various assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trials registries, records of drug-approving agencies, and scientific conference proceedings) remain essential.
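For reference, Egger's regression test, the test that flagged bias most often in this comparison, can be sketched in a few lines: regress the standardized effect y_i/se_i on the precision 1/se_i and test whether the intercept departs from zero. The data below are hypothetical and constructed to show funnel-plot asymmetry.

```python
# Egger's regression test: the intercept of the regression of y/se on
# 1/se indicates funnel-plot asymmetry. Requires SciPy >= 1.7 for
# linregress's intercept_stderr attribute.
import numpy as np
from scipy import stats

def eggers_test(y, se):
    y, se = np.asarray(y, float), np.asarray(se, float)
    z, prec = y / se, 1.0 / se
    res = stats.linregress(prec, z)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(y) - 2)
    return res.intercept, p

# Hypothetical asymmetric data: smaller studies (larger SEs) report
# larger effects, as suppression of small null studies would produce.
rng = np.random.default_rng(1)
se = np.array([0.05, 0.08, 0.12, 0.20, 0.30, 0.40])
y = 0.2 + 1.5 * se + rng.normal(0.0, 0.05, se.size)
b0, p = eggers_test(y, se)
print(f"Egger intercept = {b0:.2f}, p = {p:.4f}")
```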
Cluster randomized trials of individual-level interventions were at high risk of bias
•Due to the risks of identification and recruitment bias, opting for a cluster design when individual randomization would be feasible needs a strong justification. Concerns around contamination are unlikely to be acceptable justifications, although estimation of indirect effects might be.
•When cluster randomization is adopted, we recommend that authors provide a clear justification for the choice and clearly outline strategies to mitigate the increased risks of bias. These should include identification and recruitment by someone blind to the treatment allocation, and minimal or objective individual-level eligibility criteria.
•Other good-conduct procedures routinely implemented in individually randomized trials should be followed. These include implementation of the randomization using an accepted method of allocation concealment (for example, an independent statistician generating the allocation sequence); blind outcome assessment when outcomes are subjective; and clear pre-specification (in a protocol or trial registration) of the primary outcome, including the primary assessment time and the method of primary analysis.
•All these aspects should be clearly reported as per the CONSORT guidelines. To ensure particular clarity around identification and recruitment, authors should also provide a timeline-cluster diagram.
Objectives: To describe the prevalence of risks of bias in cluster-randomized trials of individual-level interventions, according to the Cochrane Risk of Bias tool. Methods: Review, undertaken in duplicate, of a random sample of 40 primary reports of cluster-randomized trials of individual-level interventions. Results: The most commonly reported reasons for adopting cluster randomization were the need to avoid contamination (17, 42.5%) and practical considerations (14, 35%). Of the 40 trials, all but one were assessed as being at risk of bias. A majority (27, 67.5%) were assessed as at risk due to the timing of identification and recruitment of participants; many (21, 52.5%) due to an apparent lack of adequate allocation concealment; and many (22, 55%) due to selectively reported results, arising from a mixture of reasons including lack of documentation of the primary outcome. Other risks mostly occurred infrequently. Conclusions: Many cluster-randomized trials evaluating individual-level interventions appear to be at risk of bias, mostly due to identification and recruitment biases. We recommend that investigators carefully consider the need for cluster randomization; follow recommended procedures to mitigate risks of identification and recruitment bias; and adhere to good reporting practices, including clear documentation of the primary outcome and allocation concealment methods.
Assessing Publication Bias: a 7-Step User’s Guide with Best-Practice Recommendations
Meta-analytic reviews are a primary avenue for the generation of cumulative knowledge in the organizational and psychological sciences. Over the past decade or two, concern has been raised about the possibility of publication bias influencing meta-analytic results, which can distort our cumulative knowledge and lead to erroneous practical recommendations. Unfortunately, no clear guidelines exist for how meta-analysts ought to assess this bias. To address this issue, this paper develops a user’s guide with best-practice recommendations for the assessment of publication bias in meta-analytic reviews. To do this, we review the literature on publication bias and develop a step-by-step process to assess the presence of publication bias and gauge its effects on meta-analytic results. Examples of tools and best practices are provided to aid meta-analysts when implementing the process in their own research. Although the paper is written primarily for organizational and psychological scientists, the guide and recommendations are not limited to any particular scientific domain.
Authors report lack of time as main reason for unpublished research presented at biomedical conferences: a systematic review
To systematically review reports that queried abstract authors about reasons for not subsequently publishing abstract results as full-length articles. Systematic review of MEDLINE, EMBASE, The Cochrane Library, ISI Web of Science, and study bibliographies for empirical studies in which investigators examined subsequent full publication of results presented at a biomedical conference and reasons for nonpublication. The mean full publication rate was 55.9% [95% confidence interval (CI): 54.8%, 56.9%] for 24 of 27 eligible reports providing this information and 73.0% (95% CI: 71.2%, 74.7%) for seven reports of abstracts describing clinical trials. Twenty-four studies itemized 1,831 reasons for nonpublication, and six itemized 428 reasons considered the most important reason. “Lack of time” was the most frequently reported reason [weighted average = 30.2% (95% CI: 27.9%, 32.4%)] and the most important reason [weighted average = 38.4% (95% CI: 33.7%, 43.2%)]. Other commonly stated reasons were “lack of time and/or resources,” “publication not an aim,” “low priority,” “incomplete study,” and “trouble with co-authors.” Across medical specialties, the main reasons for not subsequently publishing an abstract in full lie with factors related to the abstract author rather than with journals.
Patterns of preregistration and publication of trials in Cochrane systematic reviews of interventions
It is widely recognized that selectively reporting clinical trial results based on their outcomes, in the form of publication bias, outcome reporting bias, or p-hacking, has detrimental effects on the scientific literature and on evidence synthesis. This can be recognized, and perhaps ameliorated, with comprehensive trial registration. However, previous investigations of clinical trial registration focused on study-level examinations rather than the number of trial participants, which is often more relevant to meta-analysis. Our objective was to investigate the risk of bias from selective reporting, considering not only the number of trials but also the number of included participants. We took a random sample of 50 Cochrane systematic reviews (SRs) of interventions that included randomized controlled trials, forming a retrospective cohort. Focusing on the primary outcome in each SR, we used the review, published trial information, and public trial registration documents to collect information about the reviews themselves as well as about “included,” “ongoing,” and “awaiting classification” studies. Across the 50 selected reviews, there were 423 “included” trials that examined the primary outcome, of which 109 (25.7%) were preregistered. There was substantial variability in the proportion of preregistered trials among reviews, with a median of 16.0% (interquartile range 0%−79.6%). Registered trials covered 60.1% of all participants, suggesting that larger studies were more likely to be preregistered. The proportion of participants in registered trials whose results were published was high (98.2%), but the proportion of registered trials that were published varied substantially between reviews. We found that in Cochrane reviews there remains a low rate of preregistration among included studies and evidence of a substantial rate of nonreporting of registered trials. However, preregistered trials contributed proportionally more patients to reviews, and findings remain unpublished for only a small proportion of participants in registered trials.
Plain-language summary: Trials are an important form of evidence in the scientific literature and are often combined into systematic reviews (SRs), which give an overview of the evidence on a specific topic. However, trials, and therefore the reviews, can be misleading when authors change the outcomes they report based on the results they find, which is called reporting bias. One way of minimizing reporting bias is to register the planned methods for the trial in advance, known as preregistration. It is known that the proportion of trials that are registered can be low, but when conducting SRs the results are affected more by the number of participants within trials than by the number of trials, which had not previously been studied. We aimed to make an up-to-date estimate of the proportion of preregistered trials in Cochrane SRs and to find the proportion of participants within registered trials in these reviews. To do this, we took a random sample of 50 Cochrane SRs, which conduct careful searches for registered trials whether or not they are published, and examined the trials within these reviews that studied the outcome each review declared most important. We found that even in modern SRs only 25% of trials were preregistered, and that this proportion varied greatly across reviews of different clinical questions. However, we found that 60% of the patients in the reviews were within the preregistered trials, indicating that preregistered trials are generally larger than unregistered trials. We also found that more than 90% of preregistered studies were published. A limitation of this approach is that we cannot detect trials that were started without registration and never published, so our results may underestimate the problem. Overall, this indicates that the risk of reporting bias is somewhat lower when considering participants rather than trials, but the risk remains high even in modern, rigorous reviews in medical science.
•Selective reporting of trials can bias the literature and affect evidence synthesis.
•The rate of preregistration of clinical trials remains low.
•Preregistered studies have more participants than comparable unregistered studies.
Correcting for publication bias in a meta-analysis with the p-uniform method
Publication bias is a major threat to the validity of a meta-analysis, resulting in overestimated effect sizes. We propose a generalization and improvement of the publication bias method p-uniform, called p-uniform*. p-uniform* improves upon p-uniform in three ways, as it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size caused by between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of the between-study variance. We compared the statistical properties of p-uniform* with p-uniform, two implementations of the three-parameter selection model (3PSM) approach, and the random-effects model. Statistical properties of p-uniform* and 3PSM were comparable and generally outperformed p-uniform and the random-effects model if publication bias was present. We explain that p-uniform* uses a more parsimonious model than 3PSM and demonstrate that both methods estimate average effect size and between-study variance rather well with ten or more studies in the meta-analysis when publication bias is not extreme. We re-analyze the data of two published meta-analyses using p-uniform, p-uniform*, and 3PSM to illustrate the impact of publication bias on the results. We also offer recommendations for applied researchers, and we share R code in an R package as well as an easy-to-use web application for applying p-uniform*.
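The authors provide their own R package and web application; the Python sketch below illustrates only the underlying p-uniform idea (not the full p-uniform*, which additionally models between-study variance): at the true effect size, the p-values of significant studies are, conditional on significance, uniformly distributed, so the effect can be estimated by choosing the μ that makes the conditional p-values average 0.5. The data and the root-finding bracket are hypothetical.

```python
# Simplified p-uniform sketch: find mu such that the p-values of
# significant studies, conditional on significance, average 0.5.
import numpy as np
from scipy import stats, optimize

def p_uniform_estimate(y, se, alpha_one_sided=0.025):
    """y, se: estimates and SEs of statistically significant studies."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    z_cv = stats.norm.isf(alpha_one_sided)   # critical value, ~1.96

    def cond_p(mu):
        # P(Y > y_i | Y > z_cv * se_i) when Y ~ N(mu, se_i^2)
        num = stats.norm.sf((y - mu) / se)
        den = stats.norm.sf(z_cv - mu / se)
        return num / den

    # At the true mu the conditional p-values are Uniform(0,1), so their
    # sum should be about n/2. Bracket chosen for the example data; very
    # wide brackets can underflow the normal tail probabilities.
    f = lambda mu: cond_p(mu).sum() - len(y) / 2.0
    return optimize.brentq(f, -2.0, 3.0)

# Hypothetical significant studies (all with y / se > 1.96).
y  = [0.55, 0.48, 0.70, 0.62]
se = [0.20, 0.18, 0.30, 0.25]
print(f"p-uniform estimate: {p_uniform_estimate(y, se):.3f}")
```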
The AMSTAR 2 publication lacks explicit instructions on how to assess the appropriateness of statistical methods (item 11) and publication bias (item 15) and does not reflect advances in meta-analysis
As the quantity of published systematic reviews has increased substantially over the years, concerns about the methodological quality of this growing body of literature have been validly raised. AMSTAR 2, the updated version of AMSTAR, is an endorsed appraisal instrument aiming to critically assess the methodological aspects of a systematic review from conception to conduct and interpretation of the findings. However, since the publication of AMSTAR 2, several critiques have been expressed targeting various aspects of the instrument. The present commentary focuses on the AMSTAR 2 items that involve the appropriateness of statistical methods (item 11) and publication bias (item 15). The refinements are based on the methodological advances in meta-analysis as summarized in the Cochrane Handbook and delineated in review methodological studies. Initially, the commentary outlines further issues and challenges with the formulation and implementation of AMSTAR 2 items 11 and 15, beyond those already raised by other authors. Then, refinements to the corresponding decision points of items 11 and 15 are suggested, with explanations for their importance in facilitating an evidence-based, transparent, and consistent evaluation among the involved appraisers. The commentary strongly recommends that appraisers consult with meta-analysts when assessing the statistical methods of a systematic review and refer to the Cochrane Handbook, as it is regularly updated with recent methodological advances in meta-analysis. The appraisal teams could use the suggested refinements as a basis to predetermine the decision points for items 11 and 15 that align with the statistical expectations of the assessed systematic review, thereby preventing any ambiguity during the rating process. Systematic reviews have been established as an essential research design to detect knowledge gaps and prepare evidence-based clinical guidelines, among others. High reporting and methodological quality are crucial for systematic reviews to meet the intended goals. The rapid advances in systematic review methodology and software availability have led to a staggering number of published systematic reviews. Several empirical studies have already revealed limitations in the methodological and reporting quality of many systematic reviews from various medical fields. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement and AMSTAR (A MeaSurement Tool to Assess systematic Reviews) instrument were developed in response to the concerning number of systematic reviews with questionable quality. AMSTAR was updated to AMSTAR 2 in 2017 to meet the advances in the systematic review methodology. AMSTAR 2 has been widely used by several researchers to critically appraise the methodological quality of published systematic reviews. However, several authors have raised valid points regarding the wording and completeness of several items underlying the AMSTAR 2 tool. In the present commentary, I focus on items 11 and 15 of AMSTAR 2 that revolve around meta-analysis and publication bias. I propose significant refinements to meet the advances in meta-analysis and align with the Cochrane Handbook, which is a recognized standard in preparing, maintaining, and promoting high-quality systematic reviews. 
The proposed refinements for item 11 elaborate on the appropriateness of the random-effects model for a typical systematic review that is based on literature searches, the interpretation of the summary treatment effect based on the Hartung-Knapp adjustment and prediction interval (when at least five studies populate the meta-analysis and statistical heterogeneity is not excessive), and the diligent assessment of sources of statistical heterogeneity. The refinements of item 15 emphasize the assessment of the risk of missing studies and outcomes using a validated tool, the appropriate interpretation of the relevant statistical analyses (if applied), and the acknowledgment of the implications for the conclusions. I use two published systematic reviews to help appraisers apply the refined items 11 and 15 correctly.
•The proposed refinements to items 11 and 15 are built on best practices in meta-analysis.
•They warrant an evidence-based assessment of the appropriateness of statistical methods.
•The decision points for items 11 and 15 should align with the statistical expectations of the review.
•AMSTAR 2 appraisers should assess the risk of missing evidence and use a sensitivity analysis.
•AMSTAR 2 appraisers should consult with meta-analysts and the Cochrane Handbook.
Omega-3 fatty acids for the treatment of depression: systematic review and meta-analysis
We conducted a meta-analysis of randomized, placebo-controlled trials of omega-3 fatty acid (FA) treatment of major depressive disorder (MDD) in order to determine efficacy and to examine sources of heterogeneity between trials. PubMed (1965 to May 2010) was searched for randomized, placebo-controlled trials of omega-3 FAs for MDD. Our primary outcome measure was the standardized mean difference (SMD) in a clinical measure of depression severity. In stratified meta-analyses, we examined the effects of trial duration, trial methodological quality, baseline depression severity, diagnostic indication, dose of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in omega-3 preparations, and whether omega-3 FA was given as monotherapy or augmentation. In 13 randomized, placebo-controlled trials involving 731 participants, meta-analysis demonstrated no significant benefit of omega-3 FA treatment compared with placebo (SMD = 0.11, 95% confidence interval (CI): −0.04, 0.26). Meta-analysis demonstrated significant heterogeneity and publication bias. Nearly all evidence of omega-3 benefit was removed after adjusting for publication bias using the trim-and-fill method (SMD = 0.01, 95% CI: −0.13, 0.15). Secondary analyses suggested a trend toward increased efficacy of omega-3 FAs in trials of lower methodological quality, trials of shorter duration, trials that utilized completers rather than intention-to-treat analysis, and trials in which participants had greater baseline depression severity. Current published trials suggest a small, non-significant benefit of omega-3 FAs for major depression. Nearly all of the treatment efficacy observed in the published literature may be attributable to publication bias.
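A minimal sketch of the trim-and-fill adjustment referred to above, using fixed-effect pooling and the L0 estimator for simplicity; published analyses typically use random-effects implementations in standard software (e.g., metafor in R), and the SMDs below are hypothetical.

```python
# Duval-Tweedie trim-and-fill, simplified: iteratively estimate the number
# k0 of suppressed studies (L0 estimator), trim the k0 most extreme
# right-side studies, then "fill" their mirror images and re-pool.
import numpy as np

def fixed_effect(y, se):
    w = 1.0 / se**2
    return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def trim_and_fill(y, se, max_iter=20):
    y, se = np.asarray(y, float), np.asarray(se, float)
    n, k0 = len(y), 0
    for _ in range(max_iter):
        order = np.argsort(y)
        mu, _ = fixed_effect(y[order[: n - k0]], se[order[: n - k0]])
        # L0 estimator from ranks of |deviations|, signed by side.
        dev = y - mu
        ranks = np.argsort(np.argsort(np.abs(dev))) + 1
        t_n = np.sum(ranks[dev > 0])
        k0_new = max(0, int(round((4 * t_n - n * (n + 1)) / (2 * n - 1))))
        if k0_new == k0:
            break
        k0 = k0_new
    if k0:  # fill: mirror the k0 trimmed studies around the trimmed estimate
        trimmed = np.argsort(y)[n - k0:]
        y = np.concatenate([y, 2 * mu - y[trimmed]])
        se = np.concatenate([se, se[trimmed]])
    return fixed_effect(y, se), k0

# Hypothetical SMDs showing small-study suppression of null results.
se = np.array([0.08, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
y  = np.array([0.05, 0.10, 0.20, 0.28, 0.35, 0.45, 0.50])
(mu_adj, se_adj), k0 = trim_and_fill(y, se)
print(f"adjusted SMD = {mu_adj:.3f} (SE {se_adj:.3f}), imputed = {k0}")
```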