70 results for "Shamseer, Larissa"
Registration of systematic reviews in PROSPERO: 30,000 records and counting
Background The International Prospective Register of Systematic Reviews (PROSPERO) was launched in February 2011 to increase transparency of systematic reviews (SRs). There have been few investigations of the content and use of the database. We aimed to investigate the number of PROSPERO registrations from inception to 2017, and website usage in the last year. We also aimed to explore the epidemiological characteristics of and completeness of primary outcome pre-specification in a sample of PROSPERO records from 2017. Methods The PROSPERO database managers provided us with data on the annual and cumulative number of SR registrations up to October 10, 2017, and the number of visits to the PROSPERO website over the year preceding October 10, 2017. One author collected data on the focus of the SR (e.g. therapeutic, diagnostic), health area addressed, funding source and completeness of outcome pre-specification in a random sample of 150 records of SRs registered in PROSPERO between April 1, 2017 and September 30, 2017. Results As of October 10, 2017, there were 26,535 SRs registered in PROSPERO; guided by current monthly submission rates, we anticipate this figure will reach over 30,000 by the end of 2017. There has been a 10-fold increase in registrations, from 63 SRs per month in 2012 to 800 per month in 2017. In the year preceding October 10, 2017, the PROSPERO website received more than 1.75 million page views. In the random sample of 150 registered SRs, the majority were focused on a therapeutic question (78/150 [52%]), while only a few focused on a diagnostic/prognostic question (11/150 [7%]). The 150 registered SRs addressed 18 different health areas. Any information about the primary outcome other than the domain (e.g. timing, effect measures) was not pre-specified in 44/150 records (29%). Conclusions Registration of SRs in PROSPERO increased rapidly between 2011 and 2017, thus benefiting users of health evidence who want to know about ongoing SRs. Further work is needed to explore how closely published SRs adhere to the planned methods, whether greater pre-specification of outcomes prevents selective inclusion and reporting of study results, and whether registered SRs address necessary questions.
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews
The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). [...]technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence [22,23,24], methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate [25,26,27], and new methods have been developed to assess the risk of bias in results of included studies [28, 29]. [...]the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols [33, 34], disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. [...]extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses [49], meta-analyses of individual participant data [50], systematic reviews of harms [51], systematic reviews of diagnostic test accuracy studies [52], and scoping reviews [53]; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.
Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison
Background The Internet has transformed scholarly publishing, most notably, by the introduction of open access publishing. Recently, there has been a rise of online journals characterized as ‘predatory’, which actively solicit manuscripts and charge publications fees without providing robust peer review and editorial services. We carried out a cross-sectional comparison of characteristics of potential predatory, legitimate open access, and legitimate subscription-based biomedical journals. Methods On July 10, 2014, scholarly journals from each of the following groups were identified – potential predatory journals (source: Beall’s List), presumed legitimate, fully open access journals (source: PubMed Central), and presumed legitimate subscription-based (including hybrid) journals (source: Abridged Index Medicus). MEDLINE journal inclusion criteria were used to screen and identify biomedical journals from within the potential predatory journals group. One hundred journals from each group were randomly selected. Journal characteristics (e.g., website integrity, look and feel, editors and staff, editorial/peer review process, instructions to authors, publication model, copyright and licensing, journal location, and contact) were collected by one assessor and verified by a second. Summary statistics were calculated. Results Ninety-three predatory journals, 99 open access, and 100 subscription-based journals were analyzed; exclusions were due to website unavailability. Many more predatory journals’ homepages contained spelling errors (61/93, 66%) and distorted or potentially unauthorized images (59/93, 63%) compared to open access journals (6/99, 6% and 5/99, 5%, respectively) and subscription-based journals (3/100, 3% and 1/100, 1%, respectively). Thirty-one (33%) predatory journals promoted a bogus impact metric – the Index Copernicus Value – versus three (3%) open access journals and no subscription-based journals. Nearly three quarters (n = 66, 73%) of predatory journals had editors or editorial board members whose affiliation with the journal was unverified versus two (2%) open access journals and one (1%) subscription-based journal in which this was the case. Predatory journals charge a considerably smaller publication fee (median $100 USD, IQR $63–$150) than open access journals ($1865 USD, IQR $800–$2205) and subscription-based hybrid journals ($3000 USD, IQR $2500–$3000). Conclusions We identified 13 evidence-based characteristics by which predatory journals may potentially be distinguished from presumed legitimate journals. These may be useful for authors who are assessing journals for possible submission or for others, such as universities evaluating candidates’ publications as part of the hiring process.
Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study
Systematic reviews (SRs) can help decision makers interpret the deluge of published biomedical literature. However, a SR may be of limited use if the methods used to conduct the SR are flawed, and reporting of the SR is incomplete. To our knowledge, since 2004 there has been no cross-sectional study of the prevalence, focus, and completeness of reporting of SRs across different specialties. Therefore, the aim of our study was to investigate the epidemiological and reporting characteristics of a more recent cross-section of SRs. We searched MEDLINE to identify potentially eligible SRs indexed during the month of February 2014. Citations were screened using prespecified eligibility criteria. Epidemiological and reporting characteristics of a random sample of 300 SRs were extracted by one reviewer, with a 10% sample extracted in duplicate. We compared characteristics of Cochrane versus non-Cochrane reviews, and the 2014 sample of SRs versus a 2004 sample of SRs. We identified 682 SRs, suggesting that more than 8,000 SRs are being indexed in MEDLINE annually, corresponding to a 3-fold increase over the last decade. The majority of SRs addressed a therapeutic question and were conducted by authors based in China, the UK, or the US; they included a median of 15 studies involving 2,072 participants. Meta-analysis was performed in 63% of SRs, mostly using standard pairwise methods. Study risk of bias/quality assessment was performed in 70% of SRs but was rarely incorporated into the analysis (16%). Few SRs (7%) searched sources of unpublished data, and the risk of publication bias was considered in less than half of SRs. Reporting quality was highly variable; at least a third of SRs did not report use of a SR protocol, eligibility criteria relating to publication status, years of coverage of the search, a full Boolean search logic for at least one database, methods for data extraction, methods for study risk of bias assessment, a primary outcome, an abstract conclusion that incorporated study limitations, or the funding source of the SR. Cochrane SRs, which accounted for 15% of the sample, had more complete reporting than all other types of SRs. Reporting has generally improved since 2004, but remains suboptimal for many characteristics. An increasing number of SRs are being published, and many are poorly conducted and reported. Strategies are needed to help reduce this avoidable waste in research.
Update on the endorsement of CONSORT by high impact factor journals: a survey of journal “Instructions to Authors” in 2014
Background The CONsolidated Standards Of Reporting Trials (CONSORT) Statement provides a minimum standard set of items to be reported in published clinical trials; it has received widespread recognition within the biomedical publishing community. This research aims to provide an update on the endorsement of CONSORT by high impact medical journals. Methods We performed a cross-sectional examination of the online “Instructions to Authors” of 168 high impact factor (2012) biomedical journals between July and December 2014. We assessed whether the text of the “Instructions to Authors” mentioned the CONSORT Statement and any CONSORT extensions, and we quantified the extent and nature of the journals’ endorsements of these. These data were described by frequencies. We also determined whether journals mentioned trial registration and the International Committee of Medical Journal Editors (ICMJE; other than in regards to trial registration) and whether either of these was associated with CONSORT endorsement (relative risk and 95% confidence interval). We compared our findings to the two previous iterations of this survey (in 2003 and 2007). We also identified the publishers of the included journals. Results Sixty-three percent (106/168) of the included journals mentioned CONSORT in their “Instructions to Authors.” Forty-four endorsers (42%) explicitly stated that authors “must” use CONSORT to prepare their trial manuscript, 38% required an accompanying completed CONSORT checklist as a condition of submission, and 39% explicitly requested the inclusion of a flow diagram with the submission. CONSORT extensions were endorsed by very few journals. One hundred and thirty journals (77%) mentioned ICMJE, and 106 (63%) mentioned trial registration. Conclusions The endorsement of CONSORT by high impact journals has increased over time; however, specific instructions on how CONSORT should be used by authors are inconsistent across journals and publishers. Publishers and journals should encourage authors to use CONSORT and set clear expectations for authors about compliance with CONSORT.
N-of-1 trials are a tapestry of heterogeneity
To summarize the methods of design, analysis, and meta-analysis used in N-of-1 trials. Electronic search for English language articles published from 1950 to 2013. N-of-1 trials were selected if they followed an ABAB design and if they assessed a health intervention for a medical condition. Elements of design, analysis, and meta-analysis were extracted. We included 100 reports representing 1,995 participants. N-of-1 trials have been conducted in over 50 health conditions. Most reports incorporated the use of elements that maintain methodological rigor, including randomization, blinding, and formal outcome assessment; however, many failed to address trial registration, funding source, and adverse events. Most reports statistically analyzed individual N-of-1 trials; however, only a small proportion of included series meta-analyzed their results. N-of-1 trials have the ability to assess treatment response in individual participants and can be used for a variety of health interventions for a wide range of medical conditions in both clinical and research settings. Considerable heterogeneity exists in the methods used in N-of-1 trials.
Rhodiola Rosea for Mental and Physical Fatigue in Nursing Students: A Randomized Controlled Trial
Fatigue is one of many unintended consequences of shift work in the nursing profession. Natural health products (NHPs) for fatigue are becoming an increasingly popular topic of clinical study; one such NHP is Rhodiola rosea. A well-designed, rigorously conducted randomized controlled trial is required before therapeutic claims for this product can be made. To compare the efficacy of R. rosea with placebo for reducing fatigue in nursing students on shift work. A parallel-group randomized, double-blinded, placebo-controlled trial of 18- to 55-year-old students from the Faculty of Nursing at the University of Alberta, participating in clinical rotations between January 2011 and September 2011. Participants were randomized to take 364 mg of either R. rosea or identical placebo at the start of their wakeful period and up to one additional capsule within the following four hours on a daily basis over a 42-day period. The primary outcome was reduction in fatigue over the 42-day trial period measured using the Vitality subscale of the RAND-36, cross-validated by the visual analogue scale for fatigue (VAS-F). Secondary outcomes included health-related quality of life, individualized outcomes assessment, and adverse events. A total of 48 participants were randomized to R. rosea (n = 24) or placebo (n = 24). The mean change in scores on the Vitality subscale was significantly different between the study groups at day 42 in favor of placebo (-17.3 (95% CI -30.6, -3.9), p = 0.011). The mean change in scores on the VAS-F was also significantly different between study groups at day 42 in favor of placebo (1.9 (95% CI 0.4, 3.5), p = 0.015). The total number of adverse events did not differ between the R. rosea and placebo groups. This study indicates that among nursing students on shift work, a 42-day course of R. rosea compared with placebo worsened fatigue; however, the results should be interpreted with caution. Clinicaltrials.gov NCT01278992.
How stakeholders can respond to the rise of predatory journals
Predatory journals are a global and growing problem contaminating all domains of science. A coordinated response by all stakeholders (researchers, institutions, funders, regulators and patients) will be needed to stop the influence of these illegitimate journals.