41,981 results for "Systematic Reviews as Topic"
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews
The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.
PRECIS-2 for retrospective assessment of RCTs in systematic reviews
A randomized controlled trial (RCT) may be intended either to support real-world decisions on the choice between alternative interventions or to help researchers understand an intervention's mechanisms of action. PRECIS-2 is widely used to help investigators match detailed design elements to their main intention for that RCT. PRECIS-2 is increasingly being used retrospectively for assessing RCTs within reviews. In this commentary, we counter arguments that RCTs that use a placebo control group, mask/blind participants or providers, or are conducted in a single center should be retrospectively assessed as completely explanatory, overriding a detailed PRECIS-2 assessment. We also counter arguments that a trial cannot be assessed using only the main peer-reviewed trial report. Although placebos are seldom openly prescribed in real-world care, knowing that an intervention achieves its impact via the placebo effect might change some clinical and policy decisions, which means that this feature does not always preclude decision-making use and so should not override a full PRECIS-2 assessment. A domain describing the comparator should be added to PRECIS-2. Conduct of an RCT in only a single center should also not override PRECIS-2, as the decision support value of a single-center RCT could be high for decision makers in that center and others like it. Many journals require that submitted RCT reports meet CONSORT reporting guidelines, which standardizes the available information for all RCTs in systematic reviews, whereas information from registration and protocol documents is unstandardized, which undermines comparison between RCTs and across reviews. Published RCT reports are thus more suitable for retrospective PRECIS-2 assessments, but PRECIS-2 domains with missing information should be scored as blank. Wider use of the CONSORT extension specific to pragmatic trials may reduce the number of domains with missing data. PRECIS-2 can be used for retrospective assessments of trials in systematic reviews. The PRECIS-2 instrument should be expanded by including a domain describing the control group(s). Published RCT reports are suitable for retrospective PRECIS-2 assessments.
The PRISMA 2020 statement: An updated guideline for reporting systematic reviews
The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication. Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: EL is head of research for the BMJ; MJP is an editorial board member for PLOS Medicine; ACT is an associate editor and MJP, TL, EMW, and DM are editorial board members for the Journal of Clinical Epidemiology; DM and LAS were editors in chief, LS, JMT, and ACT are associate editors, and JG is an editorial board member for Systematic Reviews. [...] technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence,[22–24] methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate,[25–27] and new methods have been developed to assess the risk of bias in results of included studies.

Summary points:
• To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did, and what they found.
• The PRISMA 2020 statement provides updated reporting guidance for systematic reviews that reflects advances in methods to identify, select, appraise, and synthesise studies.
• The PRISMA 2020 statement consists of a 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and revised flow diagrams for original and updated reviews.
• We anticipate that the PRISMA 2020 statement will benefit authors, editors, and peer reviewers of systematic reviews, and different users of reviews, including guideline developers, policy makers, healthcare providers, patients, and other stakeholders.

Development of PRISMA 2020: A complete description of the methods used to develop PRISMA 2020 is available elsewhere.
Cochrane's risk of bias tool for non-randomized studies (ROBINS-I) is frequently misapplied: A methodological systematic review
We aimed to review how ‘Risk Of Bias In Non-randomized Studies – of Interventions’ (ROBINS-I), a Cochrane risk of bias assessment tool, has been used in recent systematic reviews. Database and citation searches were conducted in March 2020 to identify recently published reviews using ROBINS-I. Reported ROBINS-I assessments and data on how ROBINS-I was used were extracted from each review. Methodological quality of the reviews was assessed using AMSTAR 2 (‘A MeaSurement Tool to Assess systematic Reviews’). Of 181 hits, 124 reviews were included. Risk of bias was serious/critical in 54% of assessments on average, most commonly due to confounding. Quality of the reviews was mostly low, and modifications and incorrect use of ROBINS-I were common, with 20% of reviews modifying the rating scale, 20% understating overall risk of bias, and 19% including critical-risk-of-bias studies in evidence synthesis. Poorly conducted reviews were more likely to report low/moderate risk of bias (predicted probability 57% [95% CI: 47–67] in critically low-quality reviews vs 31% [19–46] in high/moderate-quality reviews). Low-quality reviews frequently apply ROBINS-I incorrectly and may thus inappropriately include, or give too much weight to, uncertain evidence. Readers should be aware that such problems can lead to incorrect conclusions in reviews.
Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane Reviews
This study developed, calibrated, and evaluated a machine learning classifier designed to reduce study identification workload in Cochrane for producing systematic reviews. A machine learning classifier for retrieving randomized controlled trials (RCTs) was developed (the “Cochrane RCT Classifier”), with the algorithm trained using a data set of title–abstract records from Embase, manually labeled by the Cochrane Crowd. The classifier was then calibrated using a further data set of similar records manually labeled by the Clinical Hedges team, aiming for 99% recall. Finally, the recall of the calibrated classifier was evaluated using records of RCTs included in Cochrane Reviews that had abstracts of sufficient length to allow machine classification. The Cochrane RCT Classifier was trained using 280,620 records (20,454 of which reported RCTs). A classification threshold was set using 49,025 calibration records (1,587 of which reported RCTs), and our bootstrap validation found the classifier had recall of 0.99 (95% confidence interval 0.98–0.99) and precision of 0.08 (95% confidence interval 0.06–0.12) in this data set. The final, calibrated RCT classifier correctly retrieved 43,783 (99.5%) of 44,007 RCTs included in Cochrane Reviews but missed 224 (0.5%). Older records were more likely to be missed than those more recently published. The Cochrane RCT Classifier can reduce manual study identification workload for Cochrane Reviews, with a very low and acceptable risk of missing eligible RCTs. This classifier now forms part of the Evidence Pipeline, an integrated workflow deployed within Cochrane to help improve the efficiency of the study identification processes that support systematic review production.

• Systematic review processes need to become more efficient.
• Machine learning is sufficiently mature for real-world use.
• A machine learning classifier was built using data from Cochrane Crowd.
• It was calibrated to achieve very high recall.
• It is now live and in use in Cochrane review production systems.
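The calibration step described in this record (choosing a score threshold that retains roughly 99% of true RCTs) is simple to sketch. The sketch below is illustrative only, not the Cochrane implementation: the TF-IDF/logistic-regression model and all data names (train_texts, cal_texts, new_texts, and their labels) are assumptions.

    # Recall-targeted threshold calibration, in the spirit of (not identical
    # to) the Cochrane RCT Classifier; model choice and data are assumed.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def calibrate_threshold(scores, labels, target_recall=0.99):
        """Return the score threshold that keeps >= target_recall of positives."""
        pos = np.sort(scores[labels == 1])
        # At most floor((1 - target_recall) * n) positives may be missed, so
        # the threshold sits at that quantile of the positive scores.
        k = int(np.floor((1.0 - target_recall) * len(pos)))
        return pos[k]

    def build_rct_classifier(train_texts, train_labels, cal_texts, cal_labels,
                             target_recall=0.99):
        """Fit a simple text classifier, then calibrate it on held-out records."""
        vec = TfidfVectorizer(max_features=50_000)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(vec.fit_transform(train_texts), train_labels)
        cal_scores = clf.predict_proba(vec.transform(cal_texts))[:, 1]
        threshold = calibrate_threshold(cal_scores, np.asarray(cal_labels),
                                        target_recall)
        return vec, clf, threshold

    # Usage: records scoring >= threshold go to human reviewers; the rest are
    # set aside, accepting a small (~1%) risk of missed RCTs.
    # vec, clf, t = build_rct_classifier(train_texts, train_labels,
    #                                    cal_texts, cal_labels)
    # keep = clf.predict_proba(vec.transform(new_texts))[:, 1] >= t

As in the study, precision at such a threshold can be very low (0.08 above); that is the accepted trade-off, since the classifier only discards clear non-RCTs rather than making final inclusion decisions.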
Small-sided games: An umbrella review of systematic reviews and meta-analyses
This umbrella review was conducted to summarize the evidence and assess the methodological quality of systematic reviews (SRs) and systematic reviews with meta-analyses (SRMAs) published on small-sided games (SSGs) in team ball sports. A systematic review of the Web of Science, PubMed, Cochrane Library, Scopus, and SPORTDiscus databases was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. From the 176 studies initially identified, 12 (eight SRs and four SRMAs) were fully reviewed, and their outcome measures were extracted and analyzed. Methodological quality assessment (using AMSTAR-2) revealed that seven reviews had low quality and five had critically low quality. Two major types of SSG effects were observed: (i) short-term acute effects and (ii) long-term adaptations. Four broad dimensions of analysis were found: (i) physiological demands (internal load); (ii) physical demands (external load) or fitness status; (iii) technical actions; and (iv) tactical behavior and collective organization. The psychological domain was reduced to an analysis of enjoyment. The main findings of this umbrella review are that SSGs have positive effects on aerobic capacity and tactical/technical behaviors, while findings on neuromuscular adaptations are more heterogeneous. Factors such as sex, age group, expertise, skill level, and fitness status also determine some acute effects and adaptations. This umbrella review identified that most of the systematic reviews and meta-analyses conducted on SSGs have low methodological quality by current standards. Most of the included systematic reviews showed that task constraints significantly change acute responses to exercise, and that SSGs are effective in improving aerobic capacity. Future original studies on this topic should improve methodological quality and experimental study designs for assessing changes in tactical/technical skills.
Large language models for conducting systematic reviews: on the rise, but not yet ready for use—a scoping review
Machine learning promises versatile help in the creation of systematic reviews (SRs). Recently, further developments in the form of large language models (LLMs) and their application in SR conduct have attracted attention. We aimed to provide an overview of LLM applications in SR conduct in health research. We systematically searched MEDLINE, Web of Science, IEEEXplore, ACM Digital Library, Europe PMC (preprints), and Google Scholar, and conducted an additional hand search (last search: February 26, 2024). We included scientific articles in English or German, published from April 2021 onwards, building upon the results of a mapping review that had not yet identified LLM applications to support SRs. Two reviewers independently screened studies for eligibility; after piloting, one reviewer extracted data, checked by another. Our database search yielded 8054 hits, and we identified 33 articles from our hand search. We finally included 37 articles on LLM support. LLM approaches covered 10 of 13 defined SR steps, most frequently literature search (n = 15, 41%), study selection (n = 14, 38%), and data extraction (n = 11, 30%). The most frequently used LLM was the Generative Pretrained Transformer (GPT) (n = 33, 89%). Validation studies were predominant (n = 21, 57%). Authors evaluated LLM use as promising in half of the studies (n = 20, 54%), as neutral in one-quarter (n = 9, 24%), and as nonpromising in one-fifth (n = 8, 22%). Although LLMs show promise in supporting SR creation, fully established or validated applications are often lacking. The rapid increase in research on LLMs for evidence synthesis production highlights their growing relevance.

Systematic reviews are a crucial tool in health research: experts carefully collect and analyze all available evidence on a specific research question. Creating these reviews is typically time- and resource-intensive, often taking months or even years to complete, as researchers must thoroughly search, evaluate, and synthesize an immense number of scientific studies. For the present article, we conducted a review to understand how new artificial intelligence (AI) tools, specifically large language models (LLMs) like the Generative Pretrained Transformer (GPT), can be used to help create systematic reviews in health research. We searched multiple scientific databases and finally found 37 relevant articles. We found that LLMs have been tested to help with various parts of the systematic review process, particularly in 3 main areas: searching the scientific literature (41% of studies), selecting relevant studies (38%), and extracting important information from these studies (30%). GPT was the most commonly used LLM, appearing in 89% of the studies. Most of the research (57%) focused on testing whether these AI tools actually work as intended in the context of systematic review production. The results were mixed: about half of the studies found LLMs promising, a quarter were neutral, and one-fifth found them not promising. While LLMs show potential for making the systematic review process more efficient, there is still a lack of fully tested and validated applications. However, the increasing number of studies in this field suggests that these AI tools are becoming increasingly important in creating systematic reviews.
• GPT was the most commonly used large language model (LLM).
• LLM application included 10 of 13 defined SR steps, most often literature search.
• Validation studies predominated, but fully established LLM applications are rare.
• Our results highlight the increasing relevance of LLM use in the field.
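Study selection is one of the SR steps this review found LLMs most often applied to; a minimal sketch of what such support typically looks like (a zero-shot include/exclude vote per title-abstract record) follows. The prompt wording, eligibility criteria, and model name are illustrative assumptions, not a validated pipeline from any included study.

    # Hypothetical title/abstract screening helper; criteria, prompt, and
    # model choice are assumptions made for illustration only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    CRITERIA = (
        "Population: adults with type 2 diabetes\n"
        "Intervention: SGLT2 inhibitors\n"
        "Design: randomized controlled trial"
    )  # hypothetical eligibility criteria

    def screen(title: str, abstract: str) -> str:
        """Ask the model for a coarse INCLUDE/EXCLUDE/UNSURE vote on one record."""
        prompt = (
            "You are screening records for a systematic review.\n"
            f"Eligibility criteria:\n{CRITERIA}\n\n"
            f"Title: {title}\nAbstract: {abstract}\n\n"
            "Answer with exactly one word: INCLUDE, EXCLUDE, or UNSURE."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()

As the review stresses, such applications are largely unvalidated: in practice, model votes need human verification, especially for records the model would exclude.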
Resource use during systematic review production varies widely: a scoping review
• Evidence on resource use is limited to studies reporting mostly on the resource “time” and not always under real-life conditions.
• Administration and project management, study selection, data extraction, and critical appraisal seem to be very resource intensive, varying with the number of included studies, while protocol development, literature search, and study retrieval take less time.
• Lack of experience and domain knowledge, lack of collaborative and supportive software, as well as lack of good communication and management can increase resource use during the systematic review process.

We aimed to map the resource use during systematic review (SR) production and the reasons why steps of SR production are resource intensive, to discover where the largest gains in efficiency might be possible. We conducted a scoping review. An information specialist searched multiple databases (e.g., Ovid MEDLINE, Scopus) and implemented citation-based and grey literature searching. We employed dual and independent screening of records at the title/abstract and full-text levels, and dual data extraction. We included 34 studies. Thirty-two reported on resource use (mostly time); four described reasons why steps of the review process are resource intensive. Study selection, data extraction, and critical appraisal seem to be very resource intensive, while protocol development, literature search, and study retrieval take less time. Project management and administration required a large proportion of SR production time. Lack of experience and domain knowledge, and lack of collaborative and SR-tailored software, good communication, and management, can be reasons why SR steps are resource intensive. Resource use during SR production varies widely. The areas with the largest resource use are administration and project management, study selection, data extraction, and critical appraisal of studies.
Systematic reviews on the same topic are common but often fail to meet key methodological standards: a research-on-research study
To 1) assess the frequency of overlapping systematic reviews (SRs) on the same topic, including overlap in outcomes, 2) assess whether SRs meet some key methodological characteristics, and 3) describe discrepancies in results. For this research-on-research study, we gathered a random sample of SRs with meta-analysis (MA) published in 2022, identified the questions they addressed and, for each question, searched all SRs with MA published from 2018 to 2023 to assess the frequency of overlap. We assessed whether SRs met a minimum set of six key methodological characteristics: protocol registration, search of major electronic databases, search of trial registries, double selection and extraction, use of the Cochrane Risk-of-Bias tool, and Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) assessment. From a sample of 107 SRs with MA published in 2022, we extracted 105 different questions and identified 123 other SRs with MA published from 2018 to 2023. There were overlapping SRs for 33 questions (31.4%, 95% CI: 22.9–41.3), with a median of three overlapping SRs per question (IQR 2–6; range 2–19). Of the 230 SRs, 15 (6.5%) met the minimum set of six key methodological characteristics, and 12 (11.4%) of the questions had at least one SR meeting this criterion. Among the 33 questions with overlapping SRs, for 7 (21.2%), the SRs had discrepant results. One-third of the SRs published in 2022 had at least one overlapping SR published from 2018 to 2023, and most did not meet a minimum set of methodological standards. For one-fifth of the questions, overlapping SRs provided discrepant results.

• Of 105 research questions, 33 had overlapping systematic reviews (31.4%).
• Only 6.5% of the SRs identified met a minimum set of key methodological standards.
• Among the 33 research questions with overlapping SRs, results differed for 21.2%.
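The headline estimate in this record (33 of 105 questions, 31.4%, 95% CI 22.9–41.3) is a plain binomial proportion. The abstract does not state the interval method; an exact Clopper-Pearson interval, sketched below, approximately reproduces the reported bounds.

    # Exact (Clopper-Pearson) binomial confidence interval; the paper's actual
    # CI method is unstated, so this is a plausible reconstruction, not theirs.
    from scipy.stats import beta

    def clopper_pearson(k: int, n: int, alpha: float = 0.05):
        """Two-sided exact confidence interval for a binomial proportion k/n."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    lo, hi = clopper_pearson(33, 105)
    print(f"33/105 = {33/105:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
    # prints roughly: 33/105 = 31.4%, 95% CI 22.9% to 41.3%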