1,583 results for "Evidence synthesis"
Overview of evidence synthesis types and modes
Evidence syntheses systematically compile and analyze information from multiple sources to support health-care decision-making. As many different types of questions need to be answered in health care, different evidence synthesis types have emerged. In this article, we introduce the most common types of evidence synthesis. We discuss the aims, key methodological features, and illustrative examples of different evidence synthesis types and modes, drawing on our work with the Evidence Synthesis Taxonomy Initiative (ESTI). Evidence synthesis types include systematic reviews, qualitative evidence syntheses, mixed methods reviews, overviews of reviews, and ‘big picture reviews’ (scoping reviews, mapping reviews, and evidence gap maps). Additionally, we focus on rapid and living reviews as modes and how they can be applied to different evidence synthesis types. It is essential to understand the main types of evidence synthesis to choose the most suitable method for addressing a specific health-related question. Health-care decisions should be based on the best available evidence. To bring together findings from many studies, researchers use evidence synthesis: structured methods that summarize what is known on a topic. Because health questions differ, various types of evidence syntheses exist, each designed for specific needs. This article explains the aims and characteristics of the most common types of evidence synthesis: systematic reviews, overviews of reviews, qualitative evidence syntheses, mixed methods reviews, and ‘big picture reviews’ (scoping reviews, mapping reviews, and evidence gap maps). We also describe two ways evidence syntheses can be carried out: rapid reviews (done quickly to support urgent decisions) and living reviews (regularly updated as new evidence becomes available). Understanding the different approaches helps clinicians, patients, and policymakers select the right type of review for their health questions.
This ensures that decisions are guided by evidence that is both reliable and appropriate for the situation.
Future of evidence ecosystem series: 3. From an evidence synthesis ecosystem to an evidence ecosystem
The “one-off” approach of systematic reviews is no longer sustainable; we need to move toward producing “living” evidence syntheses (i.e., comprehensive, based on rigorous methods, and up-to-date). This implies rethinking the evidence synthesis ecosystem, its infrastructure, and management. The three distinct production systems—primary research, evidence synthesis, and guideline development—should work together to allow for continuous refreshing of synthesized evidence and guidelines. A new evidence ecosystem, not just focusing on synthesis, should allow for bridging the gaps between evidence synthesis communities, primary researchers, guideline developers, health technology assessment agencies, and health policy authorities. This network of evidence synthesis stakeholders should select relevant clinical questions considered a priority topic. For each question, a multidisciplinary community including researchers, health professionals, guideline developers, policymakers, patients, and methodologists needs to be established and commit to performing the initial evidence synthesis and keeping it up-to-date. Encouraging communities to work together continuously with bidirectional interactions requires greater incentives, rewards, and the involvement of health care policy authorities to optimize resources. A better evidence ecosystem with collaborations and interactions between each partner of the network of evidence synthesis stakeholders should permit living evidence syntheses to justify their status in evidence-informed decision-making.
An international modified Delphi process supported updating the web-based "Right Review" tool
The proliferation of evidence synthesis methods makes it challenging for reviewers to select the "right" method. This study aimed to update the Right Review tool (a web-based decision support tool that guides users through a series of questions for recommending evidence synthesis methods) and establish a common set of questions for the synthesis of both quantitative and qualitative studies (https://rightreview.knowledgetranslation.net/). A two-round international electronic modified Delphi process was conducted (2022) with researchers, health-care providers, patients, and policy makers. Panel members rated the importance/clarity of the Right Review tool's guiding questions, evidence synthesis type definitions, and tool output. High agreement was defined as at least 70% agreement. Any items not reaching high agreement after round 2 were discussed by the international Project Steering Group. Twenty-four experts from 9 countries completed round 1, with 12 completing round 2. Of the 46 items presented in round 1, 21 reached high agreement. Twenty-seven items were presented in round 2, with 8 reaching high agreement. The Project Steering Group discussed items not reaching high agreement, including 8 guiding questions, 9 review definitions (predominantly related to qualitative synthesis), and 2 output items. Three items were removed entirely and the remaining 16 were revised, edited, and/or combined with existing items. The final tool comprises 42 items: 9 guiding questions, 25 evidence synthesis definitions and approaches, and 8 tool outputs. The freely accessible Right Review tool supports choosing an appropriate review method. The design and clarity of the tool were enhanced by harnessing the Delphi technique to shape ongoing development. The updated tool is expected to be available in Quarter 1, 2025.
  • Right Review assists in identifying appropriate evidence synthesis methods.
  • Right Review was updated using an international Delphi process.
  • Right Review now has a single set of guiding questions.
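As a concrete illustration of the agreement rule described above, the tally for a Delphi round can be sketched as follows. This is a minimal sketch assuming a 7-point rating scale and illustrative item names; the abstract specifies only the at-least-70% agreement threshold.

```python
# Hedged sketch: flagging high agreement in a Delphi round.
# The 7-point scale, the 6-7 "agree" cutoff, and the item names
# are illustrative assumptions, not the study's actual instrument.

def high_agreement(ratings, threshold=0.70, cutoff=6):
    """True if at least `threshold` of panellists rated the item
    at or above `cutoff` (e.g. 6-7 on a 7-point scale)."""
    agree = sum(1 for r in ratings if r >= cutoff)
    return agree / len(ratings) >= threshold

round1 = {
    "guiding_question_1": [7, 6, 6, 7, 5, 6, 7, 6, 6, 7],  # 9/10 agree
    "definition_scoping": [4, 5, 6, 7, 3, 6, 5, 4, 6, 5],  # 4/10 agree
}
retained = [item for item, r in round1.items() if high_agreement(r)]
print(retained)  # only the first item clears the 70% threshold
```

Items that miss the threshold would, as in the study, be carried to a second round or referred to the steering group.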
Assessing data extraction in randomized clinical trials with large language models
Background Data extraction is an essential step in evidence synthesis but remains time-consuming and prone to human error. Large language models (LLMs) such as ChatGPT-4 and Claude 3 Opus may offer partial automation solutions. This proof-of-concept study evaluated their preliminary performance in extracting data from full-text randomized controlled trial (RCT) reports within systematic reviews. Methods Two previously validated systematic reviews published in European Urology (105 trials in total) were selected. Standardized prompts were created and optimized with ChatGPT-4 and tested independently across trials using both ChatGPT-4 (paid version) and Claude 3 Opus via the standard web interface. Each prompt was executed three times, and only the first output was used to calculate the proportion of correctly extracted items (Pacc). Extracted data were compared with verified gold-standard datasets. Results For binary outcomes, ChatGPT-4 and Claude 3 Opus showed high accuracy in group size extraction (Pacc: 91%–94%) and moderate accuracy for event counts (Pacc: 57%–71%). For continuous outcomes, group size accuracy was moderate (Pacc: 59%–69%), while mean and standard deviation extraction was poor (Pacc: 24%–56%). Test-retest reliability was substantial to almost perfect. Conclusions Current LLMs can assist in automating data extraction for binary outcomes but remain inconsistent for continuous outcomes. These preliminary results should be interpreted with caution. Further research using larger datasets and iterative prompt refinement is needed before LLMs can be reliably integrated into systematic review workflows.
Highlights
1. What is already known on this topic.
  • Data extraction is an essential step in evidence synthesis but is highly time-consuming and prone to human error, with reported error rates reaching up to 70%.
  • The advent of large language models (LLMs) in artificial intelligence (AI) presents a potential solution for automating this process.
2. What this study adds.
  • This case study evaluates the effectiveness of two AI tools for data extraction, demonstrating that AI can achieve equal or superior performance compared to manual extraction for binary outcomes in randomized trials.
  • However, AI tools exhibit poor performance when extracting data for continuous outcomes.
3. How this study might affect research, practice or policy.
  • Our findings highlight the significant potential of LLMs to assist in automating data extraction in evidence synthesis. Nonetheless, further research is required to enhance the accuracy of AI tools, particularly for continuous data extraction.
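The study's accuracy metric, the proportion of correctly extracted items (Pacc), can be sketched as a field-by-field comparison against a gold standard. The field names and trial values below are illustrative assumptions, not the study's actual extraction schema.

```python
# Hedged sketch: Pacc as the fraction of gold-standard fields an
# LLM reproduced exactly. Field names and values are hypothetical.

def pacc(extracted, gold):
    """Proportion of gold-standard items the extraction got right."""
    correct = sum(1 for k, v in gold.items() if extracted.get(k) == v)
    return correct / len(gold)

gold = {"n_treatment": 52, "n_control": 50,
        "events_treatment": 12, "events_control": 19}
llm_output = {"n_treatment": 52, "n_control": 50,
              "events_treatment": 12, "events_control": 17}  # one miss

print(f"Pacc = {pacc(llm_output, gold):.0%}")  # prints "Pacc = 75%"
```

Aggregating this per-trial fraction over all trials in a review yields the percentage ranges reported in the abstract.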
A leave-one-out algorithm for contribution analysis in component network meta-analysis
Background Component network meta-analysis (CNMA) enables disentangling individual component effects from multicomponent treatments. However, no established methods exist to quantify the contribution of evidence from constituent comparisons to the disentangled component effect estimates in CNMA, hindering the interpretability of results. Methods We proposed a leave-one-out algorithm to address this gap. The core approach iteratively excludes each constituent comparison (i.e., edge in the network), recomputes the variances of all component effects, and quantifies the precision leverage of each comparison based on the induced variance inflation. Contributions are assigned via a normalized matrix. We developed special rules to handle cases where exclusion renders component effects unidentifiable. The method also formally decomposes component estimates into direct and additive evidence sources. Its utility and validity were evaluated through implementation using hypothetical networks and a real-world dataset. Results The leave-one-out algorithm accurately identified pivotal evidence sources by capturing substantial variance fluctuations upon their exclusion. Contributions assigned via precision leverage effectively quantified the critical importance of comparisons isolating target components. Application to real-world data (66 comparisons, 21 components) also confirmed the method’s precision in discerning influential evidence within complex networks, and exhibited strong alignment with the parameter decomposition results. Crucially, validation revealed no inherent relationship exists between precision leverage and linear weighting. Conclusions The leave-one-out algorithm resolves a critical gap in CNMA methodology by providing a robust, variance-based framework for quantifying the contribution of constituent direct comparisons to component effect estimates. 
It reliably identifies pivotal evidence sources essential for component identifiability and precision across diverse network structures, enhancing the transparency and interpretability of evidence synthesis for complex interventions.
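The core leave-one-out idea can be sketched on a toy additive CNMA model: drop each direct comparison (edge), refit a weighted least-squares model, and record the variance inflation of each component estimate. The two-component network, weights, and fixed-effect formulation below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the leave-one-out algorithm on a toy network.
import numpy as np

# Rows are direct comparisons (edges); columns are components A and B
# under an additive model, e.g. [1, 1] compares A+B against control.
X = np.array([[1.0,  0.0],   # A vs control
              [0.0,  1.0],   # B vs control
              [1.0,  1.0],   # A+B vs control
              [1.0, -1.0]])  # A vs B
w = np.array([4.0, 4.0, 2.0, 1.0])  # inverse-variance weights per edge

def component_variances(X, w):
    """Variances of the WLS component estimates: diag((X'WX)^-1)."""
    return np.diag(np.linalg.inv(X.T @ (w[:, None] * X)))

base = component_variances(X, w)
for i in range(len(w)):
    keep = np.arange(len(w)) != i
    try:
        inflation = component_variances(X[keep], w[keep]) / base
    except np.linalg.LinAlgError:
        # Exclusion makes a component unidentifiable; the paper's
        # special rules would apply here.
        inflation = np.full(X.shape[1], np.inf)
    print(f"drop edge {i}: variance inflation {np.round(inflation, 2)}")
```

Edges whose removal inflates a component's variance most carry the largest precision leverage; normalizing these inflations over edges gives a contribution matrix in the spirit of the paper.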
Ecological impacts of water-based recreational activities on freshwater ecosystems
Human presence at water bodies can have a range of ecological impacts, creating trade-offs between recreation as an ecosystem service and conservation. Conservation policies could be improved by relying on robust knowledge about the relative ecological impacts of water-based recreation. We present the first global synthesis on recreation ecology in aquatic ecosystems, differentiating the ecological impacts of shore use, (shoreline) angling, swimming and boating. Impacts were assessed at three levels of biological organization (individuals, populations and communities) for several taxa. We screened over 13 000 articles and identified 94 suitable studies that met the inclusion criteria, providing 701 effect sizes. Boating and shore use resulted in consistently negative, significant ecological impacts across all levels of biological organization. The results were less consistent for angling and swimming. The strongest negative effects were observed in invertebrates and plants. Recreational impacts on birds were most pronounced at the individual level, but not significant at the community level. Due to publication bias and knowledge gaps, generalizations of the ecological impacts of aquatic recreation are challenging. Impacts appear to depend only weakly on the specific form of recreation. Thus, selectively constraining specific types of recreation may have little conservation value as long as other forms of water-based recreation continue.
How long does it take to complete and publish a systematic review of animal studies?
Introduction Conducting a rigorous systematic review of animal studies requires a priori registration of a study protocol. However, it remains unknown how many of these registered studies culminate in publication and how long it takes to complete such a systematic review. Thus, this study had two objectives: (1) to assess the proportion of registered protocols that result in publication, and (2) to determine the time required to complete and publish systematic reviews of animal studies after protocol registration. Methods All available systematic review protocols of animal studies were manually downloaded from PROSPERO, the international registry of systematic review protocols. Start and completion dates as well as topical and demographic data were extracted, complemented by a web-scraping approach. Assessment of publication status was achieved through a systematic literature search. Results From a total of 1,771 protocols, 406 were excluded due to recent start dates. This left 1,365 protocols eligible for the final analysis. Among these, 694 (51%) resulted in a published systematic review. Median time to complete and to publish a systematic review was 11.5 months (range: 0.13–44.9 months) and 16.2 months (range: 1.0–49.7 months), respectively. Time to submission was 69% longer than the authors had anticipated (6.8 months [range: 0.9–48.0]). Conclusion Only half of registered protocols resulted in publication, suggesting possible publication bias. Authors can expect to complete and publish an animal systematic review within approximately one year.
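The headline "69%" figure follows directly from the medians reported above, as a quick check shows (figures taken from the abstract; the variable names are ours):

```python
# Hedged sketch of the abstract's headline arithmetic: how much
# longer the median completion time was than authors anticipated.
median_actual = 11.5      # months, registration to submission (abstract)
median_anticipated = 6.8  # months, as stated in the protocols (abstract)

excess = median_actual / median_anticipated - 1
print(f"{excess:.0%} longer than anticipated")  # prints "69% longer than anticipated"
```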
Emerging themes in Population Consequences of Disturbance models
Assessing the non-lethal effects of disturbance from human activities is necessary for wildlife conservation and management. However, linking short-term responses to long-term impacts on individuals and populations is a significant hurdle for evaluating the risks of a proposed activity. The Population Consequences of Disturbance (PCoD) framework conceptually describes how disturbance can lead to changes in population dynamics, and its real-world application has led to a suite of quantitative models that can inform risk assessments. Here, we review PCoD models that forecast the possible consequences of a range of disturbance scenarios for marine mammals. In so doing, we identify common themes and highlight general principles to consider when assessing risk. We find that, when considered holistically, these models provide valuable insights into which contextual factors influence a population’s degree of exposure and sensitivity to disturbance. We also discuss model assumptions and limitations, identify data gaps and suggest future research directions to enable PCoD models to better inform risk assessments and conservation and management decisions. The general principles explored can help wildlife managers and practitioners identify and prioritize the populations most vulnerable to disturbance and guide industry in planning activities that avoid or mitigate population-level effects.
Addressing evidence needs during health crises in the province of Quebec (Canada): a proposed action plan for rapid evidence synthesis
Background The COVID-19 pandemic necessitated the rapid availability of evidence to respond in a timely manner to the needs of practice settings and decision-makers in health and social services. Now that the pandemic is over, it is time to put in place actions to improve the capacity of systems to meet knowledge needs in times of crisis. The main objective of this project was thus to develop an action plan for the rapid synthesis of evidence in times of health crisis in Quebec (Canada). Methods We conducted a three-phase collaborative research project. First, we carried out a survey with producers and users of rapid evidence syntheses (n = 40) and a group interview with three patient partners to prioritize courses of action. In parallel, we performed a systematic mapping of the literature to identify rapid evidence synthesis initiatives developed during the pandemic. The results of these two phases were used in a third phase, in which we organized a deliberative workshop with 26 producers and users of rapid evidence syntheses to identify strategies to operationalize priorities. The data collected at each phase were compared to identify common courses of action and integrated to develop an action plan. Results A total of 14 specific actions structured into four main axes were identified over the three phases. In axis 1, actions on raising awareness of the importance of evidence-informed decision-making among stakeholders in the health and social services network are presented. Axis 2 includes actions to promote optimal collaboration of key stakeholders in the production of rapid evidence syntheses to support decision-making. Actions advocating the use of a variety of rapid evidence synthesis methodologies known to be effective in supporting decision-making are presented in axis 3. Finally, axis 4 is about actions on the use of effective knowledge translation strategies to promote the use of rapid evidence synthesis products to support decision-making.
Conclusions This project led to the development of a collective action plan aimed at preparing the Quebec ecosystem and other similar jurisdictions to meet knowledge needs more effectively in times of health emergency. The implementation of this plan and its evaluation will enable us to continue to fine-tune it.
On the use of computer‐assistance to facilitate systematic mapping
The volume of published academic research is growing rapidly and this new era of “big literature” poses new challenges to evidence synthesis, pushing traditional, manual methods of evidence synthesis to their limits. New technology developments, including machine learning, are likely to provide solutions to the problem of information overload and allow scaling of systematic maps to large and even vast literatures. In this paper, we outline how systematic maps lend themselves well to automation and computer‐assistance. We believe that it is a major priority to consolidate efforts to develop and validate efficient, rigorous and robust applications of these novel technologies, ensuring the challenges of big literature do not prevent the future production of systematic maps.