36 results for "Patricia Logullo"
ACCORD (ACcurate COnsensus Reporting Document): A reporting guideline for consensus methods in biomedicine developed via a modified Delphi
In biomedical research, it is often desirable to seek consensus among individuals who have differing perspectives and experience. This is important when evidence is emerging, inconsistent, limited, or absent. Even when research evidence is abundant, clinical recommendations, policy decisions, and priority-setting may still require agreement from multiple, sometimes ideologically opposed parties. Despite their prominence and influence on key decisions, consensus methods are often poorly reported. Our aim was to develop ACCORD (ACcurate COnsensus Reporting Document), the first reporting guideline dedicated to and applicable to all consensus methods used in biomedical research, regardless of the objective of the consensus process. We followed the methodology recommended by the EQUATOR Network for the development of reporting guidelines: a systematic review was followed by a Delphi process and meetings to finalize the ACCORD checklist. The preliminary checklist was drawn from the systematic review of existing literature on the quality of reporting of consensus methods and from suggestions by the Steering Committee. A Delphi panel (n = 72) was recruited with representation from 6 continents and a broad range of experience, including clinical, research, policy, and patient perspectives. The 3 rounds of the Delphi process were completed by 58, 54, and 51 panelists. The preliminary checklist of 56 items was refined to a final checklist of 35 items relating to the article title (n = 1), introduction (n = 3), methods (n = 21), results (n = 5), discussion (n = 2), and other information (n = 3). The ACCORD checklist is the first reporting guideline applicable to all consensus-based studies. It will support authors in writing accurate, detailed manuscripts, thereby improving the completeness and transparency of reporting and providing readers with clarity regarding the methods used to reach agreement. The checklist will also make the rigor of the consensus methods behind the recommendations clear to readers. Reporting consensus studies with greater clarity and transparency may enhance trust in the recommendations made by consensus panels.
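As a quick illustration of the figures quoted above, the short Python sketch below (illustrative only; every number is taken from the abstract) computes per-round Delphi panel retention and confirms that the per-section item counts sum to the final 35-item checklist.

    # Tally figures quoted in the ACCORD abstract: per-round Delphi retention
    # from the initial panel of 72, and the per-section breakdown of the
    # final checklist. Illustrative only; all numbers come from the text.
    panel = 72
    rounds = [58, 54, 51]
    for i, n in enumerate(rounds, start=1):
        print(f"Round {i}: {n}/{panel} panelists ({n / panel:.0%} retention)")

    sections = {"title": 1, "introduction": 3, "methods": 21,
                "results": 5, "discussion": 2, "other information": 3}
    assert sum(sections.values()) == 35  # matches the stated final total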
Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence
Introduction: The Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction model Risk Of Bias ASsessment Tool (PROBAST) were both published to improve the reporting and critical appraisal of prediction model studies for diagnosis and prognosis. This paper describes the processes and methods that will be used to develop an extension to the TRIPOD statement (TRIPOD-artificial intelligence, AI) and the PROBAST (PROBAST-AI) tool for prediction model studies that applied machine learning techniques.
Methods and analysis: TRIPOD-AI and PROBAST-AI will be developed following published guidance from the EQUATOR Network, and will comprise five stages. Stage 1 will comprise two systematic reviews (across all medical fields and specifically in oncology) to examine the quality of reporting in published machine-learning-based prediction model studies. In stage 2, we will consult a diverse group of key stakeholders using a Delphi process to identify items to be considered for inclusion in TRIPOD-AI and PROBAST-AI. Stage 3 will be virtual consensus meetings to consolidate and prioritise key items to be included in TRIPOD-AI and PROBAST-AI. Stage 4 will involve developing the TRIPOD-AI checklist and the PROBAST-AI tool, and writing the accompanying explanation and elaboration papers. In the final stage, stage 5, we will disseminate TRIPOD-AI and PROBAST-AI via journals, conferences, blogs, websites (including TRIPOD, PROBAST and EQUATOR Network) and social media. TRIPOD-AI will provide researchers working on prediction model studies based on machine learning with a reporting guideline that can help them report the key details readers need to evaluate study quality and interpret findings, potentially reducing research waste. We anticipate PROBAST-AI will help researchers, clinicians, systematic reviewers and policymakers critically appraise the design, conduct and analysis of machine-learning-based prediction model studies, with a robust standardised tool for bias evaluation.
Ethics and dissemination: Ethical approval has been granted by the Central University Research Ethics Committee, University of Oxford, on 10 December 2020 (R73034/RE001). Findings from this study will be disseminated through peer-reviewed publications.
PROSPERO registration numbers: CRD42019140361 and CRD42019161764.
ACcurate COnsensus Reporting Document (ACCORD) explanation and elaboration: Guidance and examples to support reporting consensus methods
When research evidence is limited, inconsistent, or absent, healthcare decisions and policies need to be based on consensus amongst interested stakeholders. In these processes, the knowledge, experience, and expertise of health professionals, researchers, policymakers, and the public are systematically collected and synthesised to reach agreed clinical recommendations and/or priorities. However, despite the influence of consensus exercises, the methods used to achieve agreement are often poorly reported. The ACCORD (ACcurate COnsensus Reporting Document) guideline was developed to help report any consensus methods used in biomedical research, regardless of the health field, techniques used, or application. This explanatory document facilitates the use of the ACCORD checklist. This paper was built collaboratively based on classic and contemporary literature on consensus methods and publications reporting their use. For each ACCORD checklist item, this explanation and elaboration document unpacks the pieces of information that should be reported and provides a rationale for why it is essential to describe them in detail. Furthermore, this document offers a glossary of terms used in consensus exercises to clarify the meaning of common terms used across consensus methods, to promote uniformity, and to support understanding for consumers who read consensus statements, position statements, or clinical practice guidelines. The items are followed by examples of reporting from the ACCORD guideline, in text, tables and figures. The ACCORD materials - including the reporting guideline and this explanation and elaboration document - can be used by anyone reporting a consensus exercise in the context of health research. As a reporting guideline, ACCORD helps researchers to be transparent about the materials, resources (both human and financial), and procedures used in their investigations so readers can judge the trustworthiness and applicability of their results/recommendations.
There is no reliable evidence that providing authors with customized article templates including items from reporting guidelines improves completeness of reporting: the GoodReports randomized trial (GRReaT)
Background: Although medical journals endorse reporting guidelines, authors often struggle to find and use the right one for their study type and topic. The UK EQUATOR Centre developed the GoodReports website to direct authors to appropriate guidance. Pilot data suggested that authors did not improve their manuscripts when advised to use a particular reporting guideline by GoodReports.org at the journal submission stage. User feedback suggested that the checklist format of most reporting guidelines does not encourage use during manuscript writing. We tested whether providing customized reporting guidance within writing templates for use throughout the writing process resulted in clearer and more complete reporting than only giving advice on which reporting guideline to use.
Design and methods: GRReaT was a two-group parallel 1:1 randomized trial with a target sample size of 206. Participants were lead authors at an early stage of writing up a health-related study. Eligible study designs were cohort, cross-sectional, or case-control study, randomized trial, and systematic review. After randomization, the intervention group received an article template including items from the appropriate reporting guideline and links to explanations and examples. The control group received a reporting guideline recommendation and general advice on reporting. Participants sent their completed manuscripts to the GRReaT team before submitting for publication, for assessment of the completeness of each item in the title, methods, and results sections of the corresponding reporting guideline. The primary outcome was reporting completeness against the corresponding reporting guideline. Participants were not blinded to allocation; assessors were blind to group allocation. As a recruitment incentive, all participants received a feedback report identifying missing or inadequately reported items in these three sections.
Results: Between 9 June 2021 and 30 June 2023, we randomized 130 participants, 65 to the intervention and 65 to the control group. We present findings from the assessment of reporting completeness for the 37 completed manuscripts we received, 18 in the intervention group and 19 in the control group. The mean (standard deviation) proportion of completely reported items from the title, methods, and results sections of the manuscripts (primary outcome) was 0.57 (0.18) in the intervention group and 0.50 (0.17) in the control group. The mean difference between the two groups was 0.069 (95% CI -0.046 to 0.184; p = 0.231). In the sensitivity analysis, when partially reported items were counted as completely reported, the mean (standard deviation) proportion of completely reported items was 0.75 (0.15) in the intervention group and 0.71 (0.11) in the control group. The mean difference between the two groups was 0.036 (95% CI -0.127 to 0.055; p = 0.423).
Conclusion: As the dropout rate was higher than expected, we did not reach the recruitment target, and the difference between groups was not statistically significant. We therefore found no evidence that providing authors with customized article templates including items from reporting guidelines increases reporting completeness. We discuss the challenges faced when conducting the trial and suggest how future research testing innovative ways of improving reporting could be designed to improve recruitment and reduce dropouts.
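The primary-outcome figures above can be roughly re-derived from the quoted summary statistics. The sketch below assumes a Welch two-sample comparison of group means; the trial's actual analysis model is not stated in the abstract, so treat this as a plausibility check rather than a reproduction of the published analysis.

    # Rough re-derivation of the GRReaT primary outcome from the summary
    # statistics in the abstract, assuming a Welch two-sample t-interval.
    # The trial may have used a different model; small discrepancies are expected.
    import math
    from scipy import stats

    n_int, mean_int, sd_int = 18, 0.57, 0.18   # intervention group
    n_ctl, mean_ctl, sd_ctl = 19, 0.50, 0.17   # control group

    diff = mean_int - mean_ctl
    se = math.sqrt(sd_int**2 / n_int + sd_ctl**2 / n_ctl)

    # Welch-Satterthwaite degrees of freedom
    df = se**4 / ((sd_int**2 / n_int)**2 / (n_int - 1)
                  + (sd_ctl**2 / n_ctl)**2 / (n_ctl - 1))
    t_crit = stats.t.ppf(0.975, df)

    print(f"difference = {diff:.3f}, 95% CI = "
          f"({diff - t_crit * se:.3f}, {diff + t_crit * se:.3f})")
    # Gives roughly 0.070 (-0.047, 0.187), close to the reported
    # 0.069 (95% CI -0.046 to 0.184) computed from unrounded data.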
Reporting guideline checklists are not quality evaluation forms: they are guidance for writing
Reporting guidelines were created to help researchers write reports that contain the minimum set of information necessary to allow readers to clearly understand what was done and found in a study, and to facilitate a formal risk of bias assessment (using tools such as the Cochrane Risk of Bias tool or QUADAS). Complete reporting can also allow replication of study methods and procedures. A reporting guideline is 'a checklist, flow diagram, or explicit text to guide authors in reporting a specific type of research, developed using explicit methodology'.6 Following the publication of the first reporting guideline for clinical trials, CONSORT, in 1996,7 multiple reporting guidelines have been published, covering a range of study designs (eg, clinical trials, observational studies), clinical areas (eg, nutrition), or parts of a report (eg, abstracts), to help biomedical researchers write up their studies for publication.8,9 Stakeholders in biomedical research have embraced reporting guidelines, with major funders and a large number of biomedical journals endorsing the guidelines and increasingly requiring their use.10,11
The most widely used and well-known reporting guidelines usually consist of a statement paper that describes the process of developing the guideline and presents the guideline, usually in the form of a 'checklist'.4 Each checklist consists of a different number of reporting content items, ranging from just a few to more than 30. These checklists are designed to be easy for authors to use when they start writing their manuscript. Many journals have recognised their usefulness and have implemented reporting guidelines in their submission and editorial processes. Several journals also require authors to submit a completed checklist indicating where in the manuscript each item has been reported. Reporting guidelines are (or at least should be) rigorously developed following an extensive process of expert consultation and should not reflect just the opinion of one individual6; they should represent a consensus-based minimal set of items that a group of experienced researchers, journal editors, policymakers, and other stakeholders (eg, funders, patient representatives) have determined should be reported.
Existing guidance on reporting of consensus methodology: a systematic review to inform ACCORD guideline development
Objective: To identify evidence on the reporting quality of consensus methodology and to select potential checklist items for the ACcurate COnsensus Reporting Document (ACCORD) project to develop a consensus reporting guideline.
Design: Systematic review.
Data sources: Embase, MEDLINE, Web of Science, PubMed, Cochrane Library, Emcare, Academic Search Premier and PsycINFO from inception until 7 January 2022.
Eligibility criteria: Studies, reviews and published guidance addressing the reporting quality of consensus methodology for improvement of health outcomes in biomedicine or clinical practice. Reports of studies using or describing consensus methods but not commenting on their reporting quality were excluded. No language restrictions were applied.
Data extraction and synthesis: Screening and data extraction of eligible studies were carried out independently by two authors. Reporting quality items addressed by the studies were synthesised narratively.
Results: Eighteen studies were included: five systematic reviews, four narrative reviews, three research papers, three conference abstracts, two research guidance papers and one protocol. The majority of studies indicated that the quality of reporting of consensus methodology could be improved. Commonly addressed items were: consensus panel composition, and the definition of consensus and the threshold for achieving it. Items least addressed were: patient and public involvement (PPI); the role of the steering committee, chair and co-chair; conflicts of interest of panellists; and funding. Data extracted from included studies revealed additional items that were not captured in the data extraction form, such as justification of deviation from the protocol or incentives to encourage panellist response.
Conclusion: The results of this systematic review confirmed the need for a reporting checklist for consensus methodology and provided a range of potential checklist items to report. The next step in the ACCORD project builds on this systematic review and focuses on reaching consensus on these items to develop the reporting guideline.
Protocol registration: https://osf.io/2rzm9.
ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol
Background: Structured, systematic methods to formulate consensus recommendations, such as the Delphi process or nominal group technique, among others, provide the opportunity to harness the knowledge of experts to support clinical decision making in areas of uncertainty. They are widely used in biomedical research, in particular where disease characteristics or resource limitations mean that high-quality evidence generation is difficult. However, poor reporting of the methods used to reach a consensus – for example, not clearly explaining the definition of consensus, or not stating how consensus group panellists were selected – can potentially undermine confidence in this type of research and hinder reproducibility. Our objective is therefore to systematically develop a reporting guideline to help the biomedical research and clinical practice community describe the methods or techniques used to reach consensus in a complete, transparent, and consistent manner.
Methods: The ACCORD (ACcurate COnsensus Reporting Document) project will take place in five stages and follow the EQUATOR Network guidance for the development of reporting guidelines. In Stage 1, a multidisciplinary Steering Committee has been established to lead and coordinate the guideline development process. In Stage 2, a systematic literature review will identify evidence on the quality of the reporting of consensus methodology, to obtain potential items for a reporting checklist. In Stage 3, Delphi methodology will be used to reach consensus regarding the checklist items, first among the Steering Committee, and then among a broader Delphi panel comprising participants with a range of expertise, including patient representatives. In Stage 4, the reporting guideline will be finalised in a consensus meeting, along with the production of an Explanation and Elaboration (E&E) document. In Stage 5, we plan to publish the reporting guideline and E&E document in open-access journals, supported by presentations at appropriate events. Dissemination of the reporting guideline, including a website linked to social media channels, is crucial for the document to be implemented in practice.
Discussion: The ACCORD reporting guideline will provide a set of minimum items that should be reported about methods used to achieve consensus, including approaches ranging from simple unstructured opinion gatherings to highly structured processes.
GoodReports: developing a website to help health researchers find and use reporting guidelines
Background: The EQUATOR Network improves the quality and transparency of health research, primarily by promoting awareness and use of reporting guidelines. In 2018, the UK EQUATOR Centre launched GoodReports.org, a website that helps authors find and use reporting guidelines. This paper describes the tool's development so far. We describe user experience and behaviour when using GoodReports.org both inside and outside a journal manuscript submission process. We intend to use our findings to inform future development and testing of the tool.
Methods: We conducted a survey to collect data on user experience of the GoodReports website. We cross-checked a random sample of 100 manuscripts submitted to a partner journal to describe the level of agreement between the tool's checklist recommendation and what we would have recommended. We compared the proportion of authors submitting a completed reporting checklist alongside their manuscripts between groups exposed or not exposed to the GoodReports tool. We also conducted a study comparing the completeness of reporting of manuscript text before an author received a reporting guideline recommendation from GoodReports.org with the completeness of the text subsequently submitted to a partner journal.
Results: Seventy percent (423/599) of survey respondents rated GoodReports 8 or more out of 10 for usefulness, and 74% (198/267) said they had made changes to their manuscript after using the website. We agreed with the GoodReports reporting guideline recommendation in 84% (72/86) of cases. Of authors who completed the guideline finder questionnaire, 14% (10/69) failed to submit a completed checklist, compared to 30% (41/136) of those who did not use the tool. Of the 69 authors who received a GoodReports reporting guideline recommendation, 20 manuscript pairs could be reviewed before and after use of GoodReports. Five included more information in their methods section after exposure to GoodReports. On average, authors reported 57% of necessary reporting items before completing a checklist on GoodReports.org and 60% after.
Conclusion: The data suggest that reporting guidance is needed early in the writing process, not at the submission stage. We are developing GoodReports by adding more reporting guidelines and by creating editable article templates. We will test whether GoodReports users write more complete study reports in a randomised trial targeting researchers starting to write health research articles.
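The abstract reports only raw proportions for checklist submission (10/69 among tool users versus 41/136 among non-users). For readers who want a sense of how large that gap is relative to sampling noise, the sketch below applies a two-proportion z-test to those counts; the test choice is an illustrative assumption, not an analysis from the paper.

    # Two-proportion z-test on checklist non-submission rates quoted in the
    # GoodReports abstract. Illustrative: the paper reports only raw proportions.
    import math

    fail_tool, n_tool = 10, 69      # used the GoodReports guideline finder
    fail_none, n_none = 41, 136     # did not use the tool

    p1, p2 = fail_tool / n_tool, fail_none / n_none
    p_pool = (fail_tool + fail_none) / (n_tool + n_none)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_tool + 1 / n_none))
    z = (p1 - p2) / se

    print(f"{p1:.0%} vs {p2:.0%}, z = {z:.2f}")
    # |z| above ~1.96 corresponds to p < 0.05 on a two-sided test.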
Reporting quality and adherence of randomized controlled trials about statins and/or fibrates for diabetic retinopathy to the CONSORT checklist
Background: A considerable number of randomized controlled trials (RCTs) have been published on statins and/or fibrates for diabetic retinopathy, a clinical condition associated with a high social and economic burden. Adherence to the CONSORT statement items is imperative to ensure transparency and reproducibility in clinical research. The aim of this study is to assess the reporting quality and the adherence to CONSORT of RCTs assessing statins and/or fibrates for diabetic retinopathy.
Methods: We conducted a critical appraisal study at the Discipline of Evidence-Based Medicine, Escola Paulista de Medicina, Universidade Federal de São Paulo (Unifesp). A sensitive literature search was performed to identify all relevant RCTs, with no time or language limits. Two authors independently evaluated the reporting quality of the selected RCTs using the CONSORT statement as a standard.
Results: Thirteen reports of RCTs were included in this study. The adherence of the reports to CONSORT items ranged from 24% to 68%. The median score was 11 (interquartile range (IQR) 8 to 13). When analyzed separately, the methods sections of the reports had a median of three items (IQR 2 to 4) judged adherent to the methods items of CONSORT (items 3 to 12). The most underreported items were those related to trial design, title and abstract, allocation concealment, implementation of the randomization sequence, and blinding. Other important items, such as the one related to the description of the inclusion criteria, also had low adherence.
Conclusions: The overall adherence to the CONSORT checklist items was poor, especially for items related to the methods section. RCT reports on statins and/or fibrates for diabetic retinopathy must be optimized to avoid reporting biases and to improve transparency and reproducibility.
Improving medical research in the United Kingdom
Poor quality medical research causes serious harms by misleading healthcare professionals and policymakers, decreasing trust in science and medicine, and wasting public funds. Here we outline the underlying problems, including insufficient transparency, dysfunctional incentives, and reporting biases, and make the following recommendations to address them:
  • Journals and funders should ensure authors fulfil their obligation to share detailed study protocols, analytical code, and (as far as possible) research data.
  • Funders and journals should incentivise uptake of registered reports and establish funding pathways which integrate evaluation of funding proposals with initial peer review of registered reports.
  • A mandatory national register of interests for all those involved in medical research in the UK should be established, with an expectation that individuals maintain the accuracy of their declarations and regularly update them.
  • Funders and institutions should stop using metrics such as citation counts and journal impact factor to assess research and researchers, and should instead evaluate them based on quality, reproducibility, and societal value.
  • Employers and non-academic training programmes for health professionals (clinicians hired for patient care, not to do research) should not select candidates based on the number of research publications; promotion criteria based on publications should be restricted to those hired to do research.