Search Results
39 result(s) for "Cruz Rivera, Samantha"
Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension
The CONSORT 2010 statement provides minimum guidelines for reporting randomized trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes. 
The CONSORT-AI and SPIRIT-AI extensions improve the transparency of clinical trial design and trial protocol reporting for artificial intelligence interventions.
Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted upon by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human–AI interaction and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions. 
Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias for a planned clinical trial. The CONSORT-AI and SPIRIT-AI extensions improve the transparency of clinical trial design and trial protocol reporting for artificial intelligence interventions.
Assessing the impact of healthcare research: A systematic review of methodological frameworks
Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix. Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) 'primary research-related impact', (2) 'influence on policy making', (3) 'health and health systems impact', (4) 'health-related and societal impact', and (5) 'broader economic impact'. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched. The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. 
This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.
The impact of patient-reported outcome data from clinical trials: perspectives from international stakeholders
Background Patient-reported outcomes (PROs) are increasingly collected in clinical trials as they provide unique information on the physical, functional and psychological impact of a treatment from the patient’s perspective. Recent research suggests that PRO trial data have the potential to inform shared decision-making, support pharmaceutical labelling claims and influence healthcare policy and practice. However, there remains limited evidence regarding the actual impact associated with PRO trial data and how to maximise PRO impact to benefit patients and society. Thus, our objective was to qualitatively explore international stakeholders’ perspectives surrounding: a) the impact of PRO trial data, b) impact measurement metrics, and c) barriers and facilitators to effectively maximise the impact of PRO trial data upon patients and society. Methods Semi-structured interviews with 24 international stakeholders were conducted between May and October 2018. Data were coded and analysed using reflexive thematic analysis. Results International stakeholders emphasised the impact of PRO trial data to benefit patients and society. Influence on policy-impact, including changes to clinical healthcare practice and guidelines, drug approval and promotional labelling claims were common types of PRO impact reported by interviewees. Interviewees suggested impact measurement metrics including: number of pharmaceutical labelling claims and interviews with healthcare practitioners to determine whether PRO data were incorporated in clinical decision-making. Key facilitators to PRO impact highlighted by stakeholders included: standardisation of PRO tools; consideration of health utilities when selecting PRO measures; adequate funding to support PRO research; improved reporting and dissemination of PRO trial data by key opinion leaders and patients; and development of legal enforcement of the collection of PRO data. 
Conclusions Determining the impact of PRO trial data is essential to better allocate funds, minimise research waste and to help maximise the impact of these data for patients and society. However, measuring the impact of PRO trial data through metrics is a challenging task, as current measures do not capture the total impact of PRO research. Broader international multi-stakeholder engagement and collaboration is needed to standardise PRO assessment and maximise the impact of PRO trial data to benefit patients and society.
Development of a core outcome set and identification of patient-reportable outcomes for primary brain tumour trials: protocol for the COBra study
Introduction Primary brain tumours, specifically gliomas, are a rare disease group. The disease and treatment negatively impact patients and those close to them. The high rates of physical and cognitive morbidity differ from other cancers, causing reduced health-related quality of life. Glioma trials using outcomes that allow holistic analysis of treatment benefits and risks enable informed care decisions. Currently, outcome assessment in glioma trials is inconsistent, hindering evidence synthesis. A core outcome set (COS) - an agreed minimum set of outcomes to be measured and reported - may address this. International initiatives focus on defining core outcome assessments across brain tumour types. This protocol describes the development of a COS involving UK stakeholders for use in glioma trials, applicable across glioma types, with provision to identify subsets as required. Due to stakeholder interest in data reported from the patient perspective, outcomes from the COS that can be patient-reported will be identified. Methods and analysis Stage I: (1) trial registry review to identify outcomes collected in glioma trials and (2) systematic review of qualitative literature exploring glioma patient and key stakeholder research priorities. Stage II: semi-structured interviews with glioma patients and caregivers. Outcome lists will be generated from stages I and II. Stage III: the study team will remove duplicate items from the outcome lists and ensure accessible terminology for inclusion in the Delphi survey. Stage IV: a two-round Delphi process whereby the outcomes will be rated by key stakeholders. Stage V: a consensus meeting where participants will finalise the COS. The study team will identify the COS outcomes that can be patient-reported. Further research is needed to match patient-reported outcomes to available measures. Ethics and dissemination Ethical approval was obtained (REF SMREC 21/59, Cardiff University School of Medicine Research Ethics Committee).
Study findings will be disseminated widely through conferences and journal publication. The final COS will be adopted and promoted by patient and carer groups and its use by funders encouraged. PROSPERO registration number CRD42021236979.
‘Give Us The Tools!’: development of knowledge transfer tools to support the involvement of patient partners in the development of clinical trial protocols with patient-reported outcomes (PROs), in accordance with SPIRIT-PRO Extension
Objectives (a) To adapt the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT)-patient-reported outcome (PRO) Extension guidance to a user-friendly format for patient partners and (b) to codesign a web-based tool to support the dissemination and uptake of the SPIRIT-PRO Extension by patient partners. Design A 1-day patient and public involvement session. Participants Seven patient partners. Methods A patient partner produced an initial lay summary of the SPIRIT-PRO guideline and a glossary. We held a 1-day PPI session in November 2019 at the University of Birmingham. Five patient partners discussed the draft lay summary, agreed on the final wording, and codesigned and agreed the final content for both tools. Two additional patient partners were involved in writing the manuscript. The study complied with INVOLVE guidelines and was reported according to the Guidance for Reporting Involvement of Patients and the Public 2 checklist. Results Two user-friendly tools were developed to help patients and members of the public be involved in the codesign of clinical trials collecting PROs. The first tool presents a lay version of the SPIRIT-PRO Extension guidance. The second depicts the most relevant points of the guidance, identified by the patient partners, through an interactive flow diagram. Conclusions These tools have the potential to support the involvement of patient partners in making informed contributions to the development of PRO aspects of clinical trial protocols, in accordance with the SPIRIT-PRO Extension guidelines. The involvement of patient partners ensured the tools focused on issues most relevant to them.
Patient-reported outcomes in integrated health and social care: A scoping review
Background: Patient-reported outcomes (PROs) have potential to support integrated health and social care research and practice; however, evidence of their utilisation has not been synthesised. Objective: To identify PRO measures utilised in integrated care and adult social care research and practice and to chart the evidence of implementation factors influencing their uptake. Design: Scoping review of peer-reviewed literature. Data sources: Six databases (01 January 2010 to 19 May 2023). Study selection: Articles reporting PRO use with adults (18+ years) in integrated care or social care settings. Review methods: We screened articles against pre-specified eligibility criteria; 36 studies (23%) were extracted in duplicate for verification. We summarised the data using thematic analysis and descriptive statistics. Results: We identified 159 articles reporting on 216 PRO measures deployed in a social care or integrated care setting. Most articles used PRO measures as research tools. Eight (5.0%) articles used PRO measures as an intervention. Articles focused on community-dwelling participants (35.8%) or long-term care home residents (23.9%), with three articles (1.9%) focussing on integrated care settings. Stakeholders viewed PROs as feasible and acceptable, with benefits for care planning, health and wellbeing monitoring as well as quality assurance. Patient-reported outcome measure selection, administration and PRO data management were perceived implementation barriers. Conclusion: This scoping review showed increasing utilisation of PROs in adult social care and integrated care. Further research is needed to optimise PROs for care planning, design effective training resources and develop policies and service delivery models that prioritise secure, ethical management of PRO data.
Key considerations to reduce or address respondent burden in patient-reported outcome (PRO) data collection
Patient-reported outcomes (PROs) are used in clinical trials to provide evidence of the benefits and risks of interventions from a patient perspective and to inform regulatory decisions and health policy. The collection of PROs in routine practice can facilitate monitoring of patient symptoms; identification of unmet needs; prioritisation and/or tailoring of treatment to the needs of individual patients and inform value-based healthcare initiatives. However, respondent burden needs to be carefully considered and addressed to avoid high rates of missing data and poor reporting of PRO results, which may lead to poor quality data for regulatory decision making and/or clinical care. The collection of patient-reported outcomes (PROs) may capture patients’ assessments of their health status. Here authors highlight PRO-specific issues that should be considered to minimise respondent burden in clinical trials and routine care.
Protocol for a scoping review exploring the use of patient-reported outcomes in adult social care
Introduction Patient-reported outcomes (PROs) are measures of a person’s own views of their health, functioning and quality of life. They are typically assessed using validated, self-completed questionnaires known as patient-reported outcome measures (PROMs). PROMs are used in healthcare settings to support care planning, clinical decision-making, patient–practitioner communication and quality improvement. PROMs have a potential role in the delivery of social care, where people often have multiple and complex long-term health conditions. However, the use of PROMs in this context is currently unclear. The objective of this scoping review is to explore the evidence relating to the use of PROMs in adult social care. Methods and analyses The electronic databases Medline (Ovid), PsycINFO (Ovid), ASSIA (ProQuest), Social Care Online (SCIE), Web of Science and EMBASE (Ovid) were searched on 29 September 2020 to identify eligible studies and other publicly available documents published since 2010. A grey literature search and hand searching of citations and reference lists of the included studies will also be undertaken. No restrictions on study design or language of publication will be applied. Screening and data extraction will be completed independently by two reviewers. Quality appraisal of the included documents will use the Critical Appraisal Skills Programme and AACODS (Authority, Accuracy, Coverage, Objectivity, Date, Significance) checklists. A customised data charting table will be used for data extraction, with analysis of qualitative data using the framework method. The review findings will be presented as tables and in a narrative summary. Ethics and dissemination Ethical review is not required as scoping reviews are a form of secondary data analysis that synthesises data from publicly available sources.
Review findings will be shared with service users and other relevant stakeholders and disseminated through a peer-reviewed publication and conference presentations. This protocol is registered on the Open Science Framework (www.osf.io).
Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI Extension
The CONSORT 2010 (Consolidated Standards of Reporting Trials) statement provides minimum guidelines for reporting randomised trials. Its widespread use has been instrumental in ensuring transparency when evaluating new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI. Both guidelines were developed through a staged consensus process, involving a literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed on in a two-day consensus meeting (31 stakeholders) and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.