1,885 result(s) for "Checklist - methods"
GRIPP2 reporting checklists: tools to improve reporting of patient and public involvement in research
GRIPP2 (short form and long form) is the first international guidance for reporting of patient and public involvement (PPI) in health and social care research. This paper describes the development of the GRIPP2 reporting checklists, which aim to improve the quality, transparency, and consistency of the international PPI evidence base and to ensure that PPI practice is based on the best evidence.
COSMIN Risk of Bias checklist for systematic reviews of Patient-Reported Outcome Measures
Purpose: The original COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist was developed to assess the methodological quality of single studies on measurement properties of Patient-Reported Outcome Measures (PROMs). Our aim was to adapt the COSMIN checklist and its four-point rating system into a version exclusively for use in systematic reviews of PROMs, in order to assess the risk of bias of studies on measurement properties.
Methods: For each standard (i.e., a design requirement or preferred statistical method), the COSMIN steering committee discussed whether and how it should be adapted. The adapted checklist was pilot-tested, to strengthen content validity, in a systematic review on the quality of PROMs for patients with hand osteoarthritis.
Results: The most important changes were: the reordering of the measurement properties to be assessed in a systematic review of PROMs; the deletion of standards that concerned reporting issues and of standards that do not necessarily lead to biased results; the integration of standards on general requirements for studies on item response theory with standards for specific measurement properties; the recommendation that the review team specify hypotheses for construct validity and responsiveness in advance, with the consequent removal of the standards about formulating hypotheses; and the change in the labels of the four-point rating system.
Conclusions: The COSMIN Risk of Bias checklist was developed exclusively for use in systematic reviews of PROMs, to distinguish this application from other purposes of assessing the methodological quality of studies on measurement properties, such as guidance for designing or reporting a study on measurement properties.
Recommended reporting items for epidemic forecasting and prediction research: The EPIFORGE 2020 guidelines
The importance of infectious disease epidemic forecasting and prediction research is underscored by decades of communicable disease outbreaks, including COVID-19. Unlike other fields of medical research, such as clinical trials and systematic reviews, epidemic forecasting and prediction research has no reporting guidelines, despite their utility. We therefore developed the EPIFORGE checklist, a guideline for standardized reporting of epidemic forecasting research. We developed this checklist using a best-practice process for the development of reporting guidelines, involving a Delphi process and broad consultation with an international panel of infectious disease modelers and model end users. The objectives of these guidelines are to improve the consistency, reproducibility, comparability, and quality of epidemic forecasting reporting. The guidelines are not designed to advise scientists on how to perform epidemic forecasting and prediction research, but rather to serve as a standard for reporting critical methodological details of such studies. These guidelines have been submitted to the EQUATOR network and are also hosted on other dedicated webpages to facilitate feedback and journal endorsement.
TIDieR-Placebo: A guide and checklist for reporting placebo and sham controls
Placebo or sham controls are the standard against which the benefits and harms of many active interventions are measured. Whilst the components and the method of their delivery have been shown to affect study outcomes, placebo and sham controls are rarely reported and often not matched to those of the active comparator. This can influence how beneficial or harmful the active intervention appears to be. Without adequate descriptions of placebo or sham controls, it is difficult to interpret results about the benefits and harms of active interventions within placebo-controlled trials. To overcome this problem, we developed a checklist and guide for reporting placebo or sham interventions. We developed an initial list of items for the checklist by surveying experts in placebo research (n = 14). Because of the diverse contexts in which placebo or sham treatments are used in clinical research, we consulted experts in trials of drugs, surgery, physiotherapy, acupuncture, and psychological interventions. We then used a multistage online Delphi process with 53 participants to determine which items were deemed to be essential. We next convened a group of experts and stakeholders (n = 16). Our main output was a modification of the existing Template for Intervention Description and Replication (TIDieR) checklist; this allows the key features of both active interventions and placebo or sham controls to be concisely summarised by researchers. The main differences between TIDieR-Placebo and the original TIDieR are the explicit requirement to describe the setting (i.e., features of the physical environment that go beyond geographic location), the need to report whether blinding was successful (when this was measured), and the need to present the description of placebo components alongside those of the active comparator. We encourage TIDieR-Placebo to be used alongside TIDieR to assist the reporting of placebo or sham components and the trials in which they are used.
Delirium detection in older acute medical inpatients: a multicentre prospective comparative diagnostic test accuracy study of the 4AT and the confusion assessment method
Background: Delirium affects > 15% of hospitalised patients but is grossly underdetected, contributing to poor care. The 4 'A's Test (4AT, www.the4AT.com) is a short delirium assessment tool designed for routine use without special training. The primary objective was to assess the accuracy of the 4AT for delirium detection. The secondary objective was to compare the 4AT with another commonly used delirium assessment tool, the Confusion Assessment Method (CAM).
Methods: This was a prospective diagnostic test accuracy study set in emergency departments and acute medical wards, involving acute medical patients aged ≥ 70. All those without acutely life-threatening illness or coma were eligible. Patients (1) underwent reference standard delirium assessment based on DSM-IV criteria and (2) were randomised to either the index test (4AT, scores 0–12; prespecified score of > 3 considered positive) or the comparator (CAM; scored positive or negative), in a random order, using computer-generated pseudo-random numbers, stratified by study site, with block allocation. Reference standard and 4AT or CAM assessments were performed by pairs of independent raters blinded to the results of the other assessment.
Results: Eight hundred forty-three individuals were randomised: 21 withdrew, 3 were lost to contact, 32 had an indeterminate diagnosis, 2 had missing outcomes, and 785 were included in the analysis. Mean age was 81.4 (SD 6.4) years. 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT had an area under the receiver operating characteristic curve of 0.90 (95% CI 0.84–0.96), a sensitivity of 76% (95% CI 61–87%), and a specificity of 94% (95% CI 92–97%). The CAM had a sensitivity of 40% (95% CI 26–57%) and a specificity of 100% (95% CI 98–100%).
Conclusions: The 4AT is a short, pragmatic tool which can help improve detection rates of delirium in routine clinical care.
Trial registration: International standard randomised controlled trial number (ISRCTN) 53388093. Date applied 30/05/2014; date assigned 02/06/2014.
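As a side note on the arithmetic behind figures like those above: sensitivity, specificity, and their confidence intervals follow directly from the 2x2 diagnostic table. The sketch below uses Wilson score intervals and hypothetical cell counts chosen only to land near the reported 4AT values; the abstract does not give the table, so these numbers are illustrative, not the study's data.

```python
# Illustrative sketch (not the study's analysis code): deriving sensitivity
# and specificity, with Wilson 95% CIs, from a 2x2 diagnostic table.

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5)
    return centre - half, centre + half

# Hypothetical counts: rows = 4AT result, columns = reference standard.
tp, fn = 36, 11   # delirium present: test positive / test negative
fp, tn = 20, 325  # delirium absent:  test positive / test negative

sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
print(f"sensitivity {sens:.0%} (95% CI {sens_ci[0]:.0%}-{sens_ci[1]:.0%})")
print(f"specificity {spec:.0%} (95% CI {spec_ci[0]:.0%}-{spec_ci[1]:.0%})")
```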
Getting messier with TIDieR: embracing context and complexity in intervention reporting
Background: The Template for Intervention Description and Replication (TIDieR) checklist and guide was developed by an international team of experts to promote full and accurate description of trial interventions. It is now widely used in health research. The aim of this paper is to describe the experience of using TIDieR outside of trials, in a range of applied health research contexts, and to make recommendations on its usefulness in such settings.
Main body: We used the TIDieR template for intervention description in six applied health research projects. The six cases comprise a diverse sample in terms of clinical problems, population, settings, stage of intervention development, and whether the intervention was led by researchers or the service deliverers. There was also variation in how the TIDieR description was produced, in terms of contributors and time point in the project. Researchers involved in the six cases met in two workshops to identify issues and themes arising from their experience of using TIDieR. We identified four themes which capture the difficulties or complexities of using TIDieR in applied health research: (i) fidelity and adaptation: all aspects of an intervention can change over time; (ii) voice: the importance of clarity on whose voice the TIDieR description represents; (iii) communication beyond the immediate context: the usefulness of TIDieR for wider dissemination and sharing; and (iv) the use of TIDieR as a research tool.
Conclusion: We found TIDieR to be a useful tool for applied research outside the context of clinical trials, and we suggest four revisions or additions to the original TIDieR which would enable it to better capture these complexities in applied health research:
- An additional item, 'voice', conveys who was involved in preparing the TIDieR template, such as researchers, service users or service deliverers.
- An additional item, 'stage of implementation', conveys what stage the intervention has reached, using a continuum of implementation research suggested by the World Health Organisation.
- A new column, 'modification', reminds authors to describe modifications to any item in the checklist.
- An extension of the 'how well' item encourages researchers to describe how contextual factors affected intervention delivery.
What do meta-analysts need in primary studies? Guidelines and the SEMI checklist for facilitating cumulative knowledge
Meta-analysis is often recognized as the highest level of evidence due to its notable advantages. Therefore, ensuring the precision of its findings is of utmost importance. Insufficient reporting in primary studies poses challenges for meta-analysts, hindering study identification, effect size estimation, and meta-regression analyses. This manuscript provides concise guidelines for the comprehensive reporting of qualitative and quantitative aspects in primary studies. Adhering to these guidelines may help researchers enhance the quality of their studies and increase their eligibility for inclusion in future research syntheses, thereby enhancing research synthesis quality. Recommendations include incorporating relevant terms in titles and abstracts to facilitate study retrieval and reporting sufficient data for effect size calculation. Additionally, a new checklist is introduced to help applied researchers thoroughly report various aspects of their studies.
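To make the "sufficient data for effect size calculation" point concrete, here is a minimal sketch of what a meta-analyst computes when a primary study reports group means, standard deviations, and sample sizes: a standardized mean difference with the small-sample (Hedges) correction and its variance. The input numbers are invented for illustration and do not come from any study.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample (Hedges) correction."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, var_g

# Invented example: only possible because means, SDs, and ns were reported.
g, v = hedges_g(m1=24.1, sd1=5.2, n1=40, m2=21.3, sd2=4.8, n2=38)
print(f"g = {g:.2f}, SE = {v**0.5:.2f}")  # effect size ready for pooling
```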
Reliability and Validity of the Chinese Version of Modified Checklist for Autism in Toddlers, Revised, with Follow-Up (M-CHAT-R/F)
Although early detection of autism facilitates intervention, early detection strategies are not yet widespread in China. To improve the situation, the Chinese version of the Modified Checklist for Autism in Toddlers, Revised, with Follow-Up (M-CHAT-R/F) was validated. The sample included 7,928 toddlers, aged 16 to 30 months, screened during their routine care in six provinces of China. With a cut-off value of 3, the sensitivity and specificity of the M-CHAT-R were 0.963 and 0.865, respectively. The inter-rater reliability and the test–retest reliability were also adequate (intraclass correlation coefficients of 0.853 and 0.759, both ps < .01). The Chinese version of the M-CHAT-R/F is an effective tool for the early detection of autism spectrum disorder (ASD) and is applicable to early screening in China.
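For readers unfamiliar with the reliability statistics quoted above, an intraclass correlation coefficient can be computed from a two-way ANOVA decomposition of a subjects-by-raters score matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rating) on invented toy data; it illustrates the statistic in general, and the abstract does not state which ICC form the study used.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rating.
    x is an (n subjects x k raters) float array with no missing values."""
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                              # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented toy data: 6 toddlers scored by 2 raters on the same instrument.
scores = np.array([[2, 3], [5, 5], [1, 2], [7, 6], [4, 4], [0, 1]], float)
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```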
Validation of the Kihon Checklist and the frailty screening index for frailty defined by the phenotype model in older Japanese adults
Background: The term "frailty" might appear simple, but the methods used to assess it differ among studies. Consequently, the classification of frailty and its predictive capacity are inconsistent, depending on the assessment method used. We aimed to examine the diagnostic accuracy of several screening tools for frailty defined by the phenotype model in older Japanese adults.
Methods: This cross-sectional study included 1,306 older Japanese adults aged ≥ 65 years who underwent a physical check-up, selected by cluster random sampling as part of the Kyoto-Kameoka Study in Japan. We evaluated the diagnostic accuracy of several screening instruments for frailty using the revised Japanese version of the Cardiovascular Health Study criteria as the reference standard. These criteria are based on the Fried phenotype model and include five elements: unintentional weight loss, weakness (grip strength), exhaustion, slowness (normal gait speed), and low physical activity. The Kihon Checklist (KCL), frailty screening index (FSI), and self-reported health were evaluated using mailed surveys. We calculated the non-parametric area under the receiver operating characteristic curve (AUC ROC) for each screening tool against the reference standard.
Results: The participants' mean (standard deviation) age was 72.8 (5.5) years. The prevalence of frailty based on the Fried phenotype model was 12.2% in women and 10.3% in men. The AUC ROC was 0.861 (95% confidence interval: 0.832–0.889) for the KCL, 0.860 (0.831–0.889) for the FSI, and 0.668 (0.629–0.707) for self-reported health. The cut-off for identifying frail individuals was ≥ 7 points on the KCL and ≥ 2 points on the FSI.
Conclusions: Our results indicated that the two instruments (KCL and FSI) had sufficient diagnostic accuracy for frailty based on the phenotype model in older Japanese adults. This may be useful for the early detection of frailty in high-risk older adults.
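The "non-parametric AUC" mentioned above has a simple rank interpretation: it is the Mann-Whitney estimate of the probability that a randomly chosen frail participant scores higher on the screening tool than a randomly chosen non-frail one, with ties counted as one half. A minimal sketch follows, using invented checklist scores rather than Kyoto-Kameoka data.

```python
import numpy as np

def nonparametric_auc(scores, labels):
    """Rank-based (Mann-Whitney) estimate of the ROC area."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise-comparison form of the Mann-Whitney U statistic.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Invented toy data: KCL-style scores, 1 = frail by the phenotype model.
kcl = [9, 12, 5, 7, 3, 2, 8, 1, 10, 4]
frail = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
print(f"AUC = {nonparametric_auc(kcl, frail):.3f}")
```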
Reporting checklists in neuroimaging: promoting transparency, replicability, and reproducibility
Neuroimaging plays a crucial role in understanding brain structure and function, but the lack of transparency, reproducibility, and reliability of findings is a significant obstacle for the field. To address these challenges, there are ongoing efforts to develop reporting checklists for neuroimaging studies to improve the reporting of fundamental aspects of study design and execution. In this review, we first define what we mean by a neuroimaging reporting checklist and then discuss how a reporting checklist can be developed and implemented. We consider the core values that should inform checklist design, including transparency, repeatability, data sharing, diversity, and supporting innovations. We then share experiences with currently available neuroimaging checklists. We review the motivation for creating checklists and whether checklists achieve their intended objectives, before proposing a development cycle for neuroimaging reporting checklists and describing each implementation step. We emphasize the importance of reporting checklists in enhancing the quality of data repositories and consortia, how they can support education and best practices, and how emerging computational methods, like artificial intelligence, can help checklist development and adherence. We also highlight the role that funding agencies and global collaborations can play in supporting the adoption of neuroimaging reporting checklists. We hope this review will encourage better adherence to available checklists and promote the development of new ones, and ultimately increase the quality, transparency, and reproducibility of neuroimaging research.