11,625 result(s) for "Multicenter studies"
A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies
Background Randomised controlled trials (RCTs) are the gold standard assessment for health technologies. A key aspect of the design of any clinical trial is the target sample size. However, many publicly-funded trials fail to reach their target sample size. This study seeks to assess the current state of recruitment success and grant extensions in trials funded by the Health Technology Assessment (HTA) programme and the UK Medical Research Council (MRC). Methods Data were gathered from two sources: the National Institute for Health Research (NIHR) HTA Journal Archive and the MRC subset of the International Standard Randomised Controlled Trial Number (ISRCTN) register. A total of 440 trials recruiting between 2002 and 2008 were assessed for eligibility, of which 73 met the inclusion criteria. Where data were unavailable from the reports, members of the trial team were contacted to ensure completeness. Results Over half (55%) of trials recruited their originally specified target sample size, with over three-quarters (78%) recruiting 80% of their target. There was no evidence of this improving over the time of the assessment. Nearly half (45%) of trials received an extension of some kind. Those that did were no more likely to recruit successfully. Trials with 80% power were less likely to recruit successfully than studies with 90% power. Conclusions While recruitment appears to have improved since the 1994–2002 period, publicly-funded trials in the UK still struggle to recruit to their target sample size, and both time and financial extensions are often requested. Strategies to cope with such problems should be more widely applied. It is recommended that, where possible, studies are planned with 90% power.
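The trade-off flagged in that conclusion (80% vs. 90% power) can be sketched numerically. The following is a minimal illustration using a standard normal-approximation sample-size formula for a two-arm comparison of means; the effect size and SD are hypothetical, not figures from the reviewed trials:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, power, alpha=0.05):
    """Normal-approximation sample size per arm for a two-arm
    comparison of means with a two-sided significance test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * (sd / delta) ** 2 * (z_alpha + z_beta) ** 2)

# Hypothetical standardized effect of 0.3 SD (illustrative only)
n80 = n_per_arm(delta=0.3, sd=1.0, power=0.80)  # 175 per arm
n90 = n_per_arm(delta=0.3, sd=1.0, power=0.90)  # 234 per arm
print(n80, n90)  # 90% power needs roughly a third more participants
```

The extra recruitment burden of 90% power is the price of the greater resilience to under-recruitment that the review recommends.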
Recruitment and retention of participants in randomised controlled trials: a review of trials funded and published by the United Kingdom Health Technology Assessment Programme
Background Substantial amounts of public funds are invested in health research worldwide. Publicly funded randomised controlled trials (RCTs) often recruit participants at a slower than anticipated rate. Many trials fail to reach their planned sample size within the envisaged trial timescale and trial funding envelope. Objectives To review the consent, recruitment and retention rates for single and multicentre randomised controlled trials funded and published by the UK's National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme. Data sources and study selection HTA reports of individually randomised single or multicentre RCTs published from the start of 2004 to the end of April 2016 were reviewed. Data extraction Information relating to the trial characteristics, sample size, recruitment and retention was extracted by two independent reviewers. Main outcome measures Target sample size and whether it was achieved; recruitment rates (number of participants recruited per centre per month) and retention rates (randomised participants retained and assessed with valid primary outcome data). Results This review identified 151 individually randomised RCTs from 787 NIHR HTA reports. The final recruitment target sample size was achieved in 56% (85/151) of the RCTs and more than 80% of the final target sample size was achieved for 79% of the RCTs (119/151). The median recruitment rate (participants per centre per month) was 0.92 (IQR 0.43–2.79) and the median retention rate (proportion of participants with valid primary outcome data at follow-up) was estimated at 89% (IQR 79–97%). Conclusions There is considerable variation in the consent, recruitment and retention rates in publicly funded RCTs. Investigators should bear this in mind at the planning stage of their study and not be overly optimistic about their recruitment projections.
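The review's two main outcome measures are simple ratios. A minimal sketch with hypothetical centre-level figures (not data from the review) shows how they are computed:

```python
from statistics import median

# Hypothetical centre logs: (participants recruited, months recruiting)
centres = [(12, 10), (4, 9), (30, 11), (7, 12), (18, 8)]

# Recruitment rate as defined in the review:
# participants recruited per centre per month
rates = [n / months for n, months in centres]
print(f"median recruitment rate: {median(rates):.2f}")  # 1.20

# Retention rate: randomised participants with valid
# primary outcome data at follow-up
randomised, with_outcome = 71, 63
print(f"retention: {with_outcome / randomised:.0%}")  # 89%
```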
The Marburg-Münster Affective Disorders Cohort Study (MACS): A quality assurance protocol for MR neuroimaging data
Large, longitudinal, multi-center MR neuroimaging studies require comprehensive quality assurance (QA) protocols for assessing the general quality of the compiled data, indicating potential malfunctions in the scanning equipment, and evaluating inter-site differences that need to be accounted for in subsequent analyses. We describe the implementation of a QA protocol for functional magnetic resonance imaging (fMRI) data based on the regular measurement of an MRI phantom and an extensive variety of currently published QA statistics. The protocol is implemented in the MACS (Marburg-Münster Affective Disorders Cohort Study, http://for2107.de/), a two-center research consortium studying the neurobiological foundations of affective disorders. Between February 2015 and October 2016, 1214 phantom measurements were acquired using a standard fMRI protocol. Using 444 healthy control subjects who were measured in the cohort between 2014 and 2016, we investigate the extent of between-site differences in contrast to the dependence on subject-specific covariates (age and sex) for structural MRI, fMRI, and diffusion tensor imaging (DTI) data. We show that most of the presented QA statistics differ markedly not only between the two scanners used for the cohort but also between experimental settings (e.g. hardware and software changes), demonstrate that some of these statistics depend on external variables (e.g. time of day, temperature), highlight their strong dependence on proper handling of the MRI phantom, and show how the use of a phantom holder may reduce this dependence. Site effects, however, exist not only for the phantom data but also for human MRI data. Using T1-weighted structural images, we show that total intracranial (TIV), grey matter (GMV), and white matter (WMV) volumes significantly differ between the MR scanners, showing large effect sizes. 
Voxel-based morphometry (VBM) analyses show that these structural differences observed between scanners are most pronounced in the bilateral basal ganglia, thalamus, and posterior regions. Using DTI data, we also show that fractional anisotropy (FA) differs between sites in almost all regions assessed. When pooling data from multiple centers, it is therefore necessary to account not only for inter-site differences but also for hardware and software changes of the scanning equipment. Also, the strong dependence of the QA statistics on the reliable placement of the MRI phantom suggests that a phantom holder should be used to reduce the variance of the QA statistics and thus to increase the probability of detecting potential scanner malfunctions. •Quality assurance (QA) protocol for large, longitudinal, multi-center MR neuroimaging studies.•Dependence of QA statistics on MR-scanner type, hardware and software changes and external variables (e.g., time of day, temperature).•Consequences of phantom data variations for human MRI data.•Dependence of QA statistics on MR phantom placement.
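As a concrete example of the kind of phantom QA statistic discussed above, temporal SNR can be computed per session and tracked over time. This is a generic sketch on synthetic data, not the MACS pipeline itself; the signal and noise levels are stand-in values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic phantom fMRI run: 200 volumes of an 8x8x8 ROI, mean
# signal 1000 with additive scanner noise (sd 10) -- stand-in values
ts = 1000 + 10 * rng.standard_normal((200, 8, 8, 8))

# Temporal SNR per voxel: mean over time / std over time
tsnr = ts.mean(axis=0) / ts.std(axis=0, ddof=1)

# One summary number per session can be plotted across sessions and
# scanners to flag drifts, hardware changes, or software changes
print(f"median tSNR: {np.median(tsnr):.1f}")
```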
Harmonization of multi-site diffusion tensor imaging data
Diffusion tensor imaging (DTI) is a well-established magnetic resonance imaging (MRI) technique used for studying microstructural changes in the white matter. As with many other imaging modalities, DTI images suffer from technical between-scanner variation that hinders comparisons of images across imaging sites, scanners and over time. Using fractional anisotropy (FA) and mean diffusivity (MD) maps of 205 healthy participants acquired on two different scanners, we show that the DTI measurements are highly site-specific, highlighting the need of correcting for site effects before performing downstream statistical analyses. We first show evidence that combining DTI data from multiple sites, without harmonization, may be counter-productive and negatively impacts the inference. Then, we propose and compare several harmonization approaches for DTI data, and show that ComBat, a popular batch-effect correction tool used in genomics, performs best at modeling and removing the unwanted inter-site variability in FA and MD maps. Using age as a biological phenotype of interest, we show that ComBat both preserves biological variability and removes the unwanted variation introduced by site. Finally, we assess the different harmonization methods in the presence of different levels of confounding between site and age, in addition to test robustness to small sample size studies. •Significant site and scanner effects exist in DTI scalar maps.•Several multi-site harmonization methods are proposed.•ComBat performs the best at removing site effects in FA and MD.•Voxels associated with age in FA and MD are more replicable after ComBat.•ComBat is generalizable to other imaging modalities.
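The core location/scale idea behind ComBat can be sketched in a few lines. This simplified version omits ComBat's empirical-Bayes shrinkage and covariate-preservation terms, so it illustrates the principle rather than the published method; all data below are simulated:

```python
import numpy as np

def harmonize_location_scale(x, site):
    """Simplified ComBat-style harmonization of a single feature
    (e.g. mean FA in one tract): remove each site's mean and rescale
    each site's spread to the pooled SD. Omits the empirical-Bayes
    shrinkage and covariate terms of the published ComBat model."""
    x, site = np.asarray(x, float), np.asarray(site)
    out = np.empty_like(x)
    grand_mean, pooled_sd = x.mean(), x.std()
    for s in np.unique(site):
        m = site == s
        out[m] = (x[m] - x[m].mean()) / x[m].std() * pooled_sd + grand_mean
    return out

rng = np.random.default_rng(1)
# Two sites with a shift and a scale difference in simulated mean FA
fa = np.concatenate([0.45 + 0.02 * rng.standard_normal(100),
                     0.50 + 0.04 * rng.standard_normal(100)])
site = np.repeat([0, 1], 100)
h = harmonize_location_scale(fa, site)
print(h[site == 0].mean(), h[site == 1].mean())  # site means now coincide
```

Full ComBat additionally pools information across features to stabilise the per-site estimates and protects covariates of interest (such as age) from being removed along with the site effect.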
Harmonization of cortical thickness measurements across scanners and sites
With the proliferation of multi-site neuroimaging studies, there is a greater need for handling non-biological variance introduced by differences in MRI scanners and acquisition protocols. Such unwanted sources of variation, which we refer to as “scanner effects”, can hinder the detection of imaging features associated with clinical covariates of interest and cause spurious findings. In this paper, we investigate scanner effects in two large multi-site studies on cortical thickness measurements across a total of 11 scanners. We propose a set of tools for visualizing and identifying scanner effects that are generalizable to other modalities. We then propose to use ComBat, a technique adopted from the genomics literature and recently applied to diffusion tensor imaging data, to combine and harmonize cortical thickness values across scanners. We show that ComBat removes unwanted sources of scan variability while simultaneously increasing the power and reproducibility of subsequent statistical analyses. We also show that ComBat is useful for combining imaging data with the goal of studying life-span trajectories in the brain. •Cortical thickness (CT) measurements are highly scanner specific.•Identifying scanner effects is crucial for inference and biomarker development.•We propose to use ComBat to harmonize cortical thickness values across scanners.
Neoadjuvant chemoradiotherapy plus surgery versus active surveillance for oesophageal cancer: a stepped-wedge cluster randomised trial
Background Neoadjuvant chemoradiotherapy (nCRT) plus surgery is a standard treatment for locally advanced oesophageal cancer. With this treatment, 29% of patients have a pathologically complete response in the resection specimen. This provides the rationale for investigating an active surveillance approach. The aim of this study is to assess the (cost-)effectiveness of active surveillance vs. standard oesophagectomy after nCRT for oesophageal cancer. Methods This is a phase-III multi-centre, stepped-wedge cluster randomised controlled trial. A total of 300 patients with clinically complete response (cCR, i.e. no local or disseminated disease proven by histology) after nCRT will be randomised to show non-inferiority of active surveillance to standard oesophagectomy (non-inferiority margin 15%, intra-cluster correlation coefficient 0.02, power 80%, 2-sided α 0.05, 12% drop-out). Patients will undergo a first clinical response evaluation (CRE-I) 4–6 weeks after nCRT, consisting of endoscopy with bite-on-bite biopsies of the primary tumour site and other suspected lesions. Clinically complete responders will undergo a second CRE (CRE-II), 6–8 weeks after CRE-I. CRE-II will include 18F–FDG-PET-CT, followed by endoscopy with bite-on-bite biopsies and ultra-endosonography plus fine needle aspiration of suspected lymph nodes and/or PET-positive lesions. Patients with cCR at CRE-II will be assigned to oesophagectomy (first phase) or active surveillance (second phase of the study). The duration of the first phase is determined randomly over the 12 centres (i.e., the stepped-wedge cluster design). Patients in the active surveillance arm will undergo diagnostic evaluations similar to CRE-II at 6/9/12/16/20/24/30/36/48 and 60 months after nCRT. In this arm, oesophagectomy will be offered only to patients in whom locoregional regrowth is highly suspected or proven, without distant dissemination. 
The main study parameter is overall survival; secondary endpoints include percentage of patients who do not undergo surgery, quality of life, clinical irresectability (cT4b) rate, radical resection rate, postoperative complications, progression-free survival, distant dissemination rate, and cost-effectiveness. We hypothesise that active surveillance leads to non-inferior survival, improved quality of life and a reduction in costs, compared to standard oesophagectomy. Discussion If active surveillance and surgery as needed after nCRT leads to non-inferior survival compared to standard oesophagectomy, this organ-sparing approach can be implemented as a standard of care.
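A generic non-inferiority sample-size calculation with a cluster design effect and drop-out inflation can be sketched as follows. The inputs below are hypothetical illustrations, and this is not a reconstruction of the trial's actual calculation (which targets survival, not a simple proportion):

```python
from math import ceil
from statistics import NormalDist

def n_noninferiority(p, margin, power=0.80, alpha=0.05,
                     icc=0.0, cluster_size=1, dropout=0.0):
    """Per-arm sample size for a non-inferiority comparison of two
    proportions (normal approximation, equal true proportions p),
    inflated by a cluster design effect and expected drop-out.
    A generic sketch, not the trial's actual calculation."""
    z = NormalDist()
    # one-sided test at alpha/2, matching a 2-sided alpha of 0.05
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    n = 2 * p * (1 - p) * ((z_a + z_b) / margin) ** 2
    deff = 1 + (cluster_size - 1) * icc  # variance inflation from clustering
    return ceil(n * deff / (1 - dropout))

# Hypothetical inputs: 75% event-free rate, 15% margin, ICC 0.02,
# average cluster size 25, 12% drop-out
print(n_noninferiority(p=0.75, margin=0.15, icc=0.02,
                       cluster_size=25, dropout=0.12))
```

The design effect `1 + (m - 1) * ICC` is the standard inflation for cluster randomisation; even a small ICC such as 0.02 raises the required sample size appreciably once clusters are large.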
Strategies for the use of Ginkgo biloba extract, EGb 761®, in the treatment and management of mild cognitive impairment in Asia: Expert consensus
Background Mild cognitive impairment (MCI) is a neurocognitive state between normal cognitive aging and dementia, with evidence of neuropsychological changes but insufficient functional decline to warrant a diagnosis of dementia. Individuals with MCI are at increased risk for progression to dementia; and an appreciable proportion display neuropsychiatric symptoms (NPS), also a known risk factor for dementia. Cerebrovascular disease (CVD) is thought to be an underdiagnosed contributor to MCI/dementia. The Ginkgo biloba extract, EGb 761®, is increasingly being used for the symptomatic treatment of cognitive disorders with/without CVD, due to its known neuroprotective effects and cerebrovascular benefits. Aims To present consensus opinion from the ASian Clinical Expert group on Neurocognitive Disorders (ASCEND) regarding the role of EGb 761® in MCI. Materials & Methods The ASCEND Group reconvened in September 2019 to present and critically assess the current evidence on the general management of MCI, including the efficacy and safety of EGb 761® as a treatment option. Results EGb 761® has demonstrated symptomatic improvement in at least four randomized trials, in terms of cognitive performance, memory, recall and recognition, attention and concentration, anxiety, and NPS. There is also evidence that EGb 761® may help delay progression from MCI to dementia in some individuals. Discussion EGb 761® is currently recommended in multiple guidelines for the symptomatic treatment of MCI. Due to its beneficial effects on cerebrovascular blood flow, it is reasonable to expect that EGb 761® may benefit MCI patients with underlying CVD. Conclusion As an expert group, we suggest it is clinically appropriate to incorporate EGb 761® as part of the multidomain intervention for MCI.
Managing clustering effects and learning effects in the design and analysis of multicentre randomised trials: a survey to establish current practice
Background Patient outcomes can depend on the treating centre, or health professional, delivering the intervention. A health professional’s skill in delivery improves with experience, meaning that outcomes may be associated with learning. Considering differences in intervention delivery at trial design will ensure that any appropriate adjustments can be made during analysis. This work aimed to establish practice for the allowance of clustering and learning effects in the design and analysis of randomised multicentre trials. Methods A survey drawing upon quotes from existing guidelines, references to relevant publications, and example trial scenarios was delivered. Registered UK Clinical Research Collaboration Registered Clinical Trials Units were invited to participate. Results Forty-four Units participated (N = 50). Clustering was managed through design by stratification, more commonly by centre than by treatment provider. Managing learning by design through defining a minimum expertise level for treatment provider was common (89%). One-third reported experience in expertise-based designs. The majority of Units had adjusted for clustering during analysis, although approaches varied. Analysis of learning was rarely performed for the main analysis (n = 1), although it was explored by other means. Respondents provided insight into the approaches used and their reasons for, or against, alternative approaches. Conclusions Widespread awareness of challenges in designing and analysing multicentre trials is identified. Approaches used, and opinions on these, vary both across and within Units, indicating that approaches are dependent on the type of trial. Agreeing principles to guide trial design and analysis across a range of realistic clinical scenarios should be considered.
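One common analysis-stage adjustment for centre clustering of the kind surveyed above is a cluster-robust (sandwich) variance estimator for the treatment effect. The sketch below is a generic textbook version on simulated multicentre data, not a method attributed to any of the surveyed Units:

```python
import numpy as np

def cluster_robust_se(y, treat, centre):
    """Treatment effect from OLS (intercept + treatment indicator)
    with a cluster-robust (sandwich) standard error, treating centre
    as the clustering unit. A generic textbook sketch."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones_like(y), np.asarray(treat, float)])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    centre = np.asarray(centre)
    meat = np.zeros((2, 2))
    for c in np.unique(centre):
        m = centre == c
        s = X[m].T @ resid[m]        # summed score within the centre
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv     # sandwich variance
    return beta[1], float(np.sqrt(V[1, 1]))

rng = np.random.default_rng(2)
centre = np.repeat(np.arange(10), 20)   # 10 centres, 20 patients each
treat = rng.integers(0, 2, size=200)
# True effect 0.5, plus a centre-level shift and individual noise
y = 0.5 * treat + rng.standard_normal(10)[centre] + rng.standard_normal(200)
effect, se = cluster_robust_se(y, treat, centre)
print(f"effect {effect:.2f}, cluster-robust SE {se:.2f}")
```

Mixed-effects models with a random centre intercept are the other widely used option; the sandwich estimator makes weaker assumptions about the within-centre correlation structure.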
Mitigating site effects in covariance for machine learning in neuroimaging data
To acquire larger samples for answering complex questions in neuroscience, researchers have increasingly turned to multi‐site neuroimaging studies. However, these studies are hindered by differences in images acquired across multiple sites. These effects have been shown to bias comparison between sites, mask biologically meaningful associations, and even introduce spurious associations. To address this, the field has focused on harmonizing data by removing site‐related effects in the mean and variance of measurements. Contemporaneously with the increase in popularity of multi‐center imaging, the use of machine learning (ML) in neuroimaging has also become commonplace. These approaches have been shown to provide improved sensitivity, specificity, and power because they model the joint relationship across measurements in the brain. In this work, we demonstrate that methods for removing site effects in mean and variance may not be sufficient for ML. This stems from the fact that such methods fail to address how correlations between measurements can vary across sites. Data from the Alzheimer's Disease Neuroimaging Initiative are used to show that considerable differences in covariance exist across sites and that popular harmonization techniques do not address this issue. We then propose a novel harmonization method called Correcting Covariance Batch Effects (CovBat) that removes site effects in mean, variance, and covariance. We apply CovBat and show that within‐site correlation matrices are successfully harmonized. Furthermore, we find that ML methods are unable to distinguish scanner manufacturer after our proposed harmonization is applied, and that the CovBat‐harmonized data retain accurate prediction of disease group.
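The covariance-harmonization idea can be illustrated with a whiten-then-recolor transform: each site's centered data are whitened with the site covariance and recolored with the pooled covariance. This sketch omits CovBat's PCA-based, empirical-Bayes treatment of covariance site effects and is only an illustration of the principle, on simulated data:

```python
import numpy as np

def covbat_like(X, site):
    """Whiten-then-recolor covariance harmonization: center each site,
    whiten rows with the site covariance (via Cholesky), recolor with
    the pooled covariance. Omits CovBat's PCA/empirical-Bayes step."""
    X, site = np.asarray(X, float), np.asarray(site)
    out = np.empty_like(X)
    centered = np.vstack([X[site == s] - X[site == s].mean(0)
                          for s in np.unique(site)])
    L_pool = np.linalg.cholesky(np.cov(centered, rowvar=False))
    grand_mean = X.mean(0)
    for s in np.unique(site):
        m = site == s
        Xs = X[m] - X[m].mean(0)
        L_site = np.linalg.cholesky(np.cov(Xs, rowvar=False))
        # rows mapped so the site sample covariance becomes the pooled one
        out[m] = Xs @ np.linalg.inv(L_site).T @ L_pool.T + grand_mean
    return out

rng = np.random.default_rng(3)
# Two sites, three features, deliberately different covariances
X = np.vstack([rng.standard_normal((120, 3)) * [1.0, 2.0, 0.5],
               rng.standard_normal((120, 3)) @ np.array([[1.0, 0.5, 0.0],
                                                         [0.0, 1.0, 0.0],
                                                         [0.0, 0.0, 1.0]])])
site = np.repeat([0, 1], 120)
H = covbat_like(X, site)
C0 = np.cov(H[site == 0], rowvar=False)
C1 = np.cov(H[site == 1], rowvar=False)
print(np.abs(C0 - C1).max())  # site covariances now match
```

Forcing every site to a common covariance this aggressively can also remove biological covariance structure, which is why CovBat shrinks the covariance adjustment rather than equalising covariances exactly.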
Barriers to the conduct of randomised clinical trials within all disease areas
Background Randomised clinical trials are key to advancing medical knowledge and to enhancing patient care, but major barriers to their conduct exist. The present paper presents some of these barriers. Methods We performed systematic literature searches and internal European Clinical Research Infrastructure Network (ECRIN) communications during face-to-face meetings and telephone conferences from 2013 to 2017 within the context of the ECRIN Integrating Activity (ECRIN-IA) project. Results The following barriers to randomised clinical trials were identified: inadequate knowledge of clinical research and trial methodology; lack of funding; excessive monitoring; restrictive privacy law and lack of transparency; complex regulatory requirements; and inadequate infrastructures. There is a need for more pragmatic randomised clinical trials conducted with low risks of systematic and random errors, and multinational cooperation is essential. Conclusions The present paper presents major barriers to randomised clinical trials. It also underlines the value of using a pan-European-distributed infrastructure to help investigators overcome barriers for multi-country trials in any disease area.