24,958 results for "Research Subjects"
Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership – the PRioRiTy (Prioritising Recruitment in Randomised Trials) study
Background Despite the problem of inadequate recruitment to randomised trials, there is little evidence to guide researchers on decisions about how people are effectively recruited to take part in trials. The PRioRiTy study aimed to identify and prioritise important unanswered trial recruitment questions for research. The PRioRiTy study – Priority Setting Partnership (PSP) included members of the public approached to take part in a randomised trial or who have represented participants on randomised trial steering committees; health professionals and research staff with experience of recruiting to randomised trials; people who have designed, conducted, analysed or reported on randomised trials; and people with experience of randomised trials methodology. Methods This partnership was aided by the James Lind Alliance and involved eight stages: (i) identifying a unique, relevant prioritisation area within trial methodology; (ii) establishing a steering group; (iii) identifying and engaging with partners and stakeholders; (iv) formulating an initial list of uncertainties; (v) collating the uncertainties into research questions; (vi) confirming that the questions for research are a current recruitment challenge; (vii) shortlisting questions; and (viii) final prioritisation through a face-to-face workshop. Results A total of 790 survey respondents yielded 1693 open-text answers to 6 questions, from which 1880 potential questions for research were identified. After merging duplicates, the number of questions was reduced to 496. Questions were combined further, and those submitted by fewer than 15 people and/or fewer than 6 of the 7 stakeholder groups were excluded from the next round of prioritisation, resulting in 31 unique questions for research. All 31 questions were confirmed as being unanswered after checking relevant, up-to-date research evidence. The 10 highest-priority questions were ranked at a face-to-face workshop. The number 1 ranked question was “How can randomised trials become part of routine care and best utilise current clinical care pathways?” The top 10 research questions can be viewed at www.priorityresearch.ie. Conclusion The prioritised questions call for a collective focus on normalising trials as part of clinical care, enhancing communication, addressing barriers, enablers and motivators around participation, and exploring greater public involvement in the research process.
Alternatives to the Randomized Controlled Trial
Public health researchers are addressing new research questions (e.g., effects of environmental tobacco smoke, Hurricane Katrina) for which the randomized controlled trial (RCT) may not be a feasible option. Drawing on the potential outcomes framework (Rubin Causal Model) and Campbellian perspectives, we consider alternative research designs that permit relatively strong causal inferences. In randomized encouragement designs, participants are randomly invited to participate in one of the treatment conditions, but are allowed to decide whether to receive treatment. In quantitative assignment designs, treatment is assigned on the basis of a quantitative measure (e.g., need, merit, risk). In observational studies, treatment assignment is unknown and presumed to be nonrandom. Major threats to the validity of each design and statistical strategies for mitigating those threats are presented.
Measuring the Prevalence of Problematic Respondent Behaviors among MTurk, Campus, and Community Participants
The reliance on small samples and underpowered studies may undermine the replicability of scientific findings. Large sample sizes may be necessary to achieve adequate statistical power. Crowdsourcing sites such as Amazon's Mechanical Turk (MTurk) have been regarded as an economical means for achieving larger samples. Because MTurk participants may engage in behaviors which adversely affect data quality, much recent research has focused on assessing the quality of data obtained from MTurk samples. However, participants from traditional campus- and community-based samples may also engage in behaviors which adversely affect the quality of the data that they provide. We compare an MTurk, campus, and community sample to measure how frequently participants report engaging in problematic respondent behaviors. We report evidence that suggests that participants from all samples engage in problematic respondent behaviors with comparable rates. Because statistical power is influenced by factors beyond sample size, including data integrity, methodological controls must be refined to better identify and diminish the frequency of participant engagement in problematic respondent behaviors.
Participants’ understanding of informed consent in clinical trials over three decades: systematic review and meta-analysis
To estimate the proportion of participants in clinical trials who understand different components of informed consent. Relevant studies were identified by a systematic review of PubMed, Scopus and Google Scholar and by manually reviewing reference lists for publications up to October 2013. A meta-analysis of study results was performed using a random-effects model to take account of heterogeneity. The analysis included 103 studies evaluating 135 cohorts of participants. The pooled proportion of participants who understood components of informed consent was 75.8% for freedom to withdraw at any time, 74.7% for the nature of study, 74.7% for the voluntary nature of participation, 74.0% for potential benefits, 69.6% for the study's purpose, 67.0% for potential risks and side-effects, 66.2% for confidentiality, 64.1% for the availability of alternative treatment if withdrawn, 62.9% for knowing that treatments were being compared, 53.3% for placebo and 52.1% for randomization. Most participants, 62.4%, had no therapeutic misconceptions and 54.9% could name at least one risk. Subgroup and meta-regression analyses identified covariates, such as age, educational level, critical illness, the study phase and location, that significantly affected understanding and indicated that the proportion of participants who understood informed consent had not increased over 30 years. The proportion of participants in clinical trials who understood different components of informed consent varied from 52.1% to 75.8%. Investigators could do more to help participants achieve a complete understanding.
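The pooling described in this abstract is a random-effects meta-analysis of proportions. A minimal sketch of the standard DerSimonian-Laird estimator is shown below; this is illustrative only, not the authors' code, and the study proportions and sample sizes are invented:

```python
import numpy as np

def dersimonian_laird_pool(p, n):
    """Pool per-study proportions p (with sample sizes n) under a
    DerSimonian-Laird random-effects model; returns (pooled, SE)."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    v = p * (1.0 - p) / n                      # within-study variance
    w = 1.0 / v                                # fixed-effect weights
    p_fe = np.sum(w * p) / np.sum(w)           # fixed-effect pooled value
    q = np.sum(w * (p - p_fe) ** 2)            # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)    # between-study variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se

# Hypothetical cohorts: fractions understanding a consent component
pooled, se = dersimonian_laird_pool([0.70, 0.55, 0.80], [120, 90, 150])
```

Real analyses of proportions usually pool on a logit or arcsine scale to stabilise the variances; the raw-proportion version above is kept minimal for clarity.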
The SHOW RESPECT adaptable framework of considerations for planning how to share trial results with participants, based on qualitative findings from trial participants and site staff
Background Sharing trial results with participants is a moral imperative, but too often does not happen in appropriate ways. Methods We carried out semi-structured interviews with patients (n = 13) and site staff (n = 11), and surveyed 180 patients and 68 site staff who were part of the Show RESPECT study, which tested approaches to sharing results with participants in the context of the ICON8 ovarian cancer trial (ISRCTN10356387). Qualitative and free-text data were analysed thematically, and findings used to develop the SHOW RESPECT adaptable framework of considerations for planning how to share trial results with participants. This paper presents the framework, with illustrations drawn from the Show RESPECT study. Results Our adaptable ‘SHOW RESPECT’ framework covers (1) Supporting and preparing trial participants to receive results, (2) HOw will the results reach participants?, (3) Who are the trial participants?, (4) REsults—what do they show?, (5) Special considerations, (6) Provider—who will share results with participants?, (7) Expertise and resources, (8) Communication tools and (9) Timing of sharing results. While the data upon which the framework is based come from a single trial, many of our findings are corroborated by findings from other studies in this area, supporting the transferability of our framework to trials beyond the UK ovarian cancer setting in which our work took place. Conclusions This adaptable ‘SHOW RESPECT’ framework can guide researchers as they plan how to share aggregate trial results with participants. While our data are drawn from a single trial context, the findings from Show RESPECT illustrate how approaches to communication in a specific trial can influence patient and staff experiences of feedback of trial results. The framework generated from these findings can be adapted to fit different trial contexts and used by other researchers to plan the sharing of results with their own participants.
Trial registration ISRCTN96189403. Registered on February 26, 2019. Show RESPECT was supported by the Medical Research Council (MC_UU_12023/24 and MC_UU_00004/08) and the NIHR CRN.
Randomised controlled trial of incentives to improve online survey completion among internet-using men who have sex with men
Background HIV prevention research often involves the use of online surveys as data collection instruments. Incomplete responses to these surveys can introduce bias. We aimed to develop and assess innovative methods to incentivise respondents to complete surveys. Methods Adult men who have sex with men (MSM) living in the USA were recruited through banner advertisements on Facebook from 27 April 2015 to 6 May 2015 to participate in an online survey about HIV prevention and risk behaviours. Participants were randomised to one of four conditions: a monetary incentive; a series of altruistic messages highlighting the importance of participating in research; access to a dashboard comparing their responses with statistics from other participants after completion; and no incentive. Kaplan-Meier survival methods and univariate Cox proportional hazard models were used to evaluate survey dropout by incentive group and demographic variables of interest. Results There were a total of 1178 participants randomised to the four treatment groups. The rate of survey dropout among participants in the altruistic (HR=0.68, 95% CI 0.49 to 0.93), monetary (HR=0.44, 95% CI 0.32 to 0.61) and dashboard (HR=0.78, 95% CI 0.58 to 1.06) groups was lower than in the non-incentivised control group. Regardless of condition, survey dropout was also lower among MSM aged 28–34 years (HR=0.67, 95% CI 0.50 to 0.90) compared with those aged 18–22 years, and among MSM who were white (HR=0.78, 95% CI 0.60 to 1.02) compared with non-white participants. Conclusion Monetary incentives and altruistic messaging can improve survey completion in online behavioural HIV prevention research among MSM. Trial registration number NCT02139566.
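The dropout analysis in this abstract rests on Kaplan-Meier survival estimation. A minimal sketch of that estimator follows; it is not the study's analysis code, and the observation times and event flags below are invented:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.

    times  : observation time for each participant
    events : 1 if the event (e.g. survey dropout) was observed,
             0 if the participant was censored (e.g. completed the survey)
    Returns a list of (time, survival probability) at each event time.
    """
    pairs = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({t for t, e in pairs if e == 1}):
        d = sum(e for tt, e in pairs if tt == t)    # events at time t
        n_t = sum(1 for tt, _ in pairs if tt >= t)  # still at risk at t
        surv *= 1.0 - d / n_t                       # product-limit step
        curve.append((t, surv))
    return curve

# Hypothetical data: five respondents, two censored (completed)
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The Cox models the authors also report are the natural next step, estimating hazard ratios between incentive groups rather than a single survival curve; that fitting is more involved and is omitted here.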
A multimedia consent tool for research participants in the Gambia: a randomized controlled trial
To assess the effectiveness of a multimedia informed consent tool for adults participating in a clinical trial in the Gambia. Adults eligible for inclusion in a malaria treatment trial (n = 311) were randomized to receive information needed for informed consent using either a multimedia tool (intervention arm) or a standard procedure (control arm). A computerized, audio questionnaire was used to assess participants' comprehension of informed consent. This was done immediately after consent had been obtained (at day 0) and at subsequent follow-up visits (days 7, 14, 21 and 28). The acceptability and ease of use of the multimedia tool were assessed in focus groups. On day 0, the median comprehension score in the intervention arm was 64% compared with 40% in the control arm (P = 0.042). The difference remained significant at all follow-up visits. Poorer comprehension was independently associated with female sex (odds ratio, OR: 0.29; 95% confidence interval, CI: 0.12-0.70) and residing in Jahaly rather than Basse province (OR: 0.33; 95% CI: 0.13-0.82). There was no significant independent association with educational level. The risk that a participant's comprehension score would drop to half of the initial value was lower in the intervention arm (hazard ratio 0.22, 95% CI: 0.16-0.31). Overall, 70% (42/60) of focus group participants from the intervention arm found the multimedia tool clear and easy to understand. A multimedia informed consent tool significantly improved comprehension and retention of consent information by research participants with low levels of literacy.
Patient-centered recruitment and retention for a randomized controlled study
Background Recruitment and retention strategies for patient-centered outcomes research are evolving and research on the subject is limited. In this work, we present a conceptual model of patient-centered recruitment and retention, and describe the recruitment and retention activities and related challenges in a patient-centered comparative effectiveness trial. Methods This is a multicenter, longitudinal randomized controlled trial in localized prostate cancer patients. Results We recruited 743 participants from three sites over a 15-month period (January 2014 to March 2015), and followed them for 24 months. At site 1, of the 773 eligible participants, 551 (72%) were enrolled. At site 2, 34 participants were eligible and 23 (68%) enrolled. Of the 434 eligible participants at site 3, 169 (39%) enrolled. We observed that strategies related to the concepts of trust (e.g., physician involvement, ensuring protection of information), communication (e.g., brochures and pamphlets in physicians’ offices, continued contact during regular clinic visits and calling/emailing assessment), attitude (e.g., emphasizing the altruistic value of research, positive attitude of providers and research staff), and expectations (e.g., full disclosure of study requirements and time commitment, update letters) facilitated successful patient recruitment and retention. A stakeholders’ advisory board provided important input for the recruitment and retention activities. Active engagement, reminders at the offices, and personalized update letters helped retention during follow-up. Usefulness of telephone recruitment was site specific and, at one site, the time requirement for telephone recruitment was a challenge. Conclusions We have presented multilevel strategies for successful recruitment and retention in a clinical trial using a patient-centered approach. Our strategies were flexible enough to accommodate site-level requirements.
These strategies, as well as the challenges, can aid recruitment and retention efforts of future large-scale, patient-centered research studies. Trial registration ClinicalTrials.gov, ID: NCT02032550. Registered on 22 November 2013.
Beyond Nazi War Crimes Experiments: The Voluntary Consent Requirement of the Nuremberg Code at 70
The year 2017 marks both the 70th anniversary of the Nuremberg Code and the first major revisions of federal research regulations in almost 3 decades. I suggest that the informed consent provisions of the federal research regulations continue to follow the requirements of the Nuremberg Code. However, modifications are needed to the informed consent (and institutional review board) provisions to make the revised federal regulations more effective in promoting a genuine conversation between the researcher and the research subject. This conversation must take seriously both the therapeutic illusion and the desire of both the researcher and the research subject not to engage in sharing uncertainty.