Catalogue Search | MBRL
Explore the vast range of titles available.
1,731 result(s) for "Crowdsourcing - methods"
Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China
2016
Background. Crowdsourcing, the process of shifting individual tasks to a large group, may enhance human immunodeficiency virus (HIV) testing interventions. We conducted a noninferiority, randomized controlled trial to compare first-time HIV testing rates among men who have sex with men (MSM) and transgender individuals who received a crowdsourced or a health marketing HIV test promotion video. Methods. Seven hundred twenty-one MSM and transgender participants (≥ 16 years old, never before tested for HIV) were recruited through 3 Chinese MSM Web portals and randomly assigned to 1 of 2 videos. The crowdsourced video was developed using an open contest and formal transparent judging while the evidence-based health marketing video was designed by experts. Study objectives were to measure HIV test uptake within 3 weeks of watching either HIV test promotion video and cost per new HIV test and diagnosis. Results. Overall, 624 of 721 (87%) participants from 31 provinces in 217 Chinese cities completed the study. HIV test uptake was similar between the crowdsourced arm (37% [114/307]) and the health marketing arm (35% [111/317]). The estimated difference between the interventions was 2.1% (95% confidence interval, −5.4% to 9.7%). Among those tested, 31% (69/225) reported a new HIV diagnosis. The crowdsourced intervention cost substantially less than the health marketing intervention per first-time HIV test (US$131 vs US$238 per person) and per new HIV diagnosis (US$415 vs US$799 per person). Conclusions. Our nationwide study demonstrates that crowdsourcing may be an effective tool for improving HIV testing messaging campaigns and could increase community engagement in health campaigns. Clinical Trials Registration. NCT02248558.
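The comparison above rests on a 95% confidence interval for the difference between two proportions. A minimal sketch of the standard Wald interval, using the test-uptake counts reported in the abstract (the two-sided critical value z=1.96 is assumed; the abstract does not state which interval method was used):

```python
import math

def wald_diff_ci(x1, n1, x2, n2, z=1.96):
    """95% Wald CI for the difference of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Crowdsourced arm: 114/307 tested; health marketing arm: 111/317
diff, lo, hi = wald_diff_ci(114, 307, 111, 317)
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# Matches the reported 2.1% (−5.4% to 9.7%)
```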
Journal Article
The effectiveness of warning statements in reducing careless responding in crowdsourced online surveys
by Memeti, Zgjim; Perrig, Sebastian A. C.; Brühlmann, Florian
in Adult; Attention; Behavioral Science and Psychology
2024
Careless or insufficient-effort responding is a widespread problem in online research, with estimates ranging from 3% to almost 50% of participants in online surveys being inattentive. While detecting carelessness has been the subject of multiple studies, the factors that reduce or prevent carelessness are not as well understood. Initial evidence suggests that warning statements prior to study participation may reduce carelessness, but there is a lack of conclusive high-powered studies. This preregistered randomized controlled experiment aimed to test the effectiveness of a warning statement and an improved implementation of a warning statement in reducing participant inattention. A study with 812 participants recruited on Amazon Mechanical Turk was conducted. Results suggest that presenting a warning statement is not effective in reducing carelessness. However, requiring participants to actively type the warning statement statistically significantly reduced carelessness as measured with self-reported diligence, even-odd consistency, psychometric synonyms and antonyms, and individual response variability. The active warning statements also led to statistically significantly more attrition and potentially deterred those who were likely to be careless from even participating in this study. We show that the current standard practice of implementing warning statements is ineffective and that novel methods to prevent and deter carelessness are needed.
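One of the carelessness indices named above, individual response variability (IRV), can be sketched simply: it is the standard deviation of a single participant's item responses, and near-zero values flag straightlining. An illustrative sketch (the 0.5 flagging cutoff and the response vectors are assumptions for the example, not values from the study):

```python
import statistics

def irv(responses):
    """Individual response variability: SD of one participant's item responses."""
    return statistics.pstdev(responses)

# A straightliner answers every Likert item identically; an attentive
# participant's answers vary with item content.
straightliner = [3, 3, 3, 3, 3, 3, 3, 3]
attentive = [4, 2, 5, 1, 4, 3, 5, 2]

THRESHOLD = 0.5  # illustrative cutoff, not from the study
for label, resp in [("straightliner", straightliner), ("attentive", attentive)]:
    flag = "flagged" if irv(resp) < THRESHOLD else "ok"
    print(f"{label}: IRV={irv(resp):.2f} -> {flag}")
```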
Journal Article
Efficacy of a Web-Based, Crowdsourced Peer-To-Peer Cognitive Reappraisal Platform for Depression: Randomized Controlled Trial
by Schueller, Stephen M; Morris, Robert R; Picard, Rosalind W
in Adolescent; Adult; Crowdsourcing
2015
Self-guided, Web-based interventions for depression show promising results but suffer from high attrition and low user engagement. Online peer support networks can be highly engaging, but they show mixed results and lack evidence-based content.
Our aim was to introduce and evaluate a novel Web-based, peer-to-peer cognitive reappraisal platform designed to promote evidence-based techniques, with the hypotheses that (1) repeated use of the platform increases reappraisal and reduces depression and (2) that the social, crowdsourced interactions enhance engagement.
Participants aged 18-35 were recruited online and were randomly assigned to the treatment group, "Panoply" (n=84), or an active control group, online expressive writing (n=82). Both are fully automated Web-based platforms. Participants were asked to use their assigned platform for a minimum of 25 minutes per week for 3 weeks. Both platforms involved posting descriptions of stressful thoughts and situations. Participants on the Panoply platform additionally received crowdsourced reappraisal support immediately after submitting a post (median response time=9 minutes). Panoply participants could also practice reappraising stressful situations submitted by other users. Online questionnaires administered at baseline and 3 weeks assessed depression symptoms, reappraisal, and perseverative thinking. Engagement was assessed through self-report measures, session data, and activity levels.
The Panoply platform produced significant improvements from pre to post for depression (P=.001), reappraisal (P<.001), and perseverative thinking (P<.001). The expressive writing platform yielded significant pre to post improvements for depression (P=.02) and perseverative thinking (P<.001), but not reappraisal (P=.45). The two groups did not diverge significantly at post-test on measures of depression or perseverative thinking, though Panoply users had significantly higher reappraisal scores (P=.02) than expressive writing. We also found significant group by treatment interactions. Individuals with elevated depression symptoms showed greater comparative benefit from Panoply for depression (P=.02) and perseverative thinking (P=.008). Individuals with baseline reappraisal deficits showed greater comparative benefit from Panoply for depression (P=.002) and perseverative thinking (P=.002). Changes in reappraisal mediated the effects of Panoply, but not the expressive writing platform, for both outcomes of depression (ab=-1.04, SE 0.58, 95% CI -2.67 to -.12) and perseverative thinking (ab=-1.02, SE 0.61, 95% CI -2.88 to -.20). Dropout rates were similar for the two platforms; however, Panoply yielded significantly more usage activity (P<.001) and significantly greater user experience scores (P<.001).
Panoply engaged its users and was especially helpful for depressed individuals and for those who might ordinarily underutilize reappraisal techniques. Further investigation is needed to examine the long-term effects of such a platform and whether the benefits generalize to a more diverse population of users.
ClinicalTrials.gov NCT02302248; https://clinicaltrials.gov/ct2/show/NCT02302248 (Archived by WebCite at http://www.webcitation.org/6Wtkj6CXU).
Journal Article
Improving the Efficacy of Cognitive Training for Digital Mental Health Interventions Through Avatar Customization: Crowdsourced Quasi-Experimental Study
2019
The success of internet-based mental health interventions in practice-that is, in the wild-depends on the uptake and retention of the application and the user's focused attention in the moment of use. Incorporating game-based motivational design into digital interventions delivered in the wild has been shown to increase uptake and retention in internet-based training; however, there are outstanding questions about the potential of game-based motivational strategies to increase engagement with a task in the moment of use and the effect on intervention efficacy.
Designers of internet-based interventions need to know whether game-based motivational design strategies can increase in-the-moment engagement and thus improve digital interventions. The aim of this study was to investigate the effects of 1 motivational design strategy (avatar customization) in an example mental health intervention (computerized cognitive training for attention bias modification).
We assigned 317 participants to either a customized avatar or an assigned avatar condition. After measuring state anxiety (State-Trait Anxiety Inventory), we randomly assigned half of the participants in each condition to either an attentional retraining condition (Attention Bias Modification Training) or a control condition. After training, participants were exposed to a negative mood induction using images with strong negative valance (International Affective Picture System), after which we measured state anxiety again.
Avatar customization decreased posttraining state anxiety when controlling for baseline state anxiety for those in the attentional retraining condition; however, those who did not train experienced decreased resilience to the negative mood induction (F=6.86, P=.009, η²=.027). This interaction effect suggests that customization increased task engagement with the intervention in the moment of use. Avatar customization also increased avatar identification (F=12.46, P<.001, R²=.23), regardless of condition (F=.79, P=.38). Avatar identification reduced anxiety after the negative mood induction for participants who underwent training but increased poststimulus anxiety for participants who did not undergo training, further suggesting that customization increases engagement in the task (F=6.19, P=.01). The beneficial effect of avatar customization on training was driven by participants who were low in their basic satisfaction of relatedness (F=18.5, P<.001, R²=.43), which is important because these are the participants who are most likely in need of digital interventions for mental health.
Our results suggest that applying motivational design-specifically avatar customization-is a viable strategy to increase engagement and subsequently training efficacy in a computerized cognitive task.
Journal Article
Assessing Interventions on Crowdsourcing Platforms to Nudge Patients for Engagement Behaviors in Primary Care Settings: Randomized Controlled Trial
2023
Engaging patients in health behaviors is critical for better outcomes, yet many patient partnership behaviors are not widely adopted. Behavioral economics-based interventions offer potential solutions, but it is challenging to assess the time and cost needed for different options. Crowdsourcing platforms can efficiently and rapidly assess the efficacy of such interventions, but it is unclear if web-based participants respond to simulated incentives in the same way as they would to actual incentives.
The goals of this study were (1) to assess the feasibility of using crowdsourced surveys to evaluate behavioral economics interventions for patient partnerships by examining whether web-based participants responded to simulated incentives in the same way they would have responded to actual incentives, and (2) to assess the impact of 2 behavioral economics-based intervention designs, psychological rewards and loss of framing, on simulated medication reconciliation behaviors in a simulated primary care setting.
We conducted a randomized controlled trial using a between-subject design on a crowdsourcing platform (Amazon Mechanical Turk) to evaluate the effectiveness of behavioral interventions designed to improve medication adherence in primary care visits. The study included a control group that represented the participants' baseline behavior and 3 simulated interventions, namely monetary compensation, a status effect as a psychological reward, and a loss frame as a modification of the status effect. Participants' willingness to bring medicines to a primary care visit was measured on a 5-point Likert scale. A reverse-coding question was included to ensure response intentionality.
A total of 569 study participants were recruited. There were 132 in the baseline group, 187 in the monetary compensation group, 149 in the psychological reward group, and 101 in the loss frame group. All 3 nudge interventions increased participants' willingness to bring medicines significantly when compared to the baseline scenario. The monetary compensation intervention caused an increase of 17.51% (P<.001), psychological rewards on status increased willingness by 11.85% (P<.001), and a loss frame on psychological rewards increased willingness by 24.35% (P<.001). Responses to the reverse-coding question were consistent with the willingness questions.
In primary care, bringing medications to office visits is a frequently advocated patient partnership behavior that is nonetheless not widely adopted. Crowdsourcing platforms such as Amazon Mechanical Turk support efforts to efficiently and rapidly reach large groups of individuals to assess the efficacy of behavioral interventions. We found that crowdsourced survey-based experiments with simulated incentives can produce valid simulated behavioral responses. The use of psychological status design, particularly with a loss framing approach, can effectively enhance patient engagement in primary care. These results support the use of crowdsourcing platforms to augment and complement traditional approaches to learning about behavioral economics for patient engagement.
Journal Article
Reimagining Health Communication: A Noninferiority Randomized Controlled Trial of Crowdsourced Intervention in China
2019
Background: Crowdsourcing, the process of shifting individual tasks to a large group, may be useful for health communication, making it more people-centered. We aimed to evaluate whether a crowdsourced video is noninferior to a social marketing video in promoting condom use.
Methods: Men who have sex with men (≥16 years old, had condomless sex within 3 months) were recruited and randomly assigned to watch 1 of the 2 videos in 2015. The crowdsourced video was developed through an open contest, and the social marketing video was designed using social marketing principles. Participants completed a baseline survey and follow-up surveys at 3 weeks and 3 months postintervention. The outcome was compared with a noninferiority margin of +10%.
Results: Among the 1173 participants, 907 (77%) and 791 (67%) completed the 3-week and 3-month follow-ups, respectively. At 3 weeks, condomless sex was reported by 146 (33.6%) of 434 participants and 153 (32.3%) of 473 participants in the crowdsourced and social marketing arms, respectively. The crowdsourced intervention achieved noninferiority (estimated difference, +1.3%; 95% confidence interval, −4.8% to 7.4%). At 3 months, 196 (52.1%) of 376 individuals and 206 (49.6%) of 415 individuals reported condomless sex in the crowdsourced and social marketing arms (estimated difference, +2.5%; 95% confidence interval, −4.5% to 9.5%). The 2 arms also had similar human immunodeficiency virus testing rates and other condom-related secondary outcomes.
Conclusions: Our study demonstrates that a crowdsourced message is noninferior to a social marketing intervention in promoting condom use among Chinese men who have sex with men. Crowdsourcing contests could have a wider reach than other approaches and create more people-centered intervention tools for human immunodeficiency virus control.
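The noninferiority claim above can be checked mechanically: with a +10% margin, the upper bound of the 95% confidence interval for the difference in condomless sex (crowdsourced minus social marketing) must fall below the margin. A minimal sketch using the 3-week counts from the abstract (a Wald interval with z=1.96 is assumed; the trial's exact interval method is not stated):

```python
import math

MARGIN = 0.10  # prespecified noninferiority margin from the abstract

def noninferior(x_trt, n_trt, x_ctl, n_ctl, margin=MARGIN, z=1.96):
    """True if the CI upper bound for (treatment - control) is below the margin."""
    p1, p2 = x_trt / n_trt, x_ctl / n_ctl
    se = math.sqrt(p1 * (1 - p1) / n_trt + p2 * (1 - p2) / n_ctl)
    upper = (p1 - p2) + z * se
    return upper < margin, upper

# 3-week outcome: 146/434 (crowdsourced) vs 153/473 (social marketing)
ok, upper = noninferior(146, 434, 153, 473)
print(f"upper bound {upper:+.1%}, noninferior: {ok}")
# Upper bound +7.4% is below the +10% margin, matching the abstract
```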
Journal Article
Improving Electronic Health Record Note Comprehension With NoteAid: Randomized Trial of Electronic Health Record Note Comprehension Interventions With Crowdsourced Workers
2019
Patient portals are becoming more common, and with them, the ability of patients to access their personal electronic health records (EHRs). EHRs, in particular the free-text EHR notes, often contain medical jargon and terms that are difficult for laypersons to understand. There are many Web-based resources for learning more about particular diseases or conditions, including systems that directly link to lay definitions or educational materials for medical concepts.
Our goal is to determine whether use of one such tool, NoteAid, leads to higher EHR note comprehension ability. We use a new EHR note comprehension assessment tool instead of patient self-reported scores.
In this work, we compare a passive, self-service educational resource (MedlinePlus) with an active resource (NoteAid) where definitions are provided to the user for medical concepts that the system identifies. We use Amazon Mechanical Turk (AMT) to recruit individuals to complete ComprehENotes, a new test of EHR note comprehension.
Mean scores for individuals with access to NoteAid are significantly higher than the mean baseline scores, both for raw scores (P=.008) and estimated ability (P=.02).
In our experiments, we show that the active intervention leads to significantly higher scores on the comprehension test as compared with a baseline group with no resources provided. In contrast, there is no significant difference between the group that was provided with the passive intervention and the baseline group. Finally, we analyze the demographics of the individuals who participated in our AMT task and show differences between groups that align with the current understanding of health literacy between populations. This is the first work to show improvements in comprehension using tools such as NoteAid as measured by an EHR note comprehension assessment tool as opposed to patient self-reported scores.
Journal Article
Promoting routine syphilis screening among men who have sex with men in China: study protocol for a randomised controlled trial of syphilis self-testing and lottery incentive
by Cheng, Weibin; Fu, Hongyun; Marks, Michael
in AIDS Serodiagnosis - methods; China; Clinical trials
2020
Background
Men who have sex with men (MSM) bear a high burden of syphilis infection. Expanding syphilis testing to improve timely diagnosis and treatment is critical to improve syphilis control. However, syphilis testing rates remain low among MSM, particularly in low- and middle-income countries. We describe the protocol for a randomised controlled trial (RCT) to assess whether provision of syphilis self-testing services can increase the uptake of syphilis testing among MSM in China.
Methods
Four hundred forty-four high-risk MSM will be recruited online and randomized in a 1:1:1 ratio to (1) a standard syphilis self-testing arm; (2) a self-testing arm enhanced with crowdsourcing and a lottery-based incentive; or (3) standard of care (control). Self-testing services include a free syphilis self-test kit delivered by mail at monthly intervals. Participants in the lottery incentive arm will additionally receive health promotion materials generated from an open crowdsourcing contest and be entered into a lottery draw with a 10% chance to win 100 RMB (approximately 15 US dollars) upon confirmed completion of syphilis testing. Syphilis self-test kits have step-by-step instructions and an instructional video. This is a non-blinded, open-label, parallel RCT. Participants in each arm will be followed up at 3 and 6 months through WeChat (a social media app similar to Facebook Messenger). Self-test use will be confirmed by requiring participants to submit a photo of the used test kit to study staff via secure data messaging; both self-testing and facility-based testing will be ascertained by sending a secure photographic image of the completed kit through an existing digital platform. The primary outcome is the proportion of participants who tested for syphilis in the past 3 months.
Discussion
Findings from this study will provide much needed insight on the impact of syphilis self-testing on promoting routine syphilis screening among MSM. The findings will also contribute to our understanding of the safety, effectiveness and acceptability of syphilis self-testing. These findings will have important implications for self-testing policy, both in China and internationally.
Trial registration
ChiCTR1900022409 (10 April 2019).
Journal Article
Crowdsourcing to promote hepatitis C testing and linkage-to-care in China: a randomized controlled trial protocol
2020
Background
Hepatitis C virus (HCV) is a growing public health problem with a large disease burden worldwide. In China many people living with HCV are unaware of their hepatitis status and not connected to care and treatment. Crowdsourcing is a technique that invites the public to create health promotion materials and has been found to increase HIV testing uptake, including in China. This trial aims to evaluate crowdsourcing as a strategy to improve HCV awareness, testing and linkage-to-care in China.
Methods
A randomized controlled, two-armed trial (RCT) is being conducted in Shenzhen with 1006 participants recruited from primary care sectors of The University of Hong Kong-Shenzhen Hospital. Eligible participants are ≥30 years old; resident in Shenzhen for at least one month after recruitment; have had no screening for HCV within the past 12 months and are not known to have chronic HCV; and have a WeChat social media account. Allocation is 1:1. Both groups will be administered a baseline and a follow-up survey (4 weeks post-enrollment). The intervention group will receive crowdsourced materials promoting HCV testing once a week for two weeks, with feedback collected thereafter, while the control group will receive no promotional materials. Collected feedback will be judged by a panel, and selected suggestions will be implemented to continuously improve the intervention.
Those identified positive for HCV antibodies will be referred to gastroenterologists for confirmation and treatment. The primary outcome will be confirmed HCV testing uptake, and secondary outcomes include HCV confirmatory testing and initiation of HCV treatment with follow-up by specialist providers. Data will be collected via Survey Star on mobile devices.
Discussion
This will be the first study to evaluate the impact of crowdsourcing to improve viral hepatitis testing and linkage-to-care in health facilities. This RCT will contribute to the existing literature on interventions to improve viral hepatitis testing in primary care settings and inform future strategies to improve HCV care training for primary care providers in China.
Trial registration
Chinese Clinical Trial Registry ChiCTR1900025771. Registered September 7, 2019, http://www.chictr.org.cn/showprojen.aspx?proj=42788
Journal Article
Reputation as a sufficient condition for data quality on Amazon Mechanical Turk
by Vosgerau, Joachim; Peer, Eyal; Acquisti, Alessandro
in Behavior; Behavioral Research - methods; Behavioral Science and Psychology
2014
Data quality is one of the major concerns of using crowdsourcing websites such as Amazon Mechanical Turk (MTurk) to recruit participants for online behavioral studies. We compared two methods for ensuring data quality on MTurk: attention check questions (ACQs) and restricting participation to MTurk workers with high reputation (above 95% approval ratings). In Experiment 1, we found that high-reputation workers rarely failed ACQs and provided higher-quality data than did low-reputation workers; ACQs improved data quality only for low-reputation workers, and only in some cases. Experiment 2 corroborated these findings and also showed that more productive high-reputation workers produce the highest-quality data. We concluded that sampling high-reputation workers can ensure high-quality data without having to resort to using ACQs, which may lead to selection bias if participants who fail ACQs are excluded post hoc.
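The screening strategy this study recommends, sampling only high-reputation workers, amounts to a simple pre-filter on approval ratings before recruitment. An illustrative sketch with hypothetical worker records (the field names and example ratings are assumptions; the >95% cutoff comes from the abstract's definition of high reputation):

```python
HIGH_REPUTATION_CUTOFF = 95.0  # approval-rating threshold from the abstract

workers = [  # hypothetical records, for illustration only
    {"id": "W1", "approval_rating": 99.2},
    {"id": "W2", "approval_rating": 91.0},
    {"id": "W3", "approval_rating": 96.5},
    {"id": "W4", "approval_rating": 88.7},
]

# Keep only workers whose approval rating exceeds the cutoff
eligible = [w["id"] for w in workers
            if w["approval_rating"] > HIGH_REPUTATION_CUTOFF]
print(eligible)  # -> ['W1', 'W3']
```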
Journal Article