Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
76,911 result(s) for "Data Collection - methods"
Improving Health Research on Small Populations
by Board on Health Care Services; Health and Medicine Division; National Academies of Sciences, Engineering, and Medicine
in Health surveys, Health surveys -- United States -- Statistical methods -- Congresses, Public health
2018
The increasing diversity of the population of the United States presents many challenges to conducting health research that is representative and informative. Dispersion and accessibility issues can increase logistical costs, and populations for which it is difficult to obtain an adequate sample size are also likely to be expensive to study. Hence, even if it is technically feasible to study a small population, it may not be easy to obtain the funding to do so. To address the issues associated with improving health research on small populations, the National Academies of Sciences, Engineering, and Medicine convened a workshop in January 2018. Participants considered ways of addressing the challenges of conducting epidemiological studies or intervention research with small population groups, including alternative study designs, innovative methodologies for data collection, and innovative statistical techniques for analysis.
Addressing missing data in randomized clinical trials: A causal inference perspective
2020
The importance of randomization in clinical trials for avoiding selection bias has long been acknowledged. Yet bias concerns re-emerge with selective attrition. This study takes a causal inference perspective in addressing distinct scenarios of missing outcome data: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR).
This study adopts a causal inference perspective in providing an overview of empirical strategies to estimate the average treatment effect, to improve the precision of the estimator, and to test whether the underlying identifying assumptions hold. We propose using Random Forest Lee Bounds (RFLB) to address selective attrition and to obtain more precise average treatment effect intervals.
When assuming MCAR or MAR, the often untenable identifying assumptions with respect to causal inference can hardly be verified empirically. Instead, missing outcome data in clinical trials should be considered as potentially non-random unobserved events (i.e. MNAR). Using simulated attrition data, we show how average treatment effect intervals can be tightened considerably using RFLB, by exploiting both continuous and discrete attrition predictor variables.
Bounding approaches should be used to address selective attrition in randomized clinical trials, acknowledging the resulting uncertainty with respect to causal inference. As such, Random Forest Lee Bounds estimates are more informative than point estimates obtained assuming MCAR or MAR.
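As an illustration of the bounding idea, the sketch below implements classic Lee bounds, the precursor on which Random Forest Lee Bounds builds (RFLB additionally tightens the bounds using attrition predictors). The data, response rates, and function name are hypothetical, not taken from the study.

# Classic Lee (2009) bounds on the average treatment effect under selective
# attrition, assuming the treatment arm has the higher response rate and
# selection is monotone. All inputs below are invented for illustration.
import numpy as np

def lee_bounds(y_treat, y_ctrl, resp_treat, resp_ctrl):
    """Bound the ATE for 'always-responders'. y_treat/y_ctrl are outcomes of
    responders in each arm; resp_* are the arms' response rates."""
    # Share of treated responders to trim so both arms describe the same
    # always-responder subpopulation.
    p = (resp_treat - resp_ctrl) / resp_treat
    y_sorted = np.sort(y_treat)
    k = int(np.floor(p * len(y_sorted)))
    mean_ctrl = y_ctrl.mean()
    # Trimming the top p-share yields the lower bound; the bottom, the upper.
    lower = y_sorted[: len(y_sorted) - k].mean() - mean_ctrl
    upper = y_sorted[k:].mean() - mean_ctrl
    return lower, upper

rng = np.random.default_rng(0)
lo, hi = lee_bounds(rng.normal(0.5, 1, 800), rng.normal(0, 1, 700), 0.8, 0.7)
print(f"ATE bounds for always-responders: [{lo:.2f}, {hi:.2f}]")

RFLB, as the abstract notes, narrows such intervals further by exploiting continuous and discrete attrition predictors via random forests.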
Journal Article
Assessment of the NIOSH Head-and-Face Anthropometric Survey of U.S. Respirator Users
by National Academy of Sciences (U.S.); Bailar, John C. (John Christian); Institute of Medicine (U.S.). Committee for the Assessment of the NIOSH Head-and-Face Anthropometric Survey of U.S. Respirator Users
in Anthropometry, Anthropometry -- United States, Breathing apparatus
2007
NIOSH and the Occupational Safety and Health Administration (OSHA) share responsibility for overseeing respiratory protection in the workplace and have established regulations for this purpose. Specifically, NIOSH has issued regulations which define respirator testing and certification. OSHA has issued regulations which define conditions under which employers are required to maintain respiratory protection programs in general industry, shipyards, marine terminals, and construction.
In 2005, NIOSH contracted with the Institute of Medicine (IOM) to study the NIOSH-sponsored Anthrotech study along with its supporting information and reports, and to examine and report on the adequacy and relevance of the study protocol, the analyses conducted, the resulting anthropometric dataset, and the appropriateness of the respirator fit-test panels derived from the new dataset.
Assessment of the NIOSH Head-and-Face Anthropometric Survey of U.S. Respirator Users focuses on establishing the scientific base required for respirator certification standards, not on respirator use in the workplace. The report describes and analyzes the anthropometric measurements performed by Anthrotech for its NIOSH-sponsored study, examines the survey methods Anthrotech used and how it analyzed its data to derive fit-test panels, and suggests ways that analysis could be improved. It also discusses future directions, pointing toward further analyses of the data, and offers suggestions for moving from research to practice.
Study protocol of an equivalence randomized controlled trial to evaluate the effectiveness of three different approaches to collecting Patient Reported Outcome Measures (PROMs) data using the Prostate Cancer Outcomes Registry-Victoria (PCOR-VIC)
by Ruseckaite, Rasa; Sampurno, Fanny; Evans, Sue M.
in Conferences, Cost-Benefit Analysis, Costs
2017
Background
Patient-reported outcome measures (PROMs) are used by clinical quality registries to assess patients' perspectives of care outcomes and quality of life. PROMs can be assessed through a self-administered survey or by a third party. Mixed-mode approaches, in which PROMs are completed using a single administration method or a combination of methods, are emerging. The aim of this study is to identify the most cost-effective approach to collecting PROMs among three modes (telephone, postal mail, and email) in a population-based clinical quality registry monitoring survivorship after a diagnosis of prostate cancer. This is important to assist the registry in achieving representative PROMs capture using the most cost-effective technique and in developing cost projections for national scale-up.
Methods/design
This study will adopt an equivalence randomised controlled design. Participants are men diagnosed with and/or treated for prostate cancer (PCa) who are participating in PCOR-VIC and meet the criteria for 12-month follow-up. Participants will be individually randomized to three independent groups: telephone, mail/postal, or email, to complete the 26-item Expanded Prostate Cancer Index Composite (EPIC-26) survey. It is estimated that each group will have 229 respondents. We will compare the proportion of completed surveys across the three groups.
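For intuition about how a per-group figure of this kind is derived, the following is a rough TOST-style sample-size sketch for equivalence of two completion proportions. The completion rate, equivalence margin, alpha, and power below are invented assumptions, which is why the result differs from the protocol's own estimate of 229 respondents per group, which rests on its own assumptions.

# Per-group sample size for an equivalence trial of two proportions (two
# one-sided tests, assuming the true difference is zero). All parameter
# values are hypothetical.
from scipy.stats import norm

def n_per_group_equivalence(p, margin, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha)             # one-sided test level
    z_b = norm.ppf(1 - (1 - power) / 2)   # beta split across the two tests
    return 2 * p * (1 - p) * (z_a + z_b) ** 2 / margin ** 2

# e.g. 75% expected completion, equivalence margin of 10 percentage points
print(round(n_per_group_equivalence(p=0.75, margin=0.10)))  # -> 321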
The economic evaluation will be undertaken from the perspective of the data collection centre and will consider all operating costs (personnel, supplies, training, operation, and maintenance). Cost data will be captured using an activity-based costing method. To estimate the most cost-effective approach, we will calculate incremental cost-effectiveness ratios. A cost projection model will be developed, based on the most cost-effective approach, for nationwide scale-up of the PROMs tool for follow-up of PCa patients in Australia.
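A minimal sketch of the incremental cost-effectiveness ratio (ICER) calculation the protocol describes, with invented mode costs and completion counts purely for illustration:

def icer(cost_a, effect_a, cost_b, effect_b):
    """Extra cost per extra completed survey of mode A relative to mode B."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical totals per mode: (total cost in AUD, completed surveys)
email = (1_500, 150)
mail = (3_000, 170)
phone = (9_000, 200)

print(f"mail vs email: {icer(*mail, *email):.0f} AUD per extra completion")
print(f"phone vs mail: {icer(*phone, *mail):.0f} AUD per extra completion")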
Discussion
This study will identify the most cost-effective approach for collecting PROMs from men with PCa, and enable estimation of costs for national implementation of the PCa PROMs survey. The findings will be of interest to other registries embarking on PROMs data collection.
Trial registration
ACTRN12615001369516 (registered on December 16, 2015)
Journal Article
Measuring Specific Mental Illness Diagnoses with Functional Impairment
by Board on Health Sciences Policy; Institute of Medicine; National Academies of Sciences, Engineering, and Medicine
in Mental illness, Mental illness -- Diagnosis
2016
The workshop summarized in this report was organized as part of a study sponsored by the Substance Abuse and Mental Health Services Administration (SAMHSA) and the Office of the Assistant Secretary for Planning and Evaluation of the U.S. Department of Health and Human Services, with the goal of assisting SAMHSA in its responsibilities of expanding the collection of behavioral health data in several areas. The workshop brought together experts in mental health, psychiatric epidemiology and survey methods to facilitate discussion of the most suitable measures and mechanisms for producing estimates of specific mental illness diagnoses with functional impairment. The report discusses existing measures and data on mental disorders and functional impairment, challenges associated with collecting these data in large-scale population-based studies, as well as study design and estimation options.
The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework
by Allen, Claire; Seers, Kate; Bullock, Ian
in Attitude of Health Personnel, Clinical trials, Content analysis
2013
Background
The case has been made for more and better theory-informed process evaluations within trials, in an effort to facilitate insightful understandings of how interventions work. In this paper, we provide an explanation of implementation processes from one of the first national implementation research randomized controlled trials with an embedded process evaluation conducted within acute care, and we propose an extension to the Promoting Action on Research Implementation in Health Services (PARIHS) framework.
Methods
The PARIHS framework was prospectively applied to guide decisions about intervention design, data collection, and analysis processes in a trial focussed on reducing peri-operative fasting times. In order to capture a holistic picture of implementation processes, the same data were collected across 19 participating hospitals irrespective of allocation to intervention. This paper reports on findings from data collected from a purposive sample of 151 staff and patients pre- and post-intervention. Data were analysed using content analysis within, and then across data sets.
Results
A robust and uncontested evidence base was a necessary but not sufficient condition for practice change, in that individual staff and patient responses, such as caution, influenced decision making. The implementation context was challenging: individuals and teams were bounded by professional issues, communication challenges, power, and a lack of clarity about authority and responsibility for practice change. Progress was made in sites where processes were aligned with existing initiatives. Additionally, facilitators reported engaging in many intervention implementation activities, some of which resulted in practice changes, but not in significant improvements to outcomes.
Conclusions
This study provided an opportunity for reflection on the comprehensiveness of the PARIHS framework. Consistent with the underlying tenet of PARIHS, a multi-faceted and dynamic story of implementation was evident. However, the prominent role that individuals played as part of the interaction between evidence and context is not currently explicit within the framework. We propose that successful implementation of evidence into practice is a planned, facilitated process involving an interplay between individuals, evidence, and context to promote evidence-informed practice. This proposal will enhance the potential of the PARIHS framework for explanation and ensure that theoretical development both informs and responds to the evidence base for implementation.
Trial registration
ISRCTN18046709 - Peri-operative Implementation Study Evaluation (PoISE).
Journal Article
Evaluation of Electronic and Paper-Pen Data Capturing Tools for Data Quality in a Public Health Survey in a Health and Demographic Surveillance Site, Ethiopia: Randomized Controlled Crossover Health Care Information Technology Evaluation
by Wilken, Marc; Zeleke, Atinkut Alamirrew; Worku, Abebaw Gebeyehu
in Adult, Cross-Over Studies, Data Accuracy
2019
Periodic demographic health surveillance and surveys are the main sources of health information in developing countries. Conducting a survey requires extensive paper-and-pen manual work and lengthy processes to generate the required information. Despite the rising popularity of electronic data collection systems as a way to alleviate these problems, sufficient evidence is not yet available to support the use of electronic data capture (EDC) tools in interviewer-administered data collection processes.
This study aimed to compare data quality parameters in the data collected using mobile electronic and standard paper-based data capture tools in one of the health and demographic surveillance sites in northwest Ethiopia.
A randomized controlled crossover health care information technology evaluation was conducted from May 10, 2016, to June 3, 2016, in a demographic and surveillance site. A total of 12 interviewers, working in 6 pairs (one with a tablet computer and the other with a paper-based questionnaire), were assigned to the 6 towns of the surveillance premises. Data collectors switched data collection methods based on a computer-generated random order. Data were cleaned using a MySQL program and transferred to SPSS (IBM SPSS Statistics for Windows, Version 24.0) and R statistical software (R version 3.4.3, the R Foundation for Statistical Computing Platform) for analysis. Descriptive and mixed ordinal logistic analyses were employed. Audio recordings of the qualitative interviews with system users were transcribed, coded, categorized, and linked to the International Organization for Standardization 9241-part 10 dialogue principles for system usability. The usability of this Open Data Kit-based system was assessed using the quantitative System Usability Scale (SUS) and by matching the qualitative data against the ISO 9241-10 dialogue principles.
Of the 1246 complete questionnaire records submitted with each tool, 41.89% (522/1246) of the paper-and-pen data capture (PPDC) questionnaires and 30.89% (385/1246) of the EDC questionnaires had one or more types of data quality error. The overall error rates were 1.67% and 0.60% for PPDC and EDC, respectively. The odds of additional errors on the PPDC tool were multiplied by 1.015 for each additional question in the interview, compared with EDC. The SUS score of the data collectors was 85.6. In the qualitative response mapping, EDC drew more positive suitability-for-task responses, with few error-tolerance issues.
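For reference, the System Usability Scale score reported here (85.6) comes from a standard ten-item scoring rule; a minimal sketch, with hypothetical responses:

def sus_score(responses):
    """Standard SUS scoring: items answered 1-5, odd items positively worded,
    even items negatively worded; contributions 0-4 each, scaled by 2.5."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0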
EDC provided significantly better data quality and efficiency than PPDC, reflected in fewer errors, instant data submission, and easier handling. The EDC tool proved usable in the rural study setting. Implementing organizations need to consider a consistent power source, a decent internet connection, standby technical support, and security assurance for mobile device users when planning full-fledged implementation and integration of the system in the surveillance site.
Journal Article
Usability, Engagement, and Report Usefulness of Chatbot-Based Family Health History Data Collection: Mixed Methods Analysis
2024
Family health history (FHx) is an important predictor of a person's genetic risk but is not collected by many adults in the United States.
This study aims to test and compare the usability, engagement, and report usefulness of 2 web-based methods to collect FHx.
This mixed methods study compared FHx data collection using a flow-based chatbot (KIT; the curious interactive test) and a form-based method. KIT's design was optimized to reduce user burden. We recruited and randomized individuals from 2 crowdsourced platforms to 1 of the 2 FHx methods. All participants were asked to complete a questionnaire to assess the method's usability, the usefulness of a report summarizing their experience, user-desired chatbot enhancements, and general user experience. Engagement was studied using log data collected by the methods. We used qualitative findings from analyzing free-text comments to supplement the primary quantitative results.
Participants randomized to KIT reported higher usability than those randomized to the form, with mean System Usability Scale scores of 80.2 versus 61.9 (P<.001). The engagement analysis reflected design differences in the onboarding process: KIT users spent less time entering FHx information and entered fewer conditions than form users (mean 5.90 vs 7.97 min, P=.04; mean 7.8 vs 10.1 conditions, P=.04). Both KIT and form users somewhat agreed that the report was useful (Likert scale ratings of 4.08 and 4.29, respectively). Among desired enhancements, personalization was the highest-rated feature (188/205, 91.7%, rated it medium to high priority). Qualitative analyses revealed positive and negative characteristics of both KIT and the form-based method. Among respondents randomized to KIT, most indicated it was easy to use and navigate and that they could respond to and understand user prompts. Negative comments addressed KIT's personality, conversational pace, and ability to manage errors. For KIT and form respondents, qualitative results revealed common themes, including a desire for more information about conditions and a mutual appreciation for the multiple-choice button response format. Respondents also said they wanted to report health information beyond KIT's prompts (eg, personal health history) and wanted KIT to provide more personalized responses.
We showed that KIT provided a usable way to collect FHx. We also identified design considerations to improve chatbot-based FHx data collection: First, the final report summarizing the FHx collection experience should be enhanced to provide more value for patients. Second, the onboarding chatbot prompt may impact data quality and should be carefully considered. Finally, we highlighted several areas that could be improved by moving from a flow-based chatbot to a large language model implementation strategy.
Journal Article
When and how to use data from randomised trials to develop or validate prognostic models
by Peelen, Linda M; Reitsma, Johannes B; Groenwold, Rolf H H
in Cardiovascular disease, Clinical Decision-Making, Clinical trials
2019
Prediction models have become an integral part of clinical practice, providing information for patients and clinicians and supporting their shared decision making. The development and validation of prognostic prediction models requires substantial volumes of high-quality information on relevant predictors and patient health outcomes. Primary data collection dedicated to prognostic model development or validation comes with substantial time and costs, and can be seen as a waste of resources if suitable data are already available. Randomised clinical trials are a source of high-quality clinical data with a largely untapped potential for use in further research. This article addresses when and how data from a randomised clinical trial can additionally be used for prognostic model research, and provides guidance for researchers with access to trial data on evaluating the suitability of those data for the development and validation of prognostic prediction models.
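As a toy illustration of the development-and-validation workflow the article discusses, the sketch below fits a logistic prognostic model on synthetic "trial" data, with the randomized arm included as a predictor, and checks discrimination on a held-out split. All data, predictors, and modelling choices are assumptions for illustration, not the article's method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(60, 10, n),    # age (hypothetical predictor)
    rng.normal(130, 15, n),   # systolic blood pressure (hypothetical)
    rng.integers(0, 2, n),    # randomized treatment arm as a covariate
])
logit = -12 + 0.12 * X[:, 0] + 0.03 * X[:, 1] - 0.4 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Develop on one split; validate discrimination (c-statistic) on the other.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"c-statistic on the validation split: {auc:.2f}")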
Journal Article
Assessment of Natural Language Processing of Electronic Health Records to Measure Goals-of-Care Discussions as a Clinical Trial Outcome
by Lober, William B; Engelberg, Ruth A; Sibley, James
in Clinical trials, Electronic health records, Monte Carlo simulation
2023
Importance
Many clinical trial outcomes are documented in free-text electronic health records (EHRs), making manual data collection costly and infeasible at scale. Natural language processing (NLP) is a promising approach for measuring such outcomes efficiently, but ignoring NLP-related misclassification may lead to underpowered studies.
Objective
To evaluate the performance, feasibility, and power implications of using NLP to measure the primary outcome of EHR-documented goals-of-care discussions in a pragmatic randomized clinical trial of a communication intervention.
Design, Setting, and Participants
This diagnostic study compared the performance, feasibility, and power implications of measuring EHR-documented goals-of-care discussions using 3 approaches: (1) deep-learning NLP, (2) NLP-screened human abstraction (manual verification of NLP-positive records), and (3) conventional manual abstraction. The study included hospitalized patients aged 55 years or older with serious illness enrolled between April 23, 2020, and March 26, 2021, in a pragmatic randomized clinical trial of a communication intervention in a multihospital US academic health system.
Main Outcomes and Measures
Main outcomes were NLP performance characteristics, human abstractor-hours, and misclassification-adjusted statistical power of methods of measuring clinician-documented goals-of-care discussions. NLP performance was evaluated with receiver operating characteristic (ROC) curves and precision-recall (PR) analyses, and the effects of misclassification on power were examined using mathematical substitution and Monte Carlo simulation.
Results
A total of 2512 trial participants (mean [SD] age, 71.7 [10.8] years; 1456 [58%] female) amassed 44 324 clinical notes during 30-day follow-up. In a validation sample of 159 participants, deep-learning NLP trained on a separate training data set identified patients with documented goals-of-care discussions with moderate accuracy (maximal F1 score, 0.82; area under the ROC curve, 0.924; area under the PR curve, 0.879). Manual abstraction of the outcome from the trial data set would require an estimated 2000 abstractor-hours and would power the trial to detect a risk difference of 5.4% (assuming 33.5% control-arm prevalence, 80% power, and 2-sided α = .05). Measuring the outcome by NLP alone would power the trial to detect a risk difference of 7.6%. Measuring the outcome by NLP-screened human abstraction would require 34.3 abstractor-hours to achieve estimated sensitivity of 92.6% and would power the trial to detect a risk difference of 5.7%. Monte Carlo simulations corroborated misclassification-adjusted power calculations.
Conclusions and Relevance
In this diagnostic study, deep-learning NLP and NLP-screened human abstraction had favorable characteristics for measuring an EHR outcome at scale. Adjusted power calculations accurately quantified power loss from NLP-related misclassification, suggesting that incorporating this approach into the design of studies using NLP would be beneficial.
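A minimal sketch of the misclassification-adjusted power idea described above: simulate a two-arm trial whose binary EHR outcome is measured by an imperfect classifier, then estimate power by Monte Carlo. Only the 33.5% control-arm prevalence is taken from the abstract; the sensitivity, specificity, effect size, and sample size are invented assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def simulated_power(n_per_arm, p_ctrl, p_trt, sens, spec, n_sims=2000):
    rejections = 0
    for _ in range(n_sims):
        # True outcomes, then what the imperfect classifier reports.
        y_c = rng.random(n_per_arm) < p_ctrl
        y_t = rng.random(n_per_arm) < p_trt
        obs_c = np.where(y_c, rng.random(n_per_arm) < sens,
                         rng.random(n_per_arm) > spec)
        obs_t = np.where(y_t, rng.random(n_per_arm) < sens,
                         rng.random(n_per_arm) > spec)
        # Two-proportion z-test on the observed (misclassified) outcomes.
        p1, p2 = obs_c.mean(), obs_t.mean()
        p_pool = (p1 + p2) / 2
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if abs(p2 - p1) / se > norm.ppf(0.975):
            rejections += 1
    return rejections / n_sims

# Perfect measurement vs. an NLP-like classifier (sens 0.90, spec 0.95):
# misclassification attenuates the observed risk difference and costs power.
print(simulated_power(1250, 0.335, 0.395, 1.0, 1.0))
print(simulated_power(1250, 0.335, 0.395, 0.90, 0.95))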
Journal Article