Catalogue Search | MBRL
Explore the vast range of titles available.
209 result(s) for "Hunt, Joe"
Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach
by Mazmanian, Paul E.; Surkis, Alisa; Mueller, Meridith
in Algorithms, Analysis, Area Under Curve
2016
Background
Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program and of translational research in general is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications.
Methods
Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined T1/T2 and T3/T4 categories due to low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier.
Results
The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. Very good performance was achieved for the classifiers, as represented by the area under the receiver operating characteristic curve (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4.
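For readers who want a concrete picture of this kind of pipeline, the sketch below trains a binary text classifier on coded abstracts and reports an AUC. The feature set (TF-IDF), the model (logistic regression), and the toy abstracts are assumptions for illustration; the study compared several algorithms and feature sets and trained on checklist-coded CTSA publications.

```python
# Illustrative sketch only: a binary "T0 vs. not T0" text classifier scored by
# AUC. TF-IDF features and logistic regression are assumptions; the study's
# training data were checklist-coded publications, not the toy abstracts below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

train_texts = [
    "Knockout mice were used to study tumor gene expression in vitro.",
    "Protein folding dynamics were simulated at the molecular level.",
    "Receptor binding assays characterized the candidate compound.",
    "A community intervention reduced hospital readmissions in older adults.",
    "A statewide policy change improved vaccination coverage.",
    "Patient navigation improved screening uptake in rural clinics.",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = T0 (basic/preclinical), 0 = other

test_texts = [
    "Cell culture experiments identified a novel signaling pathway.",
    "A county health program lowered emergency department visits.",
]
test_labels = [1, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

# Area under the receiver operating characteristic curve on held-out texts.
scores = clf.predict_proba(test_texts)[:, 1]
print("AUC:", roc_auc_score(test_labels, scores))
```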
Conclusions
The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provide a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
Journal Article
Understanding and responding to COVID-19 in Wales: protocol for a privacy-protecting data platform for enhanced epidemiology and evaluation of interventions
by Emmerson, Chris; Taylor, Chris; Lyons, Ronan
in At risk populations, Betacoronavirus, Censuses
2020
Introduction
The emergence of the novel respiratory SARS-CoV-2 and the subsequent COVID-19 pandemic have required rapid assimilation of population-level data to understand and control the spread of infection in the general and vulnerable populations. Rapid analyses are needed to inform policy development and target interventions to at-risk groups to prevent serious health outcomes. We aim to provide an accessible research platform to determine demographic, socioeconomic and clinical risk factors for infection, morbidity and mortality of COVID-19, to measure the impact of COVID-19 on healthcare utilisation and long-term health, and to enable the evaluation of natural experiments of policy interventions.
Methods and analysis
Two privacy-protecting population-level cohorts have been created and derived from multisourced demographic and healthcare data. The C20 cohort consists of 3.2 million people in Wales on 1 January 2020 with follow-up until 31 May 2020. The complete cohort dataset will be updated monthly, with some individual datasets available daily. The C16 cohort consists of 3 million people in Wales on 1 January 2016 with follow-up to 31 December 2019. C16 is designed as a counterfactual cohort to provide contextual comparative population data on disease, health service utilisation and mortality. Study outcomes will: (a) characterise the epidemiology of COVID-19, (b) assess socioeconomic and demographic influences on infection and outcomes, (c) measure the impact of COVID-19 on short-term and longer-term population outcomes and (d) undertake studies on the transmission and spatial spread of infection.
Ethics and dissemination
The Secure Anonymised Information Linkage independent Information Governance Review Panel has approved this study. The study findings will be presented to policy groups, public meetings, national and international conferences, and published in peer-reviewed journals.
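As a rough illustration of the index-date cohort design described above, the sketch below derives a C20-style cohort from a person-level table with pandas. The column names, dates, and table layout are hypothetical and do not reflect the real data platform's schema.

```python
# Hypothetical illustration of an index-date cohort like C20: include everyone
# registered on 1 January 2020 and censor follow-up at 31 May 2020.
# Column names and values are invented and do not reflect the real platform.
import pandas as pd

people = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "registration_start": pd.to_datetime(
        ["2015-03-01", "2019-12-15", "2020-02-01", "2010-06-30"]),
    "registration_end": pd.to_datetime(
        ["2021-01-01", "2020-04-10", "2021-01-01", "2019-11-01"]),
})

index_date = pd.Timestamp("2020-01-01")  # C20 index date
study_end = pd.Timestamp("2020-05-31")   # end of initial follow-up window

# Cohort membership: registered on the index date.
c20 = people[(people["registration_start"] <= index_date)
             & (people["registration_end"] >= index_date)].copy()

# Censor each person's follow-up at the study window end.
c20["follow_up_end"] = c20["registration_end"].where(
    c20["registration_end"] <= study_end, study_end)

print(c20[["person_id", "follow_up_end"]])
```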
Journal Article
Powering evaluation and continuous improvement in translational science: Insights from the 2025 ACTS Evaluation SIG meeting
by Harvey, Jillian; Kane, Cathleen; Lechuga, Claudia
in capacity building, Clinical and translational science, Collaboration
2026
The 2025 Evaluation Special Interest Group (SIG) meeting at the Association for Clinical and Translational Science conference brought together clinical and translational science (CTS) professionals to address evolving challenges in translational science evaluation. The meeting presentations and discussions addressed concept mapping for commonly used metrics, continuous quality improvement (CQI) practices, translational science impact evaluation, and evaluator toolkit development. Key themes and lessons learned included the tension between institution-specific and network-wide evaluation goals, the need for standardized yet flexible evaluation frameworks, and persistent barriers including limited staffing capacity and data ownership challenges. Facilitators identified included diverse CQI approaches, evolving evaluation frameworks, and collaborative evaluation practices. Convened during a time of increasing research funding uncertainty and accountability, the meeting underscored the urgency of strengthening evaluation capacity to sustain the impact of CTS, highlighting both the enduring value of heterogeneous evaluation approaches and the critical need for coordinated CTS evaluation strategies to demonstrate impact and secure continued funding support.
Journal Article
Engaging community in the translational process: Environmental scan of adaptive capacity and preparedness of Clinical and Translational Science Award Program hubs
by Volkov, Boris B.; Hunt, Joe; Hoyo, Verónica
in Adaptation, adaptive capacity, Adaptive Capacity and Preparedness in Clinical and Translational Science
2023
This paper is part of the Environmental Scan of Adaptive Capacity and Preparedness of Clinical and Translational Science Award (CTSA) hubs, illuminating challenges, practices, and lessons learned related to CTSA hubs’ efforts of engaging community partners to reduce the spread of the virus, address barriers to COVID-19 testing, identify treatments to improve health outcomes, and advance community participation in research. CTSA researchers, staff, and community partners collaborated to develop evidence-based, inclusive, accessible, and culturally appropriate strategies and resources helping community members stay healthy, informed, and connected during the pandemic. CTSA institutions have used various mechanisms to advance co-learning and co-sharing of knowledge, resources, tools, and experiences between academic professionals, patients, community partners, and other stakeholders. Forward-looking and adaptive decision-making structures are those that prioritize sustained relationships, mutual trust and commitment, ongoing communication, proactive identification of community concerns and needs, shared goals and decision making, as well as ample appreciation of community members and their contributions to translational research. There is a strong need for further community-engaged research and workforce training on how to build our collective and individual adaptive capacity to sustain and improve processes and outcomes of engagement with and by communities—in all aspects of translational science.
Journal Article
184 Cross-institutional collaborations for health equity research at a CTSA
by Hunt, Joe D.; Ramirez, Mirian; Whipple, Elizabeth C.
in Bibliometrics, Co authorship, Collaboration
2022
OBJECTIVES/GOALS: We were interested in health equity research for each CTSA-affiliated institution, specifically focusing on cross-department and cross-campus co-authorship. We conducted a bibliometric analysis of our CTSA-funded papers relating to diversity and inclusion to identify cross-department and cross-campus collaborations. METHODS/STUDY POPULATION: We worked with our CTSA's Racial Justice, Diversity, Equity and Inclusion Task Force to conduct an environmental scan of diversity and inclusion research across our CTSA partner institutions. Using the Scopus database, searches were constructed to identify and retrieve the variety of affiliations for each of the CTSA authors, a health equity/health disparities search hedge, and all of our CTSA grant numbers. We limited the dates from the beginning of our CTSA in 2008 through November 2021. We used PubMed to retrieve all MeSH terms for the articles. We used Excel to analyze the data, Python and NCBI's Entrez Programming Utilities to analyze MeSH terms, and VOSviewer to produce the visualizations. RESULTS/ANTICIPATED RESULTS: The results of this search yielded 94 articles overall. We broke these up into subsets (not mutually exclusive) to represent five of the researcher groups across our CTSA. We analyzed the overall dataset for citation count, normalized citation count, CTSA average authors, gender trends, and co-term analysis. We also developed cross-department co-authorship maps and cross-institutional/group co-authorship maps. DISCUSSION/SIGNIFICANCE: This poster will demonstrate the current areas where cross-departmental and cross-institutional collaboration exists among our CTSA authors, as well as identify potential areas for new collaboration. These findings may determine areas our CTSA can support to improve institutional performance in addressing health equity.
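To make the MeSH-retrieval step concrete, a minimal sketch using Biopython's wrapper around NCBI's Entrez Programming Utilities is shown below. The email address and PubMed IDs are placeholders, not records from the study's 94-article dataset, and the authors' own scripts may differ.

```python
# Minimal sketch of retrieving MeSH terms for a set of PubMed IDs via NCBI's
# Entrez Programming Utilities, using Biopython. IDs and email are placeholders.
from Bio import Entrez

Entrez.email = "your.name@example.edu"  # NCBI asks for a contact address

pmids = ["31452104", "32296168"]  # hypothetical example IDs
handle = Entrez.efetch(db="pubmed", id=",".join(pmids), retmode="xml")
records = Entrez.read(handle)
handle.close()

for article in records["PubmedArticle"]:
    citation = article["MedlineCitation"]
    title = citation["Article"]["ArticleTitle"]
    mesh_terms = [str(h["DescriptorName"]) for h in citation.get("MeshHeadingList", [])]
    print(title)
    print("  MeSH:", "; ".join(mesh_terms) if mesh_terms else "(none indexed)")
```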
Journal Article
A landscape assessment of CTSA evaluators and their work in the CTSA consortium, 2021 survey findings
2024
This article presents a landscape assessment of the findings from the 2021 Clinical and Translational Science Award (CTSA) Evaluators Survey. This survey was the most recent iteration of a well-established, national, peer-led systematic snapshot of the CTSA evaluators, their skillsets, listed evaluation resources, preferred methods, and identified best practices. Three questions guided our study: who are the CTSA evaluators, what competencies do they share, and how is their work used within hubs? We describe our survey process (logistics of development, deployment, and differences in historical context from prior instruments) and present its main findings. We provide specific recommendations for evaluation practice in two main categories (national vs. group-level), including, among others, the need for a national strategic plan for evaluation as well as enhanced mentoring and training of the next generation of evaluators. Although based on the challenges and opportunities currently within the CTSA Consortium, takeaways from this study constitute important lessons with potential for application in other large evaluation consortia. To our knowledge, this is the first time the 2021 survey findings have been disseminated widely, to increase the transparency of the CTSA evaluators' work and to motivate conversations within hubs and beyond as to how best to leverage existing evaluative capacity.
Journal Article
Distinguishing between translational science and translational research in CTSA pilot studies: A collaborative project across 12 CTSA hubs
by Ericson, Marissa; Boerger, Lindsie; Denne, Scott
in Clinical medicine, Collaboration, efficiency
2024
The institutions (i.e., hubs) making up the National Institutes of Health (NIH)-funded network of Clinical and Translational Science Awards (CTSAs) share a mission to turn observations into interventions to improve public health. Recently, the focus of the CTSAs has turned increasingly from translational research (TR) to translational science (TS). The current NIH Funding Opportunity Announcement (PAR-21-293) for CTSAs stipulates that pilot studies funded through the CTSAs must be "focused on understanding a scientific or operational principle underlying a step of the translational process with the goal of developing generalizable solutions to accelerate translational research." This new directive places Pilot Program administrators in the position of arbiters with the task of distinguishing between TR and TS projects. The purpose of this study was to explore the utility of a set of TS principles set forth by NCATS for distinguishing between TR and TS.
Twelve CTSA hubs collaborated to generate a list of Translational Science Principles questions. Twenty-nine Pilot Program administrators used these questions to evaluate 26 CTSA-funded pilot studies.
Factor analysis yielded three factors: Generalizability/Efficiency, Disruptive Innovation, and Team Science. The Generalizability/Efficiency factor explained the largest amount of variance in the questions and was significantly able to distinguish between projects that were verified as TS or TR by an expert panel (6.92, p < .001).
The seven questions in this factor may be useful for informing deliberations regarding whether a study addresses a question that aligns with NCATS' vision of TS.
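A rough sketch of this style of analysis follows: factor analysis of rubric-question ratings, then a comparison of first-factor scores between expert-verified TS and TR projects. The data are invented, and the estimator (scikit-learn's FactorAnalysis) and the independent-samples t-test are assumptions; the abstract does not name the exact procedures behind the reported statistic.

```python
# Illustrative sketch only: factor-analyze rubric ratings and test whether
# first-factor scores separate expert-verified TS from TR projects.
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# 26 pilot studies rated on 20 rubric questions (1-5 scale), hypothetical.
ratings = rng.integers(1, 6, size=(26, 20)).astype(float)
is_ts = rng.integers(0, 2, size=26).astype(bool)  # expert-panel TS/TR label

# Three-factor solution, mirroring the three reported factors.
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(ratings)

# Compare first-factor scores between TS and TR projects.
t_stat, p_value = stats.ttest_ind(factor_scores[is_ts, 0], factor_scores[~is_ts, 0])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```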
Journal Article
402 Developing a rubric to distinguish translational science from translational research in CTSA pilot projects
by Lee, Jennifer; Boerger, Lindsie; Denne, Scott
in Consortia, Pilot projects, Population studies
2023
OBJECTIVES/GOALS: The goal of the CTSA consortium is to move scientific discoveries to clinical application. Translational science (TS) focuses on the process by which this happens, and NCATS supports pilot projects that propose TS questions. We are developing a rubric to guide program managers’ ability to discriminate between TS and translational research (TR). METHODS/STUDY POPULATION: The CTSA External Review Exchange Consortium (CEREC) and CEREC II are reciprocal review collaborations between CTSA hubs that identify reviewers for each other’s pilot grant applications. CEREC and CEREC II partners developed a 31-item rubric, based on NIH’s Translational Science Principles, for discriminating pilot TS grant applications from those proposing TR. The hubs contributed proposals pre-selected as either TS or TR projects. Then, experienced reviewers and/or program administrators from the hubs used the rubric to score each of the proposals. Reliability of the rubric will be assessed using inter-rater reliability (% agreement and kappa). To identify which of the items in the rubric best discriminate between TS and TR, Item Response Theory analysis will be employed. RESULTS/ANTICIPATED RESULTS: Ten CEREC participating hubs submitted 30 applications: 20 TS proposals and 10 TR proposals. Twenty-two reviewers from 12 CEREC hubs evaluated the applications using the scoring rubric; at least two reviewers evaluated each proposal. The results of the analyses will describe the reliability of the rubric and identify which of the seven TS Principles are most useful for distinguishing between TS and TR pilot grant proposals. Ultimately, this work will yield a scoring rubric that will be disseminated throughout the CTSA network to facilitate the screening of TS applications. DISCUSSION/SIGNIFICANCE: Optimizing research processes is critical to ensure that scientific discoveries are integrated into clinical practice and public health policy as rapidly, efficiently, and equitably as possible. By appropriately identifying and funding TS projects, CTSA hubs can accelerate the impact of clinical and translational research.
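As a small illustration of the planned reliability check, the sketch below computes percent agreement and Cohen's kappa for two hypothetical reviewers' TS/TR judgments; the ratings are invented, and scikit-learn's cohen_kappa_score is used as one convenient implementation.

```python
# Small illustration of an inter-rater reliability check: percent agreement
# and Cohen's kappa for two reviewers' TS/TR calls on the same proposals.
# Ratings are hypothetical placeholders.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["TS", "TS", "TR", "TS", "TR", "TR", "TS", "TR", "TS", "TS"]
reviewer_b = ["TS", "TR", "TR", "TS", "TR", "TS", "TS", "TR", "TS", "TS"]

agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
kappa = cohen_kappa_score(reviewer_a, reviewer_b)

print(f"Percent agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```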
Journal Article