Catalogue Search | MBRL
Explore the vast range of titles available.
28 result(s) for "Painter, Jeffery"
Refining the impact of genetic evidence on clinical success
by Minikel, Eric Vallabh; Painter, Jeffery L.; Nelson, Matthew R.
in 631/154/556; 631/208/205/2138; 631/208/727/2000
2024
The cost of drug discovery and development is driven primarily by failure [1], with only about 10% of clinical programmes eventually receiving approval [2–4]. We previously estimated that human genetic evidence doubles the success rate from clinical development to approval [5]. In this study we leverage the growth in genetic evidence over the past decade to better understand the characteristics that distinguish clinical success and failure. We estimate the probability of success for drug mechanisms with genetic support is 2.6 times greater than those without. This relative success varies among therapy areas and development phases, and improves with increasing confidence in the causal gene, but is largely unaffected by genetic effect size, minor allele frequency or year of discovery. These results indicate we are far from reaching peak genetic insights to aid the discovery of targets for more effective drugs.
Human genetic evidence increases the success rate of drugs from clinical development to approval, but we are still far from reaching peak genetic insights to aid the discovery of targets for more effective drugs.
Journal Article
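The headline "2.6 times greater" figure in the record above is a ratio of success probabilities between genetically supported and unsupported drug mechanisms. A minimal sketch of that arithmetic, using invented counts rather than the paper's data:

```python
# Hedged illustration (not the authors' actual pipeline): the relative success
# (RS) ratio has the form
#   RS = P(success | genetic support) / P(success | no genetic support).
# All counts below are made up purely to show the calculation.

def relative_success(successes_supported, total_supported,
                     successes_unsupported, total_unsupported):
    """Ratio of success probabilities for supported vs. unsupported mechanisms."""
    p_supported = successes_supported / total_supported
    p_unsupported = successes_unsupported / total_unsupported
    return p_supported / p_unsupported

# Illustrative numbers only: 26 of 100 supported programmes succeed
# versus 10 of 100 unsupported ones.
print(round(relative_success(26, 100, 10, 100), 2))  # 2.6
```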
The support of human genetic evidence for approved drug indications
2015
Matthew Nelson and colleagues investigate how well genetic evidence for disease susceptibility predicts drug mechanisms. They find a correlation between gene products that are successful drug targets and genetic loci associated with the disease treated by the drug and predict that selecting genetically supported targets could increase the success rate of drugs in clinical development.
Over a quarter of drugs that enter clinical development fail because they are ineffective. Growing insight into genes that influence human disease may affect how drug targets and indications are selected. However, there is little guidance about how much weight should be given to genetic evidence in making these key decisions. To answer this question, we investigated how well the current archive of genetic evidence predicts drug mechanisms. We found that, among well-studied indications, the proportion of drug mechanisms with direct genetic support increases significantly across the drug development pipeline, from 2.0% at the preclinical stage to 8.2% among mechanisms for approved drugs, and varies dramatically among disease areas. We estimate that selecting genetically supported targets could double the success rate in clinical development. Therefore, using the growing wealth of human genetic data to select the best targets and indications should have a measurable impact on the successful development of new drugs.
Journal Article
Artificial Intelligence Based on Machine Learning in Pharmacovigilance: A Scoping Review
by Kompa, Kathryn Grace; Woloszynek, Stephen; Beam, Andrew L.
in Artificial intelligence; Best practice; Bias
2022
Introduction
Artificial intelligence based on machine learning has made large advancements in many fields of science and medicine but its impact on pharmacovigilance is yet unclear.
Objective
The present study conducted a scoping review of the use of artificial intelligence based on machine learning to understand how it is used for pharmacovigilance tasks, characterize differences with other fields, and identify opportunities to improve pharmacovigilance through the use of machine learning.
Design
The PubMed, Embase, Web of Science, and IEEE Xplore databases were searched to identify articles pertaining to the use of machine learning in pharmacovigilance published from the year 2000 to September 2021. After manual screening of 7744 abstracts, a total of 393 papers met the inclusion criteria for further analysis. Extraction of key data on study design, data sources, sample size, and machine learning methodology was performed. Studies with the characteristics of good machine learning practice were defined and manual review focused on identifying studies that fulfilled these criteria and results that showed promise.
Results
The majority of studies (53%) were focused on detecting safety signals using traditional statistical methods. Of the studies that used more recent machine learning methods, 61% used off-the-shelf techniques with minor modifications. Temporal analysis revealed that newer methods such as deep learning have shown increased use in recent years. We found only 42 studies (10%) that reflect current best practices and trends in machine learning. In the subset of 154 papers that focused on data intake and ingestion, 30 (19%) were found to incorporate the same best practices.
Conclusion
Advances from artificial intelligence have yet to fully penetrate pharmacovigilance, although recent studies show signs that this may be changing.
Journal Article
Perspective review: Will generative AI make common data models obsolete in future analyses of distributed data networks?
by Ramcharran, Darmendra; Painter, Jeffery L.; Bate, Andrew
in Data analysis; Data models; Generative artificial intelligence
2025
Integrating real-world healthcare data is challenging due to diverse formats and terminologies, making standardization resource-intensive. While Common Data Models (CDMs) facilitate interoperability, they often cause information loss, exhibit semantic inconsistencies, and are labor-intensive to implement and update. We explore how generative artificial intelligence (GenAI), especially large language models (LLMs), could make CDMs obsolete in quantitative healthcare data analysis by interpreting natural language queries and generating code, enabling direct interaction with raw data. Knowledge graphs (KGs) standardize relationships and semantics across heterogeneous data, preserving integrity. This perspective review proposes a fourth generation of distributed data network analysis, building on previous generations categorized by their approach to data standardization and utilization. It emphasizes the potential of GenAI to overcome the limitations of CDMs with GenAI-enabled access, KGs, and automatic code generation. A data commons may further enhance this capability, and KGs may well be needed to enable effective GenAI. Addressing privacy, security, and governance is critical; any new method must ensure protections comparable to CDM-based models. Our approach would aim to enable efficient, real-time analyses across diverse datasets and enhance patient safety. We recommend prioritizing research to assess how GenAI can transform quantitative healthcare data analysis by overcoming current limitations.
Plain language summary
This perspective review explores whether Artificial Intelligence (AI) can revolutionize healthcare data analysis by reducing the current reliance on Common Data Models (CDMs). The key points are as follows:
• CDMs are approaches that standardize diverse healthcare data to a single shared format to enable efficiencies in data management and analyses using the same analysis syntax and analytic tools.
• Although CDMs have strengths, they also have limitations, such as high costs, potential loss of important details, significant effort to produce and maintain, and delays in data availability due to lengthy data processing steps.
• With the rapid growth of healthcare data, effectively analyzing it is crucial for patient safety and public health.
• AI may offer an alternative solution by analyzing data directly in its original form, reducing costs, preserving data details, and enabling real-time insights that support better patient outcomes and safer medication use.
This review investigates challenges currently associated with CDMs and explores how AI, particularly generative AI, can directly analyze raw data without the need for standardization. We discuss the following:
• How AI can interpret complex questions and generate accurate answers from raw data, enabling more timely analyses of real-world data.
• While CDMs may still be necessary in the short term, AI has the potential to eventually replace them, improving patient care and safety outcomes by providing faster and more precise insights.
• This perspective could lead to new methods of using healthcare data to inform decision-making and enhance treatment outcomes.
• By adopting advanced AI technologies, healthcare providers and researchers can better understand treatment risks and benefits, make more informed decisions, and ultimately improve patient safety and public health.
Journal Article
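The review above envisions generative models translating a natural-language question into analysis code that runs directly on raw, non-standardized data. A small sketch of that flow, with the model call replaced by a hard-coded stub (nothing here reflects the authors' actual tooling, schemas, or any real CDM):

```python
# Illustrative sketch: a GenAI component would turn a question plus a raw-data
# schema into runnable SQL; here the model is a stub returning a fixed query.
import sqlite3

def llm_generate_sql(question: str, schema: str) -> str:
    """Stand-in for a generative model that writes SQL for the given raw schema."""
    # In practice an LLM would produce this from `question` and `schema`.
    return ("SELECT drug, COUNT(*) AS n_reports "
            "FROM raw_reports GROUP BY drug ORDER BY drug")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_reports (drug TEXT, event TEXT)")
conn.executemany("INSERT INTO raw_reports VALUES (?, ?)",
                 [("drug_a", "nausea"), ("drug_a", "rash"), ("drug_b", "nausea")])

sql = llm_generate_sql("How many reports per drug?", "raw_reports(drug, event)")
print(conn.execute(sql).fetchall())  # [('drug_a', 2), ('drug_b', 1)]
```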
The need for guardrails with large language models in pharmacovigilance and other medical safety critical settings
by Sobczak, Paulina; Beam, Andrew; Sato, Chiho
in 631/114/1305; 631/154/1438; Adverse Drug Reaction Reporting Systems
2025
Large language models (LLMs) are useful tools with the capacity for performing specific types of knowledge work at an effective scale. However, LLM deployments in high-risk and safety-critical domains pose unique challenges, notably the issue of “hallucinations”, where LLMs can generate fabricated information. This is particularly concerning in settings such as drug safety, where inaccuracies could lead to patient harm. To mitigate these risks, we have developed and demonstrated a proof of concept suite of guardrails specifically designed to mitigate certain types of hallucinations and errors for drug safety, with potential applicability to other medical safety-critical contexts. These guardrails include mechanisms to detect anomalous documents to prevent the ingestion of inappropriate data, identify incorrect drug names or adverse event terms, and convey uncertainty in generated content. We integrated these guardrails with an LLM fine-tuned for a text-to-text task, which involves converting both structured and unstructured data within adverse event reports into natural language. This method was applied to translate individual case safety reports, demonstrating effective application in a pharmacovigilance processing task. Our guardrail framework offers a set of tools with broad applicability across various domains, ensuring LLMs can be safely used in high-risk situations by eliminating the occurrence of key errors, including the generation of incorrect pharmacovigilance-related terms, thus adhering to stringent regulatory and quality standards in medical safety-critical environments.
Journal Article
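One guardrail described above checks generated drug names and adverse event terms against controlled vocabularies. A minimal sketch of that idea, assuming toy term lists and simple quoted-phrase extraction (not the authors' implementation):

```python
# Hedged sketch: flag LLM-generated terms that appear in no controlled vocabulary
# so they can be routed for human review. Term lists and phrase extraction are
# illustrative assumptions only.
import re

KNOWN_DRUGS = {"belantamab mafodotin", "paracetamol"}          # e.g. a product dictionary
KNOWN_AE_TERMS = {"keratopathy", "blurred vision", "nausea"}   # e.g. preferred AE terms

def flag_unrecognised_terms(generated_text: str) -> list[str]:
    """Return quoted phrases in the text that match neither vocabulary."""
    flags = []
    for phrase in re.findall(r'"([^"]+)"', generated_text):
        term = phrase.lower().strip()
        if term not in KNOWN_DRUGS and term not in KNOWN_AE_TERMS:
            flags.append(phrase)
    return flags

report = 'Patient on "belantamab mafodotin" reported "blurry eyesight".'
print(flag_unrecognised_terms(report))  # ['blurry eyesight'] -> send for human review
```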
Developing Crowdsourced Training Data Sets for Pharmacovigilance Intelligent Automation
by Painter, Jeffery L.; Casperson, Tim A.; Powell, Gregory Eugene
in Automation; Crowdsourcing; Datasets
2021
Introduction
Machine learning offers an alluring solution to developing automated approaches to the increasing individual case safety report burden being placed upon pharmacovigilance. Leveraging crowdsourcing to annotate unstructured data may provide accurate, efficient, and contemporaneous training data sets in support of machine learning.
Objective
The objective of this study was to evaluate whether crowdsourcing can be used to accurately and efficiently develop training data sets in support of pharmacovigilance automation.
Materials and Methods
Pharmacovigilance experts created a reference dataset by reviewing 15,490 de-identified social media posts of narratives pertaining to 15 drugs and 22 medically relevant topics. A random sampling of posts from the reference dataset was published on Amazon Mechanical Turk and its users (Turkers) were asked a series of questions about those same medical concepts. Accuracy, price elasticity, and time efficiency were evaluated.
Results
Accuracy of crowdsourced curation exceeded 90% when compared to the reference dataset and was completed in about 5% of the time. There was an increase in time efficiency with higher pay, but there was no significant difference in accuracy. Additionally, having a social media post reviewed by more than one Turker (using a voting system) did not offer significant improvements in terms of accuracy.
Conclusions
Crowdsourcing is an accurate and efficient method that can be used to develop training data sets in support of pharmacovigilance automation. More research is needed to better understand the breadth and depth of possible uses as well as strengths, limitations, and generalizability of results.
Journal Article
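The study above compares single-annotator and voting-based crowd labels against an expert reference set. A toy sketch of that aggregation step, with made-up labels and data structures that are assumptions rather than the study's format:

```python
# Illustrative sketch: aggregate multiple crowd annotations per post by majority
# vote, then score the aggregated labels against expert reference labels.
from collections import Counter

def majority_vote(labels):
    """Most common label among crowd workers for one post."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(crowd_annotations, reference_labels):
    """Fraction of posts where the aggregated crowd label matches the expert label."""
    hits = sum(
        majority_vote(votes) == reference_labels[post_id]
        for post_id, votes in crowd_annotations.items()
    )
    return hits / len(crowd_annotations)

crowd = {"post_1": ["AE", "AE", "no AE"], "post_2": ["no AE", "no AE", "no AE"]}
reference = {"post_1": "AE", "post_2": "no AE"}
print(accuracy(crowd, reference))  # 1.0 for this toy example
```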
Engaging Patients via Online Healthcare Fora: Three Pharmacovigilance Use Cases
by Painter, Jeffery L.; Merico, Erin; Powell, Greg
in adverse event reporting; Adverse events; Case reports
2022
Increasingly, patient-generated safety insights are shared online, via general social media platforms or dedicated healthcare fora which give patients the opportunity to discuss their disease and treatment options. We evaluated three areas of potential interest for the use of social media in pharmacovigilance. To evaluate how social media may complement existing safety signal detection capabilities, we identified two use cases (drug/adverse event [AE] pairs) and then evaluated the frequency of AE discussions across a range of social media channels. Changes in frequency over time were noted in social media, then compared to frequency changes in Food and Drug Administration Adverse Event Reporting System (FAERS) data over the same time period using a traditional disproportionality method. Although both data sources showed increasing frequencies of AE discussions over time, the increase in frequency was greater in the FAERS data as compared to social media. To demonstrate the robustness of medical/AE insights of linked posts, we manually reviewed 2,817 threads containing 21,313 individual posts from 3,601 unique authors. Posts from the same authors were linked together. We used a quality scoring algorithm to determine the groups of linked posts with the highest quality and manually evaluated the top 16 groups of posts. Most linked posts (12/16; 75%) contained all seven relevant medical insights assessed compared to only one (of 1,672) individual post. To test the capability of actively engaging patients via social media to obtain follow-up AE information, we identified and sent consents for follow-up to 39 individuals (through a third party). We sent target follow-up questions (identified by pharmacovigilance experts as critical for causality assessment) to those who consented. The number of people consenting to follow-up was low (20%), but receipt of follow-up was high (75%). We observed completeness of responses (37 out of 37 questions answered) and short average time required to receive the follow-up (1.8 days). Our findings indicate a limited use of social media data for safety signal detection. However, our research highlights two areas of potential value to pharmacovigilance: obtaining more complete medical/AE insights via longitudinal post linking and actively obtaining rapid follow-up information on AEs.
Journal Article
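The record above refers to "a traditional disproportionality method" without naming one. The proportional reporting ratio (PRR) is one commonly used example; the sketch below shows its arithmetic with invented counts, not FAERS data:

```python
# Hedged example: the PRR is computed from a 2x2 contingency table of report
# counts. The paper does not specify which disproportionality method was used.

def prr(a, b, c, d):
    """
    Proportional reporting ratio from a 2x2 table:
      a = reports with drug of interest AND event of interest
      b = reports with drug of interest, other events
      c = reports with other drugs AND event of interest
      d = reports with other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))

print(round(prr(a=30, b=970, c=300, d=99700), 2))  # 10.0 with these toy counts
```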
Social Media Listening for Routine Post-Marketing Safety Surveillance
by Burstein, Phil J.; Reblin, Tjark; Powell, Gregory E.
in Collaboration; Drug Safety and Pharmacovigilance; Drug use
2016
Introduction
Post-marketing safety surveillance primarily relies on data from spontaneous adverse event reports, medical literature, and observational databases. Limitations of these data sources include potential under-reporting, lack of geographic diversity, and time lag between event occurrence and discovery. There is growing interest in exploring the use of social media (‘social listening’) to supplement established approaches for pharmacovigilance. Although social listening is commonly used for commercial purposes, there are only anecdotal reports of its use in pharmacovigilance. Health information posted online by patients is often publicly available, representing an untapped source of post-marketing safety data that could supplement data from existing sources.
Objectives
The objective of this paper is to describe one methodology that could help unlock the potential of social media for safety surveillance.
Methods
A third-party vendor acquired 24 months of publicly available Facebook and Twitter data, then processed the data by standardizing drug names and vernacular symptoms, removing duplicates and noise, masking personally identifiable information, and adding supplemental data to facilitate the review process. The resulting dataset was analyzed for safety and benefit information.
Results
In Twitter, a total of 6,441,679 Medical Dictionary for Regulatory Activities (MedDRA®) Preferred Terms (PTs) representing 702 individual PTs were discussed in the same post as a drug compared with 15,650,108 total PTs representing 946 individual PTs in Facebook. Further analysis revealed that 26% of posts also contained benefit information.
Conclusion
Social media listening is an important tool to augment post-marketing safety surveillance. Much work remains to determine best practices for using this rapidly evolving data source.
Journal Article
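The methods above describe standardizing drug names and vernacular symptoms, removing duplicates, and masking personally identifiable information. A minimal sketch of that kind of pre-processing, with mappings and patterns that are illustrative assumptions rather than the vendor's pipeline:

```python
# Hedged sketch: map vernacular terms to standard names, crudely mask obvious
# identifiers, and drop exact duplicates after cleaning.
import re

DRUG_SYNONYMS = {"tylenol": "paracetamol"}          # vernacular -> standard drug name
SYMPTOM_SYNONYMS = {"tummy ache": "abdominal pain"} # vernacular -> standard symptom term

def normalise(post: str) -> str:
    text = post.lower()
    for raw, std in {**DRUG_SYNONYMS, **SYMPTOM_SYNONYMS}.items():
        text = text.replace(raw, std)
    # Mask e-mail addresses and long digit runs as a crude stand-in for PII masking.
    text = re.sub(r"\S+@\S+", "[EMAIL]", text)
    text = re.sub(r"\d{6,}", "[NUMBER]", text)
    return text

posts = ["Tylenol gave me a tummy ache, email me at a@b.com",
         "tylenol gave me a tummy ache, email me at a@b.com"]
cleaned = {normalise(p) for p in posts}  # a set keeps one copy of exact duplicates
print(cleaned)
```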
Semi-automation of keratopathy visual acuity grading of corneal events in belantamab mafodotin clinical trials: clinical decision support software
by Painter, Jeffery L.; Talekar, Mala K.; Stein, Heather K.
in belamaf; belamaf eye examination; belantamab mafodotin
2023
Background
Belantamab mafodotin (belamaf) has demonstrated clinically meaningful antimyeloma activity in patients with heavily pretreated multiple myeloma. However, it is highly active against dividing cells, contributing to off-target adverse events, particularly ocular toxicity. Changes in best corrected visual acuity (BCVA) and corneal examination findings are routinely monitored to determine Keratopathy Visual Acuity (KVA) grade to inform belamaf dose modification.
Objective
We aimed to develop a semiautomated mobile app to facilitate the grading of ocular events in clinical trials involving belamaf.
Methods
The paper process was semiautomated by creating a library of finite-state automaton (FSA) models to represent all permutations of KVA grade changes from baseline BCVA readings. The transition states in the FSA models operated independently of eye measurement units (e.g., Snellen, logMAR, decimal) and provided a uniform approach to determining KVA grade changes. Together with the FSA, the complex decision tree for determining the grade change based on corneal examination findings was converted into logical statements for accurate and efficient overall KVA grade computation. First, a web-based user interface, conforming to clinical practice settings, was developed to simplify the input of key KVA grading criteria. Subsequently, a mobile app was developed that included additional guided steps to assist in clinical decision-making.
Results
The app underwent a robust Good Clinical Practice validation process. Outcomes were reviewed by key stakeholders, our belamaf medical lead, and the systems integration team. The time to compute a patient's overall KVA grade using the Belamaf Eye Exam (BEE) app was reduced from a 20- to 30-min process to <1–2 min. The BEE app was well received, with most investigators surveyed selecting “satisfied” or “highly satisfied” for its accuracy and time efficiency.
Conclusions
Our semiautomated approach provides for an accurate, simplified method of assessment of patients’ corneal status that reduces errors and quickly delivers information critical for potential belamaf dose modifications. The app is currently available on the Apple iOS and Android platforms for use by investigators of the DREAMM clinical trials, and its use could easily be extended to the clinic to support healthcare providers who need to make informed belamaf treatment decisions.
Journal Article
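The abstract above describes finite-state automaton models that map a current KVA grade and new eye-examination findings to an updated grade. A heavily simplified sketch of that state-transition idea; the states, observation buckets, and transitions are invented for illustration and are not the trials' actual grading criteria:

```python
# Hedged sketch of a KVA-style finite-state automaton: (current state, observation)
# deterministically maps to a next state. Observations bundle a bucketed change in
# BCVA lines from baseline with a bucketed corneal-exam severity. All values are
# illustrative assumptions only.

TRANSITIONS = {
    ("grade_1", ("<=1_line", "mild")): "grade_1",
    ("grade_1", ("2-3_lines", "moderate")): "grade_2",
    ("grade_2", ("2-3_lines", "moderate")): "grade_2",
    ("grade_2", (">=4_lines", "severe")): "grade_3",
    ("grade_3", ("<=1_line", "mild")): "grade_2",
}

def next_kva_state(state: str, observation: tuple[str, str]) -> str:
    """Advance the illustrative automaton; unknown combinations go to manual review."""
    return TRANSITIONS.get((state, observation), "manual_review")

state = "grade_1"
for obs in [("2-3_lines", "moderate"), (">=4_lines", "severe")]:
    state = next_kva_state(state, obs)
print(state)  # grade_3 for this toy sequence of visits
```

Encoding the transitions as a lookup table mirrors the paper's stated aim of keeping grade computation independent of the visual-acuity measurement unit: only the bucketing of observations would change, not the automaton itself.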