15 result(s) for "Juhlin, Kristina"
Comparison of Statistical Signal Detection Methods Within and Across Spontaneous Reporting Databases
Background: Most pharmacovigilance departments maintain a system to identify adverse drug reactions (ADRs) through analysis of spontaneous reports. The signal detection algorithms (SDAs) and the nature of the reporting databases vary between operators and it is unclear whether any algorithm can be expected to provide good performance in a wide range of environments.
Objective: The objective of this study was to compare the performance of commonly used algorithms across spontaneous reporting databases operated by pharmaceutical companies and national and international pharmacovigilance organisations.
Methods: 220 products were chosen and a reference set of ADRs was compiled. Within four company, one national and two international databases, 15 SDAs based on five disproportionality methods were tested. Signals of disproportionate reporting (SDRs) were calculated at monthly intervals and classified by comparison with the reference set. These results were summarised as sensitivity and precision for each algorithm in each database.
Results: Different algorithms performed differently between databases but no method dominated all others. Performance was strongly dependent on the thresholds used to define a statistical signal. However, the different disproportionality statistics did not influence the achievable performance. The relative performance of two algorithms was similar in different databases. Over the lifetime of a product there is a reduction in precision for any method.
Conclusions: In designing signal detection systems, careful consideration should be given to the criteria that are used to define an SDR. The choice of disproportionality statistic does not appreciably affect the achievable range of signal detection performance and so this can primarily be based on ease of implementation, interpretation and minimisation of computing resources. The changes in sensitivity and precision obtainable by replacing one algorithm with another are predictable. However, the absolute performance of a method is specific to the database and is best assessed directly on that database. New methods may be required to gain appreciable improvements.
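The abstract does not spell out the individual algorithms, but the core of any such SDA is a disproportionality statistic compared against a threshold. The Python sketch below illustrates this with the proportional reporting ratio (PRR); the counts, threshold and minimum-report rule are hypothetical and are not the 15 algorithms compared in the study.

```python
# Illustrative sketch only: one common disproportionality statistic (the PRR)
# with a threshold-based SDR rule. Counts and thresholds are hypothetical.

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 drug-event table.

    a: reports with the drug and the event of interest
    b: reports with the drug and other events
    c: reports without the drug but with the event
    d: reports without the drug and with other events
    """
    return (a / (a + b)) / (c / (c + d))

def is_sdr(a, b, c, d, prr_threshold=2.0, min_reports=3):
    """Flag a signal of disproportionate reporting (SDR) when the PRR exceeds
    a chosen threshold and enough reports have accumulated."""
    return a >= min_reports and prr(a, b, c, d) >= prr_threshold

# Hypothetical cumulative counts for one drug-event pair at one monthly interval
print(prr(a=12, b=488, c=300, d=99200))   # ~7.96
print(is_sdr(12, 488, 300, 99200))        # True
```

Sensitivity and precision would then follow from comparing the SDRs flagged at each monthly interval against the reference set of established ADRs, as described in the abstract.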
Improved Statistical Signal Detection in Pharmacovigilance by Combining Multiple Strength-of-Evidence Aspects in vigiRank
Background: Detection of unknown risks with marketed medicines is key to securing the optimal care of individual patients and to reducing the societal burden from adverse drug reactions. Large collections of individual case reports remain the primary source of information and require effective analytics to guide clinical assessors towards likely drug safety signals. Disproportionality analysis is based solely on aggregate numbers of reports and naively disregards report quality and content. However, these latter features are the very fundament of the ensuing clinical assessment.
Objective: Our objective was to develop and evaluate a data-driven screening algorithm for emerging drug safety signals that accounts for report quality and content.
Methods: vigiRank is a predictive model for emerging safety signals, here implemented with shrinkage logistic regression to identify predictive variables and estimate their respective contributions. The variables considered for inclusion capture different aspects of strength of evidence, including quality and clinical content of individual reports, as well as trends in time and geographic spread. A reference set of 264 positive controls (historical safety signals from 2003 to 2007) and 5,280 negative controls (pairs of drugs and adverse events not listed in the Summary of Product Characteristics of that drug in 2012) was used for model fitting and evaluation; the latter used fivefold cross-validation to protect against over-fitting. All analyses were performed on a reconstructed version of VigiBase® as of 31 December 2004, at around which time most safety signals in our reference set were emerging.
Results: The following aspects of strength of evidence were selected for inclusion into vigiRank: the numbers of informative and recent reports, respectively; disproportional reporting; the number of reports with free-text descriptions of the case; and the geographic spread of reporting. vigiRank offered a statistically significant improvement in area under the receiver operating characteristics curve (AUC) over screening based on the Information Component (IC) and raw numbers of reports, respectively (0.775 vs. 0.736 and 0.707, cross-validated).
Conclusions: Accounting for multiple aspects of strength of evidence has clear conceptual and empirical advantages over disproportionality analysis. vigiRank is a first-of-its-kind predictive model to factor in report quality and content in first-pass screening to better meet tomorrow’s post-marketing drug safety surveillance needs.
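As a rough illustration of the modelling strategy the abstract describes (a shrinkage logistic regression over strength-of-evidence variables, evaluated by five-fold cross-validated AUC), the Python sketch below uses scikit-learn on simulated data; the features and labels are placeholders, not the actual vigiRank variables or the VigiBase reference set.

```python
# Simulated-data sketch: L2-penalised ("shrinkage") logistic regression over
# strength-of-evidence features, evaluated with five-fold cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5544                                   # e.g. 264 positives + 5,280 negatives
X = rng.normal(size=(n, 5))                # stand-ins for: informative reports,
                                           # recent reports, disproportionality,
                                           # free-text reports, geographic spread
y = np.concatenate([np.ones(264), np.zeros(5280)]).astype(int)

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # ridge-type shrinkage
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())  # ~0.5 here, since X is pure noise
```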
Current Safety Concerns with Human Papillomavirus Vaccine: A Cluster Analysis of Reports in VigiBase
Introduction: A number of safety signals—complex regional pain syndrome (CRPS), postural orthostatic tachycardia syndrome (POTS), and chronic fatigue syndrome (CFS)—have emerged with human papillomavirus (HPV) vaccines, which share a similar pattern of symptomatology. Previous signal evaluations and epidemiological studies have largely relied on traditional methodologies and signals have been considered individually.
Objective: The aim of this study was to explore global reporting patterns for HPV vaccine for subgroups of reports with similar adverse event (AE) profiles.
Methods: All individual case safety reports (reports) for HPV vaccines in VigiBase® until 1 January 2015 were identified. A statistical cluster analysis algorithm was used to identify natural groupings based on AE profiles in a data-driven exploratory analysis. Clinical assessment of the clusters was performed to identify clusters relevant to current safety concerns.
Results: Overall, 54 clusters containing at least five reports were identified. The four largest clusters included 71 % of the analysed HPV reports and described AEs included in the product label. Four smaller clusters were identified to include case reports relevant to ongoing safety concerns (total of 694 cases). In all four of these clusters, the most commonly reported AE terms were headache and dizziness and fatigue or syncope; three of these four AE terms were reported in >50 % of the reports included in the clusters. These clusters had a higher proportion of serious cases compared with HPV reports overall (44–89 % in the clusters compared with 24 %). Furthermore, only a minority of reports included in these clusters included AE terms of diagnoses to explain these symptoms. Using proportional reporting ratios, the combination of headache and dizziness with either fatigue or syncope was found to be more commonly reported in HPV vaccine reports compared with non-HPV vaccine reports for females aged 9–25 years. This disproportionality remained when results were stratified by age and when those countries reporting the signals of CRPS (Japan) and POTS (Denmark) were excluded.
Conclusions: Cluster analysis reveals additional reports of AEs following HPV vaccination that are serious in nature and describe symptoms that overlap those reported in cases from the recent safety signals (POTS, CRPS, and CFS), but which do not report explicit diagnoses. While the causal association between HPV vaccination and these AEs remains uncertain, more extensive analyses of spontaneous reports can better identify the relevant case series for thorough signal evaluation.
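The abstract does not name the clustering algorithm used. Purely to illustrate the idea of "natural groupings based on AE profiles", the sketch below groups reports by adverse-event profile with generic agglomerative clustering on binary AE vectors and Jaccard distance; the reports and AE terms are invented.

```python
# Generic illustration only: group reports by adverse-event (AE) profile using
# agglomerative clustering on binary AE vectors with Jaccard distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Column order of the report vectors below (invented example terms)
ae_terms = ["headache", "dizziness", "fatigue", "syncope", "injection site pain"]

# One row per report; True = the AE term appears on the report (simulated)
reports = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1],
    [1, 1, 1, 1, 0],
], dtype=bool)

dissimilarity = pdist(reports, metric="jaccard")   # pairwise AE-profile distance
tree = linkage(dissimilarity, method="average")
labels = fcluster(tree, t=0.6, criterion="distance")
print(labels)   # reports with similar AE profiles share a cluster label
```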
Good Signal Detection Practices: Evidence from IMI PROTECT
Over a period of 5 years, the Innovative Medicines Initiative PROTECT (Pharmacoepidemiological Research on Outcomes of Therapeutics by a European ConsorTium) project has addressed key research questions relevant to the science of safety signal detection. The results of studies conducted into quantitative signal detection in spontaneous reporting, clinical trial and electronic health records databases are summarised and 39 recommendations have been formulated, many based on comparative analyses across a range of databases (e.g. regulatory, pharmaceutical company). The recommendations point to pragmatic steps that those working in the pharmacovigilance community can take to improve signal detection practices, whether in a national or international agency or in a pharmaceutical company setting. PROTECT has also pointed to areas of potentially fruitful future research and some areas where further effort is likely to yield less.
Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases
Introduction: Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial.
Objective: The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics.
Methods: Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source.
Results: Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases.
Conclusions: Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
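The Python sketch below illustrates the distinction the abstract draws between crude, stratified and subgroup analyses, using a simple observed-to-expected (O/E) ratio as the disproportionality measure; the counts are hypothetical, and real analyses would use PRR, ROR or IC statistics.

```python
# Hypothetical-count sketch of crude vs. stratified vs. subgroup analysis.

def expected(drug_reports, event_reports, total_reports):
    """Expected drug-event reports under independence of drug and event."""
    return drug_reports * event_reports / total_reports

# Per-stratum counts for one drug-event pair (all invented):
# (observed drug-event reports, drug reports, event reports, stratum total)
strata = {
    "age<18":  (10, 400, 400, 20000),
    "age>=18": (30, 800, 8000, 80000),
}

# Crude analysis: collapse the strata before computing O/E
obs = sum(o for o, *_ in strata.values())
exp_crude = expected(sum(d for _, d, _, _ in strata.values()),
                     sum(e for _, _, e, _ in strata.values()),
                     sum(n for *_, n in strata.values()))
print("crude O/E:", round(obs / exp_crude, 2))          # ~0.40

# Stratified analysis: one adjusted estimate, pooling stratum-specific expectations
exp_strat = sum(expected(d, e, n) for _, d, e, n in strata.values())
print("stratified O/E:", round(obs / exp_strat, 2))     # ~0.45

# Subgroup analysis: a separate estimate (and signal decision) per stratum
for name, (o, d, e, n) in strata.items():
    print(f"subgroup {name} O/E:", round(o / expected(d, e, n), 2))  # 1.25 and 0.38
```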
Zoo or Savannah? Choice of Training Ground for Evidence-Based Pharmacovigilance
Pharmacovigilance seeks to detect and describe adverse drug reactions early. Ideally, we would like to see objective evidence that a chosen signal detection approach can be expected to be effective. The development and evaluation of evidence-based methods require benchmarks for signal detection performance, and recent years have seen unprecedented efforts to build such reference sets. Here, we argue that evaluation should be made against emerging and not established adverse drug reactions, and we present real-world examples that illustrate the relevance of this to pharmacovigilance methods development for both individual case reports and longitudinal health records. The establishment of broader reference sets of emerging safety signals must be made a top priority to achieve more effective pharmacovigilance methods development and evaluation.
Using VigiBase to Identify Substandard Medicines: Detection Capacity and Key Prerequisites
Background: Substandard medicines, whether the result of intentional manipulation or lack of compliance with good manufacturing practice (GMP) or good distribution practice (GDP), pose a significant potential threat to patient safety. Spontaneous adverse drug reaction reporting systems can contribute to identification of quality problems that cause unwanted and/or harmful effects, and to identification of clusters of lack of efficacy. In 2011, the Uppsala Monitoring Centre (UMC) constructed a novel algorithm to identify reporting patterns suggestive of substandard medicines in spontaneous reporting, and applied it to VigiBase®, the World Health Organization’s global individual case safety report database. The algorithm identified some historical clusters related to substandard products, which were later able to be confirmed in the literature or by contact with national centres (NCs). As relevant and detailed information is often lacking in the VigiBase reports but might be available at the reporting NC, further evaluation of the algorithm was undertaken with involvement from NCs.
Objective: To evaluate the effectiveness of an algorithm that identifies clusters of potentially substandard medicines, when these are assessed directly at the NC concerned.
Methods: The algorithm identifies countries and time periods with disproportionately high reporting of product inadequacy. NCs with at least 20 clusters were eligible to participate in the study, and six NCs—those in the Republic of Korea, Malaysia, Singapore, South Africa, the UK and the USA—were selected, taking into account the geographical spread and prevalence of recent clusters. The clusters were systematically assessed at the NCs, following a standardized protocol, and then compiled centrally at the UMC. The clusters were classified as ‘confirmed’, ‘potential’ or ‘unlikely’ substandard products; or as ‘confirmed not substandard’ when confirmed by an investigation; or as ‘indecisive’ when the information available did not allow a sound assessment even at the NC.
Results: The assessment of a total of 147 clusters resulted in 8 confirmed, 12 potential and 51 unlikely substandard products, and a further 19 clusters were confirmed as not substandard. Reflecting the difficulty of evaluating suspected substandard products retrospectively when additional information from the primary reporter, as well as samples, are no longer available, 57 clusters were classified as indecisive.
Conclusion: While application of the algorithm to VigiBase allowed identification of some substandard medicines, some key prerequisites have been identified that need to be fulfilled at the national level for the algorithm to be useful in practice. Such key factors are fast handling and transfer of incoming reports into VigiBase, detailed information on the product and its distribution channels, the possibility of contacting primary reporters for further information, availability of samples of suspected products and laboratory capacity to analyse suspected products.
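The exact UMC algorithm is not reproduced in the abstract; the sketch below only illustrates the stated idea of flagging countries and time periods with disproportionately high reporting of product inadequacy. The counts and thresholds are invented.

```python
# Schematic only: flag (country, quarter) cells whose product-inadequacy
# reporting rate greatly exceeds the overall rate. All numbers are invented.

# (country, quarter, total reports, reports coded as product inadequacy) - simulated
counts = [
    ("SE", "2014Q1", 5000, 12),
    ("SE", "2014Q2", 5200, 10),
    ("MY", "2014Q1", 1500, 4),
    ("MY", "2014Q2", 1400, 45),   # unusually high for this country and quarter
]

overall_rate = sum(pi for *_, pi in counts) / sum(total for _, _, total, _ in counts)

for country, quarter, total, inadequate in counts:
    rate_ratio = (inadequate / total) / overall_rate
    if inadequate >= 10 and rate_ratio >= 3:          # hypothetical thresholds
        print(f"possible substandard-product cluster: {country} {quarter} "
              f"(rate ratio {rate_ratio:.1f})")
```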
Empirical Performance of the Calibrated Self-Controlled Cohort Analysis Within Temporal Pattern Discovery: Lessons for Developing a Risk Identification and Analysis System
Background: Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with those during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the different time periods, thus combining within- and between-patient confounder adjustment in a single measure.
Objectives: To evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data.
Research Design: Different implementations of the calibrated self-controlled cohort analysis were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across 4 health outcomes of interest) in 5 real observational databases (four with administrative claims and one with electronic health records).
Measures: Performance was evaluated on real data through sensitivity/specificity, the area under the receiver operating characteristic curve (AUC), and bias.
Results: The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. The estimates for negative test cases were generally unbiased, but a minor negative bias of up to 0.2 on the RR scale was observed with the configurations using multiple control periods, for acute liver injury and upper gastrointestinal bleeding.
Conclusions: The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.
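As a schematic of the self-controlled contrast described in the abstract (an observed-to-expected ratio in a post-exposure surveillance window set against a pre-exposure control window, with the expectation taken from an external comparator), the Python sketch below uses invented counts and a single control period; real Temporal Pattern Discovery implementations add shrinkage and may use multiple control periods.

```python
# Invented-count sketch of a calibrated self-controlled O/E contrast.
import math

def obs_exp_ratio(observed, person_days, background_rate):
    """O/E ratio for one window, with the expectation derived from an
    external comparator group's event rate."""
    return observed / (background_rate * person_days)

background_rate = 0.002   # events per person-day in the external comparator (invented)

# Post-exposure surveillance window (e.g. 360 days after a new prescription)
oe_post = obs_exp_ratio(observed=90, person_days=36000, background_rate=background_rate)

# Pre-exposure control window (e.g. 180 days before the prescription)
oe_pre = obs_exp_ratio(observed=40, person_days=36000, background_rate=background_rate)

# Within-patient contrast on the log2 scale, in the spirit of the IC-type
# statistics used in Temporal Pattern Discovery
contrast = math.log2(oe_post / oe_pre)
print(f"post O/E = {oe_post:.2f}, pre O/E = {oe_pre:.2f}, log2 contrast = {contrast:.2f}")
```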
Comment on: "Zoo or Savannah? Choice of Training Ground for Evidence-Based Pharmacovigilance" / Authors' Reply to Harpaz et al. Comment on: "Zoo or Savannah? Choice of Training Ground for Evidence-Based Pharmacovigilance"
Norén et al. argue that signal detection is fundamentally a prognostic activity. [...] evaluation strategies should aim to emulate a prospective analysis of signal detection in lieu of a commonly applied yet unsatisfactory approach of retrospective analysis based on well-established associations such as those comprising the Observational Medical Outcomes Partnership (OMOP) [2] and EU-ADR benchmarks [3]. [...] the status of a recently labeled ADR (positive control in some benchmarks) may be revised based on new refuting evidence. [...] the increased level of uncertainty associated with experiments based on such recently labeled or emerging ADRs cannot be ignored. [...] consideration of timeliness does not eliminate the need to distinguish between emerging and established adverse drug reactions; we would not recommend a comparison of statistical signal detection methods based on how early in the post-marketing phase they signalled adverse drug reactions that were known already from pre-marketing clinical trials.