10 results for "Dalgic, Ozden O."
Deriving effective vaccine allocation strategies for pandemic influenza: Comparison of an agent-based simulation and a compartmental model
Individuals are prioritized based on their risk profiles when allocating limited vaccine stocks during an influenza pandemic. Computationally expensive but realistic agent-based simulations and fast but stylized compartmental models are typically used to derive effective vaccine allocation strategies. A detailed comparison of these two approaches, however, is often omitted. We derive age-specific vaccine allocation strategies to mitigate a pandemic influenza outbreak in Seattle by applying derivative-free optimization to an agent-based simulation and also to a compartmental model. We compare the strategies derived by these two approaches under various infection aggressiveness and vaccine coverage scenarios. We observe that both approaches primarily vaccinate school children; however, they may allocate the remaining vaccines in different ways. The vaccine allocation strategies derived using the agent-based simulation are associated with up to a 70% decrease in total cost and a 34% reduction in the number of infections compared with the strategies derived using the compartmental model. Nevertheless, the latter approach may still be competitive for very low and/or very high infection aggressiveness. Our results provide insight into potential differences between vaccine allocation strategies derived using agent-based simulations and those derived using compartmental models.
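To illustrate the "fast but stylized" side of the comparison above, the sketch below implements a minimal two-group SIR compartmental model and scores two vaccine allocations. All group sizes, transmission rates, and the all-or-nothing vaccine assumption are invented for illustration; this is not the authors' model.

```python
# A minimal two-group SIR compartmental model, illustrating how a stylized
# compartmental model can score an age-specific vaccine allocation.
# All parameters are illustrative, not taken from the paper.

def simulate_new_infections(vaccine_doses, pop, beta, gamma=0.25, days=200):
    """Return new infections when `vaccine_doses[g]` doses go to group g.

    pop[g]     : group sizes (e.g., [children, adults])
    beta[g][h] : daily transmission rate from group h to group g
    gamma      : daily recovery rate
    """
    n = len(pop)
    # Vaccinated individuals start in the removed class (all-or-nothing vaccine).
    S = [pop[g] - min(vaccine_doses[g], pop[g]) for g in range(n)]
    I = [1.0] * n                      # seed one infection per group
    S0 = list(S)                       # susceptibles after vaccination
    for _ in range(days):              # simple daily Euler steps
        new_inf = [S[g] * sum(beta[g][h] * I[h] / pop[h] for h in range(n))
                   for g in range(n)]
        new_rec = [gamma * I[g] for g in range(n)]
        for g in range(n):
            S[g] -= new_inf[g]
            I[g] += new_inf[g] - new_rec[g]
    return sum(S0[g] - S[g] for g in range(n))   # cumulative new infections

# Children mix intensely here, so prioritizing them reduces total infections.
pop = [2e5, 8e5]                       # [children, adults]
beta = [[0.6, 0.15], [0.15, 0.25]]     # higher within-child transmission
kids_first = simulate_new_infections([1e5, 0], pop, beta)
adults_first = simulate_new_infections([0, 1e5], pop, beta)
```

With these illustrative contact rates, allocating the doses to children yields fewer total infections, mirroring the abstract's finding that both modeling approaches primarily vaccinate school children.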
Projecting COVID-19 Mortality as States Relax Nonpharmacologic Interventions
Importance: A key question for policy makers and the public is what to expect from the COVID-19 pandemic going forward as states lift nonpharmacologic interventions (NPIs), such as indoor mask mandates, to prevent COVID-19 transmission. Objective: To project COVID-19 deaths between March 1, 2022, and December 31, 2022, in each of the 50 US states, the District of Columbia, and Puerto Rico, assuming different dates of lifting of mask mandates and NPIs. Design, Setting, and Participants: This simulation modeling study used the COVID-19 Policy Simulator compartmental model to project COVID-19 deaths from March 1, 2022, to December 31, 2022, using simulated populations in the 50 US states, the District of Columbia, and Puerto Rico. The model projected current epidemiologic trends for each state until December 31, 2022, assuming the current pace of vaccination is maintained into the future and modeling different dates of lifting NPIs. Exposures: Date of lifting statewide NPI mandates as March 1, April 1, May 1, June 1, or July 1, 2022. Main Outcomes and Measures: Projected COVID-19 incident deaths from March to December 2022. Results: With the high transmissibility of the currently circulating SARS-CoV-2 variants, the simulated lifting of NPIs in March 2022 was associated with resurgences of COVID-19 deaths in nearly every state. In comparison, delaying the lifting of NPIs by even 1 month, to April 2022, was estimated to mitigate the amplitude of the surge. For most states, however, no amount of delay was estimated to be sufficient to prevent a surge in deaths completely. The primary factor associated with recurrent epidemics in the simulation was the assumed high effective reproduction number of unmitigated viral transmission. With a lower level of transmissibility, similar to that of the ancestral strains, the model estimated that most states could remove NPIs in March 2022 and likely not see recurrent surges.
Conclusions and Relevance: This simulation study estimated that the SARS-CoV-2 virus would likely continue to take a major toll in the US, even as cases continued to decrease. Because of the high transmissibility of the recent Delta and Omicron variants, premature lifting of NPIs could pose a substantial threat of rebounding surges in morbidity and mortality. At the same time, continued delay in lifting NPIs may not prevent future surges.
Tollgate-based progression pathways of ALS patients
Objective: To capture ALS progression in arm, leg, speech, swallowing, and breathing segments using a disease-specific staging system, namely the tollgate-based ALS staging system (TASS), where tollgates refer to a set of critical clinical events such as having slight weakness in the arms, needing a wheelchair, and needing a feeding tube. Methods: We compiled a longitudinal dataset from medical records, including free-text clinical notes, of 514 ALS patients from Mayo Clinic, Rochester, MN. We derived tollgate-based progression pathways of patients up to a 1-year period starting from the first clinic visit. We conducted Kaplan–Meier analyses to estimate the probability of passing each tollgate over time for each functional segment. Results: At their first clinic visit, 93%, 77%, and 60% of patients displayed some level of limb, bulbar, and breathing weakness, respectively. The proportion of patients at milder tollgate levels (tollgate level < 2) was smaller for the arm and leg segments (38% and 46%, respectively) than for the others (> 65%). Patients showed non-uniform TASS pathways, i.e., the likelihood of passing a tollgate differed based on the segments affected at the initial visit. For instance, stratified by impaired segments at the initial visit, patients with limb and breathing impairment were more likely (62%) to use a bi-level positive airway pressure device within a year than those with bulbar and breathing impairment (26%). Conclusion: Using TASS, clinicians can inform ALS patients about their individualized likelihood of developing critical disabilities and assistive-device needs (e.g., becoming dependent on a wheelchair or ventilation, needing a walker, wheelchair, or communication device), and help them better prepare for the future.
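The Kaplan–Meier analysis mentioned in the abstract can be sketched in a few lines. The estimator below is hand-rolled and the ten patients are hypothetical; "event" here means passing a tollgate, and patients still event-free at last follow-up are right-censored.

```python
# A hand-rolled Kaplan-Meier estimator, sketching how the probability of
# passing a tollgate (e.g., "needs a wheelchair") over time could be
# estimated from right-censored follow-up data. Data below are made up.

def kaplan_meier(times, events):
    """Return [(t, S(t))]: S(t) = P(tollgate not yet passed by time t).

    times  : follow-up time for each patient (months)
    events : 1 if the tollgate was passed at times[i], 0 if censored
    """
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        # d = tollgate passages at t; n = patients still at risk just before t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)
        if d:
            surv *= 1 - d / n           # product-limit update
            curve.append((t, surv))
    return curve

# Ten hypothetical patients: months to tollgate passage (or censoring).
times  = [2, 3, 3, 5, 6, 6, 8, 10, 12, 12]
events = [1, 1, 0, 1, 1, 1, 0, 1, 0, 0]
curve = kaplan_meier(times, events)    # first step: (2, 0.9)
```

The complement, 1 − S(t), is the estimated probability of having passed the tollgate by time t, which is the quantity the study reports per functional segment.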
Challenges of COVID-19 Case Forecasting in the US, 2020–2021
During the COVID-19 pandemic, forecasting COVID-19 trends to support planning and response was a priority for scientists and decision makers alike. In the United States, COVID-19 forecasting was coordinated by a large group of universities, companies, and government entities led by the Centers for Disease Control and Prevention and the US COVID-19 Forecast Hub ( https://covid19forecasthub.org ). We evaluated approximately 9.7 million forecasts of weekly state-level COVID-19 cases for predictions 1–4 weeks into the future submitted by 24 teams from August 2020 to December 2021. We assessed coverage of central prediction intervals and weighted interval scores (WIS), adjusting for missing forecasts relative to a baseline forecast, and used a Gaussian generalized estimating equation (GEE) model to evaluate differences in skill across epidemic phases that were defined by the effective reproduction number. Overall, we found high variation in skill across individual models, with ensemble-based forecasts outperforming other approaches. Forecast skill relative to the baseline was generally higher for larger jurisdictions (e.g., states compared to counties). Over time, forecasts generally performed worst in periods of rapid changes in reported cases (either in increasing or decreasing epidemic phases) with 95% prediction interval coverage dropping below 50% during the growth phases of the winter 2020, Delta, and Omicron waves. Ideally, case forecasts could serve as a leading indicator of changes in transmission dynamics. However, while most COVID-19 case forecasts outperformed a naïve baseline model, even the most accurate case forecasts were unreliable in key phases. Further research could improve forecasts of leading indicators, like COVID-19 cases, by leveraging additional real-time data, addressing performance across phases, improving the characterization of forecast confidence, and ensuring that forecasts were coherent across spatial scales. 
In the meantime, it is critical for forecast users to appreciate current limitations and use a broad set of indicators to inform pandemic-related decision making.
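The weighted interval score (WIS) used to evaluate these forecasts has a standard definition (a weighted sum of interval scores plus an absolute-error term for the median). The sketch below follows that definition; the forecast and observation numbers are illustrative, not drawn from the evaluation.

```python
# A sketch of the weighted interval score (WIS) for evaluating
# probabilistic forecasts. Forecast values below are illustrative.

def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval:
    width plus a penalty of (2/alpha) per unit the observation lies outside."""
    score = upper - lower
    if y < lower:
        score += (2 / alpha) * (lower - y)
    elif y > upper:
        score += (2 / alpha) * (y - upper)
    return score

def weighted_interval_score(median, intervals, y):
    """intervals: {alpha: (lower, upper)} for each central interval."""
    K = len(intervals)
    total = 0.5 * abs(y - median)                 # median term, weight 1/2
    for alpha, (lo, hi) in intervals.items():
        total += (alpha / 2) * interval_score(lo, hi, y, alpha)
    return total / (K + 0.5)

# Forecast: median 100 cases, 50% PI [90, 115], 95% PI [70, 140]; observed 150.
wis = weighted_interval_score(100, {0.5: (90, 115), 0.05: (70, 140)}, 150)
```

An observation outside the 95% interval, as here, is penalized heavily; coverage of these intervals dropping below 50% during growth phases is exactly the failure mode the abstract describes.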
Improved Health Outcomes from Hepatitis C Treatment Scale-Up in Spain’s Prisons: A Cost-Effectiveness Study
Hepatitis C virus (HCV) is 15 times more prevalent among persons in Spain’s prisons than in the community. Recently, Spain initiated a pilot program, JAILFREE-C, to treat HCV in prisons using direct-acting antivirals (DAAs). Our aim was to identify a cost-effective strategy to scale-up HCV treatment in all prisons. Using a validated agent-based model, we simulated the HCV landscape in Spain’s prisons considering disease transmission, screening, treatment, and prison-community dynamics. Costs and disease outcomes under status quo were compared with strategies to scale-up treatment in prisons considering prioritization (HCV fibrosis stage vs. HCV prevalence of prisons), treatment capacity (2,000/year vs. unlimited) and treatment initiation based on sentence lengths (>6 months vs. any). Scaling-up treatment by treating all incarcerated persons irrespective of their sentence length provided maximum health benefits, preventing 10,200 new cases of HCV and 8,300 HCV-related deaths between 2019 and 2050; 90% of the deaths prevented would have occurred in the community. Compared with status quo, this strategy increased quality-adjusted life-years (QALYs) by 69,700 and costs by €670 million, yielding an incremental cost-effectiveness ratio of €9,600/QALY. Scaling-up HCV treatment with DAAs for the entire Spanish prison population, irrespective of sentence length, is cost-effective and would reduce HCV burden.
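The reported incremental cost-effectiveness ratio can be reproduced directly from the incremental cost and QALY figures in the abstract:

```python
# ICER = incremental cost / incremental QALYs, using the abstract's figures.
delta_cost = 670_000_000    # €670 million additional cost vs. status quo
delta_qalys = 69_700        # additional quality-adjusted life-years
icer = delta_cost / delta_qalys
# ≈ €9,612/QALY, which rounds to the €9,600/QALY reported in the abstract.
```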
Analysis of a Simulation Model to Estimate Long-term Outcomes in Patients with Nonalcoholic Fatty Liver Disease
Quantitative assessment of disease progression in patients with nonalcoholic fatty liver disease (NAFLD) has not been systematically examined using competing liver-related and non-liver-related mortality. To estimate long-term outcomes in NAFLD, accounting for competing liver-related and non-liver-related mortality associated with the different fibrosis stages of NAFLD using a simulated patient population. This decision analytical modeling study used individual-level state-transition simulation analysis and was conducted from September 1, 2017, to September 1, 2021. A publicly available interactive tool, dubbed the NAFLD Simulator, was developed that simulates the natural history of NAFLD by age and fibrosis stage at the time of (hypothetical) diagnosis defined by liver biopsy. Model health states were defined by fibrosis states F0 to F4, decompensated cirrhosis, hepatocellular carcinoma (HCC), and liver transplant. Simulated patients could experience nonalcoholic steatohepatitis resolution, and their fibrosis stage could progress or regress. Transition probabilities between states were estimated from the literature as well as calibration, and the model reproduced the outcomes of a large observational study. The model simulated the natural history of NAFLD; main outcomes were life expectancy; all-cause, liver-related, and non-liver-related mortality; and cumulative incidence of decompensated cirrhosis and/or HCC. The model included 1 000 000 simulated patients with a mean (range) age of 49 (18-75) years at baseline, including 66% women. The life expectancy of patients aged 49 years was 25.3 (95% CI, 20.1-29.8) years for those with F0, 25.1 (95% CI, 20.1-29.4) years for those with F1, 23.6 (95% CI, 18.3-28.2) years for those with F2, 21.1 (95% CI, 15.6-26.3) years for those with F3, and 13.8 (95% CI, 10.3-17.6) years for those with F4 at the time of diagnosis.
The estimated 10-year liver-related mortality was 0.1% (95% uncertainty interval [UI], <0.1%-0.2%) in F0, 0.2% (95% UI, 0.1%-0.4%) in F1, 1.0% (95% UI, 0.6%-1.7%) in F2, 4.0% (95% UI, 2.5%-5.9%) in F3, and 29.3% (95% UI, 21.8%-35.9%) in F4. The corresponding 10-year non-liver-related mortality was 1.8% (95% UI, 0.6%-5.0%) in F0, 2.4% (95% UI, 0.8%-6.3%) in F1, 5.2% (95% UI, 2.0%-11.9%) in F2, 9.7% (95% UI, 4.3%-18.1%) in F3, and 15.6% (95% UI, 10.1%-21.7%) in F4. Among patients aged 65 years, estimated 10-year non-liver-related mortality was higher than liver-related mortality in all fibrosis stages (eg, F2: 16.7% vs 0.8%; F3: 28.8% vs 3.0%; F4: 40.8% vs 21.9%). This decision analytic model study simulated stage-specific long-term outcomes, including liver- and non-liver-related mortality in patients with NAFLD. Depending on age and fibrosis stage, non-liver-related mortality was higher than liver-related mortality in patients with NAFLD. By translating surrogate markers into clinical outcomes, the NAFLD Simulator could be used as an educational tool among patients and clinicians to increase awareness of the health consequences of NAFLD.
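The individual-level state-transition approach described above can be sketched as a Monte Carlo simulation over fibrosis stages. The structure below (stages that progress, regress, or transition to death) mirrors the abstract's description, but every transition probability is invented for illustration; the real model calibrates these to the literature and observational outcomes.

```python
# A minimal individual-level state-transition sketch: fibrosis stages
# F0-F4 that can progress, regress, or lead to death. All annual
# transition probabilities below are invented, not the model's values.
import random

# annual transition probabilities from each state; remainder = stay put
P = {
    "F0": {"F1": 0.10, "dead": 0.010},
    "F1": {"F2": 0.08, "F0": 0.03, "dead": 0.012},
    "F2": {"F3": 0.08, "F1": 0.03, "dead": 0.015},
    "F3": {"F4": 0.10, "F2": 0.02, "dead": 0.030},
    "F4": {"dead": 0.10},
}

def simulate_patient(start, years, rng):
    """Simulate one patient for `years` annual cycles; return years alive."""
    state, alive_years = start, 0
    for _ in range(years):
        if state == "dead":
            break
        alive_years += 1
        r, cum = rng.random(), 0.0
        for nxt, p in P[state].items():      # sample the annual transition
            cum += p
            if r < cum:
                state = nxt
                break
    return alive_years

rng = random.Random(0)
le_f0 = sum(simulate_patient("F0", 40, rng) for _ in range(5000)) / 5000
le_f4 = sum(simulate_patient("F4", 40, rng) for _ in range(5000)) / 5000
# Patients starting at F4 have a much shorter simulated life expectancy,
# qualitatively matching the stage gradient reported in the abstract.
```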
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
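One simple way to build the kind of multimodel ensemble the abstract describes is to take, at each quantile level, the median of the submitted forecasts (a quantile-wise median was among the combination rules used by the Forecast Hub). The sketch below uses three made-up model submissions:

```python
# A quantile-wise median ensemble: combine probabilistic forecasts from
# several models by taking the median value at each quantile level.
# The three model submissions below are invented for illustration.
from statistics import median

def ensemble(forecasts):
    """forecasts: list of {quantile_level: value} dicts, one per model."""
    levels = forecasts[0].keys()
    return {q: median(f[q] for f in forecasts) for q in levels}

model_a = {0.025: 40, 0.5: 100, 0.975: 180}
model_b = {0.025: 60, 0.5: 120, 0.975: 260}
model_c = {0.025: 55, 0.5: 90,  0.975: 200}
combined = ensemble([model_a, model_b, model_c])
```

Taking medians quantile by quantile damps the influence of any single outlying model, one reason ensemble forecasts tended to be the most consistently accurate.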
The United States COVID-19 Forecast Hub dataset
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
Prognostic factors affecting ALS progression through disease tollgates
Background and objectives: To understand the factors affecting the timing of critical clinical events in ALS progression. Methods: We captured ALS progression based on the timing of critical events (tollgates) by augmenting 6366 patients’ data from the PRO-ACT database with tollgate-passed information using classification. Time trajectories of passing ALS tollgates after the first visit were derived using Kaplan–Meier analyses. Significant prognostic factors were identified using log-rank tests. Decision-tree-based classifications identified significant ALS phenotypes characterized by the body segments involved at the first visit. Results: Standard (e.g., gender and onset type) and tollgate-related (phenotype and initial tollgate level) prognostic factors affect the timing of ALS tollgates. For instance, by the third year after the first visit, 80–100% of bulbar-onset patients vs. 43–48% of limb-onset patients, and 65–73% of females vs. 42–49% of males, lost the ability to talk and started using a feeding tube. Compared with the standard factors, tollgate-related factors had a stronger effect on ALS progression. The initial impairment level significantly impacted subsequent ALS progression in a segment, while affected segment combinations further characterized progression speed. For instance, patients with normal speech (Tollgate Level 0) at the first visit had less than a 10% likelihood of losing speech within a year, while for patients with Tollgate Level 1 (affected speech), this likelihood varied between 23 and 53% based on additional segment (leg) involvement. Conclusions: Tollgate- and phenotype-related factors have a strong effect on the timing of ALS tollgates. All factors should be jointly considered to better characterize patient groups with different progression aggressiveness.