120,893 result(s) for "Net benefit"
Factors influencing short-term effectiveness and efficiency of the care provided by Dutch general practice mental health professionals
Introduction: This study examined whether factors related to general practice mental health professionals (GP-MHPs), that is, characteristics of the professional, the function, and the care provided, were associated with short-term effectiveness and efficiency of the care provided by GP-MHPs to adults in Dutch general practice. Methods: A prospective cohort study was conducted among 320 adults with anxiety or depressive symptoms who had an intake consultation with GP-MHPs (n = 64). Effectiveness was measured in terms of change in quality-adjusted life years (QALYs) 3 months after intake, and efficiency in terms of net monetary benefit (NMB) at 3-month follow-up. A range of GP-MHP-related predictors and patient-related confounders was considered. Results: Patients gained on average 0.022 QALYs at 3-month follow-up. The mean total costs per patient during the 3-month follow-up period (€3,864; 95% confidence interval [CI]: €3,196–€4,731) were lower than during the 3 months before intake (€5,220; 95% CI: €4,639–€5,925), resulting largely from an increase in productivity. Providing mindfulness and/or relaxation exercises was associated with a QALY decrement. Having longer work experience as a GP-MHP (≥2 years) and having 10–20 years of work experience as a mental health care professional were negatively associated with NMB. Furthermore, a higher number of homework exercises tended to be related to less efficient care. Finally, being self-employed and being seconded from an organization in which primary care and mental health care organizations collaborate were related to a positive NMB, while being seconded from a mental health organization tended towards such a relationship. Conclusions: Findings seem to imply that the care provided by GP-MHPs contributes to improving patients’ functioning. Some GP-MHP-related characteristics appear to influence short-term effectiveness and efficiency of the care provided. 
Further research is needed to confirm and better explain these findings and to examine longer-term effects.
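Since the abstract measures efficiency as net monetary benefit (NMB), a minimal sketch of that calculation may help. This is not the study's actual analysis: the willingness-to-pay threshold `lam` below is a hypothetical value chosen for illustration, and only the QALY gain and costs echo the abstract's point estimates.

```python
def net_monetary_benefit(delta_qaly, delta_cost, lam):
    """NMB = lambda * QALY gain - incremental cost; positive values
    mean the care is efficient at willingness-to-pay lambda."""
    return lam * delta_qaly - delta_cost

# Point estimates from the abstract: 0.022 QALYs gained; mean costs
# fell from EUR 5,220 (3 months before intake) to EUR 3,864 (follow-up).
delta_qaly = 0.022
delta_cost = 3864 - 5220   # negative: costs decreased
lam = 50_000               # hypothetical EUR-per-QALY threshold

nmb = net_monetary_benefit(delta_qaly, delta_cost, lam)
print(round(nmb))  # 2456: QALY gain and cost savings both contribute
```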
External validation of clinical prediction models: simulation-based sample size calculations were more reliable than rules-of-thumb
• After a clinical prediction model is developed, it is usually necessary to undertake an external validation study that examines the model's performance in new data from the same or a different population. External validation studies should have an appropriate sample size, in order to estimate model performance measures precisely for calibration, discrimination and clinical utility.
• Rules-of-thumb suggest at least 100 events and 100 nonevents. Such blanket guidance is imprecise, and not specific to the model or validation setting.
• Our work shows that precision of performance estimates is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Furthermore, sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration.
• Our new proposal uses a simulation-based sample size calculation, which accounts for the LP distribution and (mis)calibration in the validation sample, and calculates the sample size (and events) required conditional on these factors.
• The approach requires the researcher to specify the desired precision for each performance measure of interest (calibration, discrimination, net benefit, etc.), the model's anticipated LP distribution in the validation population, and whether or not the model is well calibrated. Guidance for how to specify these values is given, and R and Stata code is provided.

Sample size “rules-of-thumb” for external validation of clinical prediction models suggest at least 100 events and 100 non-events. Such blanket guidance is imprecise, and not specific to the model or validation setting. We investigate factors affecting precision of model performance estimates upon external validation, and propose a more tailored sample size approach. We simulate logistic regression prediction models to investigate factors associated with precision of performance estimates. 
We then explain and illustrate a simulation-based approach to calculate the minimum sample size required to precisely estimate a model's calibration, discrimination and clinical utility. Precision is affected by the model's linear predictor (LP) distribution, in addition to the number of events and total sample size. Sample sizes of 100 (or even 200) events and non-events can give imprecise estimates, especially for calibration. The simulation-based calculation accounts for the LP distribution and (mis)calibration in the validation sample. Applying the approach identifies 2430 required participants (531 events) for external validation of a deep vein thrombosis diagnostic model. Where researchers can anticipate the distribution of the model's LP (eg, based on the development sample, or a pilot study), a simulation-based approach for calculating sample size for external validation offers more flexibility and reliability than rules-of-thumb.
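The simulation-based idea can be sketched in a few lines. This is a minimal illustration of the principle, not the R/Stata code the authors provide: precision is summarised here by the empirical standard error of the observed/expected (O/E) events ratio, and the assumed LP distribution, target SE, and grid of candidate sizes are all illustrative choices.

```python
import math
import random

def expit(x):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-x))

def simulate_o_e_se(n, mu, sigma, n_sim=200, seed=1):
    """Empirical SE of the observed/expected events ratio when a
    well-calibrated model with LP ~ Normal(mu, sigma) is validated
    on n participants (across n_sim simulated validation datasets)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_sim):
        expected, observed = 0.0, 0
        for _ in range(n):
            p = expit(rng.gauss(mu, sigma))   # predicted risk
            expected += p
            observed += rng.random() < p      # simulated outcome
        ratios.append(observed / expected)
    mean = sum(ratios) / n_sim
    var = sum((r - mean) ** 2 for r in ratios) / (n_sim - 1)
    return math.sqrt(var)

def required_n(target_se, mu, sigma, start=100, step=100, max_n=20000):
    """Smallest n on the grid whose simulated SE meets the target."""
    n = start
    while n < max_n and simulate_o_e_se(n, mu, sigma) > target_se:
        n += step
    return n

# Example (can be slow): LP ~ N(-2, 1), target SE for O/E of 0.05.
# required_n(0.05, mu=-2.0, sigma=1.0)
```

The same loop structure extends to other performance measures (c-statistic, calibration slope, net benefit) by replacing the O/E summary.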
Cost-effectiveness of high flow nasal cannula therapy versus continuous positive airway pressure for non-invasive respiratory support in paediatric critical care
Background High flow nasal cannula therapy (HFNC) and continuous positive airway pressure (CPAP) are two widely used modes of non-invasive respiratory support in paediatric critical care units. The FIRST-ABC randomised controlled trials (RCTs) evaluated the clinical and cost-effectiveness of HFNC compared with CPAP in two distinct critical care populations: acutely ill children (‘step-up’ RCT) and extubated children (‘step-down’ RCT). Clinical effectiveness findings (time to liberation from all forms of respiratory support) showed that HFNC was non-inferior to CPAP in the step-up RCT, but failed to meet non-inferiority criteria in the step-down RCT. This study evaluates the cost-effectiveness of HFNC versus CPAP. Methods All-cause mortality, health-related Quality of Life (HrQoL), and costs up to six months were reported using FIRST-ABC RCTs data. HrQoL was measured with the age-appropriate Paediatric Quality of Life Generic Core Scales questionnaire and mapped onto the Child Health Utility 9D index score at six months. Quality-Adjusted Life Years (QALYs) were estimated by combining HrQoL with mortality. Costs at six months were calculated by measuring and valuing healthcare resources used in paediatric critical care units, general medical wards and wider health service. The cost-effectiveness analysis used regression methods to report the cost-effectiveness of HFNC versus CPAP at six months and summarised the uncertainties around the incremental cost-effectiveness results. Results In both RCTs, the incremental QALYs at six months were similar between the randomised groups. The estimated incremental cost at six months was − £4565 (95% CI − £11,499 to £2368) and − £5702 (95% CI − £11,328 to − £75) for step-down and step-up RCT, respectively. The incremental net benefits of HFNC versus CPAP in step-down RCT and step-up RCT were £4388 (95% CI − £2551 to £11,327) and £5628 (95% CI − £8 to £11,264) respectively. 
The cost-effectiveness results were subject to considerable uncertainty. The results were similar across most pre-specified subgroups, and the base case results were robust to alternative assumptions. Conclusions HFNC compared to CPAP as non-invasive respiratory support for critically ill children in paediatric critical care units reduces mean costs and is relatively cost-effective overall and for key subgroups, although there is considerable statistical uncertainty surrounding this result.
The Net Benefit of a treatment should take the correlation between benefits and harms into account
• The assessment of benefits and harms from experimental treatments often ignores the association between outcomes.
• The method of generalized pairwise comparisons (GPC) takes into account the association between endpoints.
• A Net Benefit computed using GPC leads to very different conclusions about the benefit/risk of treatment than when only marginal benefits are used.
• When data from randomized clinical trials are available, the benefit/risk assessment should use GPC rather than marginal treatment effects.

The assessment of benefits and harms from experimental treatments often ignores the association between outcomes. In a randomized trial, generalized pairwise comparisons (GPC) can be used to assess a Net Benefit that takes this association into account. We use GPC to analyze a fictitious trial of treatment versus control, with a binary efficacy outcome (response) and a binary toxicity outcome, as well as data from two actual randomized trials in oncology. In all cases, we compute the Net Benefit for scenarios with different orders of priority between response and toxicity, and a range of odds ratios (ORs) for the association between these outcomes. The GPC Net Benefit was quite different from the benefit/harm computed using marginal treatment effects on response and toxicity. In the fictitious trial using response as first priority, treatment had an unfavorable Net Benefit if OR < 1, but favorable if OR > 1. With OR = 1, the Net Benefit was 0. Results changed drastically using toxicity as first priority. Even in a simple situation, marginal treatment effects can be misleading. In contrast, GPC assesses the Net Benefit as a function of the treatment effects on each outcome, the association between outcomes, and individual patient priorities.
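The GPC calculation for two prioritized binary outcomes is compact enough to sketch. The following is a minimal illustration of the method described above, not the authors' code; patients are hypothetical `(response, toxicity)` tuples where 1 means the event occurred.

```python
def gpc_net_benefit(treated, control):
    """Net Benefit = (wins - losses) / number of pairs. Every treated
    patient is compared with every control patient: first on response
    (response = 1 wins); if tied, on toxicity (toxicity = 0 wins)."""
    wins = losses = 0
    for t_resp, t_tox in treated:
        for c_resp, c_tox in control:
            if t_resp != c_resp:          # first priority: response
                wins += t_resp > c_resp
                losses += t_resp < c_resp
            elif t_tox != c_tox:          # second priority: toxicity
                wins += t_tox < c_tox
                losses += t_tox > c_tox
    return (wins - losses) / (len(treated) * len(control))

# Hypothetical arms: treatment responds more often but is more toxic.
treated = [(1, 1), (1, 0), (0, 1), (1, 1)]
control = [(0, 0), (1, 0), (0, 0), (0, 1)]
print(gpc_net_benefit(treated, control))  # 0.25
```

Swapping the priority order (toxicity compared first) can flip the sign of the result, which is the abstract's point about individual patient priorities.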
Visualizing the value of diagnostic tests and prediction models, part II. Net benefit graphs: net benefit as a function of the exchange rate
In this second of a 3-part series, we move from expected gain in utility (EGU) graphs to net benefit (NB) graphs, which show how NB depends on w = C/B, the treatment threshold odds, equal to the harm of treating unnecessarily (C) divided by the benefit of treating appropriately (B). For NB graphs, we shift from the perspective of testing individual patients with varying pretest probabilities of disease to the perspective of applying a test or risk model to an entire population with a given prevalence of disease, P0. As with EGU graphs, we subtract the harm of testing and the expected harm of treating according to the results of a test or model when it is wrong from the expected benefit of treating when it is right. The difference is that for NB graphs, the prevalence is fixed at P0, and the x-axis is w. NB graphs show the NB of 3 strategies: 1) “Treat None”; 2) “Test” and treat those with predicted risk greater than the treatment threshold; and 3) “Treat All” in the population regardless of predicted risk. The “Treat All” line intersects the y-axis at NB = P0 and the x-axis at w = P0/(1 − P0). The “Test” line intersects the “Treat All” line at the Treat–Test threshold value of w; it intersects the x-axis at the Test–No Treat value of w. When NB is plotted as a function of w, NB graphs can be drawn as straight lines from easily calculated intercepts.

• Net benefit (NB) graphs display NB of tests or models in populations.
• They hold prevalence at P0 and show how NB depends on the exchange rate w = C/B.
• In the absence of a test, “Treat All” is better than “Treat None” if w < P0/(1 − P0).
• Unless a test is perfect, NB declines with w, due to harms from false positives.
• For dichotomous tests, this decline is linear.
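Because the three strategies are straight lines in w, they are easy to compute directly. The sketch below follows the relationships stated above, with hypothetical sensitivity/specificity values; the harm of testing is ignored for simplicity.

```python
def nb_treat_all(w, p0):
    """'Treat All': crosses the y-axis at p0 and the x-axis at
    w = p0 / (1 - p0)."""
    return p0 - w * (1 - p0)

def nb_treat_none(w, p0):
    """'Treat None' is the zero line."""
    return 0.0

def nb_test(w, p0, sens, spec):
    """'Test': true positives minus w-weighted false positives,
    per member of the population."""
    return sens * p0 - w * (1 - spec) * (1 - p0)

p0, sens, spec = 0.2, 0.9, 0.8        # hypothetical values
print(round(p0 / (1 - p0), 3))        # 0.25: x-intercept of "Treat All"
print(nb_treat_all(0.02, p0) > nb_test(0.02, p0, sens, spec))  # True
print(nb_test(0.4, p0, sens, spec) > nb_treat_all(0.4, p0))    # True
```

The crossing point of the “Treat All” and “Test” lines is the Treat–Test threshold; where the “Test” line meets the x-axis is the Test–No Treat threshold.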
Economic evaluation of return-to-work interventions for mental disorder-related sickness absence
The objective was to (i) assess the long-term cost-effectiveness of acceptance and commitment therapy (ACT), a workplace dialog intervention (WDI), and ACT+WDI compared to treatment as usual (TAU) for common mental disorders and (ii) investigate any differences in cost-effectiveness between diagnostic groups. An economic evaluation from the healthcare and limited welfare perspectives was conducted alongside a randomized clinical trial with a two-year follow-up period. Persons with common mental disorders receiving sickness benefits were invited to the trial. We used registry data for the cost analysis, alongside participant data collected during the trial, with the reduction in sickness absence days as the treatment effect. A total of 264 participants with a diagnosis of depression, anxiety, or stress-induced exhaustion disorder participated in a two-year follow-up of a four-arm trial: ACT (N=74), WDI (N=60), ACT+WDI (N=70), and TAU (N=60). For all patients in general, there were no statistically significant differences between interventions in terms of costs or effect. The subgroup analyses suggested that from a healthcare perspective, ACT was a cost-effective option for depression or anxiety disorders and ACT+WDI for stress-induced exhaustion disorder. With a two-year time horizon, the probability that WDI would be cost-saving in terms of sickness benefit costs was 80% compared with TAU. ACT had a high probability of cost-effectiveness from a healthcare perspective for employees on sick leave due to depression or anxiety disorders. For participants with stress-induced exhaustion disorder, adding WDI to ACT seems to reduce healthcare costs, while WDI as a stand-alone intervention seems to reduce welfare costs.
Visualizing the value of diagnostic tests and prediction models, part I: introduction and expected gain in utility as a function of pretest probability
In this first of a 3-part series, we review expected gain in utility (EGU) calculations and graphs; in later parts, we contrast them with net benefit calculations and graphs. Our example is plasma D-dimer as a test for pulmonary embolism. We approach EGU calculations from the perspective of a clinician evaluating a patient. The clinician is considering 1) not testing and not treating; 2) testing and treating according to the test result; or 3) treating without testing. We use simple algebra and graphs to show how EGU depends on pretest probability and the benefit of treating someone with disease (B) relative to the harms of treating someone without the disease (C) and the harm of the testing procedure itself (T). The treatment threshold probability, i.e., the probability of disease at which the expected benefit of treating those with disease is balanced by the harm of treating those without disease (EGU = 0), is C/(C + B). When a diagnostic test is available, the course of action with the highest EGU depends on C, B, T, the pretest probability of disease, and the test result. For a given C, B, and T, the lower the pretest probability, the more abnormal the test result must be to justify treatment. EGU calculations and graphs allow visualization of how the value of testing can be calculated from the prior probability of the disease, the benefit of treating those with disease, the harm of treating those without disease, and the harm of testing itself.

• First of a 3-part series on visualizing the value of tests and prediction models.
• Expected Gain in Utility (EGU) graphs compare testing and treating strategies.
• EGU depends on pretest probability of disease and benefits vs. harms of treating.
• Testing/treatment thresholds are where expected benefits and harms are balanced.
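The threshold algebra lends itself to a short worked example. The numbers below (B, C, and the pretest probabilities) are hypothetical, chosen only to illustrate that EGU = 0 at p = C/(C + B).

```python
def treatment_threshold(B, C):
    """Probability at which treating and not treating break even:
    p * B = (1 - p) * C  =>  p = C / (C + B)."""
    return C / (C + B)

def egu_treat(p, B, C):
    """Expected gain in utility of treating at pretest probability p,
    relative to not treating (testing not considered here)."""
    return p * B - (1 - p) * C

B, C = 100, 25                    # hypothetical benefit and harm
p_star = treatment_threshold(B, C)
print(p_star)                                # 0.2
print(abs(egu_treat(p_star, B, C)) < 1e-9)   # True: EGU = 0 at threshold
print(egu_treat(0.5, B, C) > 0)              # True: treat above threshold
```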
Aspirin for Primary Prevention in Patients With Elevated Coronary Artery Calcium Score: A Systematic Review of Current Evidences
The 2019 American College of Cardiology and American Heart Association guidelines regarding low-dose aspirin in the primary prevention of atherosclerotic cardiovascular disease (ASCVD) indicate an increased risk of bleeding without a net benefit. The coronary artery calcium (CAC) score could be used to guide aspirin therapy in high-risk patients without an increased risk of bleeding. With this systematic review, we aimed to analyze studies that have investigated the role of CAC in primary prevention with aspirin. A total of 4 relevant studies were identified and the primary outcomes of interest were bleeding events and major adverse cardiac events. The outcomes of interest were stratified into 3 groups based on CAC scoring: 0, 1 to 99, and ≥100. One study of 2,191 patients concluded that with a low bleeding risk, CAC ≥100, and ASCVD risk ≥5%, aspirin confers a net benefit, whereas patients with a high bleeding risk would experience a net harm, irrespective of ASCVD risk or CAC. All other studies demonstrated a clear net benefit in patients with CAC ≥100. CAC scores correspond to calcified plaque in coronary vessels and are associated with a graded increase in adverse cardiovascular events. Our review has found that in the absence of a significant bleeding risk, increased ASCVD risk and CAC score correlate with increased benefit from aspirin. One study demonstrated a decrease in the odds of myocardial infarction from 3 to 0.56 in patients on aspirin. The major drawback of aspirin for primary prevention is the bleeding complication. At present, there is no widely validated tool to predict the bleeding risk with aspirin, which creates difficulties in accurately delineating risk. Barring some discrepancy between studies, evidence shows a net harm for the use of aspirin in low ASCVD risk (<5%), irrespective of CAC score.
Visualizing the value of diagnostic tests and prediction models, part III. Numerical example with discrete risk groups and miscalibration
In this third of a 3-part series, we use net benefit (NB) graphs to evaluate a risk model that divides D-dimer results into 8 intervals to estimate the probability of pulmonary embolism (PE). This demonstrates the effect of miscalibration on NB graphs. We evaluate the risk model’s performance using pooled data on 6013 participants from 5 PE diagnostic management studies. For a range of values of the “exchange rate” (w, the treatment threshold odds), we obtained NB of applying the risk model by subtracting the number of unnecessary treatments weighted by the exchange rate from the number of appropriate treatments and then dividing by the population size. In NB graphs, in which the x-axis is scaled linearly with the exchange rate w, miscalibration causes vertical changes in NB. If the risk model overestimates risk, as in this example, the NB graph for the risk model has vertical jumps up. These are due to the sudden gain in NB resulting from less overtreatment when the treatment threshold first exceeds the overestimated predicted risk. Calculating NB is a logical approach to quantifying the value of a diagnostic test or risk prediction model. In the same dataset at the same treatment threshold probability, the risk model with the higher net benefit is the better model in that dataset. Most net benefit calculations omit the harm of doing the test or applying the risk model, but if it is nontrivial, this harm can be subtracted from the net benefit.

• NB quantifies projected benefits and harms of treating based on a test or model.
• NB graphs show how NB depends on the exchange rate.
• NB graphs differ from decision curves because the x-axis is scaled in odds.
• Vertical changes in the NB graph reflect miscalibration of the risk model.
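The NB calculation from counts is simple enough to sketch. The counts below are hypothetical (only the population size of 6013 comes from the abstract), and the optional test-harm term reflects the abstract's closing remark.

```python
def net_benefit(tp, fp, n, w, test_harm=0.0):
    """NB = (appropriate treatments - w * unnecessary treatments) / N,
    optionally minus a per-person harm of doing the test itself."""
    return (tp - w * fp) / n - test_harm

# Hypothetical counts: treating above some threshold in 6013 participants
# gives 400 appropriate and 900 unnecessary treatments; w = 0.1.
print(round(net_benefit(tp=400, fp=900, n=6013, w=0.1), 4))  # 0.0516
```

Comparing two models this way at the same threshold, the one with the higher NB is the better model in that dataset.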
Ecology and Economics of Using Native Managed Bees for Almond Pollination
Native managed bees can improve crop pollination, but a general framework for evaluating the associated economic costs and benefits has not been developed. We conducted a cost–benefit analysis to assess how managing blue orchard bees (Osmia lignaria Say [Hymenoptera: Megachilidae]) alongside honey bees (Apis mellifera Linnaeus [Hymenoptera: Apidae]) can affect profits for almond growers in California. Specifically, we studied how adjusting three strategies can influence profits: (1) the number of released O. lignaria bees, (2) the density of artificial nest boxes, and (3) the number of nest cavities (tubes) per box. We developed an ecological model for the effects of pollinator activity on almond yields, validated the model with published data, and then estimated changes in profits for different management strategies. Our model shows that almond yields increase with O. lignaria foraging density, even where honey bees are already in use. Our cost–benefit analysis shows that profit ranged from –US$1,800 to US$2,800/acre given different combinations of the three strategies. Adding nest boxes had the greatest effect; we predict an increase in profit between the low and high nest box density strategies (2.5 and 10 boxes/acre). In fact, the number of released bees and the availability of nest tubes had relatively small effects in the high nest box density strategies. This suggests that growers could improve profits by simply adding more nest boxes with a moderate number of tubes in each. Our approach can support grower decisions regarding integrated crop pollination and highlight the importance of a comprehensive ecological economic framework for assessing these decisions.