214 results for "Geerts, Bart"
Cumulative fluid balance predicts mortality and increases time on mechanical ventilation in ARDS patients: An observational cohort study
Acute respiratory distress syndrome (ARDS) is characterized by acute, diffuse, inflammatory lung injury leading to increased pulmonary vascular permeability, pulmonary oedema and loss of aerated tissue. Previous literature showed that restrictive fluid therapy in ARDS shortens time on mechanical ventilation and ICU length of stay. However, the effect of intravenous fluid use on mortality remains uncertain. We investigated the relationship between cumulative fluid balance (FB), time on mechanical ventilation and mortality in ARDS patients in a retrospective observational study. Patients were divided into four cohorts based on cumulative FB on day 7 of ICU admission: ≤0 L (Group I); 0-3.5 L (Group II); 3.5-8 L (Group III) and ≥8 L (Group IV). In addition, we used cumulative FB on day 7 as a continuum as a predictor of mortality. Primary outcomes were 28-day mortality and ventilator-free days. Secondary outcomes were 90-day mortality and ICU length of stay. Six hundred ARDS patients were included, of whom 156 (26%) died within 28 days. Patients with a higher cumulative FB on day 7 had a longer ICU stay and fewer ventilator-free days on day 28. Furthermore, after adjusting for severity of illness, a higher cumulative FB was associated with 28-day mortality (Group II, adjusted OR (aOR) 2.1 [1.0-4.6], p = 0.045; Group III, aOR 3.3 [1.7-7.2], p = 0.001; Group IV, aOR 7.9 [4.0-16.8], p < 0.001). Using restricted cubic splines, a non-linear dose-response relationship between cumulative FB and probability of death at day 28 was found, in which a more positive FB predicted mortality and a negative FB showed a trend towards survival. A higher cumulative fluid balance is independently associated with an increased risk of death, a longer time on mechanical ventilation and a longer ICU stay in patients with ARDS. This underlines the importance of implementing restrictive fluid therapy in ARDS patients.
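The non-linear dose-response relationship above was modeled with restricted cubic splines. As a minimal illustrative sketch (not the authors' code), the basis expansion in Harrell's parameterization, which constrains the fitted curve to be linear beyond the outer knots, can be built as follows; the knot placements are hypothetical:

```python
def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization) for one value x.

    Returns the len(knots) - 2 nonlinear terms; a regression model would also
    include x itself as the linear term. By construction the spline is zero
    below the first knot and linear beyond the last knot.
    """
    def pos3(u):
        # truncated cubic: (u)_+^3
        return max(u, 0.0) ** 3

    tk, tk1 = knots[-1], knots[-2]  # outer two knots
    terms = []
    for tj in knots[:-2]:
        term = (pos3(x - tj)
                - pos3(x - tk1) * (tk - tj) / (tk - tk1)
                + pos3(x - tk) * (tk1 - tj) / (tk - tk1))
        terms.append(term)
    return terms
```

Feeding these terms plus the raw covariate into a logistic regression reproduces the kind of flexible dose-response curve the abstract describes.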
DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence
As an increasing number of clinical decision-support systems driven by artificial intelligence progress from development to implementation, better guidance on the reporting of human factors and early-stage clinical evaluation is needed.
Systematic evaluation of machine learning models for postoperative surgical site infection prediction
Surgical site infections (SSIs) lead to increased mortality and morbidity, as well as increased healthcare costs. Multiple models for the prediction of this serious surgical complication have been developed, with increasing use of machine learning (ML) tools. The aim of this systematic review was to assess the performance as well as the methodological quality of validated ML models for the prediction of SSIs. A systematic search in PubMed, Embase and the Cochrane Library was performed from inception until July 2023. Exclusion criteria were the absence of reported model validation, SSIs as part of a composite adverse outcome, and pediatric populations. ML performance measures were evaluated, and ML performance was compared to regression-based methods for studies that reported both. Risk of bias (ROB) was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Of the 4,377 studies screened, 24 were included in this review, describing 85 ML models. Most models were only internally validated (81%). The C-statistic was the most used performance measure (reported in 96% of the studies), and only two studies reported calibration metrics. A total of 116 different predictors were described, of which age, steroid use, sex, diabetes, and smoking were incorporated most frequently (in 75% to 100% of models). Thirteen studies compared ML models to regression-based models and showed similar performance for both modelling methods. For all included studies, the overall ROB was high or unclear. Many ML models for the prediction of SSIs are available, with large variability in performance. However, most models lacked external validation, performance reporting was limited, and the risk of bias was high. In studies describing both ML models and regression-based models, neither modelling method outperformed the other.
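The C-statistic that dominates performance reporting in these studies is the probability that a randomly chosen patient with the outcome receives a higher predicted score than a randomly chosen patient without it. A minimal sketch of that pairwise definition (illustrative; not taken from any reviewed study):

```python
def c_statistic(y_true, scores):
    """Concordance statistic (C-statistic / AUC) by exhaustive pairwise comparison.

    Counts, over all (positive, negative) pairs, how often the positive case
    is ranked higher; ties contribute 0.5. Equivalent to the area under the
    ROC curve for binary outcomes.
    """
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case in each class")
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))
```

A value of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking; note the abstract's point that discrimination alone says nothing about calibration.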
Patient reported postoperative pain with a smartphone application: A proof of concept
Thirty patients (60%) found it satisfying or very satisfying to communicate their pain with the app. Pain experienced after surgery was scored by patients as 'no' pain: 3 (6%), 'little': 5 (10%), 'bearable': 25 (50%), 'considerable': 13 (26%) and 'severe': 1 (2%). Forty-five patients (90%) were positive about the ease of recording, and forty-five patients (90%) could correctly record their pain with the app. Thirty-eight patients (76%) agreed that in-app notifications to record pain were useful. Two patients (4%) were too ill to use the application. Based on usability feedback, we will redesign the pain intensity wheel and the in-app pain chart to make the course of their pain clearer to patients. The median patient-recorded pain app score (4.0, range 0 to 10) and the nurse-recorded numerical rating scale (NRS) pain score (4.0, range 0 to 9) were not statistically different (p = 0.06). Forty-two percent of a total of 307 patient pain app scores were ≥5 (on a scale from 0, no pain at all, to 10, worst imaginable pain). Of these, 83% were recorded as 'bearable', while patients asked for additional analgesia in only 18% of the recordings. The results suggest that self-recording of postoperative pain severity by patients with a smartphone application could be useful for postoperative pain management. The application was perceived as user-friendly and had high satisfaction rates from both patients and stakeholders. Further research is needed to validate the 11-point numeric and faces pain scale against the current gold standards for pain, the visual analogue scale (VAS) and the NRS.
Effect of goal-directed therapy on outcome after esophageal surgery: A quality improvement study
Goal-directed therapy (GDT) can reduce postoperative complications in high-risk surgery patients. It is uncertain whether GDT has the same benefits in patients undergoing esophageal surgery. The goal of this quality improvement study was to evaluate the effects of stroke-volume-guided GDT on postoperative outcome. We compared the postoperative outcome of patients undergoing esophagectomy before (99 patients) and after (100 patients) implementation of GDT. There was no difference in the proportion of patients with a complication (56% vs. 54%, p = 0.82), in hospital stay or in mortality. The incidence of prolonged ICU stay (>48 hours) was reduced (28% vs. 12%, p = 0.005) in patients treated with GDT. Secondary analysis of complication rates showed a decrease in pneumonia (29% vs. 15%, p = 0.02), mediastinal abscesses (12% vs. 3%, p = 0.02), and gastric tube necrosis (5% vs. 0%, p = 0.03) in patients treated with GDT. Patients in the GDT group received significantly less fluid overall but more colloids. The implementation of GDT during esophagectomy was not associated with reductions in overall morbidity, mortality or hospital length of stay. However, we observed a decrease in pneumonia, mediastinal abscesses, gastric tube necrosis, and ICU length of stay.
Simulated Hydrologic Impacts of Cloud Seeding in the North Platte and Little Snake River Basins of Wyoming
In the western United States, the recent mega-drought and the impacts of climate change have generated interest in cloud seeding to enhance water supplies. Studies and field campaigns focused on cloud seeding across the West have quantified its effect on precipitation generation through the release of silver iodide, and these effects can be studied in simulations using WRF-WxMod®, a modeling capability based on the WRF model that includes a cloud-seeding parameterization. Here, we use a 36-member ensemble of WRF-WxMod simulations to force a spatially distributed hydrological model, WRF-Hydro, to study how simulated cloud seeding impacts hydrology in the North Platte and Little Snake River basins of Wyoming during the 2020 water year. WRF-Hydro is configured with a 1-km land surface model, Noah-MP, with the terrain routing grid run at 250 m. Compared to observations, WRF-Hydro shows good performance, with an average Kling-Gupta efficiency of 0.80. Over the 2020 water year, snow water equivalent increases by 10 mm over the target mountain ranges due to simulated cloud seeding, and streamflow increases by 6,921 acre-ft over the entire domain. A water budget analysis shows that, of the increase in ensemble mean precipitation due to simulated cloud seeding, 78% goes to increased streamflow, 21% to increased soil moisture, and 8% to evapotranspiration. Such information is critical for water managers examining the efficacy of cloud seeding to enhance their water resources amidst climate change.
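The Kling-Gupta efficiency used here to score WRF-Hydro's streamflow combines three components: correlation, variability ratio, and mean bias. A minimal sketch of the standard 2009 formulation (illustrative; not the study's code):

```python
import math

def kling_gupta_efficiency(sim, obs):
    """Kling-Gupta efficiency: KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)

    r     : Pearson correlation between simulated and observed series
    alpha : ratio of standard deviations (variability bias)
    beta  : ratio of means (volume bias)
    A perfect simulation gives KGE = 1; values can be arbitrarily negative.
    """
    n = len(sim)
    mean_s, mean_o = sum(sim) / n, sum(obs) / n
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    std_s = math.sqrt(sum((s - mean_s) ** 2 for s in sim) / n)
    std_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    r = cov / (std_s * std_o)
    alpha = std_s / std_o
    beta = mean_s / mean_o
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Unlike the Nash-Sutcliffe efficiency, KGE separates correlation from bias, which is why a value of 0.80 is generally read as good hydrologic performance.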
Prediction of emergency department presentations for acute coronary syndrome using a machine learning approach
The relationship between weather and acute coronary syndrome (ACS) incidence has been the subject of considerable research, with varying conclusions. Harnessing machine learning techniques, our study explores the relationship between meteorological factors and ACS presentations in the emergency department (ED), offering insights into seasonal variations and inter-day fluctuations to optimize patient care and resource allocation. A retrospective cohort analysis was conducted, encompassing ACS presentations to Dutch EDs from 2010 to 2017. Temporal patterns were analyzed using heat maps and time series plots. Multivariable linear regression (MLR) and random forest (RF) regression models were employed to forecast daily ACS presentations with prediction horizons of one, three, seven, and thirty days. Model performance was assessed using the coefficient of determination (R²), mean absolute error (MAE), and mean absolute percentage error (MAPE). The study included 214,953 ACS presentations, predominantly unstable angina (UA) (94,272; 44%), non-ST-elevation myocardial infarction (NSTEMI) (78,963; 37%), and ST-elevation myocardial infarction (STEMI) (41,718; 19%). A decline in daily ACS admissions over time was observed, with notable inter-day variation (estimated median difference: 41; 95% CI 37–43; p < 0.001) and seasonal variation (estimated median difference: 9; 95% CI 6–12; p < 0.001). Both MLR and RF models demonstrated similar predictive capabilities, with MLR slightly outperforming RF. The models showed moderate explanatory power for ACS incidence (adjusted R² = 0.66; MAE (MAPE): 7.8 (11%)), with varying performance across subdiagnoses. Prediction of UA incidence resulted in the best-explained variability (adjusted R² = 0.80; MAE (MAPE): 5.3 (19.1%)), followed by the NSTEMI and STEMI diagnoses. All models maintained consistent performance over extended prediction horizons.
Our findings indicate that ACS presentation exhibits distinctive seasonal changes and inter-day differences, with marked reductions in incidence during the summer months and a distinct peak on Mondays. The predictive performance of our model was moderate overall, but we obtained good explanatory power for UA presentations. Our model emerges as a potentially valuable supplementary tool for ED resource allocation and for future models predicting ACS incidence in the ED.
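The MAE and MAPE used to score these daily forecasts are simple averages of absolute errors, in counts and in percent of the actual count respectively. A minimal sketch (illustrative; not the study's code, and the example values are hypothetical):

```python
def mae(actual, predicted):
    """Mean Absolute Error: average absolute deviation, in the outcome's units
    (here, daily ACS presentations)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean Absolute Percentage Error: average absolute deviation relative to
    the actual value, in percent. Actual values must be nonzero."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

A MAE of 7.8 with a MAPE of 11%, as reported for overall ACS incidence, means the forecast misses by about eight presentations per day, roughly a tenth of a typical day's volume.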
A Case Study of Radar Observations and WRF LES Simulations of the Impact of Ground-Based Glaciogenic Seeding on Orographic Clouds and Precipitation. Part II
Several Weather Research and Forecasting (WRF) Model simulations of natural and seeded clouds have been conducted in non-LES and LES (large-eddy simulation) modes to investigate the seeding impact on wintertime orographic clouds for an actual seeding case on 18 February 2009 in the Medicine Bow Mountains of Wyoming. Part I of this two-part series has shown the capability of WRF LES with 100-m grid spacing to capture the essential environmental conditions by comparing the model results with measurements from a variety of instruments. In this paper, the silver iodide (AgI) dispersion features, the AgI impacts on the turbulent kinetic energy (TKE), the microphysics, and the precipitation are examined in detail using the model data, which leads to five main results. 1) The vertical dispersion of AgI particles is more efficient in cloudy conditions than in clear conditions. 2) The wind shear and the buoyancy are both important TKE production mechanisms in the wintertime PBL over complex terrain in cloudy conditions. The buoyancy-induced eddies are more responsible for the AgI vertical dispersion than the shear-induced eddies are. 3) Seeding has insignificant effects on the cloud dynamics. 4) AgI particles released from the ground-based generators affect the cloud within the boundary layer below 1 km AGL through nucleating extra ice crystals, converting liquid water into ice, depleting more vapor, and generating more precipitation on the ground. The AgI nucleation rate is inversely related to the natural ice nucleation rate. 5) The seeding effects on the ground precipitation are confined within narrow areas. The relative seeding effect ranges between 5% and 20% for the simulations with different grid spacing.
A Case Study of Radar Observations and WRF LES Simulations of the Impact of Ground-Based Glaciogenic Seeding on Orographic Clouds and Precipitation. Part I
Profiling airborne radar data and accompanying large-eddy-simulation (LES) modeling are used to examine the impact of ground-based glaciogenic seeding on cloud and precipitation in a shallow stratiform orographic winter storm. This storm occurred on 18 February 2009 over a mountain in Wyoming. The numerical simulations use the Weather Research and Forecasting (WRF) Model in LES mode with horizontal grid spacings of 300 and 100 m in a domain covering the entire mountain range, and a glaciogenic seeding parameterization coupled with the Thompson microphysics scheme. A series of non-LES simulations at 900-m resolution, each with different initial/boundary conditions, is validated against sounding, cloud, and precipitation data. The LES runs then are driven by the most representative 900-m non-LES simulation. The 100-m LES results compare reasonably well to the vertical-plane radar data. The modeled vertical-motion field reveals a turbulent boundary layer and gravity waves above this layer, as observed. The storm structure also validates well, but the model storm thins and weakens more rapidly than is observed. Radar reflectivity frequency-by-altitude diagrams suggest a positive seeding effect, but time- and space-matched model reflectivity diagrams only confirm this in a relative sense, in comparison with the trend in the control region upwind of seeding generators, and not in an absolute sense. A model sensitivity run shows that in this case natural storm weakening dwarfs the seeding effect, which does enhance snow mass and snowfall. Since the kinematic and microphysical structure of the storm is simulated well, future Part II of this study will examine how glaciogenic seeding impacts clouds and precipitation processes within the LES.
C-reactive protein in the first 30 postoperative days and its discriminative value as a marker for postoperative infections, a multicentre cohort study
Objective: To assess the association of C-reactive protein (CRP) with postoperative infections for eight different types of surgery using big data. Design: A multicentre cohort study with longitudinally collected data from electronic health records, collected from 1 January 2011 to 22 September 2023. Setting: Data of two tertiary medical centres in the Netherlands were used. Participants: This study included all procedures (42,125 in total) in adult patients undergoing surgery in two tertiary medical centres in the Netherlands. Outcome measures: The primary outcome was the association between CRP and a postoperative infection in the first 30 days postoperatively. Postoperative infection was defined by an action-based definition, that is, patients had to be treated for an infection with antimicrobial treatment and/or an intervention (e.g., surgical drainage) to be classified as having a postoperative infection. CRP measurements were divided into a reference group (0–5.0 mg/dL) and four groups for comparison (5.1–10.0 mg/dL, 10.1–15.0 mg/dL, 15.1–20.0 mg/dL and >20.0 mg/dL). Subgroup analyses were performed for eight major surgical subspecialties and for the two medical centres separately. Results: A total of 175,779 CRP measurements were performed, of which the majority were drawn in the first postoperative week. The ORs for developing a postoperative infection varied between 1.0 (95% CI 0.9–1.1) and 12.0 (95% CI 9.5–15.1), with a stronger association for the higher CRP categories and when more time had elapsed since surgery. Sensitivity ranged between 11% and 34%, specificity ranged between 64% and 95%, and the positive and negative predictive values ranged between 12% and 51% and between 88% and 94%, respectively. Similar results were found for the surgical subspecialties and for the two hospitals separately. Conclusion: In this study, an elevated postoperative CRP was associated with postoperative infections, with a stronger association for higher CRP levels.
The association was stronger when a longer time had elapsed since surgery, which contrasts with the moment most CRP measurements were drawn, namely in the first postoperative week. Clinicians should keep this evolving value of CRP in mind when using it in the diagnosis of postoperative infections.
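The sensitivity, specificity, and positive and negative predictive values reported above all derive from a 2x2 confusion table of test results against infection status. A minimal sketch of that arithmetic (the counts in the example are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from 2x2 confusion counts.

    tp: elevated CRP with infection      fp: elevated CRP without infection
    fn: non-elevated CRP with infection  tn: non-elevated CRP without infection
    Note that PPV and NPV, unlike sensitivity and specificity, depend on the
    prevalence of infection in the cohort.
    """
    return {
        "sensitivity": tp / (tp + fn),  # proportion of infections detected
        "specificity": tn / (tn + fp),  # proportion of non-infections ruled out
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

The abstract's pattern of low sensitivity with high NPV is the typical profile of a test that is better at ruling infection out than ruling it in.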