166 results for "Goto, Tadahiro"
Machine learning approaches for predicting disposition of asthma and COPD exacerbations in the ED
The prediction of emergency department (ED) disposition at triage remains challenging. Machine learning approaches may enhance prediction. We compared the performance of several machine learning approaches for predicting two clinical outcomes (critical care and hospitalization) among ED patients with asthma or COPD exacerbation. Using the 2007–2015 National Hospital Ambulatory Medical Care Survey (NHAMCS) ED data, we identified adults with asthma or COPD exacerbation. In the training set (a 70% random sample), using routinely available triage data as predictors (e.g., demographics, arrival mode, vital signs, chief complaint, comorbidities), we derived four machine learning-based models: Lasso regression, random forest, boosting, and deep neural network. In the test set (the remaining 30% of the sample), we compared their prediction ability against traditional logistic regression with the Emergency Severity Index (ESI; reference model). Of 3206 eligible ED visits, corresponding to weighted estimates of 13.9 million visits, 4% had the critical care outcome and 26% had the hospitalization outcome. For the critical care prediction, the best-performing approach, boosting, achieved the highest discriminative ability (C-statistic 0.80 vs. 0.68), reclassification improvement (net reclassification improvement [NRI] 53%, P = 0.002), and sensitivity (0.79 vs. 0.53) over the reference model. For the hospitalization prediction, random forest provided the highest discriminative ability (C-statistic 0.83 vs. 0.64), reclassification improvement (NRI 92%, P < 0.001), and sensitivity (0.75 vs. 0.33). Results were generally consistent across the asthma and COPD subgroups. Based on nationally representative ED data, machine learning approaches improved the ability to predict disposition of patients with asthma or COPD exacerbation.
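To make the abstract's workflow concrete, here is a minimal sketch of the four-model comparison it describes, assuming scikit-learn. The feature matrix, outcome vector, and hyperparameters are illustrative placeholders, not the study's NHAMCS pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Placeholder triage data: rows are ED visits, columns are triage predictors.
X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.70, random_state=0)

# The four model families named in the abstract, with illustrative settings.
models = {
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "deep neural network": MLPClassifier(hidden_layer_sizes=(64, 32),
                                         max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    p = model.predict_proba(X_test)[:, 1]
    # The C-statistic is the area under the ROC curve.
    print(f"{name}: C-statistic = {roc_auc_score(y_test, p):.2f}")
```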
Emergency department triage prediction of clinical outcomes using machine learning models
Background Development of emergency department (ED) triage systems that accurately differentiate and prioritize critically ill from stable patients remains challenging. We used machine learning models to predict clinical outcomes, and then compared their performance with that of a conventional approach, the Emergency Severity Index (ESI). Methods Using National Hospital Ambulatory Medical Care Survey (NHAMCS) ED data from 2007 through 2015, we identified all adult patients (aged ≥ 18 years). In the randomly sampled training set (70%), using routinely available triage data as predictors (e.g., demographics, triage vital signs, chief complaints, comorbidities), we developed four machine learning models: Lasso regression, random forest, gradient-boosted decision tree, and deep neural network. As the reference model, we constructed a logistic regression model using the five-level ESI data. The clinical outcomes were critical care (admission to an intensive care unit or in-hospital death) and hospitalization (direct hospital admission or transfer). In the test set (the remaining 30%), we measured the predictive performance of each model, including the area under the receiver-operating-characteristic curve (AUC) and net benefit (decision curves). Results Of 135,470 eligible ED visits, 2.1% had the critical care outcome and 16.2% had the hospitalization outcome. In the critical care outcome prediction, all four machine learning models outperformed the reference model (e.g., AUC 0.86 [95% CI 0.85–0.87] for the deep neural network vs 0.74 [95% CI 0.72–0.75] for the reference model), with fewer under-triaged patients in ESI triage levels 3 to 5 (urgent to non-urgent). Likewise, in the hospitalization outcome prediction, all machine learning models outperformed the reference model (e.g., AUC 0.82 [95% CI 0.82–0.83] for the deep neural network vs 0.69 [95% CI 0.68–0.69] for the reference model), with fewer over-triages in ESI triage levels 1 to 3 (immediate to urgent). In the decision curve analysis, all machine learning models consistently achieved a greater net benefit (a larger number of appropriate triages, accounting for the trade-off with over-triages) across the range of clinical thresholds. Conclusions Compared with the conventional approach, the machine learning models demonstrated superior performance in predicting critical care and hospitalization outcomes. The application of modern machine learning models may enhance clinicians' triage decision making, thereby achieving better clinical care and optimal resource utilization.
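The net-benefit (decision curve) calculation this abstract reports can be sketched in a few lines. The formula (TP/n − FP/n × t/(1−t)) is standard; the outcome labels and predicted probabilities below are placeholders, not the study's data.

```python
import numpy as np

def net_benefit(y, p, t):
    """Net benefit at clinical threshold t: TP/n - FP/n * t / (1 - t)."""
    pred_pos = p >= t
    tp = np.sum(pred_pos & (y == 1))
    fp = np.sum(pred_pos & (y == 0))
    n = len(y)
    return tp / n - fp / n * (t / (1 - t))

# Placeholder outcome and predicted probabilities.
y = np.random.randint(0, 2, 5000)
p = np.random.rand(5000)

for t in np.arange(0.05, 0.55, 0.05):
    # Benchmarks: "treat all" = prevalence - (1 - prevalence) * t/(1-t);
    # "treat none" = 0. A useful model beats both across thresholds.
    nb_all = y.mean() - (1 - y.mean()) * (t / (1 - t))
    print(f"t={t:.2f}: model={net_benefit(y, p, t):.3f}, treat-all={nb_all:.3f}")
```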
Isotonic fluid for intravenous hydration maintenance in children
Emergency surgery is defined as surgery performed as early as possible; thus, the larger number of patients who underwent emergency surgery might be associated with the increased risk of hyponatraemia reported in the group that received hypotonic fluid containing 77 mmol/L of sodium.
Development and validation of early prediction models for new-onset functional impairment at hospital discharge of ICU admission
Purpose We aimed to develop and validate models for predicting new-onset functional impairment after intensive care unit (ICU) admission using predictors routinely collected within 2 days of admission. Methods In this multi-center retrospective cohort study of acute care hospitals in Japan, we identified adult patients who were admitted to the ICU with independent activities of daily living before hospitalization and survived for at least 2 days, from April 2014 to October 2020. The primary outcome was functional impairment, defined as a Barthel Index ≤ 60 at hospital discharge. In the internal validation dataset (April 2014 to March 2019), using 94 candidate predictors routinely collected within 2 days of ICU admission, we trained and tuned six conventional and machine learning models with repeated random sub-sampling cross-validation. We computed the variable importance of each predictor to the models. In the temporal validation dataset (April 2019 to October 2020), we measured the performance of these models. Results We identified 19,846 eligible patients. Functional impairment at discharge developed in 33% of patients (n = 6488/19,846). In the temporal validation dataset, all six models showed good discrimination ability, with areas under the curve above 0.86, and the differences among the six models were negligible. Variable importance revealed newly detected early predictors, including worsened neurologic conditions and catabolism biomarkers such as decreased serum albumin and increased blood urea nitrogen. Conclusion We successfully developed early prediction models of new-onset functional impairment after ICU admission that achieved high performance using only data routinely collected within 2 days of ICU admission.
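A hedged sketch of the repeated random sub-sampling cross-validation and variable-importance steps named above, assuming scikit-learn. The data are placeholders standing in for the 94 candidate predictors, and permutation importance is used here as one common importance measure; the study's exact measure may differ.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Placeholder data standing in for the 94 candidate predictors.
X, y = np.random.rand(500, 94), np.random.randint(0, 2, 500)

# Repeated random sub-sampling: 20 independent random 70/30 splits.
cv = ShuffleSplit(n_splits=20, train_size=0.70, random_state=0)
model = GradientBoostingClassifier(random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"AUC: {aucs.mean():.2f} (SD {aucs.std():.2f})")

# Variable importance evaluated on a held-out portion.
model.fit(X[:350], y[:350])
imp = permutation_importance(model, X[350:], y[350:],
                             scoring="roc_auc", random_state=0)
top10 = np.argsort(imp.importances_mean)[::-1][:10]  # most influential columns
```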
Development of novel optical character recognition system to reduce recording time for vital signs and prescriptions: A simulation-based study
Digital advancements can reduce the burden of recording clinical information. This intra-subject experimental study compared the time and error rates for recording vital signs and prescriptions between an optical character reader (OCR) and manual typing. The study was conducted at three community hospitals and two fire departments in Japan. Thirty-eight volunteers (15 paramedics, 10 nurses, and 13 physicians) participated. We prepared six sample pictures: three ambulance monitors showing vital signs (normal, abnormal, and shock) and three pharmacy notebooks providing prescriptions (two, four, or six medications). The participants recorded the data for each picture using an OCR or by manually typing on a smartphone. The outcomes were recording time and error rate, defined as the number of characters with omissions or misrecognitions/misspellings out of the total number of characters. Data were analyzed using paired Wilcoxon signed-rank and McNemar's tests. The recording times for vital signs were similar between the two methods (normal state: 21 s [interquartile range (IQR), 17–26 s] for OCR vs. 23 s [IQR, 18–31 s] for manual typing). In contrast, prescription recording was faster with the OCR (e.g., six-medication list: 18 s [IQR, 14–21 s] for OCR vs. 144 s [IQR, 112–187 s] for manual typing). The OCR had fewer errors than manual typing for both vital signs and prescriptions (0/1056 [0%] vs. 14/1056 [1.32%]; p < 0.001, and 30/4814 [0.62%] vs. 53/4814 [1.10%], respectively). In conclusion, the developed OCR reduced the recording time for prescriptions but not for vital signs. The OCR showed lower error rates than manual typing for both vital signs and prescription data.
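The two paired comparisons named in this abstract map directly onto standard SciPy/statsmodels calls. A minimal sketch follows; the timing and error counts are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Paired recording times (seconds) per participant: invented numbers.
ocr_times = np.array([18, 14, 21, 17, 19, 16, 20, 15])
manual_times = np.array([144, 112, 187, 150, 130, 160, 125, 140])
stat, p = wilcoxon(ocr_times, manual_times)  # paired Wilcoxon signed-rank test
print(f"Wilcoxon signed-rank: p = {p:.4f}")

# McNemar's test compares paired error proportions. The 2x2 table counts
# characters by (OCR correct/error) x (manual correct/error); counts invented.
table = [[4750, 53],   # OCR correct:  manual correct / manual error
         [30, 11]]     # OCR error:    manual correct / manual error
print(f"McNemar: p = {mcnemar(table, exact=True).pvalue:.4f}")
```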
Development and validation of a bedside-available machine learning model to predict discrepancies between SaO₂ and SpO₂: Exploring factors related to the discrepancies
In critically ill patients, a discrepancy frequently exists between percutaneous oxygen saturation (SpO₂) and arterial blood oxygen saturation (SaO₂), which can lead to potential hypoxemia being overlooked. The aim of this study was to explore the factors related to the discrepancy and to develop an easy-to-use prediction model that uses readily available bedside information to predict the discrepancy and suggest the need for arterial blood gas measurement. This prognostic study used the eICU Collaborative Research Database from 2014 to 2015 for model development and MIMIC-IV data from 2008 to 2019 for model validation. To predict the outcome of SpO₂ exceeding SaO₂ by 3% or more, non-invasive, readily available bedside information (patient demographics, vital signs, vasopressor use, ventilator use) was used to develop prediction models with three machine learning methods (decision tree, logistic regression, XGBoost). To make the model accessible, it was deployed as a web-based application. Additionally, the contribution of each variable was explored using partial dependence plots and SHAP values. From 4,781 admission records in the eICU data, a total of 19,804 paired SpO₂ and SaO₂ measurements were used. Among the three machine learning models, the XGBoost model demonstrated the best predictive performance, with an AUROC of 0.73 and a calibration slope of 0.90. In the validation cohort of the paired MIMIC-IV dataset, the performance was an AUROC of 0.56. An exploratory model-updating step followed by temporal validation raised performance to an AUROC of 0.70 with a calibration slope of 0.85. In both datasets, worse vital signs (e.g., low blood pressure, low temperature) were associated with the discrepancy between SpO₂ and SaO₂. Using non-invasive bedside data, a machine learning model was developed to predict the SpO₂–SaO₂ discrepancy, and vital signs were identified as key contributors. These findings underscore the need for awareness of hidden hypoxemia and provide a basis for further studies to accurately evaluate the actual SaO₂.
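The calibration slope this abstract reports alongside AUROC is typically obtained by regressing the observed outcome on the logit of the predicted probabilities; a slope near 1 indicates well-spread risk estimates. A minimal sketch under that assumption, with placeholder data rather than the eICU/MIMIC-IV cohorts:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder observed outcomes and model-predicted probabilities.
y = np.random.randint(0, 2, 1000)
p = np.clip(np.random.rand(1000), 1e-6, 1 - 1e-6)

# Calibration slope: fit a logistic regression of the outcome on logit(p).
logit_p = np.log(p / (1 - p)).reshape(-1, 1)
cal = LogisticRegression(C=1e9).fit(logit_p, y)  # large C ~ no regularization
print(f"AUROC = {roc_auc_score(y, p):.2f}, "
      f"calibration slope = {cal.coef_[0][0]:.2f}")
```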
Coagulation phenotypes in sepsis and effects of recombinant human thrombomodulin: an analysis of three multicentre observational studies
Background A recent randomised trial showed that recombinant thrombomodulin did not benefit patients who had sepsis with coagulopathy and organ dysfunction. Several recent studies suggested the presence of clinical phenotypes in patients with sepsis and heterogeneous treatment effects across different sepsis phenotypes. We examined the latent phenotypes of sepsis with coagulopathy and the associations between thrombomodulin treatment and 28-day and in-hospital mortality for each phenotype. Methods This was a secondary analysis of multicentre registries containing data on patients (aged ≥ 16 years) who were admitted to intensive care units for severe sepsis or septic shock in Japan. The three multicentre registries were divided into derivation (two registries) and validation (one registry) cohorts. Phenotypes were derived using k-means clustering with coagulation markers: platelet counts, prothrombin time/international normalised ratios, fibrinogen, fibrin/fibrinogen degradation products (FDP), D-dimer, and antithrombin activities. Associations between thrombomodulin treatment and survival outcomes (28-day and in-hospital mortality) were assessed in the derived clusters using a generalised estimating equation. Results Four sepsis phenotypes were derived from 3694 patients in the derivation cohort. Cluster dA (n = 323) had severe coagulopathy with high FDP and D-dimer levels, severe organ dysfunction, and high mortality. Cluster dB had severe disease with moderate coagulopathy. Clusters dC and dD had moderate and mild disease with and without coagulopathy, respectively. Thrombomodulin was associated with lower 28-day (adjusted risk difference [RD]: −17.8% [95% CI −28.7 to −6.9%]) and in-hospital (adjusted RD: −17.7% [95% CI −27.6 to −7.8%]) mortality only in cluster dA. Sepsis phenotypes were similar in the validation cohort, and thrombomodulin treatment was again associated with lower 28-day (RD: −24.9% [95% CI −49.1 to −0.7%]) and in-hospital (RD: −30.9% [95% CI −55.3 to −6.6%]) mortality. Conclusions We identified four coagulation marker-based sepsis phenotypes. The treatment effects of thrombomodulin varied across the sepsis phenotypes. This finding will facilitate future trials of thrombomodulin, in which a sepsis phenotype with high FDP and D-dimer levels can be targeted.
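The phenotype-derivation step described here is a standard k-means workflow: standardize the coagulation markers, then cluster. A minimal sketch assuming scikit-learn, with a placeholder marker matrix rather than the registry data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Placeholder matrix: one row per patient, columns for the six markers
# (platelets, PT-INR, fibrinogen, FDP, D-dimer, antithrombin activity).
markers = np.random.rand(3694, 6)

X = StandardScaler().fit_transform(markers)  # put markers on a common scale
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
phenotype = km.labels_                       # cluster assignment per patient
print(np.bincount(phenotype))                # cluster sizes
```

Standardizing first matters because k-means is distance-based: without it, markers measured on large scales (e.g., FDP in mg/L) would dominate the clustering.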
Machine Learning–Based Prediction of Clinical Outcomes for Children During Emergency Department Triage
While machine learning approaches may enhance prediction ability, little is known about their utility in emergency department (ED) triage. To examine the performance of machine learning approaches for predicting clinical outcomes and disposition in children in the ED and to compare their performance with conventional triage approaches. Prognostic study of ED data from the National Hospital Ambulatory Medical Care Survey from January 1, 2007, through December 31, 2015. A nationally representative sample of 52 037 children aged 18 years or younger who presented to the ED was included. Data analysis was performed in August 2018. The outcomes were critical care (admission to an intensive care unit and/or in-hospital death) and hospitalization (direct hospital admission or transfer). In the training set (a 70% random sample), using routinely available triage data as predictors (eg, demographic characteristics and vital signs), we derived 4 machine learning-based models: lasso regression, random forest, gradient-boosted decision tree, and deep neural network. In the test set (the remaining 30% of the sample), we measured the models' prediction performance by computing C statistics, prospective prediction results, and decision curves. These machine learning models were built for each outcome and compared with the reference model using the conventional triage classification information. Of 52 037 eligible ED visits by children (median [interquartile range] age, 6 [2-14] years; 24 929 [48.0%] female), 163 (0.3%) had the critical care outcome and 2352 (4.5%) had the hospitalization outcome. For the critical care prediction, all machine learning approaches had higher discriminative ability compared with the reference model, although the difference was not statistically significant (eg, C statistic, 0.85 [95% CI, 0.78-0.92] for the deep neural network vs 0.78 [95% CI, 0.71-0.85] for the reference model; P = .16), and a lower number of undertriaged critically ill children in the conventional triage levels 3 to 5 (urgent to nonurgent). For the hospitalization prediction, all machine learning approaches had significantly higher discriminative ability (eg, C statistic, 0.80 [95% CI, 0.78-0.81] for the deep neural network vs 0.73 [95% CI, 0.71-0.75] for the reference model; P < .001) and fewer overtriaged children who did not require inpatient management in the conventional triage levels 1 to 3 (immediate to urgent). The decision curve analysis demonstrated a greater net benefit of machine learning models over ranges of clinical thresholds. Machine learning-based triage had better discrimination ability for predicting clinical outcomes and disposition, with a reduction in undertriaging critically ill children and overtriaging children who are less ill.
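The undertriage/overtriage bookkeeping this abstract reports reduces to counting outcome-positive visits below a risk cutoff and outcome-negative visits above it. A short sketch with invented arrays and an illustrative threshold, not the study's data or cutoff:

```python
import numpy as np

# Placeholder outcome labels and model-predicted risks.
y = np.random.randint(0, 2, 5000)   # critical care outcome (0/1)
p = np.random.rand(5000)            # predicted probability of the outcome

cutoff = 0.10  # illustrative clinical threshold, not from the study
under_triage = np.sum((y == 1) & (p < cutoff))   # ill but flagged low risk
over_triage = np.sum((y == 0) & (p >= cutoff))   # well but flagged high risk
print(f"undertriaged: {under_triage}, overtriaged: {over_triage}")
```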