Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
139,260 result(s) for "prediction model"
Don't be misled: 3 misconceptions about external validation of clinical prediction models
by Dunias, Zoë S.; de Hond, Anne; Kant, Ilse
in Artificial intelligence; Clinical algorithm; Clinical prediction model
2024
Clinical prediction models provide risks of health outcomes that can inform patients and support medical decisions. However, most models never make it to actual implementation in practice. A commonly heard reason for this lack of implementation is that prediction models are often not externally validated. While we generally encourage external validation, we argue that an external validation is often neither sufficient nor required as an essential step before implementation. As such, any available external validation should not be perceived as a license for model implementation. We clarify this argument by discussing 3 common misconceptions about external validation. We argue that there is not one type of recommended validation design, not always a necessity for external validation, and sometimes a need for multiple external validations. The insights from this paper can help readers to consider, design, interpret, and appreciate external validation studies.
Journal Article
Machine learning for the prediction of acute kidney injury in patients with sepsis
2022
Background
Acute kidney injury (AKI) is the most common and serious complication of sepsis, accompanied by high mortality and disease burden. The early prediction of AKI is critical for timely intervention and ultimately improves prognosis. This study aims to establish and validate predictive models based on novel machine learning (ML) algorithms for AKI in critically ill patients with sepsis.
Methods
Data of patients with sepsis were extracted from the Medical Information Mart for Intensive Care III (MIMIC-III) database. Feature selection was performed using the Boruta algorithm. ML algorithms such as logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), decision tree, random forest, Extreme Gradient Boosting (XGBoost), and artificial neural network (ANN) were applied for model construction using tenfold cross-validation. The performance of these models was assessed in terms of discrimination, calibration, and clinical application. Moreover, the discrimination of the ML-based models was compared with that of the Sequential Organ Failure Assessment (SOFA) score and the customized Simplified Acute Physiology Score (SAPS) II model.
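A minimal sketch of the model-comparison workflow this Methods section describes, assuming scikit-learn: several classifiers are scored by AUC under tenfold cross-validation on synthetic data standing in for the MIMIC-III cohort (the Boruta feature-selection step and the XGBoost/ANN models are omitted for brevity).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the cohort: ~75% positives, 36 predictors.
X, y = make_classification(n_samples=3176, n_features=36,
                           weights=[0.25, 0.75], random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}

# Tenfold cross-validation, scored by discrimination (AUC).
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} (SD {auc.std():.3f})")
```

Stratified folds keep the roughly 75% event prevalence constant across splits, which matters for stable AUC estimates.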
Results
A total of 3176 critically ill patients with sepsis were included in the analysis, of whom 2397 (75.5%) developed AKI during hospitalization. A total of 36 variables were selected for model construction. The LR, KNN, SVM, decision tree, random forest, ANN, XGBoost, SOFA, and SAPS II models achieved areas under the receiver operating characteristic curve of 0.7365, 0.6637, 0.7353, 0.7492, 0.7787, 0.7547, 0.821, 0.6457, and 0.7015, respectively. The XGBoost model had the best predictive performance in terms of discrimination, calibration, and clinical application among all models.
Conclusion
The ML models can be reliable tools for predicting AKI in septic patients. The XGBoost model has the best predictive performance, which can be used to assist clinicians in identifying high-risk patients and implementing early interventions to reduce mortality.
Journal Article
COVID-19 Pandemic Prediction for Hungary; A Hybrid Machine Learning Approach
by Mosavi, Amir; Gloaguen, Richard; Ghamisi, Pedram
in Adaptive systems; Coronaviruses; COVID-19
2020
Several epidemiological models are being used around the world to project the number of infected individuals and the mortality rates of the COVID-19 outbreak. Advancing accurate prediction models is of utmost importance to take proper actions. Due to the lack of essential data and uncertainty, the epidemiological models have been challenged regarding the delivery of higher accuracy for long-term prediction. As an alternative to susceptible-infected-resistant (SIR)-based models, this study proposes a hybrid machine learning approach to predict the COVID-19 outbreak, exemplifying its potential using data from Hungary. The hybrid machine learning methods of adaptive network-based fuzzy inference system (ANFIS) and multi-layered perceptron-imperialist competitive algorithm (MLP-ICA) are proposed to predict the time series of infected individuals and the mortality rate. The models predict that by late May, the outbreak and the total mortality will drop substantially. The validation is performed for 9 days with promising results, which confirms the model's accuracy. The model is expected to maintain its accuracy as long as no significant interruption occurs. This paper provides an initial benchmarking to demonstrate the potential of machine learning for future research.
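ANFIS and MLP-ICA are not available in mainstream Python libraries, so the sketch below substitutes a plain gradient-trained MLP on a sliding window of past daily counts; the series is synthetic, not the Hungarian data, and the 9-day hold-out mirrors the validation described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
daily = rng.poisson(50.0, size=100).astype(float)  # toy daily case counts

# Supervised framing: predict tomorrow's count from the past week.
window = 7
X = np.array([daily[i:i + window] for i in range(len(daily) - window)])
y = daily[window:]

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                     random_state=0)
model.fit(X[:-9], y[:-9])      # train on all but the last 9 days
pred = model.predict(X[-9:])   # 9-day hold-out, as in the paper
print(np.round(pred, 1))
```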
Journal Article
A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models
by Christodoulou, Evangelia; Collins, Gary S.; Van Calster, Ben
in Algorithms; Area Under Curve; Artificial intelligence
2019
The objective of this study was to compare performance of logistic regression (LR) with machine learning (ML) for clinical prediction modeling in the literature.
We conducted a Medline literature search (1/2016 to 8/2017) and extracted comparisons between LR and ML models for binary outcomes.
We included 71 of 927 studies. The median sample size was 1,250 (range 72–3,994,872), with a median of 19 predictors considered (range 5–563) and a median of eight events per predictor (range 0.3–6,697). The most common ML methods were classification trees, random forests, artificial neural networks, and support vector machines. In 48 (68%) studies, we observed potential bias in the validation procedures. Sixty-four (90%) studies used the area under the receiver operating characteristic curve (AUC) to assess discrimination. Calibration was not addressed in 56 (79%) studies. We identified 282 comparisons between an LR and ML model (AUC range, 0.52–0.99). For 145 comparisons at low risk of bias, the difference in logit(AUC) between LR and ML was 0.00 (95% confidence interval, −0.18 to 0.18). For 137 comparisons at high risk of bias, logit(AUC) was 0.34 (0.20–0.47) higher for ML.
We found no evidence of superior performance of ML over LR. Improvements in methodology and reporting are needed for studies that compare modeling algorithms.
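A worked example of the headline metric, the difference in logit-transformed AUC between an LR and an ML model; the two AUC values below are hypothetical, not drawn from the review.

```python
import numpy as np

def logit(p):
    """Log-odds transform used to compare AUCs on an unbounded scale."""
    return np.log(p / (1 - p))

auc_lr, auc_ml = 0.80, 0.82   # hypothetical AUCs from one comparison
diff = logit(auc_ml) - logit(auc_lr)
print(f"logit(AUC) difference: {diff:.2f}")  # 0.00 would mean no ML benefit
```

A pooled difference of zero, as found in the low-risk-of-bias comparisons, corresponds to no discrimination benefit for ML over LR.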
Journal Article
Artificial intelligence-enabled prediction model of student academic performance in online engineering education
2022
Online education has faced difficulty in predicting the academic performance of students due to limited use of learning-process and summative data and the absence of precise, quantitative relations between variables and achievement. To address these two obstacles, this study develops an artificial intelligence-enabled prediction model for student academic performance based on students' learning-process and summative data. The prediction criteria are first predefined to characterize and convert the learning data in an online engineering course. An evolutionary computation technique is then used to explore the best prediction model for student academic performance. The model is validated using another online course that applies the same pedagogy and technology. Satisfactory agreement is obtained between the course outputs and the model's predictions. The main findings indicate that the dominant variables in academic performance are knowledge acquisition, class participation, and summative performance; prerequisite knowledge tends not to play a key role. Based on the results, pedagogical and analytical implications are provided. The proposed evolutionary computation-enabled prediction method is found to be a viable tool for evaluating the learning performance of students in online courses. Furthermore, the reported genetic programming model provides acceptable prediction performance compared to other powerful artificial intelligence methods.
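A hedged sketch of the evolutionary-computation step, assuming the third-party gplearn package for genetic programming; the four features and the toy achievement score are invented placeholders, not the course data or model used in the study.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # assumed third-party dependency

rng = np.random.default_rng(0)
# Hypothetical features: knowledge acquisition, class participation,
# summative performance, prerequisite knowledge (all invented here).
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]  # toy achievement score

# Genetic programming evolves a symbolic expression as the predictor.
gp = SymbolicRegressor(population_size=500, generations=10, random_state=0)
gp.fit(X, y)
print(gp._program)  # the evolved symbolic prediction model
```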
Journal Article
Calibration: the Achilles heel of predictive analytics
2019
Background
The assessment of calibration performance of risk prediction models based on regression or more flexible machine learning algorithms receives little attention.
Main text
Herein, we argue that this needs to change immediately because poorly calibrated algorithms can be misleading and potentially harmful for clinical decision-making. We summarize how to avoid poor calibration at algorithm development and how to assess calibration at algorithm validation, emphasizing balance between model complexity and the available sample size. At external validation, calibration curves require sufficiently large samples. Algorithm updating should be considered for appropriate support of clinical practice.
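A minimal sketch of assessing calibration at validation, assuming scikit-learn: predicted risks are grouped into bins and the mean prediction in each bin is compared with the observed event rate; the data are simulated with deliberate miscalibration.

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.05, 0.95, size=2000)   # a model's predicted risks
y = rng.binomial(1, p_hat ** 1.3)            # true risks are lower: miscalibrated

# Binned calibration: observed event rate vs mean predicted risk per bin.
frac_pos, mean_pred = calibration_curve(y, p_hat, n_bins=10)
for fp, mp in zip(frac_pos, mean_pred):
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
# Points far from the diagonal (predicted == observed) flag poor calibration.
```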
Conclusion
Efforts are required to avoid poor calibration when developing prediction models, to evaluate calibration when validating models, and to update models when indicated. The ultimate aim is to optimize the utility of predictive analytics for shared decision-making and patient counseling.
Journal Article
A new framework to enhance the interpretation of external validation studies of clinical prediction models
by Nieboer, Daan; Debray, Thomas P.A.; Steyerberg, Ewout W.
in Case mix; Data Interpretation, Statistical; Epidemiology
2015
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from “different but related” samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models.
We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting.
We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings.
The proposed framework enhances the interpretation of findings at external validation of prediction models.
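A sketch of one way to quantify the relatedness of development and validation samples, using a membership model as in this framework: a classifier is trained to predict which sample an individual came from, and a c-statistic near 0.5 suggests a reproducibility setting while values far from 0.5 suggest transportability. The data and effect sizes below are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dev = rng.normal(0.0, 1.0, size=(500, 3))  # development case mix
val = rng.normal(0.4, 1.2, size=(300, 3))  # shifted validation case mix

X = np.vstack([dev, val])
membership = np.array([0] * len(dev) + [1] * len(val))

# Membership model: how well can individuals' origin be told apart?
m = LogisticRegression().fit(X, membership)
c = roc_auc_score(membership, m.predict_proba(X)[:, 1])
print(f"membership c-statistic: {c:.2f}")  # ~0.5 reproducibility, higher transportability
```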
Journal Article
COVID-19 Prediction Models and Unexploited Data
2020
For COVID-19, predictive modeling in the literature broadly uses SEIR/SIR, agent-based, and curve-fitting models. Machine-learning models built on statistical tools and techniques are also widely used. Predictions aim to make states and citizens aware of possible threats and consequences. However, for the COVID-19 outbreak, state-of-the-art prediction models have failed to exploit crucial and unprecedented uncertainties/factors, such as (a) hospital settings/capacity; (b) test capacity/rate (on a daily basis); (c) demographics; (d) population density; (e) vulnerable people; and (f) income versus commodities (poverty). Depending on what factors are considered, predictions can be short-term or long-term. In this paper, we discuss how such continuous and unprecedented factors lead us to design complex models, rather than relying only on stochastic and/or discrete ones driven by randomly generated parameters. Further, it is time to employ data-driven, mathematically proven models that can dynamically and automatically tune parameters over time.
Journal Article
Evidence of questionable research practices in clinical prediction models
2023
Background
Clinical prediction models are widely used in health and medical research. The area under the receiver operating characteristic curve (AUC) is a frequently used estimate to describe the discriminatory ability of a clinical prediction model. The AUC is often interpreted relative to thresholds, with “good” or “excellent” models defined at 0.7, 0.8 or 0.9. These thresholds may create targets that result in “hacking”, where researchers are motivated to re-analyse their data until they achieve a “good” result.
Methods
We extracted AUC values from PubMed abstracts to look for evidence of hacking. We used histograms of the AUC values in bins of size 0.01 and compared the observed distribution to a smooth distribution from a spline.
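A sketch of the histogram-versus-spline check just described: AUC values are binned at width 0.01 and observed counts are compared with a smooth spline fit near the 0.7/0.8/0.9 thresholds. The AUC values below are simulated, not the extracted ones, and the smoothing level is an assumption.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
aucs = rng.beta(8, 3, size=10_000)  # toy AUC distribution, no hacking

bins = np.arange(0.5, 1.0, 0.01)    # bins of width 0.01, as in the paper
counts, edges = np.histogram(aucs, bins=bins)
centers = (edges[:-1] + edges[1:]) / 2

# Smoothing level set to the Poisson-noise scale (an assumption).
spline = UnivariateSpline(centers, counts, s=len(centers) * counts.mean())
for t in (0.70, 0.80, 0.90):
    i = np.argmin(np.abs(centers - t))
    print(f"bin ~{t}: observed {counts[i]}, smooth {spline(centers[i]):.0f}")
# Hacking would show as observed counts in excess of the smooth curve
# just above each threshold, with shortfalls just below it.
```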
Results
The distribution of 306,888 AUC values showed clear excesses above the thresholds of 0.7, 0.8 and 0.9 and shortfalls below the thresholds.
Conclusions
The AUCs for some models are over-inflated, which risks exposing patients to sub-optimal clinical decision-making. Greater modelling transparency is needed, including published protocols, and data and code sharing.
Journal Article
Integration of artificial intelligence performance prediction and learning analytics to improve student learning in online engineering course
by Zheng, Luyi; Jiao, Pengcheng; Ouyang, Fan
in Academic achievement; Algorithms; Artificial intelligence
2023
As a cutting-edge field of artificial intelligence in education (AIEd) that depends on advanced computing technologies, AI performance prediction models are widely used to identify at-risk students who tend to fail, establish student-centered learning pathways, and optimize instructional design and development. Most existing AI prediction models focus on developing and optimizing the accuracy of AI algorithms rather than applying AI models to provide students with timely, continuous feedback and improve their learning quality. To fill this gap, this research integrated an AI performance prediction model with learning analytics approaches with the goal of improving student learning effects in a collaborative learning context. Quasi-experimental research was conducted in an online engineering course to examine differences in students' collaborative learning with and without the support of the integrated approach. Results showed that the integrated approach increased student engagement, improved collaborative learning performance, and strengthened student satisfaction with learning. This research contributes an integrated approach combining AI models with learning analytics (LA) feedback and provides paradigmatic implications for the future development of AI-driven learning analytics.
Highlights
An integrated approach was used to combine AI with learning analytics (LA) feedback.
Quasi-experimental research was conducted to investigate student learning effects.
The integrated approach fostered student engagement, performance, and satisfaction.
Paradigmatic implications were proposed for developing AI-driven learning analytics.
A closed loop was established for both AI model development and educational application.
Journal Article