Catalogue Search | MBRL
Explore the vast range of titles available.
32,124 result(s) for "risk prediction"
The dynamics of risk: changing technologies and collective action in seismic events
Earthquakes are a huge global threat. In thirty-six countries, severe seismic risks threaten populations and their increasingly interdependent systems of transportation, communication, energy, and finance. In this important book, the author provides an unprecedented examination of how twelve communities in nine countries responded to destructive earthquakes between 1999 and 2015, and many of the book's lessons can also be applied to other large-scale risks. The book sets the global problem of seismic risk in the framework of complex adaptive systems to explore how the consequences of such events ripple across jurisdictions, communities, and organizations in complex societies, triggering unexpected alliances but also exposing social, economic, and legal gaps. It assesses how the networks of organizations involved in response and recovery adapted and acted collectively after the twelve earthquakes it examines. It describes how advances in information technology enabled some communities to anticipate seismic risk better and to manage response and recovery operations more effectively, decreasing losses. Finally, the book shows why investing substantively in global information infrastructure would create shared awareness of seismic risk and make post-disaster relief more effective and less expensive. The result is a landmark study of how to improve the way we prepare for and respond to earthquakes and other disasters in our ever-more-complex world.
Supervised machine learning enables non-invasive lesion characterization in primary prostate cancer with [68Ga]Ga-PSMA-11 PET/MRI
2021
Purpose
Risk classification of primary prostate cancer in clinical routine is mainly based on prostate-specific antigen (PSA) levels, Gleason scores from biopsy samples, and tumor-nodes-metastasis (TNM) staging. This study aimed to investigate the diagnostic performance of positron emission tomography/magnetic resonance imaging (PET/MRI) in vivo models for predicting low-vs-high lesion risk (LH) as well as biochemical recurrence (BCR) and overall patient risk (OPR) with machine learning.
Methods
Fifty-two patients who underwent multi-parametric dual-tracer [18F]FMC and [68Ga]Ga-PSMA-11 PET/MRI as well as radical prostatectomy between 2014 and 2015 were included as part of a single-center pilot to a randomized prospective trial (NCT02659527). Radiomics in combination with ensemble machine learning was applied, including the [68Ga]Ga-PSMA-11 PET, the apparent diffusion coefficient, and the transverse relaxation time-weighted MRI scans of each patient, to establish a low-vs-high risk lesion prediction model (MLH). Furthermore, MBCR and MOPR predictive model schemes were built by combining MLH, PSA, and clinical stage values of patients. Performance evaluation of the established models was performed with 1000-fold Monte Carlo (MC) cross-validation. Results were additionally compared to conventional [68Ga]Ga-PSMA-11 standardized uptake value (SUV) analyses.
Results
The area under the receiver operator characteristic curve (AUC) of the MLH model (0.86) was higher than the AUC of the [68Ga]Ga-PSMA-11 SUVmax analysis (0.80). MC cross-validation revealed 89% and 91% accuracies with 0.90 and 0.94 AUCs for the MBCR and MOPR models, respectively, while standard routine analysis based on PSA, biopsy Gleason score, and TNM staging resulted in 69% and 70% accuracies to predict BCR and OPR, respectively.
Conclusion
Our results demonstrate the potential to enhance risk classification in primary prostate cancer patients built on PET/MRI radiomics and machine learning without biopsy sampling.
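The 1000-fold Monte Carlo cross-validation mentioned above simply repeats random train/test splits and averages the held-out performance. A minimal sketch under stated assumptions: the `fit_threshold`/`predict_threshold` stand-in classifier and the toy one-dimensional data are hypothetical, not the study's radiomics pipeline.

```python
import random

def monte_carlo_cv(X, y, fit, predict, n_splits=1000, test_frac=0.3, seed=0):
    """Monte Carlo cross-validation: average test accuracy over repeated
    random train/test splits (unlike k-fold, the splits may overlap)."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        n_test = max(1, int(len(idx) * test_frac))
        test, train = idx[:n_test], idx[n_test:]
        model = fit([X[i] for i in train], [y[i] for i in train])
        correct = sum(1 for i in test if predict(model, X[i]) == y[i])
        accs.append(correct / n_test)
    return sum(accs) / len(accs)

# Hypothetical stand-in classifier: threshold at the midpoint of class means.
def fit_threshold(Xtr, ytr):
    mean0 = sum(x for x, c in zip(Xtr, ytr) if c == 0) / ytr.count(0)
    mean1 = sum(x for x, c in zip(Xtr, ytr) if c == 1) / ytr.count(1)
    return (mean0 + mean1) / 2

def predict_threshold(threshold, x):
    return 1 if x > threshold else 0

# Toy, well-separated one-dimensional "feature" values for two classes.
X = [0.8, 0.9, 1.0, 1.1, 1.2, 0.95, 1.05, 0.85, 1.15, 0.9,
     1.8, 1.9, 2.0, 2.1, 2.2, 1.95, 2.05, 1.85, 2.15, 1.9]
y = [0] * 10 + [1] * 10
acc = monte_carlo_cv(X, y, fit_threshold, predict_threshold)
```

Averaging over many random splits gives a lower-variance performance estimate than a single hold-out, which is why such schemes are common with small cohorts like the fifty-two patients here.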
Journal Article
Disaster Prediction Knowledge Graph Based on Multi-Source Spatio-Temporal Information
by
Chen, Jiahui
,
Li, Weichao
,
Zhang, Wenyue
in
Artificial intelligence
,
Decision making
,
disaster dynamic prediction
2022
Natural disasters occur frequently and cause great harm. Although remote sensing technology can effectively provide disaster data, disaster analysis still needs to consider relevant information from multiple aspects. It is hard to build an analysis model that can integrate remote sensing with large-scale relevant information, particularly at the semantic level. This paper proposes a disaster prediction knowledge graph that integrates remote sensing information and relevant geographic information with expert knowledge in the field of disaster analysis. This paper constructs the conceptual layer and instance layer of the knowledge graph by building a common semantic ontology of disasters and a unified spatio-temporal framework benchmark. Moreover, this paper represents the disaster prediction model in the form of knowledge of disaster prediction. This paper demonstrates experiments and case studies on forest fire and geological landslide risk. These investigations show that the proposed method is beneficial to multi-source spatio-temporal information integration and disaster prediction.
Journal Article
Temporal validation and updating of a prediction model for the diagnosis of gestational diabetes mellitus
by
Soldatos, Georgia
,
De Silva, Kushan
,
Paul, Eldho
in
Gestational Diabetes Mellitus
,
Australia - epidemiology
,
Body mass index
2023
The original Monash early-pregnancy gestational diabetes mellitus (GDM) risk prediction model has been externally validated internationally and implemented clinically. We temporally validate and update this model in a contemporary population under universal screening, with revised diagnostic criteria and ethnicity categories, thereby improving model performance and generalizability.
The updating dataset comprised routinely collected health data for singleton pregnancies delivered in Melbourne, Australia from 2016 to 2018. Model predictors included age, body mass index, ethnicity, diabetes family history, GDM history, and poor obstetric outcome history. Model updating methods were recalibration-in-the-large (Model A), intercept and slope re-estimation (Model B), and coefficient revision using logistic regression (Model C1, original ethnicity categories; Model C2, revised ethnicity categories). Analysis included 10-fold cross-validation, assessment of performance measures (c-statistic, calibration-in-the-large, calibration slope, and expected-observed ratio), and a closed-loop testing procedure to compare models' log-likelihood and Akaike information criterion scores.
In 26,474 singleton pregnancies (4,756, 18% with GDM), the original model demonstrated reasonable temporal validation (c-statistic = 0.698) but suboptimal calibration (expected-observed ratio = 0.485). Updated model C2 was preferred, with a high c-statistic (0.732) and significantly better performance in closed testing.
We demonstrated updating methods to sustain predictive performance in a contemporary population, highlighting the value and versatility of prediction models for guiding risk-stratified GDM care.
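Of the updating methods named in this abstract, recalibration-in-the-large (Model A) is the simplest: keep the original coefficients and re-estimate only the intercept so the mean predicted risk equals the observed event rate. A minimal sketch with hypothetical data; the bisection solver works because the intercept-only maximum-likelihood score equation is exactly this mean-matching condition.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def recalibrate_in_the_large(lp, y, lo=-10.0, hi=10.0, tol=1e-8):
    """Find the intercept shift `delta` such that the mean predicted risk
    sigmoid(lp + delta) matches the observed event rate, keeping the
    original linear predictor `lp` (all other coefficients) fixed."""
    target = sum(y) / len(y)

    def mean_risk(delta):
        return sum(sigmoid(l + delta) for l in lp) / len(lp)

    # mean_risk is monotonically increasing in delta, so bisect.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_risk(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical linear predictors from an "original" model whose mean
# predicted risk (about 0.40) underestimates the observed rate (0.50).
lp = [-2.0, -1.0, 0.0, 1.0] * 25
y = [1, 0] * 50
delta = recalibrate_in_the_large(lp, y)
```

Models B and C then go further: B also rescales the linear predictor's slope, and C re-estimates every coefficient, which is why C2 could improve discrimination as well as calibration.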
Journal Article
Clinical risk prediction with random forests for survival, longitudinal, and multivariate (RF-SLAM) data analysis
by
Zeger, Scott L.
,
Wu, Katherine C.
,
Wongvibulsin, Shannon
in
Bayes Theorem
,
Clinical decision making
,
Clinical Decision-Making - methods
2019
Background
Clinical research and medical practice can be advanced through the prediction of an individual's health state, trajectory, and responses to treatments. However, the majority of current clinical risk prediction models are based on regression approaches or machine learning algorithms that are static rather than dynamic. To benefit from the increasing availability of large, heterogeneous data sets such as electronic health records (EHRs), novel tools are needed that support improved clinical decision making through individual-level risk prediction methods able to handle multiple variables, their interactions, and time-varying values.
Methods
We introduce a novel dynamic approach to clinical risk prediction for survival, longitudinal, and multivariate (SLAM) outcomes, called random forest for SLAM data analysis (RF-SLAM). RF-SLAM is a continuous-time, random forest method for survival analysis that combines the strengths of existing statistical and machine learning methods to produce individualized Bayes estimates of piecewise-constant hazard rates. We also present a method-agnostic approach for time-varying evaluation of model performance.
Results
We derive and illustrate the method by predicting sudden cardiac arrest (SCA) in the Left Ventricular Structural (LV) Predictors of Sudden Cardiac Death (SCD) Registry. We demonstrate superior performance relative to standard random forest methods for survival data. We illustrate the importance of the number of preceding heart failure hospitalizations as a time-dependent predictor in SCA risk assessment.
Conclusions
RF-SLAM is a novel statistical and machine learning method that improves risk prediction by incorporating time-varying information and accommodating a large number of predictors, their interactions, and missing values. RF-SLAM is designed to easily extend to simultaneous predictions of multiple, possibly competing, events and/or repeated measurements of discrete or continuous variables over time.Trial registration: LV Structural Predictors of SCD Registry (clinicaltrials.gov, NCT01076660), retrospectively registered 25 February 2010
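RF-SLAM's piecewise-constant hazards rest on expanding each subject's follow-up into interval-level records (the paper's counting-process units), so that a forest fitted to such rows can estimate interval-specific event probabilities. Below is a minimal sketch of the data expansion only, with hypothetical field names; it is not the RF-SLAM implementation itself.

```python
def to_person_periods(subjects, interval=1.0):
    """Expand (follow_up_time, event_indicator, covariates) records into
    one row per subject per time interval at risk; the event flag is 1
    only in the interval where the subject's event actually occurs."""
    rows = []
    for time, event, cov in subjects:
        t = 0.0
        while t < time:
            end = min(t + interval, time)
            flag = 1 if (event and end >= time) else 0
            rows.append({"start": t, "stop": end, "event": flag, **cov})
            t = end
    return rows

# Two hypothetical subjects: one event at t=2.5, one censored at t=2.0.
rows = to_person_periods([(2.5, 1, {"age": 60}), (2.0, 0, {"age": 71})])
```

With rows like these, the piecewise-constant hazard of an interval can be estimated as events divided by person-time in that interval, and time-varying covariates (such as the number of preceding hospitalizations) can simply be updated row by row.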
Journal Article
Development and validation of a multivariable risk prediction model for head and neck cancer using the UK Biobank
by
Bonnet, Laura Jayne
,
McCarthy, Caroline Elizabeth
,
Marcus, Michael Williams
in
Alcohol
,
Biobanks
,
cancer risk prediction
2020
Head and neck cancer (HNC) is the eighth most common cancer in the UK, with over 12,000 new cases every year. The incidence of HNC is predicted to increase by 33% by 2035. Risk modelling produces personalised risk estimates for specific diseases, which can be used to inform education, screening programmes and recruitment to clinical trials. The present study describes the development and validation of the first risk prediction model for absolute risk of HNC, using a nested case-control study within the UK Biobank dataset. The UK Biobank recruited 502,647 individuals aged 40-69 years from around the UK. In total, 859 cases of HNC were identified, with 253 incident cases (individuals who developed HNC in the 7 years following recruitment to the UK Biobank study). Logistic regression was used to develop the model, then the model performance was validated using a cohort from the North West of England. Overall, increasing age, male sex, positive history of smoking and alcohol consumption and higher levels of material deprivation were significantly associated with a higher risk of HNC. Consuming at least five portions of fruit and vegetables per day, exercising at least once per week and higher BMI offered a protective effect against HNC. The C-statistic was 0.69 [95% confidence interval (CI), 0.66-0.71] and the model displayed good calibration. Upon external validation, the C-statistic was 0.64 (95% CI, 0.60-0.68) with reasonable calibration. The model developed and validated in the present study allows calculation of a personalised risk estimate for HNC. This could be used to guide clinicians when counselling individuals on risk behaviour, and there is potential for such models to inform recruitment to screening trials.
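The C-statistic reported above has a simple probabilistic reading: the chance that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case. A minimal pure-Python sketch (an O(n²) pairwise count over hypothetical toy values; the study's exact computation is not shown in the abstract):

```python
def c_statistic(risks, outcomes):
    """Concordance index over all case/non-case pairs; ties count 0.5."""
    cases = [r for r, o in zip(risks, outcomes) if o == 1]
    controls = [r for r, o in zip(risks, outcomes) if o == 0]
    concordant = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

# Toy predicted risks and observed outcomes (hypothetical values).
auc = c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```

On this reading, the study's development C-statistic of 0.69 means a randomly chosen HNC case outranks a randomly chosen control about 69% of the time.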
Journal Article
Machine-learning versus traditional approaches for atherosclerotic cardiovascular risk prognostication in primary prevention cohorts: a systematic review and meta-analysis
2023
Abstract
Background
Cardiovascular disease (CVD) risk prediction is important for guiding the intensity of therapy in CVD prevention. Whilst current risk prediction algorithms use traditional statistical approaches, machine learning (ML) presents an alternative method that may improve risk prediction accuracy. This systematic review and meta-analysis aimed to investigate whether ML algorithms demonstrate greater performance compared with traditional risk scores in CVD risk prognostication.
Methods and results
MEDLINE, EMBASE, CENTRAL, SCOPUS, and the Web of Science Core Collection were searched for studies comparing ML models to traditional risk scores for CVD risk prediction between the years 2000 and 2021. We included studies that assessed both ML and traditional risk scores in adult (≥18 years old) primary prevention populations. We assessed the risk of bias using the Prediction Model Risk of Bias Assessment Tool (PROBAST). Only studies that provided a measure of discrimination [i.e. C-statistics with 95% confidence intervals (CIs)] were included in the meta-analysis. A total of 16 studies were included in the review and meta-analysis (3,302,515 individuals). All study designs were retrospective cohort studies. Out of 16 studies, 3 externally validated their models, and 11 reported calibration metrics. A total of 11 studies demonstrated a high risk of bias. The summary C-statistics (95% CI) of the top-performing ML models and traditional risk scores were 0.773 (95% CI: 0.740–0.806) and 0.759 (95% CI: 0.726–0.792), respectively. The difference in C-statistic was 0.0139 (95% CI: 0.0139–0.140), P < 0.0001.
Conclusion
ML models outperformed traditional risk scores in the discrimination of CVD risk prognostication. Integration of ML algorithms into electronic healthcare systems in primary care could improve identification of patients at high risk of subsequent CVD events and hence increase opportunities for CVD prevention. However, it remains uncertain whether these models can be implemented in clinical settings; future implementation research is needed to examine how ML models may be utilized for primary prevention.
This review was registered with PROSPERO (CRD42020220811).
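Summary C-statistics like those quoted above are typically obtained by pooling study-level estimates; a common choice is DerSimonian-Laird random-effects pooling. The sketch below assumes each study reports an estimate with a standard error (derivable from its 95% CI); the review's exact meta-analytic model is not stated in this abstract.

```python
import math

def dl_pool(estimates, ses):
    """DerSimonian-Laird random-effects pooling: inverse-variance weights,
    a method-of-moments between-study variance tau^2, and a pooled
    estimate with a 95% confidence interval."""
    w = [1.0 / s ** 2 for s in ses]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # clipped at zero when Q < df
    wr = [1.0 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * e for wi, e in zip(wr, estimates)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical studies with C-statistics 0.70 and 0.80 (SE 0.05 each).
pooled, ci = dl_pool([0.70, 0.80], [0.05, 0.05])
```

The random-effects weights shrink toward equality as between-study heterogeneity (tau²) grows, which widens the pooled CI relative to a fixed-effect analysis.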
Journal Article
The risk of delirium or dementia‐related hospitalization among individuals living with dementia after long‐term care entry: A population‐based risk prediction model
by
Caughey, Gillian E.
,
Visvanathan, Renuka
,
Lang, Catherine
in
Aged
,
Aged, 80 and over
,
Australia - epidemiology
2025
INTRODUCTION
Identifying individuals with dementia in long‐term care facilities (LTCFs) at risk for delirium or dementia‐related hospitalizations can support individualized risk mitigation.
METHODS
Using the Registry of Senior Australians (ROSA) Historical National Cohort (N = 207,343 individuals with dementia in 2,655 LTCFs), we identified predictors of delirium- or dementia-related hospitalization within 365 days of LTCF entry and developed a risk prediction model using elastic net penalized regression and a Fine-Gray model. Model discrimination (area under the receiver operating characteristic curve, AUC), calibration, and clinical utility were assessed.
RESULTS
Within 365 days, 5.2% (N = 10,709) of individuals had a delirium- or dementia-related hospitalization. Forty predictors were identified; the strongest included a history of frequent emergency department presentations, a history of physical violence, male sex, and prior delirium. Model AUC was 0.664 (95% confidence interval: 0.650–0.676), with reasonable calibration.
DISCUSSION
Our risk prediction model for delirium or dementia‐related hospitalizations had moderate discrimination with reasonable calibration and clinical utility. Routinely collected data can inform risk profiling in LTCFs.
Highlights
Using a large population‐based cohort of people living with dementia, we developed a risk prediction model for delirium or dementia‐related hospitalization within 365 days of long‐term care facility (LTCF) entry.
Within 365 days after entry into LTCF, 5.2% of individuals living with dementia had a delirium or dementia‐related hospitalization.
The model demonstrated moderate discriminatory performance (area under the curve [AUC] = 0.664, 95% confidence interval [CI]: 0.650–0.676) and reasonable calibration in predicting delirium or dementia‐related hospitalization risk.
Our model showed net benefit within the 2%–22% risk threshold range assessed via decision curve analysis.
Risk stratification at LTCF entry may support clinicians and aged care providers in identifying high-risk individuals and implementing targeted interventions to reduce delirium- or dementia-related hospitalizations.
Journal Article
Effect of carotid image-based phenotypes on cardiovascular risk calculator: AECRS1.0
2019
Today, 10-year cardiovascular risk assessment relies largely on conventional cardiovascular risk factors (CCVRFs) and does not capture the effect of atherosclerotic wall changes. In this study, we present a novel risk calculator, the AtheroEdge Composite Risk Score (AECRS1.0), designed by fusing CCVRFs with ultrasound image-based phenotypes. Ten-year risk was computed using the Framingham Risk Score (FRS), United Kingdom Prospective Diabetes Study 56 (UKPDS56), UKPDS60, the Reynolds Risk Score (RRS), and a pooled composite risk score (PCRS). AECRS1.0 was computed by measuring the 10-year values of five carotid phenotypes, namely average, maximum, and minimum intima-media thickness (IMT), IMT variability, and total plaque area (TPA), fusing them with eight CCVRFs, and then compositing them. AECRS1.0 was then benchmarked against the five conventional cardiovascular risk calculators by computing receiver operating characteristic (ROC) curves and area under the curve (AUC) values with 95% CIs. Left and right common carotid arteries (407 ultrasound scans) of 204 IRB-approved Japanese patients with a mean age of 69 ± 11 years were analyzed. The calculators gave the following AUCs: FRS, 0.615; UKPDS56, 0.576; UKPDS60, 0.580; RRS, 0.590; PCRS, 0.613; and AECRS1.0, 0.990. When fused with CCVRFs, TPA reported the highest AUC of 0.81. Patients were risk-stratified into low, moderate, and high risk using standardized thresholds. AECRS1.0 demonstrated the best performance on a Japanese diabetes cohort when compared with the five conventional calculators.
Journal Article
Development and external validation of a pre-treatment nomogram for predicting drug-induced liver injury risk in tuberculosis patients
2025
Drug-induced liver injury (DILI) frequently complicates anti-tuberculosis (TB) treatment, particularly in regions with a high TB burden. Early pre-treatment identification of patients at elevated risk is essential for timely intervention and safer treatment outcomes. In this retrospective two-center cohort study, we collected baseline data (2022 to 2024) from 2,624 patients admitted to two tertiary hospitals before starting standard drug-susceptible anti-TB therapy (isoniazid, rifampicin, pyrazinamide, ethambutol). Patients were randomly divided into training (n = 1,512), internal validation (n = 648), and external validation (n = 564) cohorts. Multivariable logistic regression identified DILI predictors, and a pre-treatment risk prediction nomogram was built. Model performance was assessed by AUC, calibration plots, and decision curve analysis (DCA). Six baseline predictors emerged: age ≥ 60 years, BMI < 18.5 kg/m², alcohol use, extrapulmonary TB, albumin < 35 g/L, and hemoglobin < 110 g/L. The nomogram demonstrated robust discrimination (AUCs: 0.80 training, 0.75 internal validation, 0.77 external validation) and favorable calibration and net clinical benefit on DCA. We developed and externally validated a pre-treatment nomogram for DILI risk in TB patients. By enabling risk stratification before therapy begins, this tool supports personalized monitoring and may enhance treatment safety.
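The decision curve analysis (DCA) mentioned above compares treatment strategies by net benefit: true positives gained minus false positives, the latter weighted by the odds of the chosen risk threshold. A minimal sketch with hypothetical toy data; this is the standard net-benefit formula, not the paper's code.

```python
def net_benefit(risks, outcomes, threshold):
    """Net benefit of treating everyone whose predicted risk is at or
    above `threshold`: TP/N minus FP/N weighted by the threshold odds."""
    n = len(outcomes)
    tp = sum(1 for r, o in zip(risks, outcomes) if r >= threshold and o == 1)
    fp = sum(1 for r, o in zip(risks, outcomes) if r >= threshold and o == 0)
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

# A perfectly discriminating toy model evaluated at a 50% risk threshold.
nb = net_benefit([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0], 0.5)
```

Plotting net benefit across a range of thresholds, and comparing against the "treat all" and "treat none" strategies, yields the decision curve that the nomogram's clinical-benefit claim refers to.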
Journal Article