Catalogue Search | MBRL
Explore the vast range of titles available.
26,600 result(s) for "risk modeling"
Modeling Historical and Future Forest Fires in South Korea: The FLAM Optimization Approach
2023
Climate change-induced heat waves increase the global risk of forest fires, intensifying biomass burning and accelerating climate change in a vicious cycle. This presents a challenge to the response system in heavily forested South Korea, increasing the risk of more frequent and large-scale fire outbreaks. This study aims to optimize IIASA’s wildFire cLimate impacts and Adaptation Model (FLAM)—a process-based model integrating biophysical and human impacts—to South Korea for projecting the pattern and scale of future forest fires. The developments performed in this study include: (1) the optimization of probability algorithms in FLAM based on national GIS data downscaled to 1 km², with additional factors introduced for nation-specific modeling; (2) the improvement of soil moisture computation by adjusting the Fine Fuel Moisture Code (FFMC) to represent vegetation feedbacks, fitting soil moisture to daily remote sensing data; and (3) the projection of future forest fire frequency and burned area. Our results show that the optimization has considerably improved the modeling of seasonal patterns of forest fire frequency: Pearson’s correlation coefficient between monthly predictions and observations from national statistics over 2016–2022 improved from 0.171 in the non-optimized to 0.893 in the optimized FLAM. These findings imply that FLAM’s main algorithms for interpreting biophysical and human impacts on forest fires at a global scale are applicable to South Korea only after optimization of all modules, and that climate change is the main driver of the recent increases in forest fires. Projections of future forest fires were produced for four periods until 2100 based on the forest management plan, which included three management scenarios (current, ideal, and overprotection). Ideal management led to a reduction of 60–70% in both fire frequency and burned area compared to the overprotection scenario.
This study should be followed by research for developing adaptation strategies corresponding to the projected risks of future forest fires.
Journal Article
Physics‐Based Hazard Assessment of Compound Flooding From Tropical and Extratropical Cyclones in a Warming Climate
by Emanuel, Kerry; Sarhadi, Ali; Rousseau‐Rizzi, Raphaël
in Causes of; Climate change; Climate models
2025
Recent efforts to assess coastal compound surge and rainfall‐driven flooding hazard from tropical (TCs) and extratropical cyclones (ETCs) in a warming climate have intensified. However, challenges persist in gaining actionable insights into the changing magnitude and spatial variability of these hazards. We employ a physics‐based hydrodynamic framework to numerically simulate compound flooding from TCs and ETCs in both current and future climates, focusing on the western side of Buzzards Bay in Massachusetts. Our approach leverages hydrodynamic models driven by extensive sets of synthetic TCs downscaled from CMIP6 climate models. We also perform a far less extensive analysis of ETCs using a previously produced event set, dynamically downscaled using the WRF model driven by a single CMIP5 model. This methodology quantifies how climate change may reshape the compound flooding hazard landscape in the study area. Our findings reveal a significant increase in TC‐induced compound flooding hazard due to evolving climatology and sea level rise (SLR). Although compound flooding induced by ETCs increases mostly in coastal areas due to SLR, inland areas exhibit almost no change, and some even show a decline in rainfall‐driven flooding from high‐frequency ETC events toward the end of the century compared to the current climate. Our methodology is transferable to vulnerable coastal regions, serving as a tool for adaptive measures in populated areas. It equips decision‐makers and stakeholders with the means to mitigate the destructive impacts of compound flooding arising from both current and future TCs, and shows how the same methodology might be applied to ETCs.
Plain Language Summary
During storms in coastal areas, strong winds can cause surge‐driven flooding, and simultaneously, intense rainfall may lead to inland heavy rainfall‐driven flooding. Sometimes, these two flooding sources coincide, forming compound surge‐ and rainfall‐driven flooding, which is more destructive than either hazard alone. To assess the hazard of such destructive compound flooding, we use physics‐based models to quantify the frequency and magnitude of these hazards. Additionally, we evaluate how climate change and factors such as SLR may affect the frequency and magnitude of such events in coastal areas. Through these detailed and granular hazard assessments, regions facing increased flooding threats can develop strategies to more effectively mitigate damages posed by compound flooding during extreme storms.
Key Points
A newly developed model simulates the interplay of surge and rainfall flooding from tropical and extratropical cyclones in a warming climate
Tropical cyclone‐induced compound flooding hazard increases with evolving storm climatology and rising sea levels in a warming climate
Extratropical cyclone‐induced flooding increases in coastal areas with sea‐level rise, while staying minimal inland in a warming climate
Journal Article
Rethinking economic capital management through the integrated derivative-based treatment of interest rate and credit risk
2018
This research revisits economic capital management for banking books of financial institutions exposed to emerging market sovereign debt. We develop a derivative-based integrated approach to quantify economic capital requirements for jointly considered interest rate and credit risk. Our framework represents a major contribution to the empirical aspects of capital management. The proposed modeling allows standard historical value-at-risk techniques, developed for stand-alone risk factors, to be applied to evaluate the aggregate impact of several risks. We use time series of credit default swap spreads and interest rate swap rates as proxy measures for credit risk and interest rate risk, respectively. The elasticity between interest rate risk and credit risk, considered a function of business cycle phases, instrument maturity, creditworthiness, and other macroeconomic parameters, is gauged by means of numerical modeling. Our contribution to new economic thinking on interest rate and credit risk management consists in their integrated treatment, as the dynamics of interest rates and credit spreads are found to act as automatic stabilizers of each other. This research sheds light on how financial institutions may build hedging strategies against downside risks. It is of special importance for emerging markets heavily dependent on foreign capital, as it potentially allows emerging market banks to improve risk management practices in terms of capital adequacy and Basel III rules. From a regulatory perspective, taking inter-risk diversification effects into account enhances financial stability by jointly optimizing Pillar 1 and Pillar 2 economic capital.
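The abstract’s key move is applying standard historical value-at-risk to an aggregated risk factor rather than to each risk separately. A minimal sketch of historical VaR on a combined P&L series follows; the position and daily P&L values are invented for illustration, not the paper’s data.

```python
# Sketch of historical value-at-risk (VaR) on an aggregated P&L series,
# in the spirit of the integrated rate + credit treatment described above.

def historical_var(pnl, alpha=0.99):
    """Historical VaR: the loss not exceeded with probability `alpha`.
    `pnl` holds daily profit/loss values (losses are negative)."""
    losses = sorted(-x for x in pnl)       # losses as positive numbers, ascending
    idx = min(int(alpha * len(losses)), len(losses) - 1)
    return losses[idx]

# Hypothetical daily P&L of a combined interest-rate + credit position.
combined_pnl = [-2.1, 0.4, 1.3, -0.8, -3.5, 0.9, 2.2, -1.1, 0.3, -0.2]
var_90 = historical_var(combined_pnl, alpha=0.9)
```

Because the two spreads partially offset each other (the “automatic stabilizer” effect the paper reports), VaR computed on the combined series is typically lower than the sum of stand-alone VaRs.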
Journal Article
Flood Risk Modelling Based on Machine Learning Using Google Earth Engine in Hulu Sungai Utara Regency
2025
Flood risk modeling is essential for effective disaster mitigation, particularly in flood-prone areas such as Hulu Sungai Utara Regency, Indonesia. This study leverages Google Earth Engine (GEE) to integrate multi-source satellite data and machine learning techniques for flood susceptibility mapping. Key geospatial variables, including the Normalized Difference Vegetation Index (NDVI), elevation, distance from rivers, and the Topographic Position Index (TPI), were analyzed using a weighted overlay method within GEE. A supervised classification approach was employed to enhance accuracy, and validation was performed using historical flood event data. The results indicate that 51.66% (47,875.86 ha) of the study area falls into the low-risk category, 42.90% (39,763.08 ha) is at moderate risk, and 5.44% (5,040.36 ha) is highly susceptible to flooding. This study highlights the advantages of GEE in large-scale flood risk assessments by enabling real-time processing, high computational efficiency, and seamless integration of geospatial datasets. The findings provide critical insights for local governments and disaster management agencies to develop proactive flood mitigation strategies.
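The weighted-overlay step described above combines normalized factor rasters cell by cell. Here is a tiny hedged sketch of that operation in plain Python; the layers, grid values, and weights are all invented, since the abstract does not report the study’s actual weights.

```python
# Sketch of a weighted-overlay flood-susceptibility score over raster layers,
# analogous in spirit to the GEE workflow above. All values are hypothetical.

def weighted_overlay(layers, weights):
    """Combine normalized factor grids (values in [0, 1]) cell by cell."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    first = next(iter(layers.values()))
    rows, cols = len(first), len(first[0])
    return [[sum(weights[k] * layers[k][r][c] for k in layers)
             for c in range(cols)] for r in range(rows)]

# Tiny 2x2 "rasters": 1.0 = most flood-prone for that factor.
layers = {
    "ndvi_inv":   [[0.2, 0.8], [0.5, 0.9]],  # sparse vegetation -> higher risk
    "elev_inv":   [[0.1, 0.9], [0.4, 1.0]],  # low elevation -> higher risk
    "river_prox": [[0.3, 0.7], [0.6, 0.8]],  # near a river -> higher risk
    "tpi_inv":    [[0.2, 0.6], [0.5, 0.7]],  # valley position -> higher risk
}
weights = {"ndvi_inv": 0.2, "elev_inv": 0.4, "river_prox": 0.3, "tpi_inv": 0.1}
score = weighted_overlay(layers, weights)    # 0 = low, 1 = high susceptibility
```

Thresholding the resulting score grid (e.g., into low/moderate/high bands) yields a susceptibility map like the one the study validates against historical flood events.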
Journal Article
Heterogeneity in Treatment Effects of Reduced Versus Standard Dose of Cabazitaxel in Metastatic Castration‐Resistant Prostate Cancer
2026
Background
In PROSELICA, a randomized controlled trial (RCT) comparing cabazitaxel 20 mg/m² (C20) versus 25 mg/m² (C25) in metastatic castration‐resistant prostate cancer (mCRPC), one‐variable‐at‐a‐time subgroup analysis suggested possible heterogeneity in treatment effect (HTE) of C25 versus C20 among study participants. Novel predictive HTE analysis approaches may provide an in‐depth understanding of such results.
Methods
We analyzed patient‐level data from 1200 patients with mCRPC who were randomized in the PROSELICA trial. Outcomes included overall survival (OS) and progression‐free survival (PFS). Using baseline characteristics, patients were stratified into quartiles based on either quantitative baseline risk of a poor outcome (risk modeling) or predicted individualized treatment effect (ITE) from a causal survival forest algorithm (effect modeling). Treatment effects were measured as differences in restricted mean survival time (RMST).
Results
For risk modeling, the OS effect of C25 increased across risk quartiles: −0.07 months (95% CI, −1.60 to 1.46) in the lowest risk quartile and 1.67 months (95% CI, 0.25 to 3.10) in the highest. For effect modeling, the OS effect ranged from −0.17 months (95% CI, −3.01 to 2.68) in the lowest ITE quartile to 0.57 months (95% CI, −2.27 to 3.41) in the highest. Both approaches demonstrated greater C25 benefit in patients with extensive previous treatment and baseline disease burden. PFS effects remained consistent across all quartiles.
Conclusions
The OS effect of C25 versus C20 may vary with baseline characteristics in post‐docetaxel mCRPC. Patients with an extensive treatment history and disease burden may benefit more from C25.
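The effect measure used above, restricted mean survival time (RMST), is simply the area under a survival curve up to a horizon tau; the treatment effect is the difference between the two arms’ RMSTs. A minimal sketch follows; the step survival curves and the 24-month horizon are invented, not PROSELICA data.

```python
# Sketch of restricted mean survival time (RMST) from a step survival curve.
# All survival values and times below are hypothetical.

def rmst(times, surv, tau):
    """Area under a right-continuous step survival curve S(t) up to `tau`.
    `times` are event times (ascending); `surv[i]` is S(t) just after
    times[i]; S(t) = 1 before the first event."""
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(times, surv):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)
    return area

# Hypothetical arms (months): a C25-like and a C20-like survival curve.
rmst_treat = rmst([6, 12, 18], [0.9, 0.7, 0.5], tau=24)
rmst_ctrl  = rmst([6, 12, 18], [0.8, 0.6, 0.4], tau=24)
effect = rmst_treat - rmst_ctrl   # difference in RMST, in months
```

Reporting the difference in months of survival gained up to tau is what makes the quartile-level estimates in the Results directly comparable.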
Journal Article
Modeling the integral risk assessment for air pollution in the areas of highways by probabilistic methods
by Valiev, V.S.; Novikova, S.V.; Tunakova, Yu.A.
in air pollution modeling; Bayesian approach; probabilistic modeling
2021
A methodology for calculating the integral risk of atmospheric pollution using Bayes’s theorem is proposed. It accounts for the action of mobile and stationary emission sources in the influence zones of highways, and for the response to that impact in the form of accumulation of emission components in depositing media and in biological media of the population. At the first stage, experimental data arrays were clustered and homogeneous road sections (clusters) were identified. The integral risk was calculated for the selected clusters, along with the risks of contamination of the investigated media. A multiple regression model was built that assesses the level of integral risk with a high degree of reliability when compared with experimental data. The significance of the aerogenic factor in forming the integral risk level is shown. A reduced model for assessing the integral risk from the level of atmospheric air pollution risk is proposed, and risk levels are graded by degree of acceptability. The contribution of the road transport component to the integral risk can be determined from the obtained values of the final risk.
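Bayes’s theorem is the probabilistic backbone of the integral risk calculation described above. As a generic hedged sketch (the events and probabilities below are hypothetical, not the study’s data): let H be “pollutant accumulation exceeds the acceptable level” and E be “elevated concentration measured in a depositing medium near the highway”.

```python
# Generic Bayes' theorem sketch: posterior risk of exceedance given an
# elevated measurement. All probabilities are invented for illustration.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) from P(H), P(E | H), and P(E | not H)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothetical: 10% prior exceedance rate for this road-section cluster;
# elevated readings occur in 85% of exceedance cases, 20% otherwise.
risk = posterior(prior_h=0.10, p_e_given_h=0.85, p_e_given_not_h=0.20)
```

Updating cluster-specific priors with measured responses in this way is what lets the integral risk reflect both emission sources and the observed accumulation in media.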
Journal Article
Toward interpretable credit scoring: integrating explainable artificial intelligence with deep learning for credit card default prediction
by Badawy, Mahmoud; Elhosseini, Mostafa; Aljadani, Abdussalam
in Accuracy; Artificial Intelligence; Computational Biology/Bioinformatics
2024
In recent years, the increasing prevalence of credit card usage has raised concerns about accurately predicting and managing credit card defaults. While machine learning and deep learning methods have shown promising results in default prediction, the black-box nature of these models often limits their interpretability and practical adoption. This study presents a new method for predicting credit card default using a combination of deep learning and explainable artificial intelligence (XAI) techniques, aiming to improve the interpretability of the decision-making process involved in default prediction. The proposed approach is evaluated on a real-world dataset and compared to existing state-of-the-art models. Results show that it achieves competitive prediction accuracy while providing meaningful insights into the factors driving credit card default risk. The present investigation adds to the growing body of literature on explainable AI in finance and provides a pragmatic approach to assessing credit risk that balances precision and comprehensibility. In conclusion, the model demonstrates strong potential as a credit risk assessment tool, with an accuracy of 0.8350, sensitivity of 0.8823, and specificity of 0.9879. Among the most important features identified by the model are payment delays and outstanding bill amounts. This study is a step toward more interpretable and transparent credit scoring models.
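The three figures quoted above (accuracy, sensitivity, specificity) all derive from a binary confusion matrix. A short sketch of those definitions follows; the counts are invented and do not reproduce the paper’s dataset.

```python
# Sketch of the three metrics reported above, from a binary confusion matrix.
# The counts are hypothetical.

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), specificity (true-negative rate)."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical test-set counts for a default-prediction model.
acc, sens, spec = classification_metrics(tp=150, fp=20, tn=780, fn=50)
```

In default prediction the classes are imbalanced (most cardholders do not default), which is why sensitivity and specificity are reported alongside accuracy rather than accuracy alone.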
Journal Article
Temporal validation and updating of a prediction model for the diagnosis of gestational diabetes mellitus
by Soldatos, Georgia; De Silva, Kushan; Paul, Eldho
in Gestational Diabetes Mellitus; Australia - epidemiology; Body mass index
2023
The original Monash gestational diabetes mellitus (GDM) early-pregnancy risk prediction model has been externally validated internationally and is clinically implemented. We temporally validate and update this model in a contemporary population with universal screening and revised diagnostic criteria and ethnicity categories, thereby improving model performance and generalizability.
The updating dataset comprised routinely collected health data for singleton pregnancies delivered in Melbourne, Australia from 2016 to 2018. Model predictors included age, body mass index, ethnicity, diabetes family history, GDM history, and history of poor obstetric outcomes. Model updating methods were recalibration-in-the-large (Model A), intercept and slope re-estimation (Model B), and coefficient revision using logistic regression (Model C1, original ethnicity categories; Model C2, revised ethnicity categories). Analysis included 10-fold cross-validation, assessment of performance measures (c-statistic, calibration-in-the-large, calibration slope, and expected-observed ratio), and a closed-loop testing procedure comparing the models’ log-likelihood and Akaike information criterion scores.
In 26,474 singleton pregnancies (4,756, 18% with GDM), the original model demonstrated reasonable temporal validation (c-statistic = 0.698) but suboptimal calibration (expected-observed ratio = 0.485). Updated model C2 was preferred, with a high c-statistic (0.732) and significantly better performance in closed testing.
We demonstrated updating methods to sustain predictive performance in a contemporary population, highlighting the value and versatility of prediction models for guiding risk-stratified GDM care.
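Two of the updating methods named above have a simple form for a logistic model: recalibration-in-the-large re-estimates only the intercept applied to the original linear predictor, while intercept-and-slope re-estimation also rescales that predictor. The sketch below illustrates the mechanics with invented coefficients; it is not the Monash model.

```python
# Sketch of Model A (recalibration-in-the-large) and Model B (intercept and
# slope re-estimation) for a logistic prediction model. Coefficients are
# hypothetical.
from math import exp

def predict(lp):
    """Logistic link: probability from a linear predictor."""
    return 1.0 / (1.0 + exp(-lp))

def recalibrate_in_the_large(lp, new_intercept):
    """Model A: shift every original linear predictor by a re-estimated intercept."""
    return predict(lp + new_intercept)

def recalibrate_slope(lp, new_intercept, new_slope):
    """Model B: re-estimated intercept and slope on the original predictor."""
    return predict(new_intercept + new_slope * lp)

lp = -1.2                                   # hypothetical original linear predictor
p_a = recalibrate_in_the_large(lp, new_intercept=-0.7)
p_b = recalibrate_slope(lp, new_intercept=-0.5, new_slope=0.8)
```

Model A fixes overall miscalibration (e.g., the expected-observed ratio of 0.485 seen at validation) without changing discrimination; Models B and C progressively allow more of the model to adapt to the new population.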
Journal Article
Scenario-based stress tests: are they painful enough?
2017
Forecasts, models and stress tests are important tools for policymakers and business planners. Recent developments in these related spheres have seen greater emphasis placed on stress tests from a regulatory perspective, while at the same time forecasting performance has been criticized. Given the interlinkages between the two, similar limitations apply to stress tests as to forecasts and should be borne in mind by practitioners. In addition, the recent evolution of stress tests, and in particular the increasing popularity of scenario-based approaches, raises concerns about how well the shortcomings of the associated models are understood. This includes the degree of pain – the severity of estimated stress cases relative to base cases – that simple scenario modelling approaches engender. This paper illustrates this phenomenon using simulation techniques and demonstrates that more extreme stress scenarios need to be employed in order to match the inference from simple value-at-risk approaches. Alternatively, complex modelling approaches can address this concern, but are not widely used to date. Some policymakers seem to be aware of these issues, judging by the severity of some recent stress scenarios.
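The paper’s central point can be illustrated with a toy simulation: a fixed “base minus shock” scenario loss can be milder than the tail loss implied by value-at-risk on the same distribution. The distribution, sample size, and shock size below are assumptions for demonstration only, not the paper’s setup.

```python
# Toy illustration: a 2-sigma scenario shock vs. the 99% historical VaR of
# a simulated P&L distribution. All parameters are assumptions.
import random

random.seed(42)
pnl = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # simulated daily P&L

losses = sorted(-x for x in pnl)                 # losses, ascending
var_99 = losses[int(0.99 * len(losses))]         # 99% historical VaR (~2.33 sigma)

stress_loss = 2.0                                # a "2-sigma" scenario shock
scenario_understates_tail = stress_loss < var_99
```

Here the scenario’s degree of pain falls short of the 99% tail, which is the paper’s argument for either more extreme scenarios or richer modelling approaches.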
Journal Article
Global Asymmetries in the Influence of ENSO on Flood Risk Based on 1,600 Years of Hybrid Simulations
by Boudreault, M.; Del Rio Amador, L.; Carozza, D. A.
in Anomalies; Asymmetry; catastrophe modeling
2023
El Niño‐Southern Oscillation (ENSO) is often considered a source of long‐term predictability for extreme events via its teleconnection patterns. However, given that its characteristic cycle varies from two to seven years, it is difficult to obtain statistically significant conclusions from observational periods spanning only a few decades. To overcome this, we apply the global flood risk modeling framework developed by Carozza and Boudreault to an equivalent of 1,600 years of bias‐corrected General Circulation Model outputs. The results show substantial anomalies in flood occurrences and impacts for El Niño and La Niña when compared to the all‐year baseline. We were able to obtain larger global coverage of statistically significant results than previous studies limited to observational data. Asymmetries in the anomalies for the two ENSO phases show a larger global influence of El Niño than La Niña on flood hazard and risk.
Plain Language Summary
El Niño‐Southern Oscillation (ENSO) is one of the most important global climate phenomena. It is well known to affect precipitation and temperature in many areas of the world. It is therefore very important for researchers (environmental and climate sciences, economics, etc.), risk managers, and decision‐ and policy‐makers to understand the influence of ENSO on flooding. Previous studies analyzed the link between ENSO and flooding, but because they were based on about 40 years of data, much uncertainty remained about the significance of ENSO’s influence on flooding. In this study, we use outputs from a climate model large ensemble providing 1,600 years of simulated data to determine the impacts of ENSO on flooding. Because it is very difficult to run traditional flood models on 1,600 years of data, we instead leverage a machine learning approach to accelerate computations in a context where the focus is on socioeconomic impacts. We find that ENSO is a significant driver of flooding in more regions than previously found. Finally, there appears to be a greater global influence of El Niño than La Niña on flooding.
Key Points
We simulated an equivalent of 1,600 years of realistic flood events globally using a statistical model forced with climate model outputs
We found a statistically significant (α = 0.05) influence of El Niño‐Southern Oscillation (ENSO) over 55% of land area for flood occurrence and over 69% for flood impact
Asymmetries in anomalies for both ENSO phases show a larger global influence of El Niño than La Niña on flood hazard and risk
Journal Article