Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
25,141 result(s) for "Model forms"
A new ensemble modeling approach for reliability-based design optimization of flexure-based bridge-type amplification mechanisms
by Ouyang, Linhan; Chen, Yuejian; Wan, Liangqi
in Amplification; Computer-Aided Engineering (CAD, CAE) and Design
2020
Reliability-based design optimization (RBDO) of flexure-based bridge-type amplification mechanisms (FBTAMs) relies on an accurate surrogate model, and ensemble modeling approaches are now widely used for this purpose. However, existing ensemble modeling approaches for RBDO have not considered model form selection during the modeling process, which leads to inaccurate quality estimation. To address this drawback, a new ensemble modeling approach is proposed. A stepwise model selection strategy is adopted in which redundant model(s) are eliminated before the ensemble model is constructed. The proposed ensemble modeling approach is applied to a typical FBTAM to illustrate its effectiveness. Results reveal that the proposed ensemble modeling approach is more accurate than existing ensemble modeling approaches and thus reaches a better RBDO solution.
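The stepwise elimination strategy described in the abstract can be sketched as follows. This is a minimal illustration using hypothetical polynomial surrogates on a synthetic response, not the paper's actual FBTAM models or its elimination criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic response standing in for an FBTAM performance function
# (hypothetical -- the paper's actual mechanism model is not given here).
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, x_train.size)
x_test = np.linspace(0.0, 1.0, 101)
y_test = np.sin(2 * np.pi * x_test)

# Candidate surrogate "model forms": polynomials of increasing degree.
candidates = {d: np.polyfit(x_train, y_train, d) for d in (1, 3, 5, 7)}
errors = {d: np.mean((np.polyval(c, x_test) - y_test) ** 2)
          for d, c in candidates.items()}

# Stepwise selection: eliminate redundant forms whose error is far
# worse than the best candidate before building the ensemble.
best = min(errors.values())
kept = {d: c for d, c in candidates.items() if errors[d] <= 10 * best}

# Inverse-error weighting of the surviving surrogates.
weights = {d: 1.0 / errors[d] for d in kept}
total = sum(weights.values())
ensemble = sum((weights[d] / total) * np.polyval(kept[d], x_test)
               for d in kept)
ens_mse = np.mean((ensemble - y_test) ** 2)
print(f"kept degrees: {sorted(kept)}, ensemble MSE: {ens_mse:.4f}")
```

Because mean-squared error is convex in the predictions, the weighted ensemble can never be worse than the worst surviving surrogate, which is the motivation for pruning weak model forms first.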
Journal Article
Heteroscedasticity and Precise Estimation Model Approach for Complex Financial Time-Series Data: An Example of Taiwan Stock Index Futures before and during COVID-19
by Hsiao, Chih-Wen; Lu, Hsi-Peng; Chan, Ya-Chuan
in Algorithms; Artificial intelligence; complex financial data
2021
In this paper, we provide a mathematical and statistical methodology that uses heteroscedastic estimation to build a more precise mathematical model for complex financial data. Considering a general regression model with explanatory variables (the expected-value model form) and an error term (including heteroscedasticity), the optimal expected-value and heteroscedastic model forms are investigated among linear, nonlinear, curvilinear, and composite function forms, using the minimum mean-squared-error criterion to demonstrate the precision of the methodology. After the two optimal models are combined, the fitted values of the financial data are more precise than those of the linear regression model in the literature. The fitted model forms are shown for the example of Taiwan stock price index futures in three cases: (1) before COVID-19, (2) during COVID-19, and (3) the entire observation period. The fitted mathematical models clearly show how COVID-19 affected the return rates of Taiwan stock price index futures, and the fitted heteroscedastic models likewise show how COVID-19 influenced the fluctuations of those return rates. This methodology contributes to the possibility of building algorithms that compute and predict financial data based on mathematical model form outcomes, and it assists model comparisons after new data are added to a database.
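The two-stage idea, selecting an expected-value model form and then a heteroscedastic model form fitted to the squared residuals, can be sketched as below. The data, candidate forms, and validation split are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600
x = rng.uniform(0.0, 2.0, n)
# Hypothetical series with a nonlinear mean and variance that grows
# with x (stand-ins; not the paper's actual TAIEX futures data).
y = 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 0.2 + 0.4 * x)

train, test = np.arange(n) < 400, np.arange(n) >= 400

def best_form(x_fit, y_fit, x_val, y_val, degrees=(1, 2, 3)):
    """Select the polynomial model form with minimum validation MSE."""
    best_d, best_c, best_mse = None, None, np.inf
    for d in degrees:
        c = np.polyfit(x_fit, y_fit, d)
        mse = np.mean((np.polyval(c, x_val) - y_val) ** 2)
        if mse < best_mse:
            best_d, best_c, best_mse = d, c, mse
    return best_d, best_c

# Stage 1: expected-value model form.
mean_d, mean_c = best_form(x[train], y[train], x[test], y[test])
resid = y - np.polyval(mean_c, x)

# Stage 2: heteroscedastic model form, fitted to squared residuals.
var_d, var_c = best_form(x[train], resid[train] ** 2,
                         x[test], resid[test] ** 2)
print(f"mean form: degree {mean_d}; variance form: degree {var_d}")
```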
Journal Article
Exploring deep neural networks via layer-peeled model
by Long, Qi; Fang, Cong; He, Hangfeng
in Applied Mathematics; Artificial neural networks; Computer applications
2021
In this paper, we introduce the Layer-Peeled Model, a nonconvex, yet analytically tractable, optimization program, in a quest to better understand deep neural networks that are trained for a sufficiently long time. As the name suggests, this model is derived by isolating the topmost layer from the remainder of the neural network, followed by imposing certain constraints separately on the two parts of the network. We demonstrate that the Layer-Peeled Model, albeit simple, inherits many characteristics of well-trained neural networks, thereby offering an effective tool for explaining and predicting common empirical patterns of deep-learning training. First, when working on class-balanced datasets, we prove that any solution to this model forms a simplex equiangular tight frame, which, in part, explains the recently discovered phenomenon of neural collapse [V. Papyan, X. Y. Han, D. L. Donoho, Proc. Natl. Acad. Sci. U.S.A. 117, 24652–24663 (2020)]. More importantly, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto-unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep-learning models on the minority classes. In addition, we use the Layer-Peeled Model to gain insights into how to mitigate Minority Collapse. Interestingly, this phenomenon is first predicted by the Layer-Peeled Model before being confirmed by our computational experiments.
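The simplex equiangular tight frame (ETF) mentioned in the abstract has a standard construction, and its defining properties (unit-norm class vectors with equal pairwise inner products of −1/(K−1)) can be verified numerically:

```python
import numpy as np

K = 5  # number of classes
# Standard construction: scale the centered identity matrix.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
G = M @ M.T  # Gram matrix of the K class-mean vectors

# Diagonal: unit norms; off-diagonal: equal inner products of -1/(K-1).
off = G[~np.eye(K, dtype=bool)]
print(np.allclose(np.diag(G), 1.0), np.allclose(off, -1.0 / (K - 1)))
```

The K rows span only a (K−1)-dimensional subspace, which is exactly the maximally separated configuration that neural collapse drives the last-layer features toward on class-balanced data.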
Journal Article
Combining Satellite‐Derived PM2.5 Data and a Reduced‐Form Air Quality Model to Support Air Quality Analysis in US Cities
by Tessum, Christopher W.; Jackson, Clara M.; Heck, Colleen
in Aerosols; Aerosols and Particles; Air pollution
2023
Air quality models can support pollution mitigation design by simulating policy scenarios and conducting source contribution analyses. The Intervention Model for Air Pollution (InMAP) is a powerful tool for equitable policy design because its variable-resolution grid enables intra‐urban analysis, the scale at which most environmental justice inquiries are levied. However, InMAP underestimates particulate sulfate and overestimates particulate ammonium formation, errors that limit the model's relevance to city‐scale decision‐making. To reduce InMAP's biases and increase its relevance for urban‐scale analysis, we calculate and apply scaling factors (SFs) based on observational data and advanced models. We consider both satellite‐derived speciated PM2.5 from Washington University and ground‐level monitor measurements from the U.S. Environmental Protection Agency, applied with different scaling methodologies. Relative to ground‐monitor data, the unscaled InMAP model fails to meet a normalized mean bias performance goal of <±10% for most of the PM2.5 components it simulates (pSO4: −48%, pNO3: 8%, pNH4: 69%), but with city‐specific SFs it achieves the goal benchmarks for every particulate species. Similarly, the normalized mean error performance goal of <35% is not met by the unscaled InMAP model (pSO4: 53%, pNO3: 52%, pNH4: 80%) but is met with the city‐scaling approach (15%–27%). The city‐specific scaling method also improves the R2 value from 0.11–0.59 (ranging across particulate species) to 0.36–0.76. Scaling increases the percent pollution contribution of electric generating units (EGUs) (nationwide 4%) and non‐EGU point sources (nationwide 6%) and decreases the agriculture sector's contribution (nationwide −6%).
Plain Language Summary
Air quality models can support the design of pollution reduction strategies by assessing sources of pollution and simulating policy scenarios. The Intervention Model for Air Pollution (InMAP) is an air quality model that can evaluate fine particulate matter (PM2.5) differences within cities, which makes it valuable as a tool for assessing the equity of PM2.5 exposure. However, InMAP's simplified atmospheric chemistry equations result in errors that limit the model's relevance to city‐scale decision‐making. To reduce the model's biases and errors, we calculate and apply SFs based on observational data and advanced models, specifically ground‐level monitor measurements from the U.S. Environmental Protection Agency and a satellite‐derived data product. We find that applying SFs derived from satellite observations over cities or individual grid cells improves model performance. Scaling InMAP affects the source contribution analysis nationwide and for individual cities, specifically by increasing the contribution of power plants and industry and decreasing the contribution of the agriculture sector.
Key Points
Applying scaling factors determined with satellite‐derived PM2.5 makes the Intervention Model for Air Pollution (InMAP) model suitable and relevant for urban‐scale analysis
Relative to standard InMAP data, we find higher contributions of electric generating unit (EGU) and non‐EGU point sources to U.S. cities
Relative to standard InMAP data, we find lower contributions of agricultural emissions to U.S. cities
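A minimal sketch of the scaling idea, using the standard definitions of normalized mean bias and error and a single multiplicative scaling factor; the pollutant values are hypothetical, not the study's data:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs)."""
    return np.sum(model - obs) / np.sum(obs)

def normalized_mean_error(model, obs):
    """NME = sum(|model - obs|) / sum(obs)."""
    return np.sum(np.abs(model - obs)) / np.sum(obs)

# Hypothetical city-level pSO4 concentrations (ug/m3); illustrative only.
obs = np.array([1.2, 0.9, 1.5, 1.1])   # ground-monitor observations
sim = np.array([0.6, 0.5, 0.8, 0.6])   # model underestimates sulfate

sf = np.sum(obs) / np.sum(sim)          # one multiplicative scaling factor
scaled = sf * sim

print(f"NMB before: {normalized_mean_bias(sim, obs):+.0%}")
print(f"NMB after:  {normalized_mean_bias(scaled, obs):+.0%}")
```

By construction this choice of scaling factor drives the normalized mean bias to zero, while the normalized mean error reflects whatever spatial mismatch remains after scaling.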
Journal Article
Defects in the 3-dimensional toric code model form a braided fusion 2-category
by Tian, Yin; Kong, Liang; Zhang, Zhi-Hao
in Anyons; Braiding; Classical and Quantum Gravitation
2020
Abstract
It was well known that there are e-particles and m-strings in the 3-dimensional (spatial dimension) toric code model, which realizes the 3-dimensional ℤ₂ topological order. A recent mathematical result, however, shows that there are additional string-like topological defects in the 3-dimensional ℤ₂ topological order. In this work, we construct all topological defects of codimension 2 and higher, and show that they form a braided fusion 2-category satisfying a braiding non-degeneracy condition.
Journal Article
Toward an improved understanding of causation in the ecological sciences
by Addicott, Ethan T; Fenichel, Eli P; Pinsky, Malin L
in Criteria; Ecological effects; ecosystems
2022
Society increasingly demands accurate predictions of complex ecosystem processes under novel conditions to address environmental challenges. However, obtaining the process-level knowledge required to do so does not necessarily align with the burgeoning use in ecology of correlative model selection criteria, such as the Akaike information criterion. These criteria select models based on their ability to reproduce outcomes, not on their ability to accurately represent causal effects. Causal understanding does not require matching outcomes, but rather involves identifying model forms and parameter values that accurately describe processes. We contend that researchers can arrive at incorrect conclusions about cause-and-effect relationships by relying on information criteria. We illustrate via a specific example that inference extending beyond prediction into causality can be seriously misled by information-theoretic evidence. Finally, we identify a solution space to bridge the gap between the correlative inference provided by model selection criteria and a process-based understanding of ecological systems.
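A small simulation illustrates the abstract's point: when an unobserved confounder drives both x and y, the Akaike information criterion favors the predictive model y ~ x even though x has no causal effect on y. This is a hypothetical setup, not the paper's own example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
z = rng.normal(size=n)           # unobserved confounder
x = z + rng.normal(0, 0.5, n)    # x has NO causal effect on y...
y = z + rng.normal(0, 0.5, n)    # ...but shares the common driver z

def aic(rss, n, k):
    """Gaussian AIC up to constants: n*log(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Model A: y ~ x (correlative); Model B: y ~ intercept only.
b = np.polyfit(x, y, 1)
rss_a = np.sum((y - np.polyval(b, x)) ** 2)
rss_b = np.sum((y - y.mean()) ** 2)
aic_a, aic_b = aic(rss_a, n, 2), aic(rss_b, n, 1)
print(f"AIC favors y~x: {aic_a < aic_b}  (true causal effect of x is zero)")
```

The selected model predicts well out of outcomes generated by the same confounded process, yet any causal reading of its x coefficient would be wrong, which is exactly the gap the authors highlight.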
Journal Article
System identifiability in a time-evolving agent-based model
2024
Mathematical models are a valuable tool for studying and predicting the spread of infectious agents. The accuracy of model simulations and predictions invariably depends on the specification of model parameters. Estimation of these parameters is therefore extremely important; however, while some parameters can be derived from observational studies, the values of others are difficult to measure. Instead, models can be coupled with inference algorithms (i.e., data assimilation methods, or statistical filters), which fit model simulations to existing observations and estimate unobserved model state variables and parameters. Ideally, these inference algorithms should find the best-fitting solution for a given model and set of observations; however, as those estimated quantities are unobserved, it is typically uncertain whether the correct parameters have been identified. Further, it is unclear what 'correct' really means for abstract parameters defined based on specific model forms. In this work, we explored the problem of non-identifiability in a stochastic system which, when overlooked, can significantly impede model prediction. We used a network, agent-based model to simulate the transmission of methicillin-resistant Staphylococcus aureus (MRSA) within hospital settings and attempted to infer key model parameters using the Ensemble Adjustment Kalman Filter, an efficient Bayesian inference algorithm. We show that even though the inference method converged and simulations using the estimated parameters produced agreement with observations, the true parameters are not fully identifiable. While the model-inference system can exclude a substantial area of parameter space that is unlikely to contain the true parameters, the estimated parameter range still included multiple parameter combinations that can fit observations equally well. We show that analyzing synthetic trajectories can support or contradict claims of identifiability. While we perform this analysis on a specific model system, the approach can be generalized to a variety of stochastic representations of partially observable systems. We also suggest data manipulations intended to improve identifiability that may be applicable in many systems of interest.
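A toy version of the non-identifiability problem: when the observations constrain only a product of parameters, very different parameter pairs fit equally well. This is a deliberately simplified stand-in for the MRSA model-inference system described above:

```python
import numpy as np

# Toy observation model in which only the product beta = p * c is
# identifiable (e.g., per-contact transmissibility times contact rate),
# a common source of non-identifiability in transmission models.
def simulate(p, c, t):
    return np.exp(p * c * t)   # early exponential growth at rate p*c

t = np.linspace(0.0, 5.0, 50)
obs = simulate(0.2, 2.0, t)    # "true" parameters

# Two very different parameter pairs reproduce the observations exactly,
# because both share the same product p*c = 0.4.
fit1 = simulate(0.4, 1.0, t)
fit2 = simulate(0.1, 4.0, t)
print(np.allclose(fit1, obs), np.allclose(fit2, obs))
```

An inference algorithm fitting this system can converge and match the data while leaving an entire ridge of (p, c) combinations unresolved, mirroring the paper's finding that convergence and good fit do not imply identifiability.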
Journal Article
Development of a conceptual model for understanding the learning environment and surgical resident well-being
by Riall, Taylor S.; Shanafelt, Tait D.; Etkin, Caryn D.
in Burnout; Burnout, Professional - prevention & control; Burnout, Professional - psychology
2021
Surgeon burnout is linked to poor outcomes for physicians and patients. Several conceptual models exist that describe drivers of physician wellness generally. No such model exists for surgical residents specifically.
A conceptual model for surgical resident well-being was adapted from published models with input gained iteratively from an interdisciplinary team. A survey was developed to measure residents’ perceptions of their program. A confirmatory factor analysis (CFA) tested the fit of our proposed model construct.
The conceptual model outlines eight domains that contribute to surgical resident well-being: Efficiency and Resources, Faculty Relationships and Engagement, Meaning in Work, Resident Camaraderie, Program Culture and Values, Work-Life Integration, Workload and Job Demands, and Mistreatment. CFA demonstrated acceptable fit of the proposed 8-domain model.
Eight distinct domains of the learning environment influence surgical resident well-being. This conceptual model forms the basis for the SECOND Trial, a study designed to optimize the surgical training environment and promote well-being.
•New conceptual model created to explain drivers of surgery resident burnout.
•Surgery resident wellness influenced by eight learning environment domains.
•Resident perceptions of training program can be measured using conceptual model.
•Impact of surgical learning environment forms basis of the SECOND Trial.
•The SECOND Trial, a national, randomized study, aims to improve surgical training.
Journal Article
Understanding token-based ecosystems – a taxonomy of blockchain-based business models of start-ups
2020
Start-ups in the blockchain context generate millions by means of initial coin offerings (ICOs). Many of these crowdfunding endeavours are very successful; others are not. However, despite the increasing investments in ICOs, there is still neither sufficient theoretical knowledge nor a comprehensive understanding of the different types of business models and the implications for these token-based ecosystems. Scientific research equally lacks a thorough understanding of the different business model forms and their influence on collaboration in token-based economies. We bridge this gap by presenting a taxonomy of real-world blockchain-based start-ups. For this taxonomy, we used 195 start-ups and performed a cluster analysis to identify three different archetypes and thus gain a deeper understanding. Our taxonomy and the archetypes can equally be seen as strategic guidance for practitioners as well as a starting point for future research concerning token-based business models.
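The archetype-identification step can be sketched with a minimal k-means cluster analysis; the two-dimensional feature matrix below is hypothetical, not the study's actual coding of the 195 start-ups:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized features per start-up (e.g., token utility
# score, degree of decentralization); 3 x 65 = 195 observations.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
X = np.vstack([c + rng.normal(0, 0.4, (65, 2)) for c in centers])

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign to nearest centroid, recompute means."""
    r = np.random.default_rng(seed)
    mu = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        # Keep the old centroid if a cluster happens to empty out.
        mu = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                       else mu[j] for j in range(k)])
    return labels, mu

labels, mu = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

Each resulting cluster centroid would then be interpreted qualitatively as one business-model archetype.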
Journal Article
Deep-learning cardiac motion analysis for human survival prediction
2019
Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimizing the interpretation of dynamic biological systems requires accurate and precise motion tracking, as well as efficient representations of high-dimensional motion trajectories, so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), which is a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimized for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (P = 0.0012) for our model (C = 0.75, 95% CI: 0.70–0.79) than for the human benchmark (C = 0.59, 95% CI: 0.53–0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
A fully convolutional neural network is used to create time-resolved three-dimensional dense segmentations of heart images. This dense motion model forms the input to a supervised system called 4Dsurvival that can efficiently predict human survival.
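The Cox partial likelihood loss mentioned in the abstract can be sketched as follows (Breslow-style, ignoring tied event times; the risk scores and follow-up data are illustrative, not the study's):

```python
import numpy as np

def cox_partial_nll(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow, no tie handling).

    risk:  predicted log-hazard per subject (network output)
    time:  follow-up time; event: 1 = death observed, 0 = censored
    """
    order = np.argsort(-time)                   # descending time
    risk, event = risk[order], event[order]
    # Risk set of subject i = everyone still at risk at time[i];
    # in descending-time order that is the running prefix.
    log_cumsum = np.logaddexp.accumulate(risk)  # log sum exp over risk set
    return -np.sum((risk - log_cumsum)[event == 1])

time = np.array([5.0, 3.0, 8.0, 1.0])
event = np.array([1, 0, 1, 1])
good = np.array([-1.0, 0.0, -2.0, 2.0])   # high risk -> early event
bad = -good                               # reversed ordering
print(cox_partial_nll(good, time, event) < cox_partial_nll(bad, time, event))
```

The loss depends only on the ranking of risk scores within each risk set, which is why it pairs naturally with a concordance-based evaluation metric such as Harrell's C-index.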
Journal Article