Catalogue Search | MBRL
Explore the vast range of titles available.
31,273 result(s) for "computational modelling"
3D microstructure design of lithium-ion battery electrodes assisted by X-ray nano-computed tomography and modelling
by Daemi, Sohrab R.; O’Regan, Kieran B.; Bertei, Antonio
in 119/118; 639/166/898; 639/301/930/2735
2020
Driving range and fast charge capability of electric vehicles are heavily dependent on the 3D microstructure of lithium-ion batteries (LiBs) and substantial fundamental research is required to optimise electrode design for specific operating conditions. Here we have developed a full microstructure-resolved 3D model using a novel X-ray nano-computed tomography (CT) dual-scan superimposition technique that captures features of the carbon-binder domain. This elucidates how LiB performance is markedly affected by microstructural heterogeneities, particularly under high rate conditions. The elongated shape and wide size distribution of the active particles not only affect the lithium-ion transport but also lead to a heterogeneous current distribution and non-uniform lithiation between particles and along the through-thickness direction. Building on these insights, we propose and compare potential graded-microstructure designs for next-generation battery electrodes. To guide manufacturing of electrode architectures, in-situ X-ray CT is shown to reliably reveal the porosity and tortuosity changes with incremental calendering steps.
The 3D microstructure of the electrode predominantly determines the electrochemical performance of Li-ion batteries. Here, the authors show that the microstructural heterogeneities lead to non-uniform Li insertion and current distribution while graded-microstructures improve the performance.
Journal Article
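The abstract above ties electrode performance to the porosity and tortuosity extracted from nano-CT data. As a rough point of reference only, the classical Bruggeman approximation relates the two quantities; the exponent value and function names below are illustrative assumptions, not taken from the paper, whose whole point is that real electrodes deviate from such idealisations.

```python
# Rough illustration: the Bruggeman approximation relating electrode
# porosity to tortuosity. The exponent alpha = 1.5 is the classical
# value for spherical particles; real electrodes (as the paper above
# shows) deviate from it, which is why tomography-resolved models are
# needed in the first place.

def bruggeman_tortuosity(porosity: float, alpha: float = 1.5) -> float:
    """Tortuosity tau = porosity**(1 - alpha)."""
    if not 0.0 < porosity < 1.0:
        raise ValueError("porosity must lie in (0, 1)")
    return porosity ** (1.0 - alpha)

def effective_diffusivity(d_bulk: float, porosity: float,
                          alpha: float = 1.5) -> float:
    """D_eff = D_bulk * porosity / tau = D_bulk * porosity**alpha."""
    return d_bulk * porosity / bruggeman_tortuosity(porosity, alpha)

# Example: calendering steps that squeeze porosity from 0.40 to 0.30.
for eps in (0.40, 0.35, 0.30):
    print(f"porosity {eps:.2f} -> tortuosity {bruggeman_tortuosity(eps):.2f}")
```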
Credible practice of modeling and simulation in healthcare: ten rules from a multidisciplinary perspective
by Morrison, Tina; Lytton, William W.; Erdemir, Ahmet
in Analysis; Biomedical and Life Sciences; Biomedicine
2020
The complexities of modern biomedicine are rapidly increasing. Thus, modeling and simulation have become increasingly important as a strategy to understand and predict the trajectory of pathophysiology, disease genesis, and disease spread in support of clinical and policy decisions. In such cases, inappropriate or ill-placed trust in the model and simulation outcomes may result in negative outcomes, and hence illustrate the need to formalize the execution and communication of modeling and simulation practices. Although verification and validation have been generally accepted as significant components of a model’s credibility, they cannot be assumed to equate to a holistic credible practice, which includes activities that can impact comprehension and in-depth examination inherent in the development and reuse of the models. For the past several years, the Committee on Credible Practice of Modeling and Simulation in Healthcare, an interdisciplinary group seeded from a U.S. interagency initiative, has worked to codify best practices. Here, we provide Ten Rules for credible practice of modeling and simulation in healthcare developed from a comparative analysis by the Committee’s multidisciplinary membership, followed by a large stakeholder community survey. These rules establish a unified conceptual framework for modeling and simulation design, implementation, evaluation, dissemination and usage across the modeling and simulation life-cycle. While biomedical science and clinical care domains have somewhat different requirements and expectations for credible practice, our study converged on rules that would be useful across a broad swath of model types. In brief, the rules are: (1) Define context clearly. (2) Use contextually appropriate data. (3) Evaluate within context. (4) List limitations explicitly. (5) Use version control. (6) Document appropriately. (7) Disseminate broadly. (8) Get independent reviews. (9) Test competing implementations. (10) Conform to standards. Although some of these are common sense guidelines, we have found that many are often missed or misconstrued, even by seasoned practitioners. Computational models are already widely used in basic science to generate new biomedical knowledge. As they penetrate clinical care and healthcare policy, contributing to personalized and precision medicine, clinical safety will require established guidelines for the credible practice of modeling and simulation in healthcare.
Journal Article
Argumentative landscapes
by Reijula, Samuli; Ylikoski, Petri; Aydinonat, N. Emrah
in Computational Modeling in Philosophy; Education
2021
We argue that the appraisal of models in social epistemology requires conceiving of them as argumentative devices, taking into account the argumentative context and adopting a family-of-models perspective. We draw up such an account and show how it makes it easier to see the value and limits of the use of models in social epistemology. To illustrate our points, we document and explicate the argumentative role of epistemic landscape models in social epistemology and highlight their limitations. We also claim that our account could be fruitfully used in appraising other models in philosophy and science.
Journal Article
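Epistemic landscape models, which the abstract above appraises as argumentative devices, are agent-based simulations in which agents search a landscape whose height encodes the epistemic significance of research approaches. The toy version below conveys only the basic mechanics; the one-dimensional landscape, the hill-climbing rule, and every parameter are illustrative assumptions, not drawn from the paper.

```python
import random

# Minimal sketch of an epistemic landscape model in the spirit of the
# literature the abstract discusses: a 1-D "landscape" maps research
# approaches to epistemic significance, and hill-climbing agents
# search it. All names and parameters here are illustrative.

SIZE = 100
landscape = [random.random() for _ in range(SIZE)]  # significance of each approach

def step(position: int) -> int:
    """Move to the more significant neighbouring approach, if any."""
    neighbours = [(position - 1) % SIZE, (position + 1) % SIZE]
    return max(neighbours + [position], key=lambda p: landscape[p])

agents = [random.randrange(SIZE) for _ in range(20)]
for _ in range(50):
    agents = [step(a) for a in agents]

print(f"{len(set(agents))} distinct approaches occupied after search")
```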
Polycratic hierarchies and networks
by Zeitnitz, Christian; Boge, Florian J.
in Computational Modeling in Philosophy; Education
2021
Large scale experiments at CERN’s Large Hadron Collider (LHC) rely heavily on computer simulations (CSs), a fact that has recently caught philosophers’ attention. CSs obviously require appropriate modeling, and it is a common assumption among philosophers that the relevant models can be ordered into hierarchical structures. Focusing on LHC’s ATLAS experiment, we will establish three central results here: (a) with some distinct modifications, individual components of ATLAS’ overall simulation infrastructure can be ordered into hierarchical structures. Hence, to a good degree of approximation, hierarchical accounts remain valid at least as descriptive accounts of initial modeling steps. (b) In order to perform the epistemic function Winsberg (in Magnani L, Nersessian N, Thagard P (eds) Model-based reasoning in scientific discovery. Kluwer Academic/Plenum Publishers, New York, pp 255–269, 1999) assigns to models in simulation—generate knowledge through a sequence of skillful but nondeductive transformations—ATLAS’ simulation models have to be considered part of a network rather than a hierarchy, in turn making the associated simulation modeling messy rather than motley. Deriving knowledge-claims from this ‘mess’ requires two sources of justification: (i) holistic validation (also Lenhard and Winsberg in Stud Hist Philos Sci Part B Stud Hist Philos Modern Phys 41(3):253–262, 2010; in Carrier M, Nordmann A (eds) Science in the context of application. Springer, Berlin, pp 115–130, 2011), and (ii) model coherence. As it turns out, (c) the degree of model coherence sets HEP apart from other messy, simulation-intensive disciplines such as climate science, and the reasons for this are to be sought in the historical, empirical and theoretical foundations of the respective discipline.
Journal Article
Models of thermal motion in small-molecule crystallography
2025
The Debye–Waller factor, introduced a century ago, remains a fundamental component in the refinement of crystal structures against X-ray, neutron and electron diffraction data. This review marks its centenary by exploring its applications in small-molecule crystallography. We provide a historical overview of the development of the Debye–Waller factor and its foundations in lattice dynamics. The review discusses the practical use of anisotropic displacement parameters and their role in accurate structure determination. We also address the challenges and advancements in modelling thermal motion and disorder, the role of multi-temperature measurements and modern computational approaches.
Journal Article
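The review above centres on the Debye–Waller factor and anisotropic displacement parameters. For orientation, the standard isotropic and anisotropic forms are sketched below in conventional crystallographic notation; these are textbook expressions, not equations quoted from the review itself.

```latex
% Standard forms of the Debye--Waller factor (textbook notation,
% not quoted from the review).
% Isotropic: attenuation of Bragg intensity at scattering angle theta,
% wavelength lambda, with <u^2> the mean-square atomic displacement:
\[
  T = \exp\!\left( -B \, \frac{\sin^2\theta}{\lambda^2} \right),
  \qquad B = 8\pi^2 \langle u^2 \rangle .
\]
% Anisotropic generalisation, with displacement parameters $U^{ij}$,
% Miller indices $h_i$ and reciprocal cell lengths $a_i^*$:
\[
  T(\mathbf{h}) = \exp\!\Bigl( -2\pi^2 \sum_{i,j} U^{ij} h_i h_j \, a_i^* a_j^* \Bigr).
\]
```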
Lying, more or less
by Trpin, Borut; Dobrosovestnova, Anna; Götzendorfer, Sebastian J.
in Computational Modeling in Philosophy; Computer simulation
2021
Partial lying denotes the cases where we partially believe something to be false but nevertheless assert it with the intent to deceive the addressee. We investigate how the severity of partial lying may be determined and how partial lies can be classified. We also study how much epistemic damage an agent suffers depending on the level of trust that she invests in the liar and the severity of the lies she is told. Our analysis is based on the results from exploratory computer simulations of an arguably rational Bayesian agent who is trying to determine how biased a coin is while observing the coin tosses and listening to a (partial) liar’s misleading predictions about the outcomes. Our results provide an interesting testable hypothesis at the intersection of epistemology and ethics, namely that in the longer term partial lies lead to more epistemic damage than outright lies.
Journal Article
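The abstract above describes exploratory simulations of a Bayesian agent estimating a coin's bias while hearing a partial liar's predictions. A minimal sketch of that setup follows; the Beta-Bernoulli update, the trust weighting, and the damage measure are all illustrative guesses, not the authors' actual model.

```python
import random

# Minimal sketch, loosely inspired by the abstract above: a Bayesian
# agent with a Beta prior estimates a coin's bias from tosses while
# also weighting a (partial) liar's predictions by its trust in them.
# Update rule, trust weighting, and damage measure are assumptions.

def simulate(true_bias=0.7, lie_severity=0.5, trust=0.8,
             rounds=1000, seed=0):
    rng = random.Random(seed)
    alpha, beta = 1.0, 1.0                   # Beta(1, 1) prior on the bias
    for _ in range(rounds):
        toss = rng.random() < true_bias      # observe the actual toss
        alpha, beta = alpha + toss, beta + (not toss)
        # With probability lie_severity the liar's prediction is drawn
        # from the inverted bias; otherwise it tracks the true bias.
        honest = rng.random() >= lie_severity
        prediction = (rng.random() < true_bias) if honest \
                     else (rng.random() >= true_bias)
        # The agent counts the prediction as partial evidence, scaled by trust.
        alpha += trust * prediction
        beta += trust * (not prediction)
    estimate = alpha / (alpha + beta)
    return abs(estimate - true_bias)         # crude "epistemic damage"

print(f"damage with severe lies: {simulate(lie_severity=0.9):.3f}")
print(f"damage with mild lies:   {simulate(lie_severity=0.2):.3f}")
```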
The strategy of model building in climate science
2021
In the 1960s, theoretical biologist Richard Levins criticised modellers in his own discipline of population biology for pursuing the “brute force” strategy of building hyper-realistic models. Instead of exclusively chasing complexity, Levins advocated for the use of multiple different kinds of complementary models, including much simpler ones. In this paper, I argue that the epistemic challenges Levins attributed to the brute force strategy still apply to state-of-the-art climate models today: they have big appetites for unattainable data, they are limited by computational tractability, and they are incomprehensible to the human modeller. Along the lines Levins described, this uncertainty generates a trade-off between realistic, precise models with predictive power and simple, highly idealised models that facilitate understanding. In addition to building ensembles of highly complex dynamical models, climate modellers can address model uncertainty by comparing models of different types, such as dynamical and data-driven models, and by systematically comparing models at different levels of what climate modellers call the model hierarchy. Despite its age, Levins’ paper remains incredibly insightful and should be considered an important entry into the philosophy of computational modelling.
Journal Article
The computational philosophy
by Mayo-Wilson, Conor; Zollman, Kevin J. S.
in Cognition & reasoning; Computational Modeling in Philosophy
2021
Modeling and computer simulations, we claim, should be considered core philosophical methods. More precisely, we will defend two theses. First, philosophers should use simulations for many of the same reasons we currently use thought experiments. In fact, simulations are superior to thought experiments in achieving some philosophical goals. Second, devising and coding computational models instill good philosophical habits of mind. Throughout the paper, we respond to the often implicit objection that computer modeling is “not philosophical.”
Journal Article
Arational belief convergence
2021
This model explores consensus among agents in a population in terms of two properties. The first is a probability of belief change (PBC). This value indicates how likely agents are to change their mind in interactions. The other is the size of the agent's audience: the proportion of the population the agent has access to at any given time. In all instances, the agents converge on a single belief, although the agents are arational. I argue that this generates a skeptical hypothesis: any instance of purportedly rational consensus might just as well be a case of arational belief convergence. I also consider what the model tells us about increasing the likelihood that one agent's belief is adopted by the rest. Agents are most likely to have their beliefs adopted by the entire population when their value for PBC is low relative to the rest of the population and their audience sizes are roughly three-quarters of the largest possible audience. I further explore the consequences of dogmatists for the population; individuals who refuse to change their mind end up polarizing the population. I conclude with reflections on the supposedly special character of rationality in belief-spread.
Journal Article
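The abstract above specifies its model unusually concretely: agents with a probability of belief change (PBC) and an audience size copy beliefs arationally until consensus. The sketch below implements one plausible reading; the exact update rule and all parameter values are illustrative assumptions, not the author's code.

```python
import random

# Minimal sketch of the kind of model the abstract describes: agents
# copy beliefs arationally from a sampled audience, governed by a
# probability of belief change (PBC) and an audience size. Because
# copying is the only dynamic, consensus is an absorbing state and the
# population converges on a single belief.

def converge(n=50, pbc=0.3, audience_frac=0.75, seed=0):
    rng = random.Random(seed)
    beliefs = list(range(n))                 # every agent starts unique
    audience = max(1, int(audience_frac * n))
    steps = 0
    while len(set(beliefs)) > 1:
        i = rng.randrange(n)                 # pick an agent to update
        if rng.random() < pbc:               # mind-change roll
            heard = rng.sample(range(n), audience)
            beliefs[i] = beliefs[rng.choice(heard)]
        steps += 1
    return steps

print(f"converged after {converge()} update attempts")
```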
Real patterns and indispensability
by Suñé, Abel; Martínez, Manolo
in Algorithms; Analytic philosophy; Computational Modeling in Philosophy
2021
While scientific inquiry crucially relies on the extraction of patterns from data, we still have a far from perfect understanding of the metaphysics of patterns—and, in particular, of what makes a pattern real. In this paper we derive a criterion of real-patternhood from the notion of conditional Kolmogorov complexity. The resulting account belongs to the philosophical tradition, initiated by Dennett (J Philos 88(1):27–51, 1991), that links real-patternhood to data compressibility, but is simpler and formally more perspicuous than other proposals previously defended in the literature. It also successfully enforces a non-redundancy principle, suggested by Ladyman and Ross (Every thing must go: metaphysics naturalized, Oxford University Press, Oxford, 2007), that aims to exclude from real-patternhood those patterns that can be ignored without loss of information about the target dataset, and which their own account fails to enforce.
Journal Article
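Conditional Kolmogorov complexity, on which the abstract's criterion rests, is uncomputable, but compressor output length is a standard computable proxy (as in the normalized compression distance literature). The sketch below is an illustration of that proxy only, not the authors' criterion; the concatenation trick for conditioning is an assumption of this sketch.

```python
import zlib

# Compressed length as a crude, computable stand-in for Kolmogorov
# complexity K(x); conditional complexity K(x | y) is approximated by
# how much extra a compressor needs for x once it has seen y.

def c(data: bytes) -> int:
    """Compressed length as a proxy for K(data)."""
    return len(zlib.compress(data, level=9))

def cond_c(x: bytes, y: bytes) -> int:
    """Approximate K(x | y) as C(y + x) - C(y)."""
    return c(y + x) - c(y)

pattern = b"ab" * 500                 # a highly compressible "real pattern"
noise = bytes(range(256)) * 4         # comparatively incompressible data

print(f"C(pattern) = {c(pattern)}, C(noise) = {c(noise)}")
print(f"K(pattern | pattern) ~ {cond_c(pattern, pattern)}")  # near zero
```

On this proxy, a pattern is "real" roughly when conditioning on it substantially shortens the description of the data, echoing the compressibility tradition the abstract traces to Dennett.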