Catalogue Search | MBRL
Explore the vast range of titles available.
3,412 result(s) for "numerics"
Bayesian Probabilistic Numerical Methods
2019
Over forty years ago average-case error was proposed in the applied mathematics literature as an alternative criterion with which to assess numerical methods. In contrast to worst-case error, this criterion relies on the construction of a probability measure over candidate numerical tasks, and numerical methods are assessed based on their average performance over those tasks with respect to the measure. This paper goes further and establishes Bayesian probabilistic numerical methods as solutions to certain inverse problems based upon the numerical task within the Bayesian framework. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well defined, encompassing both the nonlinear and non-Gaussian contexts. For general computation, a numerical approximation scheme is proposed and its asymptotic convergence established. The theoretical development is extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, and a challenging industrial application is presented.
Journal Article
Markovian master equations for quantum thermal machines: local versus global approach
by Brunner, Nicolas; Brask, Jonatan Bohr; Haack, Géraldine
in exact numerics, heat engine, Markovian master equations
2017
The study of quantum thermal machines, and more generally of open quantum systems, often relies on master equations. Two approaches are mainly followed. On the one hand, there is the widely used, but often criticized, local approach, where machine sub-systems locally couple to thermal baths. On the other hand, in the more established global approach, thermal baths couple to global degrees of freedom of the machine. There has been debate as to which of these two conceptually different approaches should be used in situations out of thermal equilibrium. Here we compare the local and global approaches against an exact solution for a particular class of thermal machines. We consider thermodynamically relevant observables, such as heat currents, as well as the quantum state of the machine. Our results show that the use of a local master equation is generally well justified. In particular, for weak inter-system coupling, the local approach agrees with the exact solution, whereas the global approach fails for non-equilibrium situations. For intermediate coupling, the local and the global approach both agree with the exact solution and for strong coupling, the global approach is preferable. These results are backed by detailed derivations of the regimes of validity for the respective approaches.
Journal Article
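The master equations the abstract above compares take Lindblad form. As a minimal, hedged illustration (illustrative rates and a single qubit, not the paper's multi-qubit machine or its code), the sketch below evolves a qubit coupled to one thermal bath under a Markovian master equation and checks that it relaxes to the expected thermal steady state:

```python
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus: |1> -> |0>
sp = sm.conj().T                                 # sigma_plus
H = np.diag([-0.5, 0.5]).astype(complex)         # qubit Hamiltonian, omega = 1

def dissipator(L, rho):
    """Lindblad dissipator D[L](rho) = L rho L^† - (1/2){L^† L, rho}."""
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def lindblad_rhs(rho, gamma=0.5, nbar=0.2):
    """d(rho)/dt for one qubit damped by a thermal bath (mean occupation nbar)."""
    return (-1j * (H @ rho - rho @ H)
            + gamma * (nbar + 1) * dissipator(sm, rho)   # emission into the bath
            + gamma * nbar * dissipator(sp, rho))        # absorption from the bath

# Euler-integrate from the excited state toward the thermal steady state
rho = np.diag([0.0, 1.0]).astype(complex)        # start in |1><1|
dt = 0.01
for _ in range(5000):
    rho = rho + dt * lindblad_rhs(rho)

p_excited = rho[1, 1].real   # detailed balance predicts nbar / (2*nbar + 1)
```

Detailed balance fixes the steady-state excited population at nbar/(2·nbar + 1) ≈ 0.143 for nbar = 0.2, which the integration reproduces; the trace stays 1 because the Lindblad right-hand side is exactly traceless.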
Probabilistic Integration
by Oates, Chris J.; Osborne, Michael A.; Girolami, Mark
in Computation, Computer graphics, Discretization
2019
A research frontier has emerged in scientific computation, wherein discretisation error is regarded as a source of epistemic uncertainty that can be modelled. This raises several statistical challenges, including the design of statistical methods that enable the coherent propagation of probabilities through a (possibly deterministic) computational work-flow, in order to assess the impact of discretisation error on the computer output. This paper examines the case for probabilistic numerical methods in routine statistical computation. Our focus is on numerical integration, where a probabilistic integrator is equipped with a full distribution over its output that reflects the fact that the integrand has been discretised. Our main technical contribution is to establish, for the first time, rates of posterior contraction for one such method. Several substantial applications are provided for illustration and critical evaluation, including examples from statistical modelling, computer graphics and a computer model for an oil reservoir.
Journal Article
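The probabilistic integrator described above equips a numerical integral with a posterior distribution that reflects discretisation of the integrand. A minimal sketch of that idea, Bayesian quadrature on [0, 1] with a Gaussian-process prior (the RBF kernel, lengthscale, and function names are illustrative assumptions, not the paper's method or code):

```python
import math
import numpy as np

def bq_posterior(f_vals, nodes, ls=0.2):
    """Posterior mean and variance of I = integral of f over [0, 1], under a
    GP prior with RBF kernel k(x, y) = exp(-(x - y)^2 / c^2), c = ls * sqrt(2),
    conditioned on the values f_vals observed at the given nodes."""
    c = ls * math.sqrt(2.0)
    K = np.exp(-((nodes[:, None] - nodes[None, :]) ** 2) / c**2)
    K += 1e-10 * np.eye(nodes.size)                    # jitter for stability
    # kernel means z_i = integral over [0,1] of k(x, x_i) dx (closed form via erf)
    z = np.array([0.5 * c * math.sqrt(math.pi)
                  * (math.erf((1 - xi) / c) + math.erf(xi / c)) for xi in nodes])
    w = np.linalg.solve(K, z)                          # quadrature weights K^{-1} z
    # double integral of k over the unit square, also in closed form
    kk = c * math.sqrt(math.pi) * math.erf(1 / c) - c**2 * (1 - math.exp(-1 / c**2))
    return w @ f_vals, max(kk - z @ w, 0.0)

nodes = np.linspace(0.0, 1.0, 8)
mean, var = bq_posterior(nodes ** 2, nodes)   # true integral of x^2 is 1/3
```

The posterior mean is a weighted sum of the evaluations, and the posterior variance shrinks as nodes are added, which is the "full distribution over the output" the abstract refers to.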
Despite high objective numeracy, lower numeric confidence relates to worse financial and medical outcomes
by Knoll, Melissa A. Z.; Ardoin, Stacy P.; Meara, Alexa Simon
in Adult, Chronic conditions, Comprehension
2019
People often laugh about being “no good at math.” Unrecognized, however, is that about one-third of American adults are likely too innumerate to operate effectively in financial and health environments. Two numeric competencies conceivably matter—objective numeracy (ability to “run the numbers” correctly; like literacy but with numbers) and numeric self-efficacy (confidence that provides engagement and persistence in numeric tasks). We reasoned, however, that attaining objective numeracy’s benefits should depend on numeric confidence. Specifically, among the more objectively numerate, having more numeric confidence (vs. less) should lead to better outcomes because they persist in numeric tasks and have the skills to support numeric success. Among the less objectively numerate, however, having more (vs. less) numeric confidence should hurt outcomes, as they also persist, but make unrecognized mistakes. Two studies were designed to test the generalizability of this hypothesized interaction. We report secondary analysis of financial outcomes in a diverse US dataset and primary analysis of disease activity among systemic lupus erythematosus patients. In both domains, best outcomes appeared to require numeric calculation skills and the persistence of numeric confidence. “Mismatched” individuals (high ability/low confidence or low ability/high confidence) experienced the worst outcomes. For example, among the most numerate patients, only 7% of the more numerically confident had predicted disease activity indicative of needing further treatment compared with 31% of high-numeracy/low-confidence patients and 44% of low-numeracy/high-confidence patients. Our work underscores that having one of these competencies (objective numeracy or numeric self-efficacy) does not guarantee superior outcomes.
Journal Article
Convergence of the deep BSDE method for coupled FBSDEs
by Long, Jihao; Han, Jiequn
in Algorithms, Convergence, Mathematical analysis
2020
The recently proposed numerical algorithm, deep BSDE method, has shown remarkable performance in solving high-dimensional forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs). This article lays a theoretical foundation for the deep BSDE method in the general case of coupled FBSDEs. In particular, a posteriori error estimation of the solution is provided and it is proved that the error converges to zero given the universal approximation capability of neural networks. Numerical results are presented to demonstrate the accuracy of the analyzed algorithm in solving high-dimensional coupled FBSDEs.
Journal Article
On cheap entropy-sparsified regression learning
by Gagliardini, Patrick; Horenko, Illia; O’Kane, Terence
in Algorithms, Applied Mathematics, BRIEF REPORTS
2023
Regression learning is one of the long-standing problems in statistics, machine learning, and deep learning (DL). We show that writing this problem as a probabilistic expectation over (unknown) feature probabilities, although it increases the number of unknown parameters and seemingly makes the problem more complex, actually leads to a simplification that allows us to incorporate the physical principle of entropy maximization. It helps decompose a very general setting of this learning problem (including discretization, feature selection, and learning multiple piece-wise linear regressions) into an iterative sequence of simple substeps, which are either analytically solvable or cheaply computable through an efficient second-order numerical solver with a sublinear cost scaling. This leads to the computationally cheap and robust non-DL second-order Sparse Probabilistic Approximation for Regression Task Analysis (SPARTAn) algorithm, which can be efficiently applied to problems with millions of feature dimensions on a commodity laptop, where the state-of-the-art learning tools would require supercomputers. SPARTAn is compared to a range of commonly used regression learning tools on synthetic problems and on the prediction of the El Niño Southern Oscillation, the dominant interannual mode of tropical climate variability. The obtained SPARTAn learners provide more predictive, sparse, and physically explainable data descriptions, clearly discerning the important role of ocean temperature variability at the thermocline in the equatorial Pacific. SPARTAn provides an easily interpretable description of the timescales by which these thermocline temperature features evolve and eventually express at the surface, thereby enabling enhanced predictability of the key drivers of the interannual climate.
Journal Article
Facial Pore Severity Is Associated With Age, Smoking Status and Tanning Bed Use: Results From a Large Dutch Population‐Based Cohort
by Pardo, Luba M.; Velthuis, Peter; Eecen, Christina M. W.
in facial pores, grading scale, photo-numeric
2025
Background
While facial pores are a normal skin feature, they can be perceived as a cosmetic concern. Reported facial pore risk factors vary and contradict one another in the existing literature, with limited large-scale research in European middle-aged to older individuals, hindering generalization.
Objectives
This cross-sectional study investigated the distribution of facial pore appearance across demographic, lifestyle, UV-related and dermatological variables by systematic grading of facial pores in a large population-based study.
Methods
Photographs of Rotterdam Study (RS) participants were graded on an adapted photo-numeric grading scale from one (mild) to five (severe) to assess facial pore appearance severity. Uni- and multivariable ordinal logistic regression analyzed associations between (non-)dermatological variables and facial pore appearance severity, using odds ratios (ORs) with 95% confidence intervals (CIs). Interassessor reliability was assessed using the intraclass correlation coefficient (ICC).
Results
In total, 2293 participants were included (56.7% female; median age 54.0). Moderate to severe facial pores were most prevalent (37%), with 10% showing the most pronounced grade (five). Previous and current smokers [OR 1.41 (95% CI 1.18–1.67); OR 1.46 (95% CI 1.16–1.86)] and individuals who tanned indoors excessively [OR 1.61 (95% CI 1.03–2.56)] were significantly associated with more severe facial pore appearance in the multivariable analysis. Age had a small but statistically significant inverse effect [OR 0.98 (95% CI 0.97–0.99)]. Our grading method showed high reliability of measurements.
Conclusions
In this RS cohort, over one-third had moderate to severe facial pore appearance scores. Smoking and indoor tanning were modifiable determinants linked to more severe facial pores. For individuals concerned about their pore appearance, quitting smoking and reducing UV exposure are advisable strategies.
Photographs of participants of the Rotterdam Study were cross-sectionally assessed for facial pore appearance severity. Moderate (grade three) facial pores were most prevalent. More severe facial pore appearance was associated with current and former smoking and with excessive indoor tanning. Age had a small but inverse effect on facial pore appearance.
Summary
Why was the study undertaken?
◦ Little research is available to understand the underlying determinants associated with facial pore appearance variation.
What does this study add?
◦ In a middle-aged to older European population, age, smoking and indoor tanning were associated with more severe and apparent facial pores.
What are the implications of this study for disease understanding and/or clinical care?
◦ As smoking and indoor tanning are modifiable determinants, quitting smoking and reducing UV exposure could be advisable strategies to reduce or prevent more apparent facial pores.
Journal Article
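The abstract above reports its associations as odds ratios with 95% confidence intervals. As a brief illustration of how such an interval is formed, here is the standard Wald construction for a 2×2 table; the counts below are hypothetical examples, not the study's data:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         exposed:   a with outcome, b without
         unexposed: c with outcome, d without
    OR = (a*d)/(b*c); CI = exp(log(OR) +/- z*SE), SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 30/70 exposed with/without outcome, 20/80 unexposed
or_, lo, hi = odds_ratio_wald_ci(30, 70, 20, 80)
```

An interval that excludes 1, like the smoking ORs quoted above, indicates a statistically significant association at the 5% level; the multivariable ordinal regression the study uses generalizes this idea across ordered severity grades.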
ICON‐O: The Ocean Component of the ICON Earth System Model—Global Simulation Characteristics and Local Telescoping Capability
2022
We describe the ocean general circulation model Icosahedral Nonhydrostatic Weather and Climate Model (ICON-O) of the Max Planck Institute for Meteorology, which forms the ocean-sea ice component of the Earth system model ICON-ESM. ICON-O relies on innovative structure-preserving finite volume numerics. We demonstrate the fundamental ability of ICON-O to simulate key features of global ocean dynamics at both uniform and non-uniform resolution. Two experiments are analyzed and compared with observations, one with a nearly uniform, eddy-rich resolution of ∼10 km and another with a telescoping configuration whose resolution varies smoothly from globally ∼80 km to ∼10 km in a focal region in the North Atlantic. Our results show, first, that ICON-O on the nearly uniform grid simulates an ocean circulation that compares well with observations and, second, that ICON-O in its telescoping configuration is capable of reproducing the dynamics in the focal region over decadal time scales at a fraction of the computational cost of the uniform-grid simulation. The telescoping technique offers an alternative to established regionalization approaches. It can be used either to resolve local circulation more accurately or to represent local scales that cannot be simulated globally while remaining within a global modeling framework.
Plain Language Summary
Icosahedral Nonhydrostatic Weather and Climate Model (ICON-O) is a global ocean general circulation model that works on unstructured grids. It rests on novel numerical techniques that belong to the class of structure-preserving finite volume methods. Unstructured grids allow, on the one hand, a uniform coverage of the sphere without resolution clustering; on the other hand, they provide the freedom to intentionally cluster grid points in a region of interest. In this work we run ICON-O on a uniform grid of approximately 10 km resolution and on a grid with four times fewer degrees of freedom that is stretched such that, within the North Atlantic, the resulting telescoping grid matches that resolution, while outside the focal area the grid approaches ∼80 km resolution smoothly. By comparison with observations and reanalysis data we show, first, that the simulation on the uniform 10 km grid provides a decent mesoscale eddy-rich simulation and, second, that the telescoping grid is able to reproduce the mesoscale-rich circulation locally in the North Atlantic on decadal time scales. This telescoping technique for unstructured grids opens new research directions.
Key Points
We describe Icosahedral Nonhydrostatic Weather and Climate Model (ICON-O), the ocean component of ICON-ESM 1.0, based on the ICON modeling framework
ICON-O is analyzed in a globally mesoscale-rich simulation and in a telescoping configuration
In the telescoping configuration ICON-O reproduces the local eddy dynamics at lower computational cost than the uniform configuration
Journal Article
IR Tools: a MATLAB package of iterative regularization methods and large-scale test problems
by Hansen, Per Christian; Gazzola, Silvia; Nagy, James G.
in Algebra, Algorithms, Computer Science
2019
This paper describes a new MATLAB software package of iterative regularization methods and test problems for large-scale linear inverse problems. The software package, called IR TOOLS, serves two related purposes: we provide implementations of a range of iterative solvers, including several recently proposed methods that are not available elsewhere, and we provide a set of large-scale test problems in the form of discretizations of 2D linear inverse problems. The solvers include iterative regularization methods where the regularization is due to the semi-convergence of the iterations, Tikhonov-type formulations where the regularization is explicitly formulated in the form of a regularization term, and methods that can impose bound constraints on the computed solutions. All the iterative methods are implemented in a very flexible fashion that allows the problem’s coefficient matrix to be available as a (sparse) matrix, a function handle, or an object. The most basic call to all of the various iterative methods requires only this matrix and the right hand side vector; if the method uses any special stopping criteria, regularization parameters, etc., then default values are set automatically by the code. Moreover, through the use of an optional input structure, the user can also have full control of any of the algorithm parameters. The test problems represent realistic large-scale problems found in image reconstruction and several other applications. Numerical examples illustrate the various algorithms and test problems available in this package.
Journal Article
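IR Tools itself is a MATLAB package, but the simplest solver family it covers, iterative regularization by semi-convergence, is easy to sketch in Python. Below is a hedged, minimal Landweber iteration on an illustrative blurring problem (the test problem and parameters are assumptions for the sketch, not one of the package's test problems):

```python
import numpy as np

def landweber(A, b, n_iter, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).
    The regularization comes from stopping early: run long enough on noisy
    data and the noise eventually takes over (semi-convergence)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: omega < 2 / sigma_max^2
    x = np.zeros(A.shape[1])
    iterates = []
    for _ in range(n_iter):
        x = x + omega * (A.T @ (b - A @ x))
        iterates.append(x.copy())
    return x, iterates

# Tiny ill-posed demo: a Gaussian blurring operator plus measurement noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)   # severely ill-conditioned
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)

x_reg, iterates = landweber(A, b, 200)
errors = [np.linalg.norm(xk - x_true) for xk in iterates]
```

As in the package, the coefficient matrix could equally be a sparse matrix or a function handle applying A and its transpose; only the two matrix-vector products per iteration are needed.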
Max-convolution through numerics and tropical geometry
by Brysiewicz, Taylor; Hauenstein, Jonathan D.; Hills, Caroline
in Algebra, Algorithms, Approximation
2024
The maximum function, on vectors of real numbers, is not differentiable. Consequently, several differentiable approximations of this function are popular substitutes. We survey three smooth functions which approximate the maximum function and analyze their convergence rates. We interpret these functions through the lens of tropical geometry, where their performance differences are geometrically salient. As an application, we provide an algorithm which computes the max-convolution of two integer vectors in quasi-linear time. We show this algorithm’s power in computing adjacent sums within a vector as well as computing service curves in a network analysis application.
Journal Article
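The idea the abstract describes can be sketched directly: smoothing max with log-sum-exp turns max-convolution into ordinary convolution of exponentiated vectors, which an FFT evaluates in quasi-linear time. A minimal sketch under assumed parameters (the smoothing level p and the example vectors are illustrative; np.convolve is used instead of an FFT for clarity):

```python
import numpy as np

def max_convolve_naive(u, v):
    """Exact max-convolution: w[k] = max over i + j = k of u[i] + v[j] (quadratic time)."""
    w = np.full(len(u) + len(v) - 1, -np.inf)
    for i in range(len(u)):
        for j in range(len(v)):
            w[i + j] = max(w[i + j], u[i] + v[j])
    return w

def max_convolve_smooth(u, v, p=20.0):
    """Approximate max-convolution via the log-sum-exp smoothing of max:
    (1/p) * log(sum exp(p * (u_i + v_j))) over i + j = k. Exponentiating turns
    this into an ordinary convolution (FFT-ready); the smooth value always
    overestimates the true max, by at most log(len(u) * len(v)) / p."""
    shift = max(u.max(), v.max())     # rescale before exp for numerical stability
    conv = np.convolve(np.exp(p * (u - shift)), np.exp(p * (v - shift)))
    return np.log(conv) / p + 2.0 * shift

u = np.array([0.0, 2.0, 1.0])
v = np.array([1.0, 0.0, 3.0, 2.0])
exact = max_convolve_naive(u, v)
approx = max_convolve_smooth(u, v)
```

For integer-valued inputs, as in the abstract's quasi-linear algorithm, rounding the smooth result recovers the exact answer whenever the overestimate bound log(mn)/p stays below 1/2.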