Catalogue Search | MBRL
Explore the vast range of titles available.
264,649 result(s) for "MATHEMATICS / Applied."
Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next
by Piccialli, Francesco; Di Cola, Vincenzo Schiano; Giampaolo, Fabio
in Algorithms; Applied mathematics; Approximation
2022
Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, such as partial differential equations (PDE), as a component of the network itself. PINNs are nowadays used to solve PDEs, fractional equations, integral-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs. While the primary goal of the study is to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks, starting from the vanilla PINN and covering many other variants, such as physics-constrained neural networks (PCNN), variational hp-VPINN, and conservative PINN (CPINN). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, and although they have proved more practical than classical numerical techniques such as the Finite Element Method (FEM) in some contexts, advancements are still possible, most notably on theoretical issues that remain unresolved.
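The multi-task loss structure the abstract describes — a data-fit term plus a PDE-residual term at collocation points — can be sketched in a few lines. The ansatz, the ODE u' = u, the points, and the plain gradient descent below are all illustrative choices, not the paper's method:

```python
import numpy as np

# Toy PINN-style composite loss (illustrative sketch, not the paper's code).
# Model: quadratic ansatz u(x; w) = w0 + w1*x + w2*x^2
# "PDE": u'(x) = u(x), whose true solution is exp(x); data come from it.

def u(w, x):   return w[0] + w[1] * x + w[2] * x ** 2
def u_x(w, x): return w[1] + 2.0 * w[2] * x

x_data = np.array([0.0, 0.5, 1.0])     # observation points
y_data = np.exp(x_data)                # observed values
x_col = np.linspace(0.0, 1.0, 11)      # collocation points

def loss(w):
    data_fit = np.mean((u(w, x_data) - y_data) ** 2)        # supervised term
    residual = np.mean((u_x(w, x_col) - u(w, x_col)) ** 2)  # PDE residual term
    return data_fit + residual                              # multi-task loss

def num_grad(w, h=1e-6):               # numerical gradient, for simplicity
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = h
        g[i] = (loss(w + e) - loss(w - e)) / (2 * h)
    return g

w = np.zeros(3)
for _ in range(5000):                  # plain gradient descent
    w -= 0.05 * num_grad(w)
print(loss(w))                         # small, but nonzero: a quadratic
                                       # cannot satisfy u' = u exactly
```

Real PINNs replace the quadratic ansatz with a deep network and the numerical gradient with automatic differentiation, but the two-term loss is the essential idea.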
Journal Article
Optimization algorithms on matrix manifolds
2008
Many problems in the sciences and engineering can be rephrased as optimization problems on matrix search spaces endowed with a so-called manifold structure. This book shows how to exploit the special structure of such problems to develop efficient numerical algorithms. It places careful emphasis on both the numerical formulation of the algorithm and its differential geometric abstraction, illustrating how good algorithms draw equally from the insights of differential geometry, optimization, and numerical analysis. Two more theoretical chapters provide readers with the background in differential geometry necessary for algorithmic development. In the other chapters, several well-known optimization methods such as steepest descent and conjugate gradients are generalized to abstract manifolds. The book provides a generic development of each of these methods, building upon the material of the geometric chapters. It then guides readers through the calculations that turn these geometrically formulated methods into concrete numerical algorithms. The state-of-the-art algorithms given as examples are competitive with the best existing algorithms for a selection of eigenspace problems in numerical linear algebra.
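The eigenspace problems mentioned at the end are a good minimal illustration of the manifold viewpoint. The sketch below (my own example, not code from the book) runs Riemannian steepest descent on the unit sphere: project the Euclidean gradient onto the tangent space, step, then retract back to the sphere by normalization. Minimizing the Rayleigh quotient this way recovers the eigenvector of the smallest eigenvalue:

```python
import numpy as np

# Riemannian steepest descent on the sphere ||x|| = 1 (illustrative sketch):
# minimize f(x) = x^T A x; the minimizer is the eigenvector of A's smallest
# eigenvalue.  A is chosen diagonal so the answer is known in advance.

rng = np.random.default_rng(0)
A = np.diag([1.0, 3.0, 5.0])           # known spectrum for checking

x = rng.normal(size=3)
x /= np.linalg.norm(x)                 # start on the manifold

for _ in range(500):
    egrad = 2.0 * A @ x                # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x    # project onto the tangent space at x
    x = x - 0.1 * rgrad                # step along the tangent direction
    x /= np.linalg.norm(x)             # retraction: back onto the sphere

print(x @ A @ x)                       # ≈ 1.0, the smallest eigenvalue
```

The projection-plus-retraction pattern is exactly the structure the book abstracts: the same loop works on any manifold once a tangent projection and a retraction are available.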
Solving parametric PDE problems with artificial neural networks
by KHOO, YUEHAW; YING, LEXING; LU, JIANFENG
in Applied mathematics; Artificial neural networks; Coefficients
2021
The curse of dimensionality is commonly encountered in numerical partial differential equations (PDE), especially when uncertainties have to be modelled into the equations as random coefficients. However, very often the variability of physical quantities derived from a PDE can be captured by a few features on the space of the coefficient fields. Based on this observation, we propose using a neural network to parameterise the physical quantity of interest as a function of input coefficients. The representability of such a quantity using a neural network can be justified by viewing the network as performing time evolution to find the solutions to the PDE. We further demonstrate the simplicity and accuracy of the approach through notable examples of PDEs in engineering and physics.
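The core idea — learn a quantity of interest directly as a function of the coefficient, bypassing the PDE solve — can be shown on a problem with a closed-form answer. In this sketch a random-feature least-squares model stands in for the paper's neural network; the PDE, the feature map, and all sizes are illustrative choices:

```python
import numpy as np

# Surrogate sketch: learn a PDE quantity of interest as a function of the
# coefficient.  For -a u''(x) = 1 on [0,1] with u(0) = u(1) = 0 the solution
# is u(x) = x(1-x)/(2a), so the mean of u over [0,1] is exactly 1/(12a),
# giving a convenient ground truth.

rng = np.random.default_rng(0)

def qoi(a):                            # quantity of interest: mean of u
    return 1.0 / (12.0 * a)

# Random cosine features of the scalar coefficient a (NN stand-in).
W = rng.normal(size=50)
b = rng.uniform(0, 2 * np.pi, size=50)
def features(a):
    return np.cos(np.outer(a, W) + b)

a_train = rng.uniform(0.5, 2.0, size=200)
Phi = features(a_train)
# Ridge regression in feature space (closed form).
coef = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(50), Phi.T @ qoi(a_train))

a_test = np.array([0.7, 1.0, 1.5])
pred = features(a_test) @ coef
print(np.max(np.abs(pred - qoi(a_test))))   # small approximation error
```

Once trained, evaluating the surrogate costs a single feature map and dot product per query — the payoff the abstract points to when the coefficient space is high-dimensional.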
Journal Article
Graph theoretic methods in multiagent networks
by
Mesbahi, Mehran
,
Egerstedt, Magnus
in
Abstraction (software engineering)
,
Adjacency matrix
,
Algebraic connectivity
2010
This accessible book provides an introduction to the analysis and design of dynamic multiagent networks. Such networks are of great interest in a wide range of areas in science and engineering, including: mobile sensor networks, distributed robotics such as formation flying and swarming, quantum networks, networked economics, biological synchronization, and social networks. Focusing on graph theoretic methods for the analysis and synthesis of dynamic multiagent networks, the book presents a powerful new formalism and set of tools for networked systems.
The book's three sections look at foundations, multiagent networks, and networks as systems. The authors give an overview of important ideas from graph theory, followed by a detailed account of the agreement protocol and its various extensions, including the behavior of the protocol over undirected, directed, switching, and random networks. They cover topics such as formation control, coverage, distributed estimation, social networks, and games over networks. And they explore intriguing aspects of viewing networks as systems, by making these networks amenable to control-theoretic analysis and automatic synthesis, by monitoring their dynamic evolution, and by examining higher-order interaction models in terms of simplicial complexes and their applications.
The book will interest graduate students working in systems and control, as well as in computer science and robotics. It will be a standard reference for researchers seeking a self-contained account of system-theoretic aspects of multiagent networks and their wide-ranging applications.
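The agreement protocol at the heart of the book can be sketched in discrete time: each agent repeatedly moves toward its neighbors via the graph Laplacian, and on a connected undirected graph all states converge to the average of the initial values. The graph, initial states, and step size below are arbitrary illustrative choices:

```python
import numpy as np

# Discrete-time agreement (consensus) protocol: x <- x - eps * L x, with L
# the graph Laplacian of an undirected, connected graph.

# Path graph on 4 nodes: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A         # graph Laplacian: degree - adjacency

x = np.array([4.0, 0.0, 2.0, -2.0])    # initial agent states
avg = x.mean()                         # the protocol preserves the average

eps = 0.3                              # needs eps < 2 / lambda_max(L)
for _ in range(200):
    x = x - eps * (L @ x)              # each agent averages toward neighbors

print(x)                               # all entries ≈ 1.0, the initial average
```

The convergence rate is governed by the algebraic connectivity (the second-smallest Laplacian eigenvalue) — one of the subject headings listed for this record.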
This book has been adopted as a textbook at the following universities:
University of Stuttgart, Germany; Royal Institute of Technology, Sweden; Johannes Kepler University, Austria; Georgia Tech, USA; University of Washington, USA; Ohio University, USA
Tensor-Train Decomposition
2011
A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality, it has asymptotically the same number of parameters as the canonical decomposition, but it is stable and its computation is based on low-rank approximation of auxiliary unfolding matrices. The new form gives a clear and convenient way to implement all basic operations efficiently. A fast rounding procedure is presented, as well as basic linear algebra operations. Examples showing the benefits of the decomposition are given, and the efficiency is demonstrated by the computation of the smallest eigenvalue of a 19-dimensional operator.
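The "low-rank approximation of auxiliary unfolding matrices" the abstract mentions is the standard TT-SVD construction: repeatedly reshape the tensor into a matrix, take an SVD, keep a core, and pass the remainder on. A minimal version (my own sketch following that standard construction, not the paper's code):

```python
import numpy as np

# TT-SVD sketch: build a tensor-train by sequential SVDs of unfoldings.

def tt_svd(T, eps=1e-10):
    """Decompose an ndarray into a list of 3-way TT cores."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)                 # first unfolding
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = int(np.sum(s > eps))                # numerical rank
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))        # last core
    return cores

def tt_full(cores):
    """Contract TT cores back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

# Test tensor with low TT rank: a rank-1 product plus the "sum" tensor.
i, j, k = np.meshgrid(np.arange(3), np.arange(4), np.arange(5), indexing="ij")
T = np.sin(i) * np.cos(j) * np.exp(-k) + i + j + k

cores = tt_svd(T)
print(np.max(np.abs(tt_full(cores) - T)))   # ≈ 0: exact reconstruction
```

Because each step is an SVD of a matrix, the procedure is stable and nonrecursive — exactly the advantages the abstract claims over the canonical decomposition.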
Journal Article
Golden ratio algorithms for variational inequalities
2020
The paper presents a fully adaptive algorithm for monotone variational inequalities. In each iteration the method uses two previous iterates for an approximation of the local Lipschitz constant without running a linesearch. Thus, every iteration of the method requires only one evaluation of a monotone operator F and a proximal mapping g. The operator F need not be Lipschitz continuous, which also makes the algorithm interesting in the area of composite minimization. The method exhibits an ergodic O(1 / k) convergence rate and R-linear rate under an error bound condition. We discuss possible applications of the method to fixed point problems as well as its different generalizations.
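A simplified fixed-stepsize version of the golden-ratio iteration conveys the flavor (the paper's contribution is the adaptive stepsize; this sketch assumes F is L-Lipschitz with known L, takes g = 0 so the proximal step disappears, and uses an operator I chose for illustration):

```python
import numpy as np

# Fixed-step golden-ratio-style iteration for a monotone VI, sketch only:
#   x_bar <- ((phi - 1) x + x_bar) / phi      (golden-ratio averaging)
#   x     <- x_bar - lam * F(x)               (prox omitted since g = 0)

phi = (1 + np.sqrt(5)) / 2              # the golden ratio

A = np.array([[2.0, 1.0], [-1.0, 2.0]]) # monotone: symmetric part is 2*I
F = lambda x: A @ x                     # operator; unique VI solution x* = 0
L = np.sqrt(5.0)                        # spectral norm of A
lam = phi / (2 * L)                     # admissible fixed step size

x = np.array([1.0, 1.0])
x_bar = x.copy()
for _ in range(500):
    x_bar = ((phi - 1) * x + x_bar) / phi
    x = x_bar - lam * F(x)

print(np.linalg.norm(x))                # → close to 0, the VI solution
```

Note that each iteration uses exactly one evaluation of F — the economy the abstract emphasizes, in contrast to extragradient-type methods that need two.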
Journal Article
Dynamical analysis of a fractional-order predator-prey model incorporating a prey refuge
by
Jiang, Yao-Lin
,
Li, Hong-Li
,
Zhang, Long
in
Applied mathematics
,
Asymptotic methods
,
Computational Mathematics and Numerical Analysis
2017
In this paper, a fractional-order predator-prey model incorporating a prey refuge is proposed. We first prove the existence, uniqueness, non-negativity and boundedness of the solutions of the considered model. We then analyze the existence of various equilibrium points and derive sufficient conditions ensuring the global asymptotic stability of the predator-extinction equilibrium point and the coexistence equilibrium point. Finally, some numerical simulations are carried out to illustrate the analytical results.
Journal Article
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
by
Gardenghi, J. L.
,
Santos, S. A.
,
Martínez, J. M.
in
Algorithms
,
Analysis
,
Applied mathematics
2017
The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order p (for p ≥ 1) and to assume Lipschitz continuity of the p-th derivative, then an ϵ-approximate first-order critical point can be computed in at most O(ϵ^(-(p+1)/p)) evaluations of the problem's objective function and its derivatives. This generalizes and subsumes results known for p = 1 and p = 2.
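A quick bit of arithmetic shows how the exponent in O(ϵ^(-(p+1)/p)) improves with the derivative order p (the values below are just the exponents, not actual evaluation counts):

```python
# Exponent (p + 1) / p of the worst-case bound O(eps^(-(p+1)/p)):
# higher-order derivatives buy a smaller exponent.
for p in [1, 2, 3]:
    print(p, (p + 1) / p)   # p=1 -> 2.0, p=2 -> 1.5, p=3 -> 1.333...
```

The p = 1 case recovers the classical ϵ^(-2) bound for gradient methods, and p = 2 the ϵ^(-3/2) bound for cubic regularization.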
Journal Article
Matrices, Moments and Quadrature with Applications
by
Golub, Gene H
,
Meurant, Gérard
in
Algorithm
,
Basis (linear algebra)
,
Biconjugate gradient method
2009,2010
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part.
Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization.
This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
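The book's central recipe — estimate a bilinear form u^T f(A) u from a few Lanczos steps, reading off a Gauss quadrature value from the resulting tridiagonal matrix — fits in a short sketch. This is my own minimal implementation of that standard idea, with f(A) = A^{-1} and a matrix chosen so the exact answer is cheap to compute:

```python
import numpy as np

# Estimate u^T A^{-1} u via Lanczos + Gauss quadrature: with T_k the k x k
# Lanczos tridiagonal matrix built from A and u, the estimate is
# ||u||^2 * (T_k^{-1})_{11}.

def lanczos(A, u, k):
    """k steps of Lanczos; returns the k x k tridiagonal matrix T."""
    n = len(u)
    alphas, betas = [], []
    q_prev = np.zeros(n)
    q = u / np.linalg.norm(u)
    beta = 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev      # three-term recurrence
        alpha = q @ w
        w = w - alpha * q
        alphas.append(alpha)
        beta = np.linalg.norm(w)
        betas.append(beta)
        q_prev, q = q, (w / beta if beta > 0 else w)
    return (np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1))

rng = np.random.default_rng(1)
n = 20
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)            # symmetric positive definite
u = rng.normal(size=n)

exact = u @ np.linalg.solve(A, u)      # reference value of u^T A^{-1} u
T = lanczos(A, u, 8)                   # only 8 matrix-vector products
estimate = (u @ u) * np.linalg.inv(T)[0, 0]
print(abs(estimate - exact) / abs(exact))   # small relative error
```

Eight Lanczos steps on a 20 x 20 matrix already give several correct digits here; the book explains why, and how the same machinery yields error bounds for linear solvers and Tikhonov regularization parameters.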
Recent advances in trust region algorithms
2015
Trust region methods are a class of numerical methods for optimization. Unlike line search methods, which carry out a line search in each iteration, trust region methods compute a trial step by solving a trust region subproblem in which a model function is minimized within a trust region. Because of the trust region constraint, nonconvex models can be used in trust region subproblems, and trust region algorithms can be applied to nonconvex and ill-conditioned problems. Normally it is easier to establish the global convergence of a trust region algorithm than that of its line search counterpart. In this paper, we review recent results on trust region methods for unconstrained optimization, constrained optimization, nonlinear equations and nonlinear least squares, nonsmooth optimization, and optimization without derivatives. Results on trust region subproblems and regularization methods are also discussed.
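The subproblem-plus-acceptance loop described above can be sketched with the textbook dogleg solver (a generic illustration, not code from the paper; the test function, thresholds, and radius update rules are conventional choices):

```python
import numpy as np

# Minimal trust region loop with a dogleg subproblem solver, applied to the
# Rosenbrock function.  The model m(p) = g.p + p.H.p/2 is minimized
# approximately inside ||p|| <= delta.

def f(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

def dogleg(g, H, delta):
    """Approximate model minimizer inside the trust region."""
    gHg = g @ H @ g
    if gHg <= 0:                                   # negative curvature:
        return -delta / np.linalg.norm(g) * g      # go straight to the boundary
    p_u = -(g @ g) / gHg * g                       # unconstrained Cauchy point
    p_n = -np.linalg.solve(H, g)                   # Newton point
    if np.linalg.norm(p_n) <= delta and g @ p_n < 0:
        return p_n                                 # full Newton step fits
    if np.linalg.norm(p_u) >= delta:
        return delta / np.linalg.norm(p_u) * p_u   # clip the Cauchy step
    d = p_n - p_u                                  # walk toward the boundary
    a, b, c = d @ d, 2 * p_u @ d, p_u @ p_u - delta ** 2
    t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_u + t * d

x, delta = np.array([-1.2, 1.0]), 1.0
for _ in range(200):
    g, H = grad(x), hess(x)
    p = dogleg(g, H, delta)
    pred = -(g @ p + 0.5 * p @ H @ p)              # decrease predicted by model
    rho = (f(x) - f(x + p)) / pred if pred > 0 else -1.0
    if rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(2 * delta, 10.0)               # good model: expand region
    elif rho < 0.25:
        delta *= 0.25                              # poor model: shrink region
    if rho > 0.1:
        x = x + p                                  # accept the trial step
print(x)                                           # should be close to [1, 1]
```

The ratio rho of actual to predicted decrease is what distinguishes the approach from line search: a rejected step costs one function evaluation and shrinks the region, and nonconvex models need no modification.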
Journal Article