Catalogue Search | MBRL
3 result(s) for "65P99"
Optimal Transportation under Controlled Stochastic Dynamics
2013
We consider an extension of the Monge-Kantorovitch optimal transportation problem. The mass is transported along a continuous semimartingale, and the cost of transportation depends on the drift and the diffusion coefficients of the continuous semimartingale. The optimal transportation problem minimizes the cost among all continuous semimartingales with given initial and terminal distributions. Our first main result is an extension of the Kantorovitch duality to this context. We also suggest a finite-difference scheme combined with the gradient projection algorithm to approximate the dual value. We prove the convergence of the scheme, and we derive a rate of convergence. We finally provide an application in the context of financial mathematics, which originally motivated our extension of the Monge-Kantorovitch problem. Namely, we implement our scheme to approximate no-arbitrage bounds on the prices of exotic options given the implied volatility curve of some maturity.
Journal Article
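The abstract above extends the classical Monge-Kantorovitch problem to semimartingale dynamics. As a point of reference, the static discrete version it generalizes can be solved as a small linear program; the sketch below is my illustration with made-up distributions and costs, not code or data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative discrete Monge-Kantorovitch problem: move mass from
# distribution mu to distribution nu at minimal cost. The values
# here are arbitrary examples, not taken from the paper.
mu = np.array([0.5, 0.5])          # initial distribution
nu = np.array([0.25, 0.75])        # terminal distribution
c = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # c[i, j] = cost of moving i -> j

n, m = c.shape
# Marginal constraints on the transport plan p[i, j] (flattened
# row-major): rows sum to mu, columns sum to nu.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
for j in range(m):
    A_eq[n + j, j::m] = 1.0
b_eq = np.concatenate([mu, nu])

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(n, m)         # optimal transport plan
```

Here the optimal cost is 0.25: keep as much mass in place as the marginals allow and move only the excess. The paper's setting replaces this finite LP with a dual formulation approximated by a finite-difference scheme and gradient projection.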
Vanilla Feedforward Neural Networks as a Discretization of Dynamical Systems
by Guanghua Ji, Yongqiang Cai, Li’ang Li
in Algorithms, Approximation, Artificial neural networks
2024
Deep learning has made significant progress in the fields of data science and natural science. Some studies have linked deep neural networks to dynamical systems, but the network structure is restricted to a residual network. It is known that residual networks can be regarded as a numerical discretization of dynamical systems. In this paper, we consider the traditional network structure and prove that vanilla feedforward networks can also be used for the numerical discretization of dynamical systems, where the width of the network is equal to the dimensions of the input and output. Our proof is based on the properties of the leaky-ReLU function and the numerical technique of the splitting method for solving differential equations. Our results could provide a new perspective for understanding the approximation properties of feedforward neural networks.
Journal Article
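The abstract reads the layers of a vanilla (non-residual) network as steps of a discretized dynamical system, with the leaky-ReLU activation playing a key role because it is invertible. The toy sketch below is my illustration of that reading, not the paper's construction: each layer map x ↦ leaky_relu(Wx + b) with near-identity weights is treated as one discrete time step, with width equal to the state dimension d as in the paper's setting.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # Leaky ReLU is invertible for alpha > 0 (unlike plain ReLU),
    # a property the paper's splitting argument relies on.
    return np.where(x >= 0, x, alpha * x)

rng = np.random.default_rng(0)
d, depth, h = 3, 10, 0.1            # width d, number of layers, "step size"

x = rng.standard_normal(d)          # input state
for _ in range(depth):
    # Illustrative near-identity weights; the paper instead proves
    # such weights exist to track a given dynamical system.
    W = np.eye(d) + h * rng.standard_normal((d, d))
    b = h * rng.standard_normal(d)
    x = leaky_relu(W @ x + b)       # one vanilla layer = one discrete step
```

The state keeps its dimension d through every layer, mirroring the paper's assumption that the network width equals the input and output dimension.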