Search Results
3 results for "65P99"
Optimal Transportation Under Controlled Stochastic Dynamics
We consider an extension of the Monge-Kantorovitch optimal transportation problem. The mass is transported along a continuous semimartingale, and the cost of transportation depends on the drift and the diffusion coefficients of the continuous semimartingale. The optimal transportation problem minimizes the cost among all continuous semimartingales with given initial and terminal distributions. Our first main result is an extension of the Kantorovitch duality to this context. We also suggest a finite-difference scheme combined with the gradient projection algorithm to approximate the dual value. We prove the convergence of the scheme, and we derive a rate of convergence. We finally provide an application in the context of financial mathematics, which originally motivated our extension of the Monge-Kantorovitch problem. Namely, we implement our scheme to approximate no-arbitrage bounds on the prices of exotic options given the implied volatility curve of some maturity.
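The abstract mentions approximating the dual value with a finite-difference scheme combined with a gradient projection algorithm. As a minimal generic illustration of the gradient projection idea only (not the paper's actual scheme — the objective, box constraint, and step size below are made-up examples), each iteration takes a gradient step and then projects back onto the feasible set:

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, project, x0, step=0.1, iters=500):
    """Projected gradient descent: x <- P_C(x - step * grad(x))."""
    x = x0.copy()
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy problem: minimize 0.5 * ||x - target||^2 over the box [0, 1]^3,
# with the unconstrained minimizer deliberately outside the box.
target = np.array([1.5, -0.3, 0.7])
grad = lambda x: x - target              # gradient of the quadratic
x_star = gradient_projection(grad, project_box, np.zeros(3))
print(x_star)  # approaches the projection of target onto the box: [1, 0, 0.7]
```

For this quadratic the iteration converges to the projection of `target` onto the box; in the paper's setting the projection would instead enforce the constraints of the dual problem.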
Vanilla Feedforward Neural Networks as a Discretization of Dynamical Systems
Deep learning has made significant progress in the fields of data science and natural science. Some studies have linked deep neural networks to dynamical systems, but in these works the network structure is restricted to residual networks. It is known that residual networks can be regarded as a numerical discretization of dynamical systems. In this paper, we consider the traditional network structure and prove that vanilla feedforward networks can also be used for the numerical discretization of dynamical systems, where the width of the network equals the dimension of the input and output. Our proof is based on the properties of the leaky-ReLU function and the numerical technique of the splitting method for solving differential equations. Our results could provide a new perspective for understanding the approximation properties of feedforward neural networks.
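The abstract's proof technique relies on the splitting method for differential equations. As a generic sketch of the Lie-Trotter splitting idea only (the example ODE, decomposition, and step size are made up for illustration, not taken from the paper): to integrate x' = f1(x) + f2(x), advance with f1 alone for a step h, then with f2 alone for h, each sub-step here being an explicit Euler step:

```python
import numpy as np

def splitting_step(x, h, f1, f2):
    """One Lie-Trotter splitting step for x' = f1(x) + f2(x)."""
    x = x + h * f1(x)   # Euler sub-step with f1 alone
    x = x + h * f2(x)   # then Euler sub-step with f2 alone
    return x

f1 = lambda x: -x       # linear part
f2 = lambda x: -x**3    # nonlinear part

# Integrate x' = -x - x^3, x(0) = 1, on [0, 1].
h, steps = 1e-3, 1000
x = 1.0
for _ in range(steps):
    x = splitting_step(x, h, f1, f2)

# Closed-form reference: x(t) = x0 / sqrt((1 + x0^2) * exp(2t) - x0^2)
x_exact = 1.0 / np.sqrt(2.0 * np.exp(2.0) - 1.0)
print(x, x_exact)  # agree to O(h)
```

The splitting error is first order in h for non-commuting vector fields; the paper's point, roughly, is that composing simple one-dimensional flows in this spirit is what a vanilla leaky-ReLU layer can emulate.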