Catalogue Search | MBRL
Explore the vast range of titles available.
19,367 result(s) for "stochastic control"
A Probabilistic Approach to Classical Solutions of the Master Equation for Large Population Equilibria
by Chassagneux, Jean-François; Delarue, François; Crisan, Dan
in Stochastic analysis; Stochastic control theory
2022
We analyze a class of nonlinear partial differential equations (PDEs) defined on …
Forward–Backward Stochastic Differential Equations and Controlled McKean–Vlasov Dynamics
2015
The purpose of this paper is to provide a detailed probabilistic analysis of the optimal control of nonlinear stochastic dynamical systems of McKean–Vlasov type. Motivated by the recent interest in mean-field games, we highlight the connection and the differences between the two sets of problems. We prove a new version of the stochastic maximum principle and give sufficient conditions for existence of an optimal control. We also provide examples for which our sufficient conditions for existence of an optimal solution are satisfied. Finally we show that our solution to the control problem provides approximate equilibria for large stochastic controlled systems with mean-field interactions when subject to a common policy.
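For orientation, controlled McKean–Vlasov dynamics of the kind studied here take the schematic form (a sketch; the paper's precise assumptions on the coefficients are not reproduced):

\[
dX_t = b\bigl(t, X_t, \mathcal{L}(X_t), u_t\bigr)\,dt + \sigma\bigl(t, X_t, \mathcal{L}(X_t), u_t\bigr)\,dW_t,
\qquad
J(u) = \mathbb{E}\Bigl[\int_0^T f\bigl(t, X_t, \mathcal{L}(X_t), u_t\bigr)\,dt + g\bigl(X_T, \mathcal{L}(X_T)\bigr)\Bigr],
\]

where \(\mathcal{L}(X_t)\) is the law of \(X_t\); unlike in a mean field game, the law of the controlled state itself is affected by the choice of \(u\).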
Journal Article
On the Relation Between Optimal Transport and Schrödinger Bridges: A Stochastic Control Viewpoint
by Georgiou, Tryphon T.; Pavon, Michele; Chen, Yongxin
in Applications of Mathematics; Calculus of Variations and Optimal Control; Optimization; Computational fluid dynamics
2016
We take a new look at the relation between the optimal transport problem and the Schrödinger bridge problem from a stochastic control perspective. Our aim is to highlight new connections between the two that are richer and deeper than those previously described in the literature. We begin with an elementary derivation of the Benamou–Brenier fluid dynamic version of the optimal transport problem and provide, in parallel, a new fluid dynamic version of the Schrödinger bridge problem. We observe that the latter establishes an important connection with optimal transport without zero-noise limits and solves a question posed by Eric Carlen in 2006. Indeed, the two variational problems differ by a Fisher information functional. We motivate and consider a generalization of optimal mass transport in the form of a (fluid dynamic) problem of optimal transport with prior. This can be seen as the zero-noise limit of Schrödinger bridges when the prior is any Markovian evolution. We finally specialize to the Gaussian case and derive an explicit computational theory based on matrix Riccati differential equations. A numerical example involving Brownian particles is also provided.
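Schematically, and up to the paper's normalization of the noise intensity \(\varepsilon\) (a sketch, not the authors' exact notation): the Benamou–Brenier formulation minimizes kinetic energy over density–velocity pairs,

\[
\min_{(\rho,v)} \int_0^1\!\!\int \tfrac{1}{2}\,\rho(t,x)\,|v(t,x)|^2\,dx\,dt
\quad\text{subject to}\quad
\partial_t \rho + \nabla\cdot(\rho v) = 0,\ \ \rho(0,\cdot)=\rho_0,\ \ \rho(1,\cdot)=\rho_1,
\]

while the fluid dynamic Schrödinger bridge adds a Fisher information penalty of the form \(\tfrac{\varepsilon^2}{8}\int_0^1\!\int |\nabla\log\rho|^2\,\rho\,dx\,dt\), which disappears in the zero-noise limit \(\varepsilon\to 0\).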
Journal Article
Linear-Quadratic Mean Field Games
by Bensoussan, A.; Sung, K. C. J.; Yam, S. C. P.
in Adjoints; Applications of Mathematics; Approximation
2016
We provide a comprehensive study of a general class of linear-quadratic mean field games. We adopt the adjoint equation approach to investigate the unique existence of their equilibrium strategies. Due to the linearity of the adjoint equations, the optimal mean field term satisfies a forward–backward ordinary differential equation. For the one-dimensional case, we establish the unique existence of the equilibrium strategy. For a dimension greater than one, by applying the Banach fixed point theorem under a suitable norm, a sufficient condition for the unique existence of the equilibrium strategy is provided, which is independent of the coefficients of the controls in the underlying dynamics and is always satisfied whenever the coefficients of the mean field term vanish; hence, our theory includes the classical linear-quadratic stochastic control problems as special cases. As a by-product, we also establish a neat and instructive sufficient condition, apparently absent from the literature and depending only on the coefficients, for the unique existence of the solution to a class of nonsymmetric Riccati equations. Numerical examples of nonexistence of the equilibrium strategy are also presented. Finally, a similar approach is adopted to study linear-quadratic mean-field-type stochastic control problems and their comparison with mean field games.
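One schematic instance of such a game (the coefficients \(A, \bar{A}, B, Q, R\) and the mean field term \(m\) are illustrative, not the paper's exact notation): each player faces

\[
dx_t = \bigl(A x_t + \bar{A}\, m_t + B u_t\bigr)\,dt + \sigma\,dW_t,
\qquad
J(u) = \mathbb{E}\Bigl[\int_0^T \bigl(x_t^{\top} Q\, x_t + u_t^{\top} R\, u_t\bigr)\,dt\Bigr],
\]

with the consistency condition \(m_t = \mathbb{E}[x_t]\) imposed in equilibrium; the linearity of the adjoint equations is what reduces the fixed point for \(m\) to a forward–backward ordinary differential equation.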
Journal Article
Reducing Obizhaeva–Wang-type trade execution problems to LQ stochastic control problems
2024
We start with a stochastic control problem where the control process is of finite variation (possibly with jumps) and acts as integrator both in the state dynamics and in the target functional. Problems of this type arise in the stream of literature on optimal trade execution pioneered by Obizhaeva and Wang (J. Financ. Mark. 16:1–32, 2013) (models with finite resilience). We consider a general framework where the price impact and the resilience are stochastic processes; both are allowed to have diffusive components. First we continuously extend the problem from processes of finite variation to progressively measurable processes. Then we reduce the extended problem to a linear–quadratic (LQ) stochastic control problem. Using the well-developed theory of LQ problems, we describe the solution of the resulting LQ problem and translate it back into a solution of the (extended) initial trade execution problem. Finally, we illustrate our results by several examples. Among other things, the examples discuss the Obizhaeva–Wang model with random (terminal and moving) targets, the necessity of extending the initial trade execution problem to a reasonably large class of progressively measurable processes (even going beyond semimartingales), and the effects of diffusive components in the price impact process and/or the resilience process.
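In Obizhaeva–Wang-type models the execution strategy \(X\) is a finite-variation process driving a price deviation \(D\) that decays at a resilience rate; under one common normalization (illustrative notation, with \(\rho\) and \(\eta\) allowed to be stochastic as in the paper):

\[
dD_t = -\rho_t D_t\,dt + \eta_t\,dX_t,
\qquad
\text{minimize}\quad \mathbb{E}\Bigl[\int_{[0,T]} \bigl(D_{t-} + \tfrac{1}{2}\,\eta_t\,\Delta X_t\bigr)\,dX_t\Bigr],
\]

so that \(X\) acts as integrator both in the state dynamics and in the target functional, which is exactly the structure the reduction to an LQ problem exploits.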
Journal Article
Solving Stochastic Optimal Control Problem via Stochastic Maximum Principle with Deep Learning Method
by Ji, Shaolin; Peng, Shige; Peng, Ying
in Algorithms; Brownian motion; Computational Mathematics and Numerical Analysis
2022
In this paper, we aim to solve high-dimensional stochastic optimal control problems from the viewpoint of the stochastic maximum principle via deep learning. By introducing the extended Hamiltonian system, which is essentially a forward–backward stochastic differential equation (FBSDE) with a maximum condition, we reformulate the original control problem as a new one. According to whether the optimal control has an explicit representation, three algorithms are proposed to solve the new control problem. Numerical results for different examples demonstrate the effectiveness of the proposed algorithms, especially in high-dimensional cases. Even if the optimal control \(\tilde{u}\) in the maximum condition cannot be solved explicitly, our algorithms can still handle the stochastic optimal control problem. An important application of the proposed method is the computation of sub-linear expectations, which correspond to a class of fully nonlinear partial differential equations (PDEs).
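As a hedged illustration of the general deep-learning approach (direct parameterization of a feedback control by a network and simulation of the controlled dynamics, rather than the authors' three SMP-based algorithms; the LQ test problem and network sizes below are assumptions):

import torch
import torch.nn as nn

# Illustrative LQ problem: dX = u dt + dW, cost E[ ∫ (X² + u²) dt + X_T² ].
T, N, batch = 1.0, 50, 512
dt = T / N

# Feedback control u(t, x) parameterized by a small network (assumed architecture).
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.zeros(batch, 1)                  # X_0 = 0
    cost = torch.zeros(batch, 1)
    for n in range(N):
        t = torch.full((batch, 1), n * dt)
        u = net(torch.cat([t, x], dim=1))
        cost = cost + (x**2 + u**2) * dt       # accumulate running cost
        x = x + u * dt + dt**0.5 * torch.randn(batch, 1)  # Euler–Maruyama step
    loss = (cost + x**2).mean()                # add terminal cost, average over paths
    opt.zero_grad()
    loss.backward()
    opt.step()

For this LQ benchmark the learned feedback can be checked against the explicit solution given by the associated Riccati equation.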
Journal Article
A theory of Markovian time-inconsistent stochastic control in discrete time
2014
We develop a theory for a general class of discrete-time stochastic control problems that, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time-inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time-inconsistent problem, there exists an associated time-consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time-inconsistent problem. To exemplify the theory, we study some concrete examples, such as hyperbolic discounting and mean–variance control.
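In discrete time the equilibrium notion can be stated compactly (a sketch in generic notation): a strategy \(u^\ast\) is a subgame perfect equilibrium if, for every time \(n\) and state \(x\),

\[
u_n^\ast(x) \in \arg\max_{u}\; J_n\bigl(x;\, u, u_{n+1}^\ast, \ldots, u_{N-1}^\ast\bigr),
\]

that is, a deviation at time \(n\) alone cannot improve the objective when all later selves play the equilibrium; the equilibrium value function is then \(V_n(x) = J_n(x; u^\ast)\), and the extended Bellman system characterizes this pair.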
Journal Article
Optimal Controls of Stochastic Differential Equations with Jumps and Random Coefficients: Stochastic Hamilton–Jacobi–Bellman Equations with Jumps
2023
We study the stochastic Hamilton–Jacobi–Bellman (HJB) equation with jumps, which arises from a non-Markovian optimal control problem with a recursive utility cost functional. The solution to the equation is a predictable triplet of random fields. We show that, under some regularity assumptions, the value function of the control problem solves the stochastic HJB equation, and that a classical solution to this equation is the value function and characterizes the optimal control. With some additional assumptions on the coefficients, an existence and uniqueness result in the Sobolev sense is obtained by recasting the stochastic HJB equation as a backward stochastic evolution equation in Hilbert spaces driven by the Brownian motion and the Poisson jump measure.
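For orientation, in the pure-diffusion case (jump terms omitted; a sketch in generic notation) a stochastic HJB equation is a backward SPDE for a pair of random fields \((V, Z)\):

\[
-\,dV(t,x) = \inf_{u}\Bigl\{\tfrac{1}{2}\operatorname{tr}\bigl(\sigma\sigma^{\top}D_x^2 V\bigr) + b^{\top}D_x V + \operatorname{tr}\bigl(\sigma^{\top}D_x Z\bigr) + f\Bigr\}\,dt - Z(t,x)\,dW_t,
\qquad V(T,x) = g(x);
\]

the jump setting of the paper adds a third, predictable component to the solution, giving the predictable triplet mentioned above.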
Journal Article
A General Stochastic Maximum Principle for SDEs of Mean-field Type
by Buckdahn, Rainer; Djehiche, Boualem; Li, Juan
in Adjoints; Applied mathematics; Brownian motion
2011
We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.
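Schematically (generic notation, suppressing the paper's precise assumptions), writing \(y\) for the expected-value argument of the coefficients and \(H\) for the Hamiltonian along the optimal state, the first-order adjoint pair \((p, q)\) solves a mean-field backward SDE of the form

\[
-\,dp_t = \bigl(\partial_x H(t) + \mathbb{E}[\partial_y H(t)]\bigr)\,dt - q_t\,dW_t,
\qquad
p_T = \partial_x g(X_T) + \mathbb{E}[\partial_y g(X_T)],
\]

where the expectation terms, absent in the classical case, make the equation of mean-field type.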
Journal Article
A Maximum Principle for SDEs of Mean-Field Type
2011
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as on the state of the process. Moreover, the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space, a maximum principle of local form is derived, specifying the necessary conditions for optimality; these are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
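The mean-variance application indicates why such problems are of mean-field type: with an illustrative risk-aversion weight \(\gamma\),

\[
J(u) = \mathbb{E}[X_T] - \tfrac{\gamma}{2}\operatorname{Var}(X_T)
     = \mathbb{E}[X_T] - \tfrac{\gamma}{2}\Bigl(\mathbb{E}[X_T^2] - \bigl(\mathbb{E}[X_T]\bigr)^2\Bigr),
\]

so the cost depends nonlinearly on the law of \(X_T\) through \((\mathbb{E}[X_T])^2\), which is precisely what breaks the Bellman optimality principle.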
Journal Article