Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
10 result(s) for "Remlinger, Carl"
Reinforcement Learning in Economics and Finance
2023
Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process through repeated experience. In a given environment, the agent's policy provides him with running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when the agent picks an action he cannot infer ex post the rewards that other action choices would have induced. In reinforcement learning, however, his actions have consequences: they influence not only rewards but also future states of the world. The goal of reinforcement learning is to find an optimal policy, a mapping from the states of the world to the set of actions, that maximizes cumulative reward, which is inherently a long-term objective: exploring may be sub-optimal over a short horizon yet lead to higher long-term rewards. Many optimal-control problems, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we survey state-of-the-art reinforcement learning techniques and present applications in economics, game theory, operations research and finance.
Journal Article
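The abstract above describes the core reinforcement learning loop: a policy maps states to actions, is improved through repeated experience, and trades short-term reward for long-term cumulative reward. As a rough, hedged illustration, not taken from the article itself, the minimal tabular Q-learning sketch below plays out that loop on an invented two-state toy problem; the environment, rewards and hyperparameters are hypothetical.

```python
import numpy as np

# Hypothetical two-state, two-action toy problem; all values are illustrative.
# In state 0, action 0 pays 1 and stays put, action 1 pays 0 but reaches state 1,
# where action 1 pays 2 per step: short-term sub-optimal, long-term optimal.
N_STATES, N_ACTIONS, GAMMA, ALPHA, EPS = 2, 2, 0.9, 0.1, 0.1

def step(state, action):
    """Return (next_state, reward) for the toy environment."""
    if state == 0:
        return (0, 1.0) if action == 0 else (1, 0.0)
    return (0, 0.0) if action == 0 else (1, 2.0)

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for _ in range(10_000):                                   # repeated experience
    # epsilon-greedy: mostly exploit the current Q-values, sometimes explore
    action = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned Q-values:\n", Q)
print("greedy policy (state -> action):", Q.argmax(axis=1))
```

In this toy setting the myopic action in state 0 pays 1 immediately, while the exploratory action pays 0 but reaches a state paying 2 per step, so the learned greedy policy ends up preferring the move that is sub-optimal in the short term, matching the exploration argument in the abstract.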
Deep Generators on Commodity Markets; application to Deep Hedging
by Remlinger, Carl; Boursin, Nicolas; Mikael, Joseph
in Brownian motion, Commodities, deep hedging
2023
Four deep generative methods for time series are studied on commodity markets and compared with classical probabilistic models. Deep hedgers commonly suffer from a lack of training data, a shortcoming that deep generative methods seek to address. In the specific case of commodities, these generators can also be used to refine price models by tackling the challenges of high dimension. In this work, synthetic time series of commodity prices produced by such generators are studied and then used to train deep hedgers on various options. A fully data-driven approach to commodity risk management is thus proposed, from synthetic price generation to learning risk-hedging policies.
Journal Article
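The record above (and its 2022 counterparts further down the list) describes training hedging policies directly on generator output. The sketch below is a minimal, hypothetical deep-hedging loop, not the authors' architecture: a small network maps (price, time to maturity) to a hedge ratio and is trained to reduce the risk of the terminal profit and loss of a short call over simulated paths. Plain i.i.d. returns stand in for the synthetic commodity paths a pre-trained deep generator would supply, and the network, loss and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical deep-hedging sketch; the paths, network and loss are illustrative
# stand-ins, not the authors' setup. In the paper's pipeline, `paths` would come
# from a deep generator trained on commodity prices.
torch.manual_seed(0)
T_STEPS, N_PATHS, STRIKE = 30, 4096, 1.0

rets = 0.2 * (1.0 / T_STEPS) ** 0.5 * torch.randn(N_PATHS, T_STEPS)
paths = torch.cat([torch.ones(N_PATHS, 1), torch.cumprod(1.0 + rets, dim=1)], dim=1)

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pnl = -torch.relu(paths[:, -1] - STRIKE)               # short call payoff at maturity
    for t in range(T_STEPS):
        s = paths[:, t:t + 1]
        ttm = torch.full_like(s, (T_STEPS - t) / T_STEPS)  # time-to-maturity feature
        delta = policy(torch.cat([s, ttm], dim=1))         # hedge ratio held over [t, t+1]
        pnl = pnl + (delta * (paths[:, t + 1:t + 2] - s)).squeeze(1)
    loss = pnl.var() - pnl.mean()                          # simple risk-adjusted objective
    opt.zero_grad(); loss.backward(); opt.step()

print("final hedging loss:", float(loss))
```

Swapping the simulated `paths` tensor for samples drawn from a trained generator would give the fully data-driven pipeline, from synthetic price generation to hedging policy, that the abstract refers to.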
Robust Operator Learning to Solve PDE
2022
A model solving a family of partial differential equations (PDEs) with a single training is proposed. Re-calibrating a risk-factor model or re-training a solver every time market conditions change is costly and unsatisfactory. We therefore want to solve PDEs when the environment is not stationary, or for several initial conditions at the same time. By learning operators in a single training, we ensure the robustness of optimal controls under variations of the models, options or constraints. Ultimately, we want to generalize by solving the PDE with models or conditions that were not present during training. We confirm the effectiveness of the method on several risk management problems by comparing it with other machine learning approaches. We evaluate our DeepOHedger on option pricing tasks, including local volatility models and the option spreads involved in energy markets. Finally, we present a purely data-driven approach to risk hedging, from time series generation to learning optimal policies. Our model then solves a family of parametric PDEs from synthetic samples produced by a deep generator previously trained on spot price data from different countries.
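The abstract above is about solving a parametric family of PDEs with a single training. As a hedged stand-in, a parametric physics-informed network rather than the paper's operator model or DeepOHedger, the sketch below trains one network on the residual of the one-dimensional heat equation u_t = sigma * u_xx over a range of diffusivities sigma, so a single training covers many PDE instances; the equation, sampling ranges and network are assumptions, and boundary conditions are omitted for brevity.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in: a parametric, physics-informed network (not the paper's
# operator model). One network u(x, t, sigma) is trained on the residual of the
# heat equation u_t = sigma * u_xx for a range of sigma, so one training covers
# a family of PDEs. Boundary conditions are omitted for brevity.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t, sigma):
    u = net(torch.cat([x, t, sigma], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - sigma * u_xx

for step in range(2_000):
    x = torch.rand(256, 1, requires_grad=True)
    t = torch.rand(256, 1, requires_grad=True)
    sigma = 0.1 + 0.9 * torch.rand(256, 1)                 # PDE parameter drawn per sample
    # interior residual plus the initial condition u(x, 0) = sin(pi x)
    ic = net(torch.cat([x, torch.zeros_like(x), sigma], dim=1)) - torch.sin(torch.pi * x)
    loss = pde_residual(x, t, sigma).pow(2).mean() + ic.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final residual loss:", float(loss))
```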
Expert Aggregation for Financial Forecasting
2023
Machine learning algorithms dedicated to financial time series forecasting have gained a lot of interest, but choosing between several algorithms can be challenging, as their estimation accuracy may be unstable over time. Online aggregation of experts combines the forecasts of a finite set of models in a single approach without making any assumption about the models. In this paper, a Bernstein Online Aggregation (BOA) procedure is applied to the construction of long-short strategies built from individual stock return forecasts coming from different machine learning models. The online mixture of experts leads to attractive portfolio performance even in environments characterised by non-stationarity. The aggregation outperforms individual algorithms, offering a higher portfolio Sharpe ratio and lower shortfall with similar turnover. Extensions to expert and aggregation specialisations are also proposed to improve the overall mixture on a family of portfolio evaluation metrics.
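The abstract above rests on online aggregation of expert forecasts. The sketch below uses a basic exponentially weighted average forecaster as a simpler stand-in for the Bernstein Online Aggregation (BOA) procedure named in the paper; the simulated returns, expert noise and learning rate ETA are invented for illustration.

```python
import numpy as np

# Hypothetical stand-in: a plain exponentially weighted average forecaster rather
# than the paper's Bernstein Online Aggregation (BOA). Returns, expert noise and
# the learning rate ETA are invented for illustration.
rng = np.random.default_rng(0)
T, K, ETA = 500, 3, 2.0

returns = 0.01 * rng.standard_normal(T)                          # toy realised series
experts = returns[:, None] + 0.02 * rng.standard_normal((T, K))  # noisy expert forecasts
weights = np.ones(K) / K

agg_loss, expert_loss = 0.0, np.zeros(K)
for t in range(T):
    forecast = weights @ experts[t]                              # aggregated forecast
    losses = (experts[t] - returns[t]) ** 2                      # per-expert squared loss
    agg_loss += (forecast - returns[t]) ** 2
    expert_loss += losses
    weights = weights * np.exp(-ETA * losses)                    # downweight poor experts
    weights /= weights.sum()

print("aggregate loss:", agg_loss, "best single expert:", expert_loss.min())
```

BOA refines this kind of exponential weighting with second-order corrections to the update; the stand-in keeps only the basic rule, which is enough to show how poorly performing experts are progressively downweighted.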
Conditional Loss and Deep Euler Scheme for Time Series Generation
2021
We introduce three new generative models for time series based on Euler discretization of Stochastic Differential Equations (SDEs) and Wasserstein metrics. Two of these methods rely on the adaptation of generative adversarial networks (GANs) to time series. The third algorithm, called Conditional Euler Generator (CEGEN), minimizes a dedicated distance between the transition probability distributions over all time steps. In the context of Ito processes, we provide theoretical guarantees that minimizing this criterion implies accurate estimation of the drift and volatility parameters. We demonstrate empirically that CEGEN outperforms state-of-the-art and GAN generators on both marginal and temporal dynamics metrics, and that it identifies accurate correlation structures in high dimension. When few data points are available, we verify the effectiveness of CEGEN combined with transfer learning methods on Monte Carlo simulations. Finally, we illustrate the robustness of our method on various real-world datasets.
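The abstract above describes generators built on an Euler discretization of an SDE and trained by matching transition distributions across time steps. The sketch below is a simplified, hypothetical variant rather than the paper's CEGEN: learned drift and diffusion networks define the Euler step, and each step's marginal is matched to toy Ornstein-Uhlenbeck data with a one-dimensional empirical Wasserstein distance, a weaker criterion than the conditional transition loss described in the abstract; all network sizes and constants are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical simplification of a conditional Euler-scheme generator (not the
# paper's CEGEN): learned drift and diffusion networks define the Euler step, and
# each step's marginal is matched to toy Ornstein-Uhlenbeck data with a 1-D
# empirical Wasserstein distance.
torch.manual_seed(0)
T_STEPS, BATCH = 50, 512
DT = 1.0 / T_STEPS

drift = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
diffusion = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(list(drift.parameters()) + list(diffusion.parameters()), lr=1e-3)

def real_paths(n):                        # toy "data": an Ornstein-Uhlenbeck process
    x = torch.zeros(n, 1)
    cols = [x]
    for _ in range(T_STEPS):
        x = x - 0.5 * x * DT + 0.3 * DT ** 0.5 * torch.randn(n, 1)
        cols.append(x)
    return torch.cat(cols, dim=1)

def w1(a, b):                             # 1-D empirical Wasserstein-1 distance
    return (torch.sort(a)[0] - torch.sort(b)[0]).abs().mean()

for step in range(500):
    data = real_paths(BATCH)
    x, loss = torch.zeros(BATCH, 1), 0.0
    for t in range(T_STEPS):
        inp = torch.cat([torch.full_like(x, t * DT), x], dim=1)
        x = x + drift(inp) * DT + diffusion(inp) * DT ** 0.5 * torch.randn(BATCH, 1)
        loss = loss + w1(x.squeeze(1), data[:, t + 1])    # match step-t marginals
    opt.zero_grad(); loss.backward(); opt.step()

print("final generation loss:", float(loss))
```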
Deep Generators on Commodity Markets; application to Deep Hedging
by Mikael, Joseph; Remlinger, Carl; Boursin, Nicolas
in Commodities, Commodity options, Computer vision
2022
Driven by the good results obtained in computer vision, deep generative methods for time series have received particular attention in recent years, notably from the financial industry. In this article, we focus on commodity markets, where four state-of-the-art generative methods, namely the Time Series Generative Adversarial Network (GAN) of Yoon et al. [2019], the Causal Optimal Transport GAN of Xu et al. [2020], the Signature GAN of Ni et al. [2020] and the conditional Euler generator of Remlinger et al. [2021], are adapted and tested on commodity time series. A first series of experiments deals with the joint generation of historical time series on commodities. A second set deals with deep hedging of commodity options trained on the generated time series. This use case illustrates a purely data-driven approach to risk hedging.
Deep Generators on Commodity Markets; application to Deep Hedging
2022
Driven by the good results obtained in computer vision, deep generative methods for time series have received particular attention in recent years, notably from the financial industry. In this article, we focus on commodity markets, where four state-of-the-art generative methods, namely the Time Series Generative Adversarial Network (GAN) of Yoon et al. [2019], the Causal Optimal Transport GAN of Xu et al. [2020], the Signature GAN of Ni et al. [2020] and the conditional Euler generator of Remlinger et al. [2021], are adapted and tested on commodity time series. A first series of experiments deals with the joint generation of historical time series on commodities. A second set deals with deep hedging of commodity options trained on the generated time series. This use case illustrates a purely data-driven approach to risk hedging.
Reinforcement Learning in Economics and Finance
by Charpentier, Arthur; Remlinger, Carl; Elie, Romuald
in Algorithms, Distance learning, Economics
2020
Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process through repeated experience. In a given environment, the agent's policy provides him with running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when the agent picks an action he cannot infer ex post the rewards that other action choices would have induced. In reinforcement learning, however, his actions have consequences: they influence not only rewards but also future states of the world. The goal of reinforcement learning is to find an optimal policy, a mapping from the states of the world to the set of actions, that maximizes cumulative reward, which is inherently a long-term objective: exploring may be sub-optimal over a short horizon yet lead to higher long-term rewards. Many optimal-control problems, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we survey state-of-the-art reinforcement learning techniques and present applications in economics, game theory, operations research and finance.
Reinforcement Learning in Economics and Finance
2020
Reinforcement learning algorithms describe how an agent can learn an optimal action policy in a sequential decision process through repeated experience. In a given environment, the agent's policy provides him with running and terminal rewards. As in online learning, the agent learns sequentially. As in multi-armed bandit problems, when the agent picks an action he cannot infer ex post the rewards that other action choices would have induced. In reinforcement learning, however, his actions have consequences: they influence not only rewards but also future states of the world. The goal of reinforcement learning is to find an optimal policy, a mapping from the states of the world to the set of actions, that maximizes cumulative reward, which is inherently a long-term objective: exploring may be sub-optimal over a short horizon yet lead to higher long-term rewards. Many optimal-control problems, popular in economics for more than forty years, can be expressed in the reinforcement learning framework, and recent advances in computational science, provided in particular by deep learning algorithms, can be used by economists to solve complex behavioral problems. In this article, we survey state-of-the-art reinforcement learning techniques and present applications in economics, game theory, operations research and finance.
Expert Aggregation for Financial Forecasting
2021
Machine learning algorithms dedicated to financial time series forecasting have gained a lot of interest over the last few years. One difficulty lies in the choice between several algorithms, as their estimation accuracy may be unstable over time. In this paper, we propose to apply an online aggregation-based forecasting model that combines several machine learning techniques to build a portfolio which dynamically adapts itself to market conditions. We apply this aggregation technique to the construction of a long-short portfolio of individual stocks ranked on their financial characteristics, and we demonstrate how aggregation outperforms single algorithms in terms of both performance and stability.