26 results for "Pastorello, Sergio"
Artificial Intelligence, Algorithmic Pricing, and Collusion
Increasingly, algorithms are supplanting human decision-makers in pricing goods and services. To analyze the possible consequences, we study experimentally the behavior of algorithms powered by Artificial Intelligence (Q-learning) in a workhorse oligopoly model of repeated price competition. We find that the algorithms consistently learn to charge supracompetitive prices, without communicating with one another. The high prices are sustained by collusive strategies with a finite phase of punishment followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand, changes in the number of players, and various forms of uncertainty.
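The mechanism the abstract describes can be illustrated with a minimal toy version: two Q-learning sellers repeatedly set prices on a discrete grid, each conditioning on the previous period's price pair. The linear demand system, grid, and learning parameters below are illustrative assumptions, not the paper's calibration.

```python
import random

def q_learning_pricing(n_prices=5, periods=2000, alpha=0.1, gamma=0.95,
                       eps=0.1, seed=0):
    """Two Q-learning sellers repeatedly setting prices (toy sketch).

    State = both sellers' previous price indices. Seller i's per-period
    profit uses a hypothetical linear demand: 1 - p_i + 0.5 * p_j,
    truncated at zero. Not the paper's specification.
    """
    rng = random.Random(seed)
    prices = [i / (n_prices - 1) for i in range(n_prices)]  # grid on [0, 1]
    # One Q-table per seller: Q[i][state][action]
    Q = [{(s0, s1): [0.0] * n_prices
          for s0 in range(n_prices) for s1 in range(n_prices)}
         for _ in range(2)]
    state = (0, 0)
    for _ in range(periods):
        acts = []
        for i in range(2):
            if rng.random() < eps:                 # explore
                acts.append(rng.randrange(n_prices))
            else:                                  # exploit current Q-values
                q = Q[i][state]
                acts.append(max(range(n_prices), key=lambda a: q[a]))
        next_state = (acts[0], acts[1])
        for i in range(2):
            p_i, p_j = prices[acts[i]], prices[acts[1 - i]]
            profit = p_i * max(0.0, 1 - p_i + 0.5 * p_j)
            best_next = max(Q[i][next_state])
            # Standard Q-learning update toward the one-period Bellman target
            Q[i][state][acts[i]] += alpha * (
                profit + gamma * best_next - Q[i][state][acts[i]])
        state = next_state
    return Q, state
```

With enough periods, both sellers' greedy policies stabilize; the paper's point is that in such settings the learned prices end up above the competitive level without any communication between the algorithms.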
Algorithmic Pricing: What Implications for Competition Policy?
Pricing decisions are increasingly in the “hands” of artificial algorithms. Scholars and competition authorities have voiced concerns that those algorithms are capable of sustaining collusive outcomes more effectively than can human decision makers. If this is so, then our traditional policy tools for fighting collusion may have to be reconsidered. We discuss these issues by critically surveying the relevant law, economics, and computer science literature.
Iterative and Recursive Estimation in Structural Nonadaptive Models
An inference method, called latent backfitting, is proposed. This method appears well suited for econometric models where the structural relationships of interest define the observed endogenous variables as a known function of unobserved state variables and unknown parameters. This nonlinear state-space specification paves the way for iterative or recursive EM-like strategies. In the E steps, the state variables are forecasted given the observations and a value of the parameters. In the M steps, these forecasts are used to deduce estimators of the unknown parameters from the statistical model of latent variables. The proposed iterative/recursive estimation is particularly useful for latent regression models and for dynamic equilibrium models involving latent state variables. Practical implementation issues are discussed through the example of term structure models of interest rates.
Estimating and testing non-affine option pricing models with a large unbalanced panel of options
In this paper, we consider joint estimation of objective and risk-neutral parameters for stochastic volatility option pricing models using both stock and option prices. A common strategy simplifies the task by limiting the analysis to just one option per date. We first discuss its drawbacks on the basis of model interpretation, estimation results and pricing exercises. We then turn our attention to a more flexible approach that successfully exploits the wealth of information contained in large heterogeneous panels of options, and we apply it to actual S&P 500 index and index call options data. Our approach breaks the stochastic singularity between contemporaneous option prices by assuming that every observation is affected by measurement error, essentially recasting the problem as a non-linear filtering one. The resulting likelihood function is evaluated using a Monte Carlo Importance Sampling (MCIS) strategy, combined with a Particle Filter algorithm. The results provide useful intuitions on the directions that should be followed to extend the model, in particular by allowing jumps or regime switching in the volatility process.
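A plain bootstrap particle filter (a standard textbook device, simpler than the MCIS-plus-particle-filter scheme the abstract describes) shows how assuming measurement error turns likelihood evaluation into a filtering problem. The basic stochastic volatility model and all parameter values below are illustrative.

```python
import math
import random

def pf_loglik(y, mu=-1.0, phi=0.95, sig=0.2, n_part=500, seed=0):
    """Bootstrap particle filter log-likelihood for a basic SV model:

        h_t = mu + phi*(h_{t-1} - mu) + sig*eta_t,   y_t = exp(h_t/2)*eps_t

    with eta_t, eps_t standard normal. Toy sketch, not the paper's method.
    """
    rng = random.Random(seed)
    s = sig / math.sqrt(1 - phi ** 2)
    h = [rng.gauss(mu, s) for _ in range(n_part)]  # stationary initial draws
    ll = 0.0
    for yt in y:
        # Propagate each particle through the state equation
        h = [mu + phi * (hp - mu) + sig * rng.gauss(0, 1) for hp in h]
        # Weight by the measurement density y_t ~ N(0, exp(h_t))
        w = [math.exp(-0.5 * (ht + yt * yt * math.exp(-ht))) for ht in h]
        mw = sum(w) / n_part
        ll += math.log(mw / math.sqrt(2 * math.pi))
        # Multinomial resampling keeps the particle cloud from degenerating
        h = rng.choices(h, weights=w, k=n_part)
    return ll
```

Plugging such a likelihood evaluator into a numerical optimizer is one simple route to the kind of filtering-based estimation the abstract describes.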
Maximization by parts in extremum estimation
In this paper, we present various iterative algorithms for extremum estimation in cases where direct computation of the extremum estimator or via the Newton-Raphson algorithm is difficult, if not impossible. While the Newton-Raphson algorithm makes use of the full Hessian matrix, which may be difficult to evaluate, our algorithms use parts of the Hessian matrix only, the parts that are easier to compute. We establish consistency and asymptotic efficiency of our iterative estimators under regularity and information dominance conditions. We argue that the economic interpretation of a structural econometric model will often allow us to give credibility to a well-suited information dominance condition. We apply our algorithms to the estimation of the Merton structural credit risk model and to the Heston stochastic volatility option pricing model.
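The idea of iterating with only the easy-to-compute part of the Hessian can be sketched in the abstract: replace the full Newton step with one that inverts only the tractable (here, diagonal) part. This is a generic toy illustration of the principle, not the paper's algorithm or its Merton/Heston applications; the quadratic objective in the test is made up.

```python
def partial_hessian_newton(grad, hdiag, x0, iters=200, tol=1e-10):
    """Newton-type iteration using only part of the Hessian (toy sketch).

    grad(x)  -> list with the gradient at x
    hdiag(x) -> list with the easy-to-compute diagonal Hessian entries;
                the hard-to-evaluate cross terms are simply ignored.
    """
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        # Coordinate-wise Newton step with the partial Hessian
        step = [gi / hi for gi, hi in zip(g, hdiag(x))]
        x = [xi - si for xi, si in zip(x, step)]
        if max(abs(si) for si in step) < tol:
            break
    return x
```

When the retained part of the Hessian dominates the ignored part (the "information dominance" condition the abstract mentions), the iteration still converges to the same critical point as a full Newton method, just at a slower, geometric rate.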
Mean-variance econometric analysis of household portfolios
We investigate households' portfolio choice using a microeconometric approach derived from mean-variance optimization. We assume that households have heterogeneous expectations on the distribution of excess returns and that they cannot take short positions in risky assets. Assuming two such assets, we derive an explicit solution of the model characterized by four possible portfolio regimes, which are analyzed using two structural probit and tobit specifications with three latent state variables. Both specifications are estimated by weighted maximum likelihood on a cross-section of US households drawn from the 2004 SCF. The tobit specification is simulated in order to evaluate the regressors' effects on regime probabilities and asset demands. We also assess to what extent the predicted state variables are consistent with the self-reported expected returns and risk aversion elicited from the SCF questionnaire.
Statistical Inference for Random-Variance Option Pricing
This article deals with the estimation of continuous-time stochastic volatility models of option pricing. We argue that option prices are much more informative about the parameters than are asset prices. This is confirmed in a Monte Carlo experiment that compares two very simple strategies based on the different information sets. Both approaches are based on indirect inference and avoid any discretization bias by simulating the continuous-time model. We assume an Ornstein-Uhlenbeck process for the log of the volatility, a zero-volatility risk premium, and no leverage effect. We do not pursue asymptotic efficiency or specification issues; rather, we stick to a framework with no overidentifying restrictions and show that, given our option-pricing model, estimation based on option prices is much more precise in samples of typical size, without increasing the computational burden.
Rejoinder: Sergio Pastorello / Valentin Patilea / Eric Renault
A rejoinder to comments on Pastorello, Patilea, and Renault (2003) is presented. M. Chernov's discussion interestingly focuses on the concept of nuisance parameters and gives the authors the opportunity to clarify some notational issues that seem to be responsible for some of the discussants' difficulties. Q. Dai presents an interesting decomposition of the likelihood, which is relevant for understanding the informational content of the different blocks. Durham and Geweke's comments are gladly welcomed; through some neat geometric interpretations, they have improved the authors' own understanding of the issue of nonadaptivity, as well as of the backfitting kind of solution to it. The authors are grateful to M. Johannes and N. Polson for having carefully read the article and given an insightful account of the inference issue.
Securitization, Covered Bonds and the Risk Taking Behavior of European Banks
This study investigates the impact of securitization and the issuance of covered bonds on the credit risk taking behavior of banks. We collected data for seven major European economies for the period between 2001 and 2014, that is, both before and after the global financial crisis of 2008. In this paper, we address self-selection concerns about the endogeneity of the decision to securitize or issue covered bonds by using the Covariance Balancing Propensity Score method. We inquire whether securitizing banks hold portfolios that contain riskier assets than those of banks that issue covered bonds and whether the risk taking behavior of banks changed after the recent financial crisis. Our results suggest that European banks typically view securitization as a financing rather than a risk management tool. Therefore, our findings do not support the conventional wisdom that the absence of skin in the game causes banks to assume more risk. Instead, we find evidence that securitizing banks have been opting for lower risk asset portfolios after the 2008 crisis.