1,185 result(s) for "Risikomaß"
"Dice"-sion-Making Under Uncertainty: When Can a Random Decision Reduce Risk?
Stochastic programming and distributionally robust optimization seek deterministic decisions that optimize a risk measure, possibly in view of the most adverse distribution in an ambiguity set. We investigate under which circumstances such deterministic decisions are strictly outperformed by random decisions, which depend on a randomization device producing uniformly distributed samples that are independent of all uncertain factors affecting the decision problem. We find that, in the absence of distributional ambiguity, deterministic decisions are optimal if both the risk measure and the feasible region are convex or, alternatively, if the risk measure is mixture quasiconcave. We show that some risk measures, such as mean (semi-)deviation and mean (semi-)moment measures, fail to be mixture quasiconcave and can, therefore, give rise to problems in which the decision maker benefits from randomization. Under distributional ambiguity, however, we show that, for any ambiguity-averse risk measure satisfying a mild continuity property, we can construct a decision problem in which a randomized decision strictly outperforms all deterministic decisions.
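The dividing line here is mixture quasiconcavity of the risk measure ρ viewed as a functional of the outcome distribution. A short statement of the condition, in notation assumed for illustration rather than taken from the paper:

```latex
% Mixture quasiconcavity: for all outcome distributions F, G
% and all mixing weights \lambda \in [0, 1],
\rho\bigl(\lambda F + (1-\lambda)G\bigr) \;\ge\; \min\{\rho(F),\, \rho(G)\}
```

Randomizing over decisions turns a single outcome distribution into a mixture of the component distributions, so whenever this inequality holds the mixture can never improve on its best component and some deterministic decision remains optimal; the measures named above fail exactly this inequality.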
Machine Learning and Portfolio Optimization
The portfolio optimization model has limited impact in practice because of estimation issues when applied to real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution toward one associated with less estimation error in the performance. We consider PBR for both mean-variance and mean-conditional value-at-risk (CVaR) problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on rank-1 approximation and another based on a convex quadratic approximation. The rank-1 approximation PBR adds a bias to the optimal allocation, and the convex quadratic approximation PBR shrinks the sample covariance matrix. For the mean-CVaR problem, the PBR model is a combinatorial optimization problem, but we prove its convex relaxation, a quadratically constrained quadratic program, is essentially tight. We show that the PBR models can be cast as robust optimization problems with novel uncertainty sets and establish asymptotic optimality of both sample average approximation (SAA) and PBR solutions and the corresponding efficient frontiers. To calibrate the right-hand sides of the PBR constraints, we develop new, performance-based k-fold cross-validation algorithms. Using these algorithms, we carry out an extensive empirical investigation of PBR against SAA, as well as L1 and L2 regularizations and the equally weighted portfolio. We find that PBR dominates all other benchmarks for two out of three Fama–French data sets. This paper was accepted by Yinyu Ye, optimization.
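For reference, the sample average approximation (SAA) baseline that PBR augments is the usual Rockafellar–Uryasev linear reformulation of the mean-CVaR problem. A minimal sketch in Python with cvxpy, using made-up data; the PBR variance constraint itself is not shown:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(1e-3, 2e-2, size=(500, 10))  # hypothetical return sample (T x n)
T, n = R.shape
beta = 0.95      # CVaR confidence level
target = 5e-4    # hypothetical minimum mean return

w = cp.Variable(n)      # portfolio weights
alpha = cp.Variable()   # VaR auxiliary variable
# Rockafellar-Uryasev: CVaR_beta = min_alpha  alpha + E[(loss - alpha)^+] / (1 - beta)
cvar = alpha + cp.sum(cp.pos(-R @ w - alpha)) / ((1 - beta) * T)
prob = cp.Problem(cp.Minimize(cvar),
                  [cp.sum(w) == 1, w >= 0, R.mean(axis=0) @ w >= target])
prob.solve()
```

PBR would add a constraint bounding the sample variance of the estimated risk (the combinatorial object whose convex relaxation the paper proves essentially tight); calibrating that constraint's right-hand side is what the performance-based cross-validation does.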
Investor Attention and Stock Returns
We propose an investor attention index based on proxies in the literature and find that it predicts the stock market risk premium significantly, both in sample and out of sample, whereas every proxy individually has little predictive power. The index is extracted using partial least squares, but the results are similar when scaled principal component analysis is used instead. Moreover, the index can deliver sizable economic gains for mean-variance investors in asset allocation. The predictive power of the investor attention index stems primarily from the reversal of temporary price pressure and from the stronger forecasting ability for high-variance stocks.
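A minimal sketch of the extraction step, assuming a proxy matrix X (T × k, standardized) and next-period market excess returns y; scikit-learn's PLS is used here as a stand-in for the paper's partial least squares procedure:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((240, 6))   # hypothetical standardized attention proxies
y = rng.standard_normal(240)        # hypothetical market excess returns

# One PLS factor: the linear combination of proxies most relevant for y.
pls = PLSRegression(n_components=1)
pls.fit(X, y)
attention_index = pls.transform(X).ravel()
```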
Measuring Uncertainty
This paper exploits a data rich environment to provide direct econometric estimates of time-varying macroeconomic uncertainty. Our estimates display significant independent variations from popular uncertainty proxies, suggesting that much of the variation in the proxies is not driven by uncertainty. Quantitatively important uncertainty episodes appear far less frequently than indicated by popular uncertainty proxies, but when they do occur, they are larger, more persistent, and are more correlated with real activity. Our estimates provide a benchmark to evaluate theories for which uncertainty shocks play a role in business cycles.
Forecasting Value at Risk and Expected Shortfall Using a Semiparametric Approach Based on the Asymmetric Laplace Distribution
Value at Risk (VaR) forecasts can be produced from conditional autoregressive VaR models, estimated using quantile regression. Quantile modeling avoids a distributional assumption, and allows the dynamics of the quantiles to differ for each probability level. However, by focusing on a quantile, these models provide no information regarding expected shortfall (ES), which is the expectation of the exceedances beyond the quantile. We introduce a method for predicting ES corresponding to VaR forecasts produced by quantile regression models. It is well known that quantile regression is equivalent to maximum likelihood based on an asymmetric Laplace (AL) density. We allow the density's scale to be time-varying, and show that it can be used to estimate conditional ES. This enables a joint model of conditional VaR and ES to be estimated by maximizing an AL log-likelihood. Although this estimation framework uses an AL density, it does not rely on an assumption for the returns distribution. We also use the AL log-likelihood for forecast evaluation, and show that it is strictly consistent for the joint evaluation of VaR and ES. Empirical illustration is provided using stock index data. Supplementary materials for this article are available online.
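A sketch of the joint VaR/ES score implied by the AL density, assuming the common lower-tail convention in which both the VaR and ES forecasts are negative with ES below VaR; the exact parameterization in the paper may differ:

```python
import numpy as np

def al_joint_score(y, var, es, tau=0.01):
    """Average negative AL log-likelihood for returns y given one-step VaR
    and ES forecasts at probability level tau (var, es < 0 assumed).
    Lower is better; strictly consistent for the (VaR, ES) pair."""
    hit = (y <= var).astype(float)
    return np.mean(-np.log((tau - 1.0) / es)
                   + (y - var) * (tau - hit) / (tau * es))
```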
Good and Bad Variance Premia and Expected Returns
We measure “good” and “bad” variance premia that capture risk compensations for the realized variation in positive and negative market returns, respectively. The two variance premium components jointly predict excess returns over the next one and two years with statistically significant positive (negative) coefficients on the good (bad) component. The R²s reach about 10% for aggregate equity and portfolio returns and 20% for corporate bond returns. To explain the new empirical evidence, we develop a model that highlights the differential impact of upside and downside risk on equity and variance risk premia. The online appendix is available at https://doi.org/10.1287/mnsc.2017.2890. This paper was accepted by Neng Wang, finance.
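The realized legs of the two premia come from signed high-frequency returns; a minimal sketch follows (the premia themselves subtract these realized pieces from their risk-neutral counterparts, which require option data and are not shown):

```python
import numpy as np

def realized_semivariances(r):
    """Split the realized variance of intraday returns r into an upside
    ('good') and a downside ('bad') component."""
    good = float(np.sum(r[r > 0.0] ** 2))
    bad = float(np.sum(r[r <= 0.0] ** 2))
    return good, bad
```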
Quantile-Based Risk Sharing
We address the problem of risk sharing among agents using a two-parameter class of quantile-based risk measures, the so-called range-value-at-risk (RVaR), as their preferences. The family of RVaR includes the value-at-risk (VaR) and the expected shortfall (ES), the two popular and competing regulatory risk measures, as special cases. We first establish an inequality for RVaR-based risk aggregation, showing that RVaR satisfies a special form of subadditivity. Then, the Pareto-optimal risk sharing problem is solved through explicit construction. To study risk sharing in a competitive market, an Arrow–Debreu equilibrium is established for some simple yet natural settings. Furthermore, we investigate the problem of model uncertainty in risk sharing and show that, in general, a robust optimal allocation exists if and only if none of the underlying risk measures is a VaR. Practical implications of our main results for risk management and policy makers are discussed, and several novel advantages of ES over VaR from the perspective of a regulator are thereby revealed. The e-companion is available at https://doi.org/10.1287/opre.2017.1716.
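Empirically, RVaR is an average of tail quantiles between two probability levels; a sketch under one common parameterization (conventions for the two levels vary across papers):

```python
import numpy as np

def rvar(losses, alpha, beta, grid_size=512):
    """Empirical range-value-at-risk: average loss quantile over (alpha, beta),
    alpha < beta. Approaches VaR at level alpha as beta -> alpha, and
    expected shortfall as beta -> 1."""
    levels = np.linspace(alpha, beta, grid_size)
    return float(np.quantile(losses, levels).mean())
```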
Robust Solutions of Optimization Problems Affected by Uncertain Probabilities
In this paper we focus on robust linear optimization problems with uncertainty regions defined by φ-divergences (for example, chi-squared, Hellinger, Kullback-Leibler). We show how uncertainty regions based on φ-divergences arise in a natural way as confidence sets if the uncertain parameters contain elements of a probability vector. Such problems frequently occur in, for example, optimization problems in inventory control or finance that involve terms containing moments of random variables, expected utility, etc. We show that the robust counterpart of a linear optimization problem with φ-divergence uncertainty is tractable for most of the choices of φ typically considered in the literature. We extend the results to problems that are nonlinear in the optimization variables. Several applications, including an asset pricing example and a numerical multi-item newsvendor example, illustrate the relevance of the proposed approach. This paper was accepted by Gérard P. Cachon, optimization.
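For the Kullback-Leibler case, the worst-case expectation over the divergence ball reduces to a one-dimensional convex dual, which is what makes the robust counterpart tractable. A sketch; the function name and the bounded search interval are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean_kl(c, q, rho):
    """sup { E_p[c] : KL(p || q) <= rho } via its dual
    inf_{lam > 0}  lam*rho + lam * log E_q[exp(c / lam)]."""
    def dual(lam):
        z = c / lam
        m = z.max()  # log-sum-exp shift for numerical stability
        return lam * rho + lam * (m + np.log(np.dot(q, np.exp(z - m))))
    res = minimize_scalar(dual, bounds=(1e-6, 1e3), method="bounded")
    return res.fun
```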
Modeling Dependence in High Dimensions With Factor Copulas
This article presents flexible new models for the dependence structure, or copula, of economic variables based on a latent factor structure. The proposed models are particularly attractive for relatively high-dimensional applications, involving 50 or more variables, and can be combined with semiparametric marginal distributions to obtain flexible multivariate distributions. Factor copulas generally lack a closed-form density, but we obtain analytical results for the implied tail dependence using extreme value theory, and we verify that simulation-based estimation using rank statistics is reliable even in high dimensions. We consider "scree" plots to aid the choice of the number of factors in the model. The model is applied to daily returns on all 100 constituents of the S&P 100 index, and we find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms. We also show that factor copula models provide superior estimates of some measures of systemic risk. Supplementary materials for this article are available online.
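The basic simulation idea behind a one-factor copula fits in a few lines; here a fat-tailed common factor with Gaussian idiosyncratic noise stands in for the article's more flexible (e.g., skew t) specifications:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, N = 1000, 100   # hypothetical sample size and dimension

# Latent one-factor structure: X_i = F + eps_i, where the heavy-tailed
# common factor F induces tail dependence among all pairs.
F = stats.t.rvs(df=4, size=(T, 1), random_state=rng)
eps = rng.standard_normal((T, N))
X = F + eps

# The copula is the joint law of the ranks; rank-transform to uniforms,
# matching the rank-based estimation the article relies on.
U = stats.rankdata(X, axis=0) / (T + 1)
```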
Model Comparison with Sharpe Ratios
We show how to conduct asymptotically valid tests of model comparison when the extent of model mispricing is gauged by the squared Sharpe ratio improvement measure. This is equivalent to ranking models on their maximum Sharpe ratios, effectively extending the Gibbons, Ross, and Shanken (1989) test to accommodate the comparison of nonnested models. Mimicking portfolios can be substituted for any nontraded model factors, and estimation error in the portfolio weights is taken into account in the statistical inference. A variant of the Fama and French (2018) 6-factor model, with a monthly updated version of the usual value spread, emerges as the dominant model.
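The quantity being compared is each model's maximum squared Sharpe ratio, θ² = μ′Σ⁻¹μ, computed from its factors (with mimicking portfolios substituted for nontraded factors). A sketch of the point estimate only; the paper's contribution, the asymptotic test for the difference across nonnested models, is not reproduced here:

```python
import numpy as np

def max_squared_sharpe(F):
    """Maximum squared Sharpe ratio attainable from factor excess
    returns F (T x k): theta^2 = mu' Sigma^{-1} mu."""
    mu = F.mean(axis=0)
    sigma = np.cov(F, rowvar=False)
    return float(mu @ np.linalg.solve(sigma, mu))

# Models are ranked by this value, e.g.
# max_squared_sharpe(F_model_A) - max_squared_sharpe(F_model_B).
```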