1,882 results for "Risk (Insurance) Mathematical models."
Ruin probabilities
The book gives a comprehensive treatment of classical and modern ruin probability theory. Topics include Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite-horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov modulation, periodicity, change-of-measure techniques, phase-type distributions as a computational vehicle, and the connection to other applied probability areas such as queueing theory. In this substantially updated and extended second edition, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.
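The compound Poisson model described in this abstract lends itself to a quick simulation. The sketch below is my own illustration (the function and parameter names are invented, not taken from the book): a Monte Carlo estimate of the finite-horizon ruin probability with exponential claims.

```python
import random

def ruin_probability(u, c, lam, claim_mean, horizon=100.0,
                     n_paths=5_000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability in the
    classical compound Poisson (Cramer-Lundberg) model: reserves grow at
    premium rate c and drop by exponential claims arriving at rate lam."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        t, reserve = 0.0, u
        while t < horizon:
            dt = rng.expovariate(lam)        # inter-claim waiting time
            t += dt
            if t > horizon:
                break
            reserve += c * dt                # premium income since last claim
            reserve -= rng.expovariate(1.0 / claim_mean)  # claim payment
            if reserve < 0:
                ruins += 1                   # ruin: reserve went negative
                break
    return ruins / n_paths
```

With a positive safety loading (c greater than lam times claim_mean), the estimate decreases in the initial reserve u, consistent with Lundberg's exponential bound.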
Equilibria in Health Exchanges: Adverse Selection versus Reclassification Risk
This paper studies regulated health insurance markets known as exchanges, motivated by the increasingly important role they play in both public and private insurance provision. We develop a framework that combines data on health outcomes and insurance plan choices for a population of insured individuals with a model of a competitive insurance exchange to predict outcomes under different exchange designs. We apply this framework to examine the effects of regulations that govern insurers' ability to use health status information in pricing. We investigate the welfare implications of these regulations with an emphasis on two potential sources of inefficiency: (i) adverse selection and (ii) premium reclassification risk. We find substantial adverse selection leading to full unraveling of our simulated exchange, even when age can be priced. While the welfare cost of adverse selection is substantial when health status cannot be priced, that of reclassification risk is five times larger when insurers can price based on some health status information. We investigate several extensions including (i) contract design regulation, (ii) self-insurance through saving and borrowing, and (iii) insurer risk adjustment transfers.
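The unraveling the paper quantifies can be illustrated with a textbook-style toy spiral (my own construction, not the paper's structural model): the competitive premium is repeatedly reset to the average cost of the current pool, the lowest-cost enrollees drop out, and enrollment shrinks.

```python
def unraveling_path(n=1000, markup=0.1, rounds=25):
    """Toy adverse-selection spiral: individual i has expected cost c_i and
    values coverage at c_i + markup.  Each round the competitive premium is
    the average cost of last round's enrollees; anyone whose valuation falls
    below the premium exits.  Returns the enrollment share per round."""
    costs = [i / n for i in range(n)]       # heterogeneous expected costs
    premium = sum(costs) / n                # start at the pooled average cost
    shares = []
    for _ in range(rounds):
        pool = [c for c in costs if c + markup >= premium]
        if not pool:                        # full unraveling: nobody buys
            shares.append(0.0)
            break
        shares.append(len(pool) / n)
        premium = sum(pool) / len(pool)     # re-price at the pool's avg cost
    return shares
```

With these parameters the market does not fully unravel but converges to a small, high-cost pool; in the paper's richer simulated exchange, selection is strong enough to unravel the market completely.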
Inferring Labor Income Risk and Partial Insurance from Economic Choices
This paper uses the information contained in the joint dynamics of individuals' labor earnings and consumption-choice decisions to quantify both the amount of income risk that individuals face and the extent to which they have access to informal insurance against this risk. We accomplish this task by using indirect inference to estimate a structural consumption-savings model, in which individuals both learn about the nature of their income process and partly insure shocks via informal mechanisms. In this framework, we estimate (i) the degree of partial insurance, (ii) the extent of systematic differences in income growth rates, (iii) the precision with which individuals know their own income growth rates when they begin their working lives, (iv) the persistence of typical labor income shocks, (v) the tightness of borrowing constraints, and (vi) the amount of measurement error in the data. In implementing indirect inference, we find that an auxiliary model that approximates the true structural equations of the model (which are not estimable) works very well, with negligible small sample bias. The main substantive findings are that income shocks are moderately persistent, systematic differences in income growth rates are large, individuals have substantial amounts of information about their income growth rates, and about one-half of income shocks are smoothed via partial insurance. Putting these findings together, the amount of uninsurable lifetime income risk that individuals perceive is substantially smaller than what is typically assumed in calibrated macroeconomic models with incomplete markets.
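The headline finding that about one-half of income shocks are smoothed can be illustrated with a simple pass-through calculation (a toy moment-based sketch of my own, not the paper's indirect-inference estimator): with partial insurance coefficient phi, only (1 - phi) of each income shock shows up in consumption growth, and phi is recovered from the covariance of the two.

```python
import random

def estimate_insurance(n=20_000, phi=0.5, seed=3):
    """Toy partial-insurance estimate: eps are income shocks, but only
    (1 - phi) of each shock passes through to consumption growth dc.
    The insurance coefficient is recovered as 1 - cov(dc, eps) / var(eps)."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, 0.3) for _ in range(n)]                # income shocks
    dc = [(1.0 - phi) * e + rng.gauss(0.0, 0.05) for e in eps]   # + taste noise
    m_e = sum(eps) / n
    m_c = sum(dc) / n
    cov = sum((e - m_e) * (c - m_c) for e, c in zip(eps, dc)) / n
    var = sum((e - m_e) ** 2 for e in eps) / n
    return 1.0 - cov / var
```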
Equity derivatives and hybrids : markets, models and methods
"Since the development of the Black-Scholes model, research on equity derivatives has evolved rapidly, to the point where it is now difficult to cut through the myriad of literature to find relevant material. Written by an experienced practitioner and acknowledged authority on quantitative equity research, this book provides an up-to-date account of equity and equity-hybrid (equity-rates, equity-credit, equity-foreign exchange) derivatives modeling from a practitioner's perspective. The content reflects the requirements of practitioners in financial institutions: quants will find a survey of state-of-the-art models and guidance on how to implement them efficiently with regard to market data representation, calibration and sensitivity computation. Traders and structurers will learn about structured products and the selection of the most appropriate models, as well as efficient hedging methods, while risk managers will better understand market, credit and model risk and find valuable information on advanced correlation concepts. Equity Derivatives and Hybrids provides exhaustive coverage of both market-standard and new approaches, including:
• Empirical properties of stock returns, including autocorrelation and jumps
• Dividend discount models
• Non-Markovian and discrete-time volatility processes
• Correlation skew modeling via copulas as well as local and stochastic correlation factors
• Hybrid modeling covering local and stochastic processes for interest rate, hazard rate and volatility, as well as closed-form solutions
• Credit, debt and funding valuation adjustments (CVA, DVA, FVA)
• Monte Carlo techniques for sensitivities, including algorithmic differentiation, path recycling and multilevel simulation
Written in a highly accessible manner with examples, applications, research and ideas throughout, this book provides a valuable resource for quantitative-minded practitioners and researchers everywhere." -- Provided by publisher.
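As a concrete reference point for the modeling surveyed above, here is the standard closed-form Black-Scholes price of a European call (the textbook formula, not code from the book), implemented with only the standard library:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, maturity, rate, vol):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) \
         / (vol * math.sqrt(maturity))
    d2 = d1 - vol * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)
```

An at-the-money one-year call with spot 100, 5% rate and 20% volatility prices at about 10.45; the equity-hybrid models covered in the book generalize exactly this baseline.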
Heterogeneous Choice Sets and Preferences
We propose a robust method of discrete choice analysis when agents’ choice sets are unobserved. Our core model assumes nothing about agents’ choice sets apart from their minimum size. Importantly, it leaves unrestricted the dependence, conditional on observables, between choice sets and preferences. We first characterize the sharp identification region of the model’s parameters by a finite set of conditional moment inequalities. We then apply our theoretical findings to learn about households’ risk preferences and choice sets from data on their deductible choices in auto collision insurance. We find that the data can be explained by expected utility theory with low levels of risk aversion and heterogeneous non-singleton choice sets, and that more than three in four households require limited choice sets to explain their deductible choices. We also provide simulation evidence on the computational tractability of our method in applications with larger feasible sets or higher-dimensional unobserved heterogeneity.
Unravelling the predictive power of telematics data in car insurance pricing
A data set from a Belgian telematics product aimed at young drivers is used to identify how car insurance premiums can be designed based on the telematics data collected by a black box installed in the vehicle. In traditional pricing models for car insurance, the premium depends on self-reported rating variables (e.g. age and postal code) which capture characteristics of the policy(holder) and the insured vehicle and are often only indirectly related to the accident risk. Using telematics technology enables tailor-made car insurance pricing based on the driving behaviour of the policyholder. We develop a statistical modelling approach using generalized additive models and compositional predictors to quantify and interpret the effect of telematics variables on the expected claim frequency. We find that such variables increase the predictive power and render the use of gender as a rating variable redundant.
A Review of Flood Loss Models as Basis for Harmonization and Benchmarking
Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Because detailed and reliable flood loss data are lacking, first-order validations are difficult to accomplish, so model comparisons in the form of benchmarking are essential. Benchmarking checks whether the models are informed by existing data and knowledge and whether the assumptions made in the models align with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models are accompanied by explicit validation at the time of their proposal.
As an example, this paper presents an approach for the quantitative comparison of disparate models by reducing them to the input variables shared by all models. Harmonizing models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. The paper discusses the possibilities and challenges of model harmonization and of applying the inventory within a benchmarking framework.
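A flood vulnerability (depth-damage) function maps a hazard variable such as inundation depth to a relative loss, and harmonized comparison means evaluating disparate curves on the same shared inputs. A hedged sketch of that idea (the two example curves below are invented for illustration, not drawn from the reviewed models):

```python
def interp_curve(curve, depth):
    """Piecewise-linear depth-damage function: curve is a sorted list of
    (depth_m, damage_fraction) points; values are clamped at both ends."""
    if depth <= curve[0][0]:
        return curve[0][1]
    if depth >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

# Two hypothetical residential depth-damage curves, compared on a common
# depth grid in the spirit of the benchmarking the review proposes.
model_a = [(0.0, 0.00), (0.5, 0.20), (1.0, 0.40), (2.0, 0.60), (4.0, 0.90)]
model_b = [(0.0, 0.05), (0.5, 0.15), (1.0, 0.35), (2.0, 0.70), (4.0, 0.95)]

grid = [0.25 * i for i in range(17)]          # shared 0 m .. 4 m input grid
spread = [abs(interp_curve(model_a, d) - interp_curve(model_b, d))
          for d in grid]                      # model disagreement per depth
```

The maximum of `spread` is a crude disagreement measure; real benchmarking, as the paper stresses, additionally requires harmonizing building types, currencies and damage definitions across models.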