37 results for "Derrig, Richard A"
Predictive modeling applications in actuarial science
"Predictive modeling involves the use of data to forecast future events. It relies on capturing relationships between explanatory variables and the predicted variables from past occurrences and exploiting this to predict future outcomes. Forecasting future financial events is a core actuarial skill - actuaries routinely apply predictive-modeling techniques in insurance and other risk-management applications. This book is for actuaries and other financial analysts who are developing their expertise in statistics and wish to become familiar with concrete examples of predictive modeling. The book also addresses the needs of more seasoned practicing analysts who would like an overview of advanced statistical topics that are particularly relevant in actuarial practice. Predictive Modeling Applications in Actuarial Science emphasizes life-long learning by developing tools in an insurance context, providing the relevant actuarial applications, and introducing advanced statistical techniques that can be used by analysts to gain a competitive advantage in situations with complex data" -- Provided by publisher.
A Comparison of State-of-the-Art Classification Techniques for Expert Automobile Insurance Claim Fraud Detection
Several state-of-the-art binary classification techniques are experimentally evaluated in the context of expert automobile insurance claim fraud detection. The predictive power of logistic regression, C4.5 decision tree, k-nearest neighbor, Bayesian learning multilayer perceptron neural network, least-squares support vector machine, naive Bayes, and tree-augmented naive Bayes classification is contrasted. For most of these algorithm types, we report on several operationalizations using alternative hyperparameter or design choices. We compare these in terms of mean percentage correctly classified (PCC) and mean area under the receiver operating characteristic (AUROC) curve using a stratified, blocked, ten-fold cross-validation experiment. We also contrast algorithm type performance visually by means of the convex hull of the receiver operating characteristic (ROC) curves associated with the alternative operationalizations per algorithm type. The study is based on a data set of 1,399 personal injury protection claims from 1993 accidents collected by the Automobile Insurers Bureau of Massachusetts. To stay as close to real-life operating conditions as possible, we consider only predictors that are known relatively early in the life of a claim. Furthermore, based on the qualification of each available claim by both a verbal expert assessment of suspicion of fraud and a ten-point-scale expert suspicion score, we can compare classification for different target/class encoding schemes. Finally, we also investigate the added value of systematically collecting nonflag predictors for suspicion of fraud modeling purposes. 
From the observed results, we may state that: (1) independent of the target encoding scheme and the algorithm type, the inclusion of nonflag predictors allows us to significantly boost predictive performance; (2) for all the evaluated scenarios, the performance difference in terms of mean PCC and mean AUROC between many algorithm type operationalizations turns out to be rather small; visual comparison of the algorithm type ROC curve convex hulls also shows limited difference in performance over the range of operating conditions; (3) relatively simple and efficient techniques such as linear logistic regression and linear kernel least-squares support vector machine classification show excellent overall predictive capabilities, and (smoothed) naive Bayes also performs well; and (4) the C4.5 decision tree operationalization results are rather disappointing; none of the tree operationalizations are capable of attaining mean AUROC performance in line with the best. Visual inspection of the evaluated scenarios reveals that the C4.5 algorithm type ROC curve convex hull is often dominated in large part by most of the other algorithm type hulls.
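The mean-AUROC criterion used throughout this comparison has a convenient interpretation: it equals the probability that a randomly chosen positive (fraudulent) claim receives a higher score than a randomly chosen negative one, so it can be computed directly from pairwise rank comparisons. A minimal pure-Python sketch of that computation (the function name and toy scores are illustrative, not the authors' implementation):

```python
def auroc(scores, labels):
    # AUROC equals the probability that a randomly chosen positive
    # outranks a randomly chosen negative (Mann-Whitney U statistic).
    # Ties count as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0 and a random one gives about 0.5, which is why AUROC is a natural yardstick across classifier families with different score scales.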
Insurance Fraud
Insurance fraud is a major problem in the United States at the beginning of the 21st century. It has no doubt existed wherever insurance policies are written, taking different forms to suit the economic time and coverage available. From the advent of "railway spine" in the 19th century to "trip and falls" and "whiplash" in the 20th century, individuals and groups have always been willing and able to file bogus claims. The term fraud carries the connotation that the activity is illegal with prosecution and sanctions as the threatened outcomes. The reality of current discourse is a much more expanded notion of fraud that covers many unnecessary, unwanted, and opportunistic manipulations of the system that fall short of criminal behavior. Those may be better suited to civil adjudicators or legislative reformers. This survey describes the range of these moral hazards arising from asymmetric information, especially in claiming behavior, and the steps taken to model the process and enhance detection and deterrence of fraud in its widest sense. The fundamental problem for insurers coping with both fraud and systemic abuse is to devise a mechanism that efficiently sorts claims into categories that require the acquisition of additional information at a cost. The five articles published in this issue of the Journal of Risk and Insurance advance our knowledge on several fronts. Measurement, detection, and deterrence of fraud are advanced through statistical models, intelligent technologies are applied to informative databases to provide for efficient claim sorts, and strategic analysis is applied to property-liability and health insurance situations.
The Impact of Rate Regulation on Claims: Evidence From Massachusetts Automobile Insurance
The article tests the hypothesis that insurance price subsidies created by rate regulation lead to higher insurance cost growth. The article makes use of data from the Massachusetts private passenger automobile insurance market, where cross‐subsidies were explicitly built into the rate structure through rules that limit rate differentials and differences in rate increases across driver rating categories. Two approaches are taken to study the potential loss cost reaction to the Massachusetts cross‐subsidies. The first approach compares Massachusetts with all other states while controlling for demographic, regulatory, and liability coverage levels. Loss cost levels that were about 29 percent above the expected level are found for Massachusetts during years 1978–1998, when premiums charged were those fixed by the state and included explicit subsidies for high‐risk drivers. A second approach considers changing cost levels across Massachusetts by studying loss cost changes by town and relating those changes to subsidy providers and subsidy receivers. Subsidy data based on accident year data for 1993–2004 show a significant and positive (relative) growth in loss costs and an increasing proportion of high‐risk drivers for towns that were subsidy receivers, in line with the theory of underlying incentives for adverse selection and moral hazard.
Fraud Classification Using Principal Component Analysis of RIDITs
This article introduces to the statistical and insurance literature a mathematical technique for an a priori classification of objects when no training sample exists for which the exact correct group membership is known. The article also provides an example of the empirical application of the methodology to fraud detection for bodily injury claims in automobile insurance. With this technique, principal component analysis of RIDIT scores (PRIDIT), an insurance fraud detector can reduce uncertainty and increase the chances of targeting the appropriate claims so that an organization will be more likely to allocate investigative resources efficiently to uncover insurance fraud. In addition, other (exogenous) empirical models can be validated relative to the PRIDIT-derived weights for optimal ranking of fraud/nonfraud claims and/or profiling. The technique at once gives measures of the individual fraud indicator variables' worth and a measure of individual claim file suspicion level for the entire claim file that can be used to cogently direct further fraud investigation resources. Moreover, the technique does so at a lower cost than utilizing human insurance investigators, or insurance adjusters, but with similar outcomes. More generally, this technique is applicable to other commonly encountered managerial settings in which a large number of assignment decisions are made subjectively based on "clues," which may change dramatically over time. This article explores the application of these techniques to injury insurance claims for automobile bodily injury in detail.
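A rough sketch of the RIDIT-scoring step (the principal-component step that PRIDIT adds on top is omitted): each ordered response category is scored as the proportion of responses falling in lower categories minus the proportion falling in higher ones, so scores lie in [-1, 1] and sum to zero over the sample. The function name and toy data below are illustrative, not from the article:

```python
from collections import Counter

def ridit_scores(responses):
    # Score each ordered category as P(lower category) - P(higher category)
    # under the empirical distribution of the responses themselves.
    n = len(responses)
    counts = Counter(responses)
    score, below = {}, 0.0
    for cat in sorted(counts):
        p = counts[cat] / n
        score[cat] = below - (1.0 - below - p)  # prop. below minus prop. above
        below += p
    return [score[r] for r in responses]
```

PRIDIT would then stack these scores into a claims-by-indicators matrix and take its first principal component as the suspicion ranking.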
Using Kohonen's Self-Organizing Feature Map to Uncover Automobile Bodily Injury Claims Fraud
Claims fraud is an increasingly vexing problem confronting the insurance industry. In this empirical study, we apply Kohonen's Self-Organizing Feature Map to classify automobile bodily injury (BI) claims by the degree of fraud suspicion. Feedforward neural networks trained with backpropagation are used to investigate the validity of the Feature Map approach. Comparative experiments illustrate the potential usefulness of the proposed methodology. We show that this technique performs better than both an insurance adjuster's fraud assessment and an insurance investigator's fraud assessment with respect to consistency and reliability.
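The core of a self-organizing map is simple: each map unit holds a weight vector, and for every training sample the best-matching unit and its map neighbors are nudged toward the sample, with learning rate and neighborhood radius shrinking over time. A minimal one-dimensional sketch under those standard update rules (function name and parameters are illustrative, not the study's configuration):

```python
import math
import random

def train_som(data, n_units=5, iters=2000, lr0=0.5, radius0=2.0):
    # 1-D self-organizing map: each unit holds a weight vector; the
    # best-matching unit and its map neighbors move toward each sample.
    dim = len(data[0])
    rng = random.Random(0)
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        # Best-matching unit: smallest squared distance to the sample.
        bmu = min(range(n_units),
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        frac = 1.0 - t / iters          # learning rate and radius decay
        lr = lr0 * frac
        radius = radius0 * frac + 0.5
        for i in range(n_units):
            # Gaussian neighborhood on the map index, not in data space.
            h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units
```

After training, claims mapping to the same region of units can be treated as a suspicion class, which is the sense in which the study uses the map for classification.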
Modeling Hidden Exposures in Claim Severity via the EM Algorithm
We consider the issue of modeling the latent or hidden exposure occurring through either incomplete data or an unobserved underlying risk factor. We use the celebrated expectation-maximization (EM) algorithm as a convenient tool in detecting latent (unobserved) risks in finite mixture models of claim severity and in problems where data imputation is needed. We provide examples of applicability of the methodology based on real-life auto injury claim data and compare, when possible, the accuracy of our methods with that of standard techniques. Sample data and an EM algorithm program are included to allow readers to experiment with the EM methodology themselves.
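To make the EM idea concrete in the claim-severity setting: with a finite mixture, the E-step computes each claim's posterior probability of belonging to each component, and the M-step re-estimates the component parameters from those weights. A minimal sketch for a two-component exponential mixture (my own toy illustration, not the program distributed with the article; the crude initialization at half and double the sample mean is an assumption):

```python
import math

def em_exponential_mixture(data, iters=200):
    # Fit a two-component exponential mixture to claim severities via EM.
    # Parameters: mixing weight w and component means m1, m2.
    mean = sum(data) / len(data)
    w, m1, m2 = 0.5, 0.5 * mean, 2.0 * mean
    for _ in range(iters):
        # E-step: posterior probability each claim came from component 1.
        resp = []
        for x in data:
            d1 = w * math.exp(-x / m1) / m1
            d2 = (1.0 - w) * math.exp(-x / m2) / m2
            resp.append(d1 / (d1 + d2))
        # M-step: re-estimate the weight and the component means.
        s = sum(resp)
        w = s / len(data)
        m1 = sum(r * x for r, x in zip(resp, data)) / s
        m2 = sum((1.0 - r) * x for r, x in zip(resp, data)) / (len(data) - s)
    return w, m1, m2
```

The "hidden exposure" is exactly the unobserved component label; the posterior responsibilities recovered in the E-step are what the mixture model adds over a single-distribution fit.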
Catastrophe Management in a Changing World: The Case of Hurricanes
This article features a presentation and discussant comments on hurricane and wind insurance organized by Richard A. Derrig for the American Risk and Insurance Association (ARIA) 2007 Annual Meeting in Quebec City, Quebec, Canada. The principal presenter is Jay S. Fishman, Chairman and Chief Executive Officer of The Travelers Companies, Inc. The focus of the discussion concerns the tail of loss distributions. The tail is associated with catastrophes, or cat risk, such as the devastation from Hurricanes Katrina, Wilma, Andrew, and others. In this article, Fishman provides his perspective on the effect of wind in catastrophic losses, while Professors Joan Schmit and Martin Grace follow with discussions that place wind catastrophes in general insurance contexts.
Auto Insurance Fraud: Measurements and Efforts to Combat It
This article features a panel discussion on insurance fraud organized by Richard Derrig for the 2005 World Risk and Insurance Economics Congress (WRIEC). Derrig, who also serves as moderator, is president of OPAL Consulting LLC, Providence, RI, which he formed after retiring in 2004 from the Auto Insurers and Insurance Fraud Bureaus of Massachusetts. The first presenter, Beth Sprinkel, senior vice president of the American Institute for CPCU and the Insurance Institute of America (the Institutes), discusses the results of the Insurance Research Council's study of 2002 closed auto injury insurance claims. Next, Dan Johnston presents the results of ongoing activity concerning insurance fraud in Massachusetts, including an interesting case that has important long-term implications.
Equity Risk Premium
The equity risk premium (ERP) is an essential building block of the market value of risk. In theory, the collective action of all investors results in an equilibrium expectation for the return on the market portfolio in excess of the risk-free return, the ERP. The ability of the valuation actuary to choose a sensible value for the ERP, whether as a required input to capital asset pricing model valuation, or any of its descendants, is as important as choosing risk-free rates and risk relatives (betas) to the ERP for the asset at hand. The historical realized ERP for the stock market appears to be at odds with pricing theory parameters for risk aversion. Since 1985, there has been a constant stream of research papers, each reviewing theories of estimating market returns, examining historical data periods, or both. Those ERP value estimates vary widely, from about −1% to about 9%, depending on geometric or arithmetic averaging, short or long horizons, short- or long-run expectations, unconditional or conditional distributions, domestic or international data, data periods, and real or nominal returns. This paper examines the principal strains of the recent research on the ERP and catalogues the empirical values of the ERP implied by that research. In addition, the paper supplies several time series analyses of the standard Ibbotson Associates 1926-2002 ERP data using short Treasuries for the risk-free rate. Recommendations for ERP values to use in common actuarial valuation problems are also offered.
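Part of the wide spread in ERP estimates comes purely from the averaging convention: over any volatile return series the geometric (compound) mean is lower than the arithmetic mean, so the same historical data yields two different premiums. A minimal sketch with made-up returns (not data from the paper):

```python
def arithmetic_mean(returns):
    # Simple average of periodic returns.
    return sum(returns) / len(returns)

def geometric_mean(returns):
    # Compound growth rate implied by the same return series.
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(returns)) - 1.0

# An ERP estimate is the chosen average of market returns minus the
# same average of risk-free returns; swapping the averaging convention
# alone shifts the estimate.
```

The gap widens with volatility: a +50% year followed by a -50% year averages to zero arithmetically but leaves the investor with a compound loss, which is why horizon and averaging choices must be stated alongside any ERP figure.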