
Fuzzy trends in property-liability insurance claim costs.

Introduction

The problem of forecasting expected loss costs for a prospective policy period is one of the fundamental responsibilities of the property-casualty actuary. Forecasting accuracy may well be the most important ingredient in determining the ultimate profitability of the policy cohort (Derrig, 1987). Despite its critical nature, the actuarial literature on forecasting is rudimentary (Casualty Actuarial Society, 1990), and the more advanced research has focused primarily on forecasting severity (Cummins and Powell, 1980; Cummins and Harrington, 1985; and Cummins and Griepentrog, 1985).

The actuarial trending literature devotes little attention to the evaluation of alternative methods. Actuaries typically choose a model from a limited set of alternatives that has been assembled judgmentally. The model selection process usually does not have a formal feedback loop; that is, the selected model or models are almost never tested ex post to determine whether they performed well. The insurance economics literature (articles cited above) has focused on the forecasting accuracy of alternative methods, including time trend and econometric models. However, these studies have been oriented toward the identification of a "best" forecasting method, whereas the true objective of the process should be the selection of a forecast and not a method. Stated differently, it is not necessarily true that a method that was in some sense "best" in the past will provide the most accurate forecast of claim costs in the future. The best-method approach not only focuses on the wrong objective but also disregards the information provided by alternative trending methods. This is inappropriate in view of the growing body of literature demonstrating that forecast accuracy can be improved by combining forecasts (Zarnowitz and Braun, 1992; Clemen, 1989) and that no one method can be expected to perform best over time for different data series (McNees, 1992).

In our view, the presumption that there exists a single "best" model is fundamentally flawed. Not only do alternative models provide additional information, but models with more or less equal historical performance rankings can provide vastly different ex ante forecasts. Moreover, many of the criteria used in selecting a forecast are inherently vague; for example, actuaries may seek forecasts that are "accurate," "adequate," or "reasonable." A general setting or framework is needed within which the complexity and inherent vagueness of the model/forecast selection problem can be highlighted and accommodated.

The framework proposed here is fuzzy set theory (FST), a relatively new branch of mathematics, which provides a mechanism for dealing with vague or "fuzzy" data and decision criteria. FST can be used to model linguistic variables such as "a tall person," "a good risk," or "an accurate forecast," which are largely beyond the realm of conventional mathematics. FST also provides a way to systematize a decision process in the presence of multiple criteria, some or all of which are fuzzy. It offers a framework for retaining useful information provided by models that may not be "best" in the conventional sense. It also facilitates the integration of judgmental information with data that are more or less objective. Much more than a purely theoretical construct, fuzzy logic has been used in a wide variety of practical applications ranging from the design of video cameras and subway cars to medical diagnoses and computer network design. FST has been applied to insurance by Lemaire (1990).

The fundamental concept of FST is the alternative formalization of membership in a set to include the degree or strength of membership. In conventional set theory, membership in sets is dichotomous: an element either is or is not a member of a given set. In FST, on the other hand, elements can be definite members or non-members of a set, but they also can be more-or-less members of the set. The black and white membership of conventional set theory is replaced by black, white, and all shades of gray.

The attractiveness of FST for the forecasting problem comes from an exposition of what is meant by a "good" forecast or forecasting method. FST can be used to address practical forecasting problems by viewing individual forecasting methods as candidates for membership in the fuzzy set of "good" methods and/or by viewing individual forecasts as candidates for membership in the fuzzy set of "good" or "accurate" forecasts. FST provides a natural framework for studying forecasting problems that combine statistics and judgment to satisfy multiple decision criteria, some or all of which are expressed verbally rather than quantitatively.

This article demonstrates the use of fuzzy set theory as a methodology for the selection of insurance claim cost forecasts. The intent is to develop a method for integrating judgmental and statistical information to evaluate or grade a set of forecasts according to an appropriate set of criteria. The method provides a natural framework for capturing the information from forecasts produced by a wide range of methods in proportion to their degrees of membership in the fuzzy set of "good" forecasts.(1) Contrary to the conventional approach, the goal here is not to choose the "best" forecasting method. The answer to the forecasting problem posed in this article is the fuzzy set of good forecasts rather than any given forecast or forecast method. However, because insurers must choose a single trend factor to include in the rates, we also propose a method for producing a trend factor taking into account the information provided by the fuzzy set.

We conduct forecast experiments similar to those presented in Cummins and Powell (1980), Cummins and Harrington (1985), and Cummins and Griepentrog (1985). But while they tested forecasts of severity, we focus on pure premiums, because the ultimate effect of forecasting on profits is determined by the accuracy of forecasts of pure premiums, not by severity alone. Massachusetts private passenger automobile insurance data are used in the experiments. Fuzzy set theory is then used to rank the forecasting methods in terms of their membership values in the fuzzy set of "good" forecasts, and the membership values are used to select a trend factor.

The section below outlines the claim cost forecasting problem, and the next section discusses the results of the experiments. The results point to the difficulty of choosing the best forecasting method because of multiple "goodness" criteria and the diversity of forecasts generated by alternative methods. Following an overview of fuzzy set theory and a discussion of its use in decision problems, we illustrate the use of fuzzy set theory by evaluating the alternative forecasts in the Massachusetts experiments.

The Forecasting Problem in Property-Liability Insurance

Importance of Claim Cost Forecasting

The important role of claim cost forecasting in property-liability insurance ratemaking is apparent from a simple discounted cash flow ratemaking model, which gives the premium for a policy issued at time t with a single loss payment at time t+j:

P_t = L_t (1 + Δ_{t+j})/(1 + r)^j, (1)

where P_t = the premium,

L_t = expected loss payments at price levels prevailing at time t,

Δ_{t+j} = the predicted change in the price level between the issue date (time t) and period t+j, where the current price level = 1.0,

r = the risk-adjusted discount rate for losses = (1 + r_f)(1 - Π) - 1,

r_f = the risk-free rate, and

Π = the risk premium (profit loading).

Anticipated changes in claim costs are reflected in the trend factor, (1 + Δ_{t+j}). If insurance claim cost inflation were identical to general (economy-wide) inflation, it would not be necessary to devote much attention to estimating the trend factor. Interest rates are highly correlated with anticipated economy-wide inflation, and there is considerable evidence in support of the Fisher hypothesis, which states that (1 + r_f) = (1 + i)(1 + r_r), where i = the expected rate of inflation, and r_r = the real rate of interest. If expected insurance and general inflation were identical and the Fisher hypothesis were correct, no explicit trend factor would be needed and claim costs could simply be discounted at the risk-adjusted real rate, (1 + r_r)(1 - Π) - 1.

Insurance inflation usually is not equal to general inflation. For example, during much of the 1980s, inflation in private passenger auto insurance on a countrywide basis exceeded general inflation by a margin approaching three to one (Cummins and Tennyson, 1992). Thus, it is important for the trend factor to reflect anticipated insurance inflation. Claim cost forecasting is the method used to estimate insurance (claim cost) inflation.

Errors in the trend factor have nearly a one-for-one effect on realized profit. To illustrate, consider a policy with a single loss payment one period from the policy issue date (t). Assuming that insurer assets are invested at r_f, the insurer's net income (Y) at the end of the period is

Y = P_t(1 + r_f) - L_t(1 + Δ_a) = L_t (1 + Δ)/(1 - Π) - L_t(1 + Δ_a), (2)

where Δ = the predicted change in the price level, and

Δ_a = the actual change in the price level during the period.

Suppose net income is normed to the correct (perfect foresight) premium accumulated at interest, that is, to P_a(1 + r_f) = L_t(1 + Δ_a)/(1 - Π). Then we have

Y/[P_a(1 + r_f)] = Π + [(Δ - Δ_a)/(1 + Δ_a)], (3)

where P_a = L_t(1 + Δ_a)/(1 + r) = the perfect foresight premium.

The bracketed expression in equation (3) is the proportionate error in predicting loss growth, which we call the total predicted change error (TPCE). The TPCE is a direct offset against expected profit. Because Π is usually small (less than 0.05), trending errors can drastically alter the insurer's profit position. For example, if Π = 0.025, Δ = 0.05, and Δ_a = 0.08, the insurer will realize a profit of -0.0028 rather than the expected profit of 0.025.
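The arithmetic of this example can be checked with a short sketch; the function names are ours, not part of the ratemaking model:

```python
def tpce(delta, delta_a):
    """Proportionate error in predicting loss growth (the TPCE term)."""
    return (delta - delta_a) / (1 + delta_a)

def normed_profit(pi, delta, delta_a):
    """Equation (3): net income normed to the perfect-foresight premium."""
    return pi + tpce(delta, delta_a)

# The article's numbers: Pi = 0.025, Delta = 0.05, Delta_a = 0.08.
print(round(normed_profit(0.025, 0.05, 0.08), 4))  # -0.0028
```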

One of the principal reasons that insurers expect to make a profit by underwriting insurance policies is that the ultimate cost associated with those policies is uncertain at the time of sale. Doherty and Garven (1986) and Cummins (1988) have shown through options pricing models that even unsystematic loss cost uncertainty directly affects the fair price for the policy. Thus, the risk of claim cost trending error will be reflected in insurance prices. Insurers that are relatively unsuccessful at forecasting loss costs will not earn an adequate return on equity, reinforcing the need for the adoption of the best available forecasting techniques.

The Actuarial Approach to Trending

Expected losses are determined by the following relationship:(*)

E(L) = E(F) E(S), (4)

where E(L) = expected losses per exposure unit (the pure premium),

E(F) = expected claim frequency per exposure unit, and

E(S) = expected claim severity (monetary loss per claim).

Thus, ratemaking could proceed by either developing separate trend factors for frequency and severity and multiplying the results to obtain the pure premium trend or developing a single trend factor for the pure premium. Under some circumstances, disaggregation can create a pooling effect due to offsetting errors, leading to improved forecasts (Armstrong, 1978, pp. 455-458). However, this effect is not always present, and the value of disaggregation is ultimately an empirical issue.
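The pooling effect can be illustrated with hypothetical growth factors in which the frequency and severity errors partly cancel:

```python
# Hypothetical forecast-period growth factors (forecast vs. actual):
actual_freq, forecast_freq = 1.02, 1.05   # frequency over-predicted
actual_sev, forecast_sev = 1.10, 1.07     # severity under-predicted

freq_error = forecast_freq / actual_freq - 1   # about +2.9%
sev_error = forecast_sev / actual_sev - 1      # about -2.7%

# The pure premium is the product, so the component errors largely offset.
pp_error = (forecast_freq * forecast_sev) / (actual_freq * actual_sev) - 1
print(freq_error, sev_error, pp_error)
```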

The standard actuarial approach to forecasting claim costs is to develop separate trend factors for frequency and severity using simple linear or exponential time trend models. The models are estimated using ordinary least squares (OLS). The following equations and estimation period are typical:

ln(S_{t-i}) = α_S + β_S(t - i) + ε_{S,t-i} (5)

ln(F_{t-i}) = α_F + β_F(t - i) + ε_{F,t-i}, (6)

where t - i = estimation period index, i = 0, 1, ..., 11, and time t = the present,

α_j and β_j = coefficients to be estimated,

ε = a random error term,

S_{t-i} = the average paid claim cost in period t - i, and

F_{t-i} = the average claim frequency per exposure unit in period t - i.

The trend factor is obtained as follows:

1 + Δ_{t+j} = (1 + b_S)^j (1 + b_F)^j, (7)

where b_S and b_F = estimated values of the slope coefficients in equations (5) and (6), and

j = the length of the forecast period (quarters).

The forecast period is the time between the midpoint of the data period used in calculating the indicated rate level change and the average accident date for the policies that will be written under the new rates. A seven- or eight-quarter forecast period is typical.
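The trend factor calculation in equation (7) can be sketched as follows, with hypothetical quarterly slope estimates:

```python
def trend_factor(b_s, b_f, j=7):
    """Equation (7): 1 + Delta_{t+j} = (1 + b_S)^j * (1 + b_F)^j."""
    return (1 + b_s) ** j * (1 + b_f) ** j

# Hypothetical slopes: 2% quarterly severity growth and 1% quarterly
# frequency growth, compounded over a seven-quarter forecast period.
print(trend_factor(0.02, 0.01))
```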

Prior Research

The actuarial approach to forecasting is rudimentary, and it would not seem difficult to obtain better forecasts using more advanced methods. This line of investigation has been pursued by Cummins and Powell (1980), Cummins and Harrington (1985), and Cummins and Griepentrog (1985), who compare the actuarial approach with econometric models and more sophisticated time trending methods. The studies used severity data--that is, countrywide average paid claim costs for private passenger automobile insurance--and produced the following general findings:

1. Econometric models generally work better than the simple time trend models.

2. Broad-based indices forecast claim costs better than specific sectorial indices. The private sector wage rate performed well in predicting aggregate claim costs.

3. Single-variable econometric models are better than multiple-variable models. Adding another price or wage index as an explanatory variable does not improve predictive accuracy.

4. Simple time trend models (i.e., linear or exponential trending) usually perform better than complicated (ARIMA-type) trend models.

5. Forecasts can often be improved by estimating the forecasting equation using a method that corrects for first-order autocorrelation of the residuals.

6. Complicated econometric estimation adjustments generally do not improve forecast results.

7. A priori reasoning is a valuable guide for specifying models and selecting variables, but the best methods can be identified only by testing. A comprehensive survey of forecasting by Armstrong (1978) suggests that most of these conclusions apply in general to the forecasting of many types of economic and social science phenomena.

How to Identify a Good Forecast

The objective of this study is to explore the decision-making process for identifying a good forecast. Conventionally, the forecast selection process is based on theoretical arguments regarding the statistical properties of estimators and forecast errors or on empirical tests of predictive accuracy. The conventional approach aims to choose a single best forecasting method based on the results of the empirical tests. There are two problems with this approach. First, isolating a single method based on somewhat subjectively selected accuracy criteria disregards the information contained in alternative forecasting methods that perform nearly as well as the conventionally-selected best method. The "nearby" methods could have performed "best" based on different accuracy criteria or under different economic conditions. Second, the conventional approach focuses on the choice of a method, even though the true objective of the process is the selection of the trend factor and not the method itself.(2) By way of analogy, consider the selection of a stock for investment purposes. The investor's objective is to choose a stock that will perform well in the future, not to develop methods that explain past performance in some optimal way.

This article proposes an alternative approach to the trend factor decision problem, the use of fuzzy set theory. FST is the mathematical approach that explicitly models the vagueness of linguistic variables such as "accurate," "better," or "reliable" (Lemaire, 1990). Because FST provides a mechanism for simultaneously considering several forecasting methods and distinguishing among them by means of an explicit grading or ranking process, it allows the decision-maker to retain more of the information contained in the set of alternative forecasts. Unlike conventional methods, our FST approach allows for explicit consideration of the output of the forecasting methods--the trend factor--as part of the decision-making process. Thus, it potentially provides a better model of the actual decision process than the conventional approach.

Actual forecasting processes typically combine judgmental and statistical criteria and often involve a range of forecasting methods. Extensive support for the multiple forecast approach has been provided in the macroeconometrics literature (Clemen, 1989; Zarnowitz and Braun, 1992). Support for using judgmental criteria also is provided in the literature. For example, McNees (1990) observes that the best forecasts are made not by abandoning models or judgment but by blending both sources of information. He concludes that the forecast of a model, adjusted judgmentally to take into account information not incorporated in the model, tends to be more accurate than one generated mechanically. The forecasting problem in property-liability insurance is to find the appropriate blend of forecast technique and judgment. Below, technique and judgment are blended within the context of fuzzy set theory.

Our study conducts forecasting experiments with, and then evaluates, a set of 72 benchmark forecasting methods using private passenger auto insurance data from Massachusetts. The methods are compared using conventional criteria to search for a good forecast. Then, FST is used to consolidate the methods in terms of fuzzy criteria. The output of the FST process is a fuzzy set of "good" trend factors for use in setting prices beyond the sample period. Finally, for those who need a single forecast, we relate the fuzzy set of forecasts to the conventional use of weighted averages.

Claim Cost Forecasting Experiments

This section presents the results of claim cost forecasting experiments using Massachusetts private passenger automobile insurance data to develop "accurate" forecasts of pure premiums. Experiments were conducted using various time-trend and econometric techniques. The tests focus on both the predictive accuracy and the bias of the methods, where bias is defined as the tendency for a method to yield forecasts that are consistently higher or lower than realized claim costs.

Measurement of Forecast Accuracy

Forecast accuracy is gauged by estimating and then applying the trend factors over seven-quarter forecast periods to obtain predicted claim cost inflation and then comparing predicted inflation with actual claim cost inflation over the same forecast periods.(3) The principal measure of accuracy used here is the total predicted change error (TPCE), where

Trend Factor = 1 + Δ, (8)

Actual claim cost index = 1 + Δ_a = x_{t+7}/x_t, and (9)

TPCE = (1 + Δ)/(1 + Δ_a) - 1 = (Δ - Δ_a)/(1 + Δ_a), (10)

where x_t = the value of the forecasted variable in quarter t. The TPCE provides a direct indication of the impact of trending errors on profit.
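Equations (8) through (10) can be sketched computationally; the pure premium series and the trend factor below are hypothetical:

```python
def total_predicted_change_error(one_plus_delta, one_plus_delta_a):
    """Equation (10): TPCE = (1 + Delta)/(1 + Delta_a) - 1."""
    return one_plus_delta / one_plus_delta_a - 1

# Hypothetical pure premium series for quarters t, t+1, ..., t+7:
x = [100.0, 103.0, 106.0, 110.0, 113.0, 117.0, 121.0, 125.0]
actual_index = x[-1] / x[0]   # equation (9): 1 + Delta_a = x_{t+7}/x_t = 1.25
forecast_factor = 1.20        # equation (8): 1 + Delta from some method

print(round(total_predicted_change_error(forecast_factor, actual_index), 4))  # -0.04
```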

The forecasting experiments are designed to replicate as closely as possible the results that would have been obtained under each forecasting method if that method had been used consistently in the past. Thus, the experiments assume that the only available data for each forecast period are those prior to the period. For example, forecasts for the seven-quarter forecast period 1981.1 through 1982.3 were based exclusively on data available prior to 1981.(4)

Forecasting experiments were conducted for automobile bodily injury liability (BIL) coverage. The maximum number of forecast experiments was conducted; that is, the number of experiments was limited only by the starting point of the pure premium or economic series used in the forecasts.(5)

The Forecasting Methods

The forecasting methods chosen for analysis represent a synthesis of current actuarial time trending techniques and the econometric methods proposed by, for example, Cummins and Griepentrog (1985). The goal is to span the set of reasonably simple methods identified as "good" in terms of current actuarial practice and prior performance. We have chosen not to follow the conventional approach of judgmentally discarding methods from this set a priori. Instead, we bring judgment into the process through the application of fuzzy methods to the entire set.

The time trend models rely exclusively on claim cost data and are very similar to the prevailing actuarial methods. Linear and exponential time trending equations are used. The exponential equations are presented as equations (5) and (6). The linear equation for frequency is:

F_{t-i} = α_F + β_F(t - i) + ε_{F,t-i}, (11)

where i = 0, 1, ..., I - 1, and I is the number of periods used in estimating the equation. The severity equation is analogous.

The frequency and severity trend factor calculations are illustrated below for the case of exponential trending:(6)

Frequency: 1 + Δ_F = exp(a_F + (I + 7) b_F)/F_t. (12)

Severity: 1 + Δ_S = exp(a_S + (I + 7) b_S)/S_t. (13)

Overall Pure Premium Trend: 1 + Δ = (1 + Δ_F)(1 + Δ_S). (14)

F_t and S_t represent frequency and severity in the last quarter of the estimation period.
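The full exponential-trend calculation in equations (12) through (14) can be sketched as follows. The quarterly series are illustrative and the helper name is ours; the OLS fit on logged values mirrors equations (5) and (6):

```python
import math

def exp_trend_factor(series, horizon=7):
    """Fit ln(y_i) = a + b*i by OLS over i = 1..I, then return
    exp(a + (I + horizon)*b) / y_I, as in equations (12)-(13)."""
    n = len(series)
    xs = list(range(1, n + 1))
    ys = [math.log(v) for v in series]
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    a = y_bar - b * x_bar
    return math.exp(a + b * (n + horizon)) / series[-1]

# Hypothetical twelve-quarter frequency and severity series:
freq = [0.050 * 1.005 ** i for i in range(12)]   # ~0.5% quarterly growth
sev = [2000.0 * 1.02 ** i for i in range(12)]    # ~2.0% quarterly growth

# Equation (14): compound the two factors into the pure premium trend.
one_plus_delta = exp_trend_factor(freq) * exp_trend_factor(sev)
print(one_plus_delta)
```

Because the toy series grow exactly exponentially, the fitted factors recover the compounded quarterly growth rates over the seven-quarter horizon.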

The econometric forecasts differ from the time trend methods in using the external (economic) data series as the independent variable in the severity equation. Linear and log-linear specifications are used, consistent with the prior literature:

Linear Severity: S_{t-i} = α_S + β_S C_{t-i} + ε_{S,t-i} (15)

Log-linear Severity: log(S_{t-i}) = α_S + β_S log(C_{t-i}) + ε_{S,t-i}, (16)

where C_{t-i} = the value of an economic variable in period t - i, and i = 0, 1, ..., I - 1. The calculation of the severity trend factor for the log-linear econometric case is illustrated below:

1 + Δ_S = exp(a_S + b_S log(Ĉ_{t+7}))/S_t, (17)

where Ĉ_{t+7} = the forecasted value of C_{t+7}. The severity forecast factor is used in equation (14), along with the frequency time trend factor, to obtain the overall trend factor.(7) Econometric models were not used in frequency forecasting because preliminary work suggested that the use of economic indices would not improve forecast performance.

The severity forecast for the econometric case requires an estimate of the external (economic) index for the last quarter of the forecast period (i.e., an estimate of C_{t+7}). As a benchmark, the forecasts were conducted using the actual values of C_{t+7}. Forecasted values for the external economic index were also obtained using time trending. Denoting the last quarter of the estimation period for equations (15) and (16) as period t, the forecasting equations for C_{t+7} were based on estimation periods ending in t+3, reflecting the more timely availability of the economic data series.(8)

Only one economic index was used in the experiments for each coverage--the Massachusetts composite auto insurance price index for the coverage under consideration. These indices have been developed by the Massachusetts State Rating Bureau and are supposed to represent the costs of goods and services covered by automobile insurance policies.(9) The forecasts of the economic series span periods t+4 through t+7.

A forecasting method is characterized by an estimation period, an estimation technique, a frequency model, and a severity model. The forecast methods are summarized in Table 1. Section A of Table 1 summarizes the combinations of estimation periods, estimation techniques, and frequency models. Three estimation periods were used to obtain the forecasting equations: I = eight quarters, twelve quarters, and all available data prior to the forecast period. Two estimation techniques were used: ordinary least squares (OLS) and least squares adjusted for first-order autocorrelation of the residuals (ARI).(10) Linear and exponential time trends (equations (11) and (6)) were used as frequency models.

For each combination of estimation period, estimation technique, and frequency model, eight severity forecasts were used. These are summarized in section B of Table 1. Thus, a total of 96 forecasts (twelve combinations from section A of Table 1 times eight severity forecasts per combination) were conducted for each forecast period. Twenty-four of these were benchmark forecasts using actual rather than predicted values of the economic indices. The forecast comparisons are based on the 72 experiments that do not assume knowledge of any information during the forecast period.

Comparison of Trending Methods Using Conventional Criteria

The annualized inflation rate in Massachusetts automobile (BIL) pure premiums from 1978 through 1988 was 12.3 percent, whereas the inflation rate in the Consumer Price Index (CPI) during this period was only 6.2 percent. Thus, it would be erroneous to assume that Massachusetts BIL inflation is equal to general inflation. Trend factors are needed to project claim costs.(11)

To test the accuracy of alternative claim cost forecasting methodologies, experiments were conducted using Massachusetts BIL claim cost data provided by the Automobile Insurers Bureau of Massachusetts.(12) The results of the experiments are reported below. Unless stated otherwise, all discussions of forecast accuracy refer to the pure premium series.(13)


As indicated above, the total predicted change error is the primary measure of forecast performance used in this study. Both average absolute and average TPCEs are reported. Earlier studies used absolute TPCEs as the principal measure of overall forecast accuracy. In real-world applications, average errors are equally important. For example, insurers will experience problems if they use methods with low absolute errors that consistently under-predict claim costs. However, average errors must be interpreted with caution because a low average error can mask large offsetting positive and negative errors. As explained below, FST provides a mathematically consistent way to capture information provided by both performance measures.

The BIL forecast errors are summarized in Tables 2 and 3. Table 2 presents absolute average total predicted change errors, while Table 3 presents average TPCEs. The tables report the results of seven-quarter forecasts conducted beginning in 1981.1 and in every subsequent quarter up to 1987.2. The last forecast was for the last available data point, 1988.4. A total of 26 separate forecasts was conducted for every method reported in Tables 2 and 3.(14) The number presented in each cell of Table 2 (3) is the average of the absolute (non-absolute) error for the 26 forecasts using the method combination reported in the cell. The rows in Tables 2 and 3 correspond to the twelve combinations of estimation techniques, estimation periods, and frequency models presented in section A of Table 1, while the columns correspond to the severity models specified in section B of Table 1. Methods often are referred to by their row and column numbers; for example, method 5(3) refers to the results reported in row 5, column 3 of Table 2 or 3.

The forecast errors reported in Table 2 for bodily injury liability pure premiums range from slightly less than 4 to about 8 percent. Most of the methods have errors in the range from about 6 to 8 percent. The methods which use all available data to fit the forecasting equations (the all-data methods) clearly stand out as the most accurate in terms of the absolute average TPCE criterion.(15) The average TPCEs for BIL are reported in Table 3. Most of the errors are quite low (the largest in absolute terms is 4.3 percent), but the majority tend to be biased toward under-predicting claim costs. Considering both average and absolute average errors, the best methods seem to be 6(7), 6(8), 11(7), and 11(8). The average absolute TPCEs for these methods are among the lowest in Table 2, and their average TPCEs are virtually indistinguishable from zero.

Additional information on forecast accuracy is presented in Table 4, which provides average and absolute average errors for frequency, severity, and pure premiums for two of the conventionally chosen "best" methods, 6(7) and 6(8). The forecast results illustrate a phenomenon that might be termed "forecast pooling." The methods summarized in the exhibit obtain relatively low overall average errors even though they consistently over-predict severity and under-predict frequency, because the frequency and severity errors tend to be offsetting. This suggests that relative forecast errors may be reduced by pooling across forecasts (see Armstrong, 1978, and Granger and Newbold, 1986), a technique that is worth exploring in future work. Table 4 provides additional evidence that it is advisable to consider both average and absolute average errors.

Trend Factors for 1989.1-1990.3

The forecasting models were used to estimate trend factors for the period 1989.1-1990.3 on the assumption that the last available data point is 1988.4. The trend factors are presented in the Appendix. The average pure premium factor for all benchmark methods is 13.7 percent. The 72 individual trend factors, however, show wide variations, even among the "best" methods. Obviously, the wide range of trend factors produced by the alternative forecasting methods creates uncertainty with regard to the choice of a forecast. The next step in the traditional analysis would be to apply statistical methods to eliminate inaccurate forecasts and combine the remaining forecasts to reduce forecast error variance. Such methods are available in the statistics and econometrics literature (e.g., Granger and Newbold, 1986), and their application represents a promising direction for future research. The following sections, however, illustrate the evaluation of the forecasting methods using fuzzy set theory.

Elements of Fuzzy Set Theory

In order to describe the fuzzy set theory context for the pure premium forecasting problem, some basic definitions are needed. We follow Lemaire (1990) in describing some fundamentals of FST. Examples and applications focus on the forecasting problem. See also Kandel (1986) and Zimmerman (1991).

Basics of FST

The fundamental concept of FST is the alternative formalization of membership in a set to include the degree or strength of membership. Let X = {x} be an ordinary set of objects. For example, X might be the set of all feasible trend factors, a subset of the real numbers. A fuzzy set A in X is a set of ordered pairs

A = {(x, U_A(x))}, x ∈ X, (18)

where U_A maps X to the interval [0, 1]. The value U_A(x) is called the grade or value of membership of x in the fuzzy set A, with 0 and 1 representing, respectively, the lowest and highest grades (values) of membership. When {U_A(x)} contains only the two points 0 and 1, A is non-fuzzy.
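A minimal computational rendering of equation (18), with illustrative elements and grades:

```python
def make_fuzzy_set(grades):
    """A fuzzy set as a mapping from elements of X to grades in [0, 1]."""
    assert all(0.0 <= g <= 1.0 for g in grades.values()), "grades must lie in [0, 1]"
    return dict(grades)

def membership(fuzzy_set, x):
    # Elements outside the listed support are non-members (grade 0).
    return fuzzy_set.get(x, 0.0)

A = make_fuzzy_set({"x1": 1.0, "x2": 0.7, "x3": 0.0})
print(membership(A, "x2"), membership(A, "x4"))  # 0.7 0.0
```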

EXAMPLE 1: A "GOOD" BIL FORECASTING METHOD. Let X be the set of 72 BIL forecasting methods whose average absolute TPCEs are shown in Table 2; that is, X is the set of models considered in this study.(16) Let us assume that a "good" trend methodology is one that predicted well enough in the past (according to the TPCE values shown in Table 2) to be "near" the "best" method according to the Table 2 test results; namely, the method that uses linear OLS time trends with all data for frequency and an exponential time trend for severity. That method, denoted method 6(2) for row 6, column 2 in Table 2, is chosen as the "best" method according to this criterion because it has the lowest average absolute TPCE.

We define the membership function for X by

U_g1(x) = TPCE[6(2)]/TPCE[x] = 3.96/TPCE[x], (19)

where U_g1(x) = the membership function for the fuzzy set of forecasting methods that are "good" in terms of historical forecast accuracy. Then, for example, U_g1[6(2)] = 1. Hence 6(2) is definitely a "good" method. On the other hand, for two other methods we have

A. All Data, OLS, Exponential Frequency, Exponential Severity: U_g1[5(2)] = 3.96/4.24 = 0.934 and

B. Eight-Quarter, OLS, Linear Frequency, Exponential Severity: U_g1[2(2)] = 3.96/6.98 = 0.567.

Liberally interpreted, Method A is quite good, although not the best, while Method B is little more than half as good as the best. Figure 1 illustrates the membership function of the set of good BIL forecast methods from the best 6(2) to the "worst" 7(8), the latter with a membership value of only 0.508. In this example, the fuzzy set is the entire set of objects (the 72 benchmark methods). If we enlarged the set of objects considered to all forecast methods in some suitable space, the membership function above would "fuzzify" the benchmark forecasts by setting the membership function (19) for all other methods equal to zero.
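Equation (19) is simple enough to sketch in code. The fragment below, a minimal illustration rather than the authors' own computation, reproduces the three membership grades quoted in Example 1 (the full 72-method table of TPCEs is in Table 2 of the article):

```python
# Membership function for the fuzzy set of "good" forecasting methods,
# equation (19): U_g1(x) = TPCE[6(2)] / TPCE[x], where method 6(2) has
# the lowest average absolute TPCE, 3.96.
BEST_TPCE = 3.96

def u_g1(tpce_x: float) -> float:
    """Grade of membership of a method with average absolute TPCE tpce_x."""
    return BEST_TPCE / tpce_x

print(u_g1(3.96))            # method 6(2): 1.0 -- definitely "good"
print(round(u_g1(4.24), 3))  # method 5(2): 0.934
print(round(u_g1(6.98), 3))  # method 2(2): 0.567
```

The grade falls smoothly as historical forecast error grows, rather than partitioning methods into "good" and "bad."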

A fuzzy set is said to be normal if sup_x U_A(x) = 1. Subnormal fuzzy sets can be normalized by dividing each U_A(x) by sup_x U_A(x).(17)

The membership function for a fuzzy set A is used to define the complement of A, so that the grade of membership in A is formally related to the grade of membership in the complement. Specifically, A_c is said to be the complement of A if and only if

U_Ac(x) = 1 - U_A(x), for all x. (20)

Unlike the ordinary set-theoretic definition, the fuzzy set complement allows objects x to be members, albeit gray-shaded members, of both A and its complement A_c. In our forecast example, the literal translation for membership in the complement of good methods might be: "This method is not so good, but it's not so bad either."

In many cases, it is useful to consider the membership of a set of objects in more than one fuzzy set. Without loss of generality, consider the membership of the ordinary set of objects X = {x} in the fuzzy sets A and B. Several fuzzy set concepts are needed to study membership in more than one set. For example, a fuzzy set A is contained in or is a subset of fuzzy set B (A ⊂ B) if and only if

U_A(x) ≤ U_B(x), for all x. (21)

The union of A and B, denoted A ∪ B, is defined as the smallest fuzzy set containing both A and B. Its membership function is given by

U_A∪B(x) = max[U_A(x), U_B(x)], x ∈ X. (22)

The intersection of A and B, denoted A ∩ B, is defined as the largest fuzzy set contained in both A and B. Its membership function is given by

U_A∩B(x) = min[U_A(x), U_B(x)], x ∈ X. (23)

It can be shown that these definitions of fuzzy union and intersection are the only ones that naturally extend the corresponding standard set theory notions by satisfying all the usual requirements of associativity, commutativity, idempotency, and distributivity (Lemaire, 1990). They are also the only operators with other desirable properties such as that "membership in A ∩ B requires more, and membership in A ∪ B requires less, than the membership in one of A or B" (Dubois and Prade, 1980, p. 12).
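The complement, union, and intersection of equations (20), (22), and (23) are pointwise operations on membership grades. A minimal sketch, representing membership functions as dictionaries over a common set of objects (an illustrative choice of data structure, not anything from the article):

```python
def complement(u_a):
    # Equation (20): U_Ac(x) = 1 - U_A(x)
    return {x: 1 - g for x, g in u_a.items()}

def union(u_a, u_b):
    # Equation (22): pointwise maximum
    return {x: max(u_a[x], u_b[x]) for x in u_a}

def intersection(u_a, u_b):
    # Equation (23): pointwise minimum
    return {x: min(u_a[x], u_b[x]) for x in u_a}

# Hypothetical grades for two methods in two fuzzy sets A and B.
A = {"m1": 0.9, "m2": 0.25}
B = {"m1": 0.6, "m2": 0.7}

print(intersection(A, B))   # {'m1': 0.6, 'm2': 0.25}
print(union(A, B))          # {'m1': 0.9, 'm2': 0.7}
print(complement(A)["m2"])  # 0.75 -- m2 is a partial member of both A and A_c
```

Note how m2, with grade 0.25 in A, has grade 0.75 in the complement: it is a gray-shaded member of both sets at once, exactly the situation ordinary set theory excludes.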

Other Definitions of the Intersection

Other definitions of the intersection have been suggested; they correspond to more flexible interpretations of common membership in two fuzzy sets. The choice of an intersection operator other than the minimum depends on the extent to which it allows for cumulative effects, interactions, and compensations among criteria. These properties are desirable for "softer" versions of intersection operators.

Property 1 (cumulative effects): Two infringements are worse than one.

U_A∩B(x) < min[U_A(x), U_B(x)], if U_A(x) < 1 and U_B(x) < 1. (24)

Property 2 (interactions between criteria): Assume U_A(x) < U_B(x) < 1. Then the effect of a decrease in U_A(x) on U_A∩B(x) may depend on U_B(x).

Property 3 (compensations between criteria): If U_A(x) and U_B(x) are both less than 1, the effect of a decrease in U_A(x) on U_A∩B(x) can be reduced by an increase in U_B(x) (unless, of course, U_B(x) reaches 1).

The reason for seeking alternative definitions of the intersection is that the standard FST intersection operator (the minimum operator, equation (23)) does not allow for any of the three properties, even though it is desirable to allow for them in some applications. Lemaire lists four alternative definitions and extensions of the standard fuzzy intersection (equation (23)). Each provides a systematic way to introduce linguistic decision variables into FST. These are the algebraic product, the bounded difference, the Hamacher operator, and the Yager operator.

The algebraic product of A and B is denoted AB and is defined by

U_AB(x) = U_A(x)U_B(x). (25)

The bounded difference of A and B is denoted A ⊖ B and is defined by

U_A⊖B(x) = max[0, U_A(x) + U_B(x) - 1]. (26)

The Hamacher and Yager operators, H and Y, define the intersection of two fuzzy sets A and B by

U_H(x) = U_A(x)U_B(x) / {p + (1 - p)[U_A(x) + U_B(x) - U_A(x)U_B(x)]}, p ≥ 0, (27)

U_Y(x) = 1 - min{1, [(1 - U_A(x))^p + (1 - U_B(x))^p]^(1/p)}, p ≥ 1. (28)

Fuzzy intersection operators differ in the degree to which they satisfy properties 1 through 3. The bounded difference satisfies only properties 1 and 3, thereby excluding interactions. The algebraic product satisfies all three and provides for maximum interactions. The Hamacher and Yager operators allow for all three properties but temper the effects somewhat in comparison with the algebraic product. The degree of tempering is reflected by the parameter p. A more complete discussion of operators and criteria for selecting among them is presented in Zimmerman (1991).
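The five intersection operators can be compared numerically. The sketch below assumes the standard textbook forms of the bounded difference, Hamacher, and Yager operators (the article's own expressions for equations (26) through (28) were omitted in this reproduction); the input grades 0.8 and 0.9 and the parameter values are illustrative only:

```python
def ordinary(a, b):
    return min(a, b)                      # ordinary intersection, eq. (23)

def algebraic_product(a, b):
    return a * b                          # algebraic product, eq. (25)

def bounded_difference(a, b):
    return max(0.0, a + b - 1.0)          # bounded difference, eq. (26)

def hamacher(a, b, p=0.5):
    # Hamacher operator, eq. (27), p >= 0; p = 1 recovers the product.
    return a * b / (p + (1 - p) * (a + b - a * b))

def yager(a, b, p=2.0):
    # Yager operator, eq. (28), p >= 1; large p approaches the minimum.
    return 1 - min(1.0, ((1 - a) ** p + (1 - b) ** p) ** (1 / p))

a, b = 0.8, 0.9
for op in (ordinary, algebraic_product, bounded_difference, hamacher, yager):
    print(op.__name__, round(op(a, b), 3))
```

Every "soft" operator returns less than min(0.8, 0.9) = 0.8, illustrating property 1: two partial infringements are worse than one. The bounded difference is the most severe (0.7 here), the ordinary minimum the least.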

The |Alpha~-Cut Concept

Finally, we will find useful the notion of the α-cut. If A is a fuzzy subset of X, its α-cut A_α is defined as the non-fuzzy subset such that

A_α = {x | U_A(x) ≥ α}, for 0 ≤ α ≤ 1. (29)
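An α-cut simply thresholds the membership function and returns an ordinary (crisp) set. A minimal sketch, reusing the Example 1 grades:

```python
def alpha_cut(membership, alpha):
    # Equation (29): the crisp subset of elements with grade >= alpha.
    return {x for x, grade in membership.items() if grade >= alpha}

# Grades from Example 1 (7(8) is the "worst" method, grade 0.508).
u = {"6(2)": 1.0, "5(2)": 0.934, "2(2)": 0.567, "7(8)": 0.508}

print(sorted(alpha_cut(u, 0.9)))  # ['5(2)', '6(2)']
print(sorted(alpha_cut(u, 0.5)))  # all four methods survive
```

Raising α discards weaker members; at α = 0 every object with any positive grade remains.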

EXAMPLE 2: REASONABLE TREND FACTORS. Traditional choices of "best" forecasting methods have relied on choosing the method with the optimal value under some pre-defined test criterion or criteria. In the claim cost forecasting experiments discussed above, the minimum absolute TPCE criterion would produce method 6(2) as the "best." The judgmental or "subjective intersection" of the absolute average and average TPCE criteria leads to the selection of the four "good" methods 6(7), 6(8), 11(7), and 11(8), none of which was the "best" by any single criterion.

Other methods that were almost as good as the conventionally-selected "best" choices would be discarded by optimizers, but not necessarily by actuaries or intervalists. FST membership functions allow for "good" or "reasonable" methods, as well as the traditional best method; |Alpha~-cut models discard values that are extreme in some sense, that is, beyond the minimum grade of |Alpha~ for membership.

The Appendix shows the 72 benchmark trend factors for the trending period 1989.1-1990.3. Let us assume that a fuzzy subset R of X, the set of "reasonable" trend factors, is defined by the membership function:

U_R(x) = max[0, 1 - |Δ_x - Δ̄|/(2σ)], (30)

where Δ_x = the trend factor produced by method x,

Δ̄ = the average trend factor over all methods,

σ = the standard deviation of the trend factors, and

U_R(x) = the membership function of the fuzzy set of "reasonable" trend factors.

We constructed equation (30) to reflect our lack of confidence in extreme forecasts, and our objective of choosing a "good" trend factor rather than the historically "best" method(s). Accordingly, equation (30) is independent of the historical accuracy of the methods under consideration. It has been our experience that real-world actuaries and regulators invariably give less credibility to extreme results when confronted with a range of alternatives. Empirical support for this approach is provided by the forecasting literature, which shows that average forecasts tend to outperform individual forecasts and that past performance is not an infallible guide to future accuracy (e.g., Zarnowitz and Braun, 1992; McNees, 1992).

The α-cuts defined by equation (30) correspond to discarding those trend factors that are "extreme" in the sense that they deviate from the grand mean by more than some multiple of the standard deviation. For instance, based on equation (30), we have

α       α-cut
0.75    within 1/2 σ
0.50    within 1 σ
0.25    within 1 1/2 σ
0       within 2 σ
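The correspondence between the α-cut table and the membership function can be checked in a few lines. The sketch below assumes the form U_R(x) = max[0, 1 - |Δ_x - Δ̄|/(2σ)] implied by the table (grade falling linearly to 0 at two standard deviations), and uses illustrative trend factors rather than the 72 Appendix values:

```python
def reasonableness(delta_x, mean, sigma):
    # Grade declines linearly with distance from the grand mean,
    # reaching 0 at two standard deviations.
    return max(0.0, 1.0 - abs(delta_x - mean) / (2.0 * sigma))

# A forecast exactly half a standard deviation from the mean gets grade
# 0.75, matching the table row alpha = 0.75 <-> within 1/2 sigma.
print(reasonableness(14.5, 13.5, 2.0))  # 0.75
print(reasonableness(18.5, 13.5, 2.0))  # 0.0 -- 2.5 sigma away: "extreme"
```

Each table row then reads off directly: the α-cut at 0.75 keeps exactly the forecasts within half a standard deviation of the mean, and so on down to α = 0.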

We now use these FST concepts to reinforce Lemaire's expectation that FST has a wide range of application to actuarial problems by exploring FST as a decision-making method for forecasting.

Evaluation of Trends Using Fuzzy Set Theory

Fuzzy set theoretic concepts can be used to reformulate the forecasting decision process. For example, the decision-maker may specify one or more goals {g} to be attained. Provided that fuzzy membership functions can be created to measure the degree to which each goal is met, the decision problem can be recast within FST.(18)

Straightforward intersection of the goal fuzzy sets will yield a fuzzy solution set. More importantly, however, the calculation of the fuzzy intersection ranks the alternatives with a degree of membership in the final fuzzy set of alternatives. Other fuzzy intersection operators also can be used to reformulate the problem using alternative criteria. Each intersection operation yields a fuzzy set of alternatives. The "answer" to the decision problem is the fuzzy intersection-set rather than a specific forecast or method. However, this fuzzy set can be used to select a forecast, as shown below.

The FST decision-making procedure can be illustrated using the BIL trend factors and historical measures of accuracy presented above. In order to keep the arithmetic simple, we initially use only the two "best" methods shown in Table 4 plus the method with the lowest absolute average TPCE. Our decision criteria are based on the following goals:

Goal 1: Historical Accuracy--The forecasting method should be reasonably good for past predictions as measured by the average absolute total percentage change error.

Goal 2: Unbiasedness--The forecasting method should be reasonably unbiased as measured by the average total percentage change error in the historical forecasts.(19)

Goal 3: Reasonableness--The trend factor selected should be reasonably close to the average benchmark forecasts; that is, extreme choices should be avoided.

The goal 1 concept of "reasonably good for past predictions" can be made operational using the Table 2 test results and the membership function U_g1(x) defined in equation (19). Application of equation (19) yields the fuzzy set of historically good forecast methods illustrated in Figure 1.

The goal 2 concept can be made operational in an analogous manner by formulating a membership function. Strictly unbiased forecast methods as demonstrated by past performance are the "best" under goal 2. The following membership function operationalizes this goal:

U_g2(x) = e^(-|100 Average(TPCE_x)|), (31)

where Average(TPCE_x) = the average TPCE for method x, and U_g2(x) = the membership function of the fuzzy set of forecasts that are "good" according to the historical unbiasedness criterion (goal 2). Equation (31) also would be an appropriate membership function if the average error were used as an alternative measure of accuracy rather than an indicator of bias.

Finally, the goal 3 concept of a reasonable (or "moderate," "not too extreme," "realistic," etc.) method can be made operational by applying equation (30) using the mean and standard deviation of the 72 trend factors shown in the Appendix. The use of deviations from the mean forecast to define "reasonableness" is justified by the growing evidence that average forecasts outperform individual forecasts (Zarnowitz and Braun, 1992). An alternative would be to use deviations from the mean of some |Alpha~-cut of the intersection-set of goals 1 and 2.(20)

As an example of the FST decision-making process, we use two of the conventionally-chosen "best" methods and, for contrast, the method with the lowest TPCE in Table 2:

Method A: All data, OLS, linear frequency, exponential economic index, linear severity: 6(7).

Method B: All data, OLS, linear frequency, exponential economic index, log-linear severity: 6(8).

Method C: All data, OLS, linear frequency, exponential time trend severity: 6(2).

Applying the FST definitions associated with our three goals, we have the following decision array.
Grade of Membership

Fuzzy Set Method A Method B Method C

Goal 1 0.941 0.925 1.000
Goal 2 0.827 0.795 0.237
Goal 3 0.634 0.503 0.577

An examination of the membership grades for all 72 methods led to a reevaluation of goal 2. This goal turned out to be much more rigorous than the other two decision criteria. For example, the average membership grade among the 72 methods according to goal 2 is only 0.43, whereas the average for goal 1 is 0.70. In addition, methods with relatively small average errors (e.g., less than 2 percent) were given very low scores by the goal 2 membership function (equation |31~). For example, method C had an average error of 1.44 percent and a membership function value of 0.237 for goal 2. Using goal 2 in its unmodified form would give much more weight to bias than to the other criteria.

Fortunately, fuzzy set theory provides methods for dealing with situations such as the assigning of unusually high or low weights by a membership function. One obvious approach would be to change the function. However, in some instances, the nature of the problem dictates the choice of a particular function. Although this is not the case in our application, we will take advantage of the relative stringency of goal 2 to illustrate two additional operations permitted by fuzzy set theory.

The operations are part of a class "that have no counterparts in ordinary set theory; they are uniquely fuzzy" (Lemaire, 1990, p. 44). One operation used here, dilation, is defined as

U^DIL(x) = [U(x)]^a, a < 1. (32)

Dilation stretches the fuzzy set by increasing the grade of membership of all elements in the set with membership function values greater than 0 and less than 1. The parameter a is called the power of the operation. A related operation is concentration, defined as

U^CON(x) = [U(x)]^a, a > 1. (33)

To align the stringency of goals 1 and 2 and illustrate dilation and concentration, the membership function for goal 1 was concentrated by raising equation (19) to the power +2, while goal 2 was dilated by raising equation (31) to the power 0.75. The concentration operation lowered the average membership function value for goal 1 from 0.696 to 0.504, while dilation increased the average goal 2 membership values for the 72 forecast methods from 0.432 to 0.506. The goal 1 membership values for methods A, B, and C change from 0.941, 0.925, and 1 to 0.885, 0.856, and 1; the goal 2 membership function values change from 0.827, 0.795, and 0.237 to 0.867, 0.842, and 0.340, respectively. More detailed comparisons reveal that the revised membership functions for goals 1 and 2 are much more consistent in terms of stringency than the original membership functions. Consequently, the revised membership functions were adopted for the remainder of the analysis.
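The concentration and dilation figures quoted above can be verified directly. A sketch using the goal 1 and goal 2 grades reported for methods A, B, and C:

```python
def concentrate(grade, a=2.0):
    # Equation (33): raising to a power a > 1 lowers intermediate grades.
    return grade ** a

def dilate(grade, a=0.75):
    # Equation (32): raising to a power a < 1 raises intermediate grades.
    return grade ** a

goal1 = {"A": 0.941, "B": 0.925, "C": 1.0}    # original goal 1 grades
goal2 = {"A": 0.827, "B": 0.795, "C": 0.237}  # original goal 2 grades

print({m: round(concentrate(g), 3) for m, g in goal1.items()})
# {'A': 0.885, 'B': 0.856, 'C': 1.0}
print({m: round(dilate(g), 3) for m, g in goal2.items()})
# {'A': 0.867, 'B': 0.842, 'C': 0.34}
```

Grades of exactly 0 or 1 are fixed points of both operations, so method C's perfect goal 1 score is unaffected by concentration while every intermediate grade moves toward the other criterion's level of stringency.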

The selected membership functions illustrate the solution of the decision problem. First, consider the three methods A, B, and C. The problem is solved by taking the intersection of the three membership functions. Applying the ordinary intersection operator of equation (23) plus the other four intersection operators from the previous section yields the results presented in Table 5. Methods A and B are clearly better than method C, the best method under criterion 1. Conventional decision-makers might be inclined to decide that A is dominant and to discard the other two methods. In fuzzy decision making, however, the "answer" for any given intersection operator is the fuzzy set consisting of A, B, and C and their membership function values. We would like to retain the information provided by methods A, B, and C in proportion to their degrees of membership in the fuzzy set of "good" forecasts. A method for doing this is proposed below.
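Using the revised (concentrated and dilated) goal 1 and goal 2 grades together with the goal 3 grades from the decision array, the ordinary intersection of equation (23) can be computed for the three methods. This is a sketch under the assumption that these are the grades feeding Table 5; the other operator columns follow analogously:

```python
# Grades per method: (goal 1 concentrated to power 2, goal 2 dilated to
# power 0.75, goal 3 unchanged) -- values as reported in the text.
grades = {
    "A": (0.885, 0.867, 0.634),
    "B": (0.856, 0.842, 0.503),
    "C": (1.000, 0.340, 0.577),
}

# Ordinary intersection, equation (23): the minimum across the three goals.
solution = {m: min(gs) for m, gs in grades.items()}
print(solution)  # {'A': 0.634, 'B': 0.503, 'C': 0.34}
```

Consistent with the discussion of Table 5, methods A and B outrank method C: C's perfect historical-accuracy score cannot compensate, under the minimum operator, for its weak unbiasedness grade.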


Returning to the set of 72 forecast methods, Figure 2 shows the fuzzy membership values for all 72 forecasts according to goals 1 and 2. In general, the historically best methods are among the least unbiased and, conversely, the most unbiased methods are the least accurate historically. This illustrates the importance of looking at multiple criteria. An insurer choosing a forecast method with the lowest TPCE will not be successful if this method consistently underestimates claim costs. In ordinary decision making, it is difficult to combine information on the "goodness" of forecasts when multiple criteria are used. The intersection operators of FST provide a natural way to solve this problem. This well-established procedure is known as fuzzy averaging (Zimmerman, 1991).

Figure 3 shows the ordinary intersection of the goal 1 and goal 2 fuzzy sets for the entire sample of 72 forecast methods. Notice that the fourth, sixth, eleventh, and twelfth best criterion 1 methods are shown in Figure 3 to be the better methods using both criteria simultaneously. Those four methods are 6(7), 6(8), 11(7), and 11(8), respectively. These methods also would be identified by conventional methods as the best of the group. However, unlike FST, conventional methods would not provide information on how "good" these methods are relative to the other methods, especially when more than one goodness criterion is used.

The next step is to perform all three intersection operations. This is done sequentially; that is, the fuzzy intersection set for goals 1 and 2 is itself intersected with the fuzzy set produced by goal 3. We perform the intersection using membership function values from equation (19) for goal 1, concentrated to the power +2, equation (31) for goal 2, dilated to the power 0.75, and equation (30) for goal 3. The membership function values are shown in columns F, G, and H of Table 6. Columns I through M of Table 6 present the results obtained by applying the five intersection operators (equations (23) and (25) through (28)) to the 72 forecast methods.

The numbers in each intersection-operator column of Table 6 can be viewed as a fuzzy set. For example, column I represents the fuzzy set obtained by applying the ordinary intersection operator to the membership function values for goals 1, 2, and 3.

To choose a trend factor, one must first select an intersection operator. The choice of an operator depends upon how rigorously the decision-maker wants to assign membership function values and on how important it is to allow for cumulative effects, compensations, and interactions. The bounded difference operator is the most stringent (i.e., assigns the lowest membership values), while the ordinary operator is the least stringent. Since the choice is context dependent, we do not select an operator but proceed to consider all five.

One possibility for choosing a trend factor would be to adopt the method with the highest membership function value. The highest membership function values for all operators are obtained using methods 11(7) and 11(8). The superiority of these two methods in the fuzzy optimization problem is due to their having relatively high, but not necessarily the best, scores on all three rating criteria.

Choosing 11(7) or 11(8) as the solution to the decision problem, however, would disregard the information provided by closely ranked forecasting methods. Another solution is to use the membership function scores for each intersection operator as weights to compute weighted average trend factors over the entire set of methods. This approach has the advantage of retaining information provided by the set of methods under consideration in proportion to the membership function scores. The weighted average trend factors are computed as follows:

Weighted trend factor = Σ_i m_i Δ_i / Σ_i m_i,
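The weighted average is simply a membership-weighted mean of the projected trend factors. A minimal sketch with illustrative membership scores m_i and trend factors Δ_i (the actual inputs are the Table 6 column entries):

```python
def weighted_trend(memberships, trends):
    # Weighted average trend factor: sum(m_i * delta_i) / sum(m_i),
    # so each method contributes in proportion to its membership grade.
    weighted_sum = sum(m * d for m, d in zip(memberships, trends))
    return weighted_sum / sum(memberships)

# Illustrative values only: three methods with grades 0.6, 0.5, and 0.1
# and trend factors (in percent) of 14.5, 13.5, and 20.0.
print(round(weighted_trend([0.6, 0.5, 0.1], [14.5, 13.5, 20.0]), 2))
```

Note how the extreme 20.0 percent forecast is retained but heavily discounted by its low grade, rather than being discarded outright as a hard α-cut or a best-method rule would do.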
COPYRIGHT 1993 American Risk and Insurance Association, Inc.

Article Details
Author:Cummins, J. David; Derrig, Richard A.
Publication:Journal of Risk and Insurance
Date:Sep 1, 1993
