# Do catastrophe models mislead?

We all recognize the large and increasing potential for truly high-stakes risks to disrupt our world. And while this potential lies mostly beneath the observable surface of day-to-day risk, it occasionally demonstrates its terrible power. Hurricanes, earthquakes, tsunamis and a variety of manmade perils have all wrought havoc on the financial and physical well-being of the world's population in recent years. It is natural that we somehow attempt to analyze both the potential for these untoward events and how to manage them.

Enter the field of catastrophe modeling. Catastrophe modeling attempts to provide the basis for sound risk management decisions in the face of overwhelming calamity. It does so by analyzing the potential severity and probability of such losses using a variety of scientific methods, most based on statistics.

Science, however, can only take us so far in a world where our ability to know the true nature of things is limited. The complexity and dynamics of physical and social systems are such that we may never truly understand the workings of many facets of our world. The best we can do is use workable approximations. The same applies to the world of risk, especially when the stakes are high. Applying scientifically precise, yet inapplicable, standards to the investigation of risk in the complex real world can lead to results that are useless at best and seriously misleading at worst. To be effective at what they do, risk managers need to understand this.

## Analyzing Risk

A characteristic of all the risks in our world is that they occur randomly. We do not know when or where the next big loss will occur. Rather, we can only estimate some long-run probability of its occurrence. Or can we? The wider field of risk analysis, based on the statistical estimation of probabilities from data, has been an effective cornerstone of risk management for decades. The key to its success is the fact that many random events can be treated statistically. These events fall into a relatively near-time horizon where the results of our risk management actions can be gauged directly. Events that fall within probability horizons in the near term are most easily managed. This near term suggests time periods of roughly 10 to 25 years, with outside limits of 50 or 100 years, depending on the kind of risk. Dealing with probabilities over longer time spans greatly impairs our ability to verify the results, and verification is an absolute prerequisite for scientific justification.

We exclude here analyses based on repetition under controlled conditions. We can examine the manufacture of a safety-critical bolt over production runs of thousands, hundreds of thousands and even millions, but such controlled experimentation is virtually impossible in the natural world. We cannot generally perform such experiments with windstorms or earthquakes, for example. Rather, we need to rely on a historical statistical record that is fairly small. Once again, genuine statistical data runs to tens, or at best, hundreds of years in this regard.

The problem for catastrophe modeling is that catastrophic events have probabilities of loss that lie far below what is statistically observable. We deduce this from observations, as well as logical intuition. These observations, while giving us a rough idea of the probabilities involved, can never be precise. There simply is not enough information to validate any precise probabilities in this domain. Nonetheless, catastrophe modelers persist, extrapolating beyond what we do, or can, know. It is not uncommon for probability estimates of some natural disaster to be offered as a precise numerical figure, say an annual probability of .0002, suggesting the event occurs on average once every 5,000 years. But how can we possibly justify such precise numbers outside the realm of controlled experimentation? Logical justifications and limited statistical data are helpful, yet they cannot fully justify precise estimates. The best we can do under such conditions of imperfect knowledge is to specify rough intervals of possibility. That is, a more realistic earthquake estimate for a region may surround some best-guess estimate, say .002 (one in 500), with an interval of uncertainty. This interval might range from .01 (one in 100) to .001 (one in 1,000), for example. While we will not go into further support for this argument here, the interested reader need merely enter the key words "earthquake probability" and "uncertainty" into their favorite Internet search engine and check out the results.
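To make the contrast concrete, here is a minimal Python sketch, using only the article's illustrative figures (not real hazard estimates), of reporting an interval of annual probabilities, and the return periods it implies, instead of a single point estimate:

```python
def return_periods(p_low, p_high):
    """Translate an interval of annual loss probabilities into return periods.
    A higher probability means a shorter return period."""
    return 1 / p_high, 1 / p_low

# The article's illustrative earthquake interval: one in 1,000 to one in 100.
low, high = 0.001, 0.01
shortest, longest = return_periods(low, high)
print(f"Annual probability: {low} to {high}")
print(f"Return period: {shortest:.0f} to {longest:.0f} years")
```

The tenfold spread between the endpoints is the honest answer; any single number drawn from inside the interval only looks precise.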

These uncertainty intervals are judged instrumentally and not by measurement. We establish them based on logical parameters that may suggest limits for the estimates. Ultimately, we judge the size of the intervals by how well they let us deal with a very complex and uncertain world. We can narrow the interval, but we have to realize that by doing so, we gain precision at the expense of truth. That is, the more precise a statement bound by uncertainty, the more likely it is to be wrong. On the other hand, statements that are too wide ("it will either rain tomorrow or it will not") are vacuous in that they do not form the basis for any reasonable actions. Specificity vs. truth: Herein lies the true challenge of those who attempt to model catastrophe.

Note that the assessment of uncertainty intervals in this fashion is very different from attempting to develop statistical confidence intervals around our best guess. Confidence intervals show the variation around the average value that we can experience from repeated sampling from a uniform population. The larger the sample, the closer the sample average to the true population average. While the idea of experiencing the "one in a thousand" loss in a natural environment is silly enough, the idea that we can somehow form a reasonable sampling distribution and related confidence intervals around such estimates borders on the preposterous. And what do these "probabilities of probabilities" tell us? That in the long run, the average will usually equal the point estimate. Thus we are back to that nasty point estimate, and we are no better off for it.

## Unknown Probabilities

Unfortunately, the type of uncertainty that is due fundamentally to imperfect knowledge is rarely identified in catastrophe modeling efforts. Why? First and foremost, such recognition would severely dilute the effectiveness of such analyses. Combined with the logically sound model of expected value decision-making, catastrophe modeling provides precise guidance for managing high-stakes situations. Expected value decision-making relies on loss estimates weighted by the probability of loss. The expected value of an exposure, defined as probability multiplied by loss, provides a long-run estimate of the cost of loss. This estimate can in turn be compared to the costs of loss prevention, control or avoidance, and a simple, easy-to-apply, cost-benefit decision results. If the expected value of loss is more than the cost to mitigate or prevent it, we take the preventive action and realize a net gain. If not, we reject prevention and take our chances. Once again, decisions based on such expected value comparisons will yield the highest payoff over time.

Over time ... but how much? In the relatively short run, the payoff of expected value decisions is readily recognized. But how do we recognize the payoff of a decision involving occurrences expected once in a thousand, or once in a hundred thousand, years? Again, the results are simply not verifiable. Furthermore, the mere potential for catastrophic losses may be enough to make probabilities irrelevant, regardless of the degree of accuracy with which we may assess them. What does it really tell a risk manager if the probability of a devastating earthquake is .001? Let us say the potential damage at stake is $1 billion. Expected value criteria suggest we should take action, as long as that action costs less than $1 million ($1 billion multiplied by .001). So, what if preventive action costs $1.5 million? If we forgo prevention and a staggering loss of $1 billion occurs over the next year, what have we gained? The comfort of knowing that we can recoup over the next 1,000 years?
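The arithmetic of the expected value rule in this example can be sketched in a few lines of Python (the figures are the article's own illustration):

```python
def expected_value_rule(p_loss, loss, mitigation_cost):
    """Classic expected value criterion: mitigate only if the
    probability-weighted loss exceeds the cost of prevention."""
    expected_loss = p_loss * loss
    return expected_loss, expected_loss > mitigation_cost

# A .001 annual probability of a $1 billion loss vs. $1.5 million prevention.
ev, mitigate = expected_value_rule(0.001, 1_000_000_000, 1_500_000)
print(f"Expected annual loss: ${ev:,.0f}")
print(f"Take preventive action? {mitigate}")
```

The rule rejects the $1.5 million safeguard, and offers only the 1,000-year long run as consolation if the loss arrives next year.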

Faced with catastrophic consequences, making things right in the long run is cold comfort. In the world of expected-value decision making, once you have an estimate of the probability of loss and its potential, as well as a fixed cost for prevention, the decision is easy. In the real world, high-risk decisions are not easy. They are very, very hard. What makes them so difficult is not a lack of sophisticated methods for coming up with more exact probabilities of loss. It is the uncertainty about measurements and methods for dealing with low-probability, high-stakes losses.

Once we recognize that probabilities of loss can only be assessed very imperfectly, regardless of how much science we throw at them, the potential for abuse of risk assessments becomes obvious. If, as we suggest, realistic probability estimates can only be made based on intervals, just where on the interval we choose our precise-looking point estimate is arbitrary. As such, there is a temptation to choose our estimates so as to make our point. It is here that catastrophe modeling can easily succumb to special interests. Consider the use of catastrophe modeling to set insurance premiums. Say, for simplicity, we live in a community of $500,000 homes, all subject to the earthquake peril. No earthquake estimates can be made with a high degree of precision. Just how wide an interval of uncertainty applies may be debatable, based on limited statistical evidence, arguments from structural geology, and perhaps even some simulations (however imperfect) based on the physical properties of the local area. Maybe the best we can do is to come up with an estimated annual probability of one in 100 (.01) to one in 1,000 (.001). Simplistically, the range of pure insurance premiums suggested by these numbers (a premium is, after all, just a form of expected value) is $500 to $5,000! Who is to say that special interests on the part of insurers might not push for an estimate at the high end of the range? Doing so, or choosing any other point estimate along the range, hides the very real uncertainty involved in any such estimate. And while proponents of catastrophe modeling may suggest that our results here are contrived, a concerned homeowner may feel that all the scientific say-so of consultants hired by the insurer carries little or no additional weight. Grin and bear it because the scientists told you so? The legislative resistance to insurance rate-making based on catastrophe models is not surprising.
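The premium arithmetic above can be sketched the same way; a pure premium is just an expected loss, so an interval of probabilities yields an interval of premiums:

```python
def pure_premium_range(insured_value, p_low, p_high):
    """Pure premium = probability of loss x insured value.
    An interval of probabilities gives an interval of premiums."""
    return insured_value * p_low, insured_value * p_high

# The article's example: $500,000 homes, annual probability .001 to .01.
low, high = pure_premium_range(500_000, 0.001, 0.01)
print(f"Pure premium: ${low:,.0f} to ${high:,.0f}")
```

Any single premium quoted from inside this tenfold range is a choice, not a measurement, which is exactly where special interests can enter.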

Some may argue at this point that while catastrophe modeling is not perfect, it is the best we have. Yet when the best decision methods available show little or no benefit over arbitrary or random choice (i.e., a coin flip), the degree of faith we place in those methods should be adjusted accordingly. The danger is that we are making decisions on these very imperfect, if best available, methods without recognizing how faulty those decisions may be.

## Alternate Approaches

What, then, is the answer to dealing with the problem of high-stakes losses? First of all, the measurement and methodological difficulties we have with assessing probabilities of rare, high-impact events, and with applying them to expected value decision criteria, suggest we dispense with probabilities altogether. Rather, we base decisions on the impact of the losses alone. This obviously assumes we are able to determine, however roughly, some dividing line for practical impossibility. Otherwise we are doomed to the paralysis of fearing that anything can happen. Once high-impact scenarios are determined credible, however, we focus entirely on their potential to do damage. Logically, only two treatment options then remain: accept the exposure and do nothing (the fatalistic approach), or avoid the exposure altogether.

Undoubtedly the fatalistic approach has some sway on the way we go about life in the modern world, but it is the other logical alternative, precautionary avoidance, that is gaining greater and greater credence. The key to precautionary avoidance is timing. The biggest criticism of avoidance is that it keeps us from doing anything, because anything has the potential for loss. Realistically, not everything has the potential for loss. Precaution hinders progress only to the extent we narrowly define progress as going down the same path. Natural catastrophe modelers suggest that modeling is needed now more than ever because we have unprecedented construction in wind- and earthquake-prone areas. They argue that we need to address the loss potential of living in those areas. But the point we have tried to make is that there is no way to address that potential aside from limiting or restricting the exposure beforehand, for example, by adopting a precautionary stance toward building in high-wind areas, flood plains and earthquake zones. The idea seems "anti-progress" only to the extent that headlong progress toward disaster has already been made. Obviously, precautionary avoidance is complex and difficult, and it is a far cry from the cold calculation embodied in catastrophe modeling and expected value decision theory.

Similar approaches can be taken to the establishment of insurance pools. Assessing probabilities at the high end of the uncertainty interval may make some sense, but not to the extent that it simply serves to enrich private insurers and their stockholders. Rather, insurance programs need to be coupled with an enforceable policy of precaution. The public program of flood insurance in the United States is a case in point. Insurability goes hand in hand with sensible land usage. This in effect removes the higher portion of the probability spectrum and the need to charge accordingly. To the extent the need for catastrophe "loadings" in the premium (based on uncertainty) remains, it can be bolstered by more equitable social mechanisms (such as taxes on those who promote building in the more flood-prone areas for their own profit).

Similarly, the extension of catastrophe models to manmade perils, such as terrorism, borders on the ridiculous. The roots of terrorism loss prevention are political, not physical. Once again, catastrophe modelers, and those who believe them, are buying into a pitch to save the status quo. The trouble, however, is that it does not work.

The catastrophe modelers' lot would be much better if they could claim to have prevented hurricane damage or reduced the damage from a terrorist event, rather than redoubling their efforts only after untoward events occur. If anything, these events show how powerless we really are in the face of low-probability, high-impact events. This is not to detract from the modelers' innate abilities. The degree of knowledge necessary to develop these models effectively may simply be unobtainable. To continue to profess that we can when we cannot (and then charge for the advice) seems less than honest. But don't we need to hold out some hope? Yes, but not at the expense of fooling ourselves. If there is a way out of high-stakes risk in the modern world, we have to look for it with eyes wide open, not with our heads stuck in the sand.

This is not to say catastrophe modeling of probabilities is useless. We just need to realistically assess and recognize its limitations. Properly circumscribed, probability estimation is an important part of the exploratory modeling of losses. Exploration suggests that we approach the problem of high-stakes loss estimation not from the standpoint of a single precise but unknowable model, but rather by identifying an ensemble of possible models. The size of this collection of plausible models is a natural measure of the uncertainty involved. Collectively, the envelope of these models forms our interval estimate of probabilities. Decision making proceeds from there. In this way, we tailor our decision methods to the data (or lack thereof), and not vice versa. As we have shown, one decision-making approach that is ideally suited to the exploratory approach is precautionary avoidance. Exploratory modeling feels out the boundaries of our knowledge and, in turn, signals the need for precaution.
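A minimal sketch of this exploratory, ensemble-based approach (the model names and figures below are invented for illustration, not drawn from any real study):

```python
# Each entry stands for a plausible, equally defensible model of the annual
# loss probability for a region. The names and numbers are hypothetical.
plausible_models = {
    "sparse historical record": 0.004,
    "structural-geology argument": 0.002,
    "simulation, pessimistic parameters": 0.01,
    "simulation, optimistic parameters": 0.001,
}

# The envelope of the ensemble, not any single member, is the honest output.
envelope = (min(plausible_models.values()), max(plausible_models.values()))
print(f"Probability envelope: {envelope[0]} to {envelope[1]}")
print(f"Ensemble size (a rough measure of uncertainty): {len(plausible_models)}")
```

A wide envelope is itself a signal: the wider it is, the more the decision should lean on precaution rather than on expected value arithmetic over any one member.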

This sort of approach is more in line with how many insurers use a simple form of catastrophe modeling as a basis for a precautionary approach of their own. Specifically, the geographic loss estimates of natural hazard models, however imperfect, help insurers assess the potential for losses to a concentrated mass of policyholders, a concentration that increases the potential for ruin given a widespread hazard (such as windstorms). Insurers can then adjust their concentration of writings (not premiums) in an area, as a precaution against ruin or financial difficulty. This simple form of modeling does not rely on probabilities, except in the trivial sense that they are greater than zero.
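Under stated assumptions (the zone names and the $100 million limit are invented), this probability-free use of a catastrophe model's geographic output can be sketched as a simple concentration check:

```python
from collections import defaultdict

def can_write(policies, zone, new_value, zone_limit):
    """Accept a new policy only if total insured value in its hazard zone,
    including the new policy, stays within the concentration limit."""
    exposure = defaultdict(float)
    for z, value in policies:
        exposure[z] += value
    return exposure[zone] + new_value <= zone_limit

# A hypothetical book of business: (hazard zone, insured value).
book = [("coastal-A", 40_000_000),
        ("coastal-A", 55_000_000),
        ("inland-B", 20_000_000)]

print(can_write(book, "coastal-A", 10_000_000, zone_limit=100_000_000))  # False
print(can_write(book, "inland-B", 10_000_000, zone_limit=100_000_000))   # True
```

No probability appears anywhere except the implicit "greater than zero"; the model's only job is to say which policies share a hazard zone.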

## The Value of Intuition

Effective catastrophe modeling assumes a lot. Given the inherent knowledge imperfections that exist in the world of risk, more fundamental decision-making models seem better suited to the treatment of high-stakes risk. Precautionary avoidance and its variants rely on a much looser model of probabilities; their results do not depend on our ability to estimate probabilities precisely. As a result, these models are much more realistic than those built on the illusion of scientific precision. The model of risk that results is a far more intuitive one, more in line with the way human decision-makers think, and that intuition has led to a rather remarkable streak of evolutionary survival. By being more up-front about the uncertainties involved, these approaches are also less subject to manipulation in favor of special interests.

That said, the ability of scientific-sounding endeavors like catastrophe modeling to better our lot is questionable. For catastrophe modelers and their proponents to suggest they can, in any appreciable way, is misleading. In this complex world, risk managers are seeking guidance in making important risk decisions. This guidance will not be simple, and it may not be obvious. It will definitely require us to think beyond the simple dimensions of risk, in terms of probability and severity of losses, and take a wider world view.

Mark Jablonowski, CPCU, ARM, is a professional risk manager with over 25 years of experience in risk management and risk assessment, as well as property/ casualty insurance underwriting and underwriting management.


*Risk Management, July 1, 2005.*
