
Retooling auto: the science of automobile insurance ratemaking is benefiting from modeling advancements.

The mechanics, science and art of automobile insurance ratemaking have undergone major changes over the past three decades, particularly in recent years.

* The hardware: from calculators to time-sharing computers, to in-house mainframes, to desktop computers, and to LAN and WAN networks.

* The software: from basic calculator math to procedure-oriented languages such as APL, to spreadsheets and Visual Basic®, and to statistical packages such as SAS®.

* The algorithms: from one-dimensional analyses to generalized linear modeling, neural networks, decision trees and multivariate adaptive regression splines.

* The rating arguments: from dozens to hundreds of rating territories and risk classifications; from relatively simple class plans to credit scores, tiers and multiple carriers within a corporate group.

Where is the evolution going, and what, if anything, should industry leaders do about it?

Ratemaking Basics

Insurance ratemaking is informed risk assessment based upon empirical data, observed trends, assumptions regarding underlying data, and judgment. It is both a science and an art. There is no "cookbook" or complete "A to Z" set of procedures that ratemakers follow to develop insurance rates, nor should there be. For each task, there are often several alternative algorithms that a ratemaker considers in determining what would be most appropriate, given the nature of the insurance coverage and discerned patterns in the underlying data. The applicability of an algorithm often is based on assumptions that rarely are certain. Usually, these algorithms have been widely published and employed. A ratemaker's experience, technical skills and "feel for the numbers" weigh heavily in the worthiness of the assessment.

In a typical analysis, the ratemaker first will compute an indicated statewide rate-level change for each insurance coverage and then allocate the overall rate change to individual classes of risks. The ratemaker will measure each class of risk by assigning weights to the experience of both the insurance company and the industry, and will employ the actuarial techniques and factors believed to be most applicable and reasonable. Different ratemakers working on identical data could develop significantly different rate-change indications, particularly with regard to individual risk classifications. "Indicated" rate changes give rise to "selected" rate changes. The latter usually address marketing concerns, regulatory demands and/or state laws.
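The weighting of company and industry experience described above is, in spirit, an actuarial credibility blend. The sketch below illustrates the idea with a simplified loss-ratio method; the figures and the 40% credibility weight are invented for illustration, not taken from any filing.

```python
# Hypothetical sketch of a statewide rate-level indication that blends
# company and industry experience by a credibility weight, then compares
# the blended loss ratio to the permissible (target) loss ratio.
# All numbers are illustrative placeholders.

def indicated_rate_change(company_loss_ratio, industry_loss_ratio,
                          credibility, permissible_loss_ratio):
    """Credibility-weight the two loss ratios, then express the
    indicated change as blended / permissible - 1."""
    blended = (credibility * company_loss_ratio
               + (1 - credibility) * industry_loss_ratio)
    return blended / permissible_loss_ratio - 1.0

# Company experience is thin, so it gets only 40% credibility here.
change = indicated_rate_change(
    company_loss_ratio=0.78,
    industry_loss_ratio=0.72,
    credibility=0.40,
    permissible_loss_ratio=0.70,
)
print(f"Indicated statewide change: {change:+.1%}")  # about +6.3%
```

Two ratemakers could defensibly choose different credibility weights or permissible loss ratios, which is one concrete way "identical data" can yield significantly different indications.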

Control of the Process

Private-passenger automobile insurance rate filings are thicker, less standardized, and more innovative and enterprising than ever. Often, they are bound in creative actuarial soundness with volumes of data-mined support and with a view to what competitors are charging. With desktop computers and ready access to data-mining systems, expectations for the depth, scope and flexibility of insurance ratemaking continue to rise. The once unknown, low-risk consumer now pays less in premium dollars, and the newfound, high-risk consumer pays more. However, nontechnical executive officers are relying on an ever-growing, mysterious process--and insurance regulators are working harder and longer to check for unfair discrimination, pockets of excess profits and rate inadequacy. It steadily is becoming more difficult for both parties to control the process.

Automobile Models

One solution may be the development of automobile-insurance ratemaking models much like the hurricane models that have been developed since Hurricane Andrew hit South Florida in 1992. Such hurricane models must be certified as meeting certain criteria promulgated by a regulatory commission. In Florida, for example, to secure certification, software developers must provide extensive documentation and evidence that the models' technical components have met certain standards set by a commission. The standards address aspects of vulnerability, validation, computing, statistics and the actuarial use of the modeled loss costs.

Many critical assumptions underlie rate changes--assumptions such as loss development factors, loss cost trends, factors to convert historical premiums to current premiums, projected investment income, and assumed underwriting expenses. The models could unveil all the critical assumptions and parameters so carriers and regulators may examine them for reasonableness, ascertain consistency between insurance coverages and between successive rate filings, and with a view toward vulnerability, measure the sensitivity of the indicated rate level to changes in the assumptions and parameters.
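The assumptions listed above chain together into a single indicated rate-level change, which is what makes exposing them so useful: each factor can be examined for reasonableness on its own. A minimal sketch of such a projection chain follows; every factor and dollar figure is a hypothetical placeholder, not data from any filing.

```python
# Illustrative projection chain: how loss development, loss cost trend,
# premium on-level factors, and expense/profit assumptions combine into
# an indicated rate-level change. All inputs are invented placeholders.

historical_losses  = 6_300_000    # incurred losses, latest accident year
historical_premium = 10_000_000   # earned premium, same year

ldf           = 1.12   # loss development factor to ultimate
loss_trend    = 1.08   # loss cost trend to the future policy period
on_level      = 1.03   # converts historical premium to current rate level
expense_ratio = 0.27   # underwriting expenses as a share of premium
profit_load   = 0.05   # profit and contingencies allowance

# Project losses to ultimate, trended; restate premium at current rates.
projected_loss_ratio = (historical_losses * ldf * loss_trend) / \
                       (historical_premium * on_level)

# Premium available for losses after expenses and profit.
permissible_loss_ratio = 1.0 - expense_ratio - profit_load

indicated_change = projected_loss_ratio / permissible_loss_ratio - 1.0
print(f"Projected loss ratio:   {projected_loss_ratio:.3f}")
print(f"Permissible loss ratio: {permissible_loss_ratio:.3f}")
print(f"Indicated change:       {indicated_change:+.1%}")
```

Laying the chain out this way is exactly what would let a reviewer check consistency between coverages and between successive filings: the same named factors appear in each, and each can be perturbed independently.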

No new data need be generated. All the underwriting and financial data that currently are used in rate filings can be uploaded to a model via the Internet or via the National Association of Insurance Commissioners' System for Electronic Rate and Form Filing. Similar data already are being filed regularly with statistical agencies such as the Insurance Services Office Inc. Furthermore, the models could be designed so that no carrier would need to reformat its database structure. Additionally, much of the rate filing end-product data, which currently are being provided to regulators, also could be eliminated.

Benefits for the Carriers

Both insurance carriers and regulators would benefit from such models. For the insurance executive, ratemaking is among the most critical functions in the company. Its leverage on premium volume, loss ratios and the bottom line is momentous. An executive who does not have an actuarial background, and who wants to personally test the soundness of a proposed rate filing, has few analytical resources. That person may, for example, compare the proposed rates with competitors' rates, compute with simple mathematics the overall rate change needed to get the loss ratio at a desired level, and review the ratemaker's sensitivity--"What-If?"--analysis if available.

The first of the three analyses can result in the "blind leading the blind" and often contributes to the severe peaks and troughs the industry has experienced in its underwriting cycle. The second analysis is too simple; more than simple math is required to compute the overall required rate change. Furthermore, the overall change does not address critical rate changes that may be required within a book of business. The last option--a review of a "What-If" analysis--allows the insurance executive to study the sensitivity of the proposed rate-level changes to various ratemaking assumptions, and in doing so, focus on the reasonableness of such assumptions.

The ratemaking models could incorporate all three analytic methods, with emphasis, however, on the third. Most sensitivity analyses that are conducted today are done on spreadsheets, but spreadsheets have their limitations. Although they give ratemakers a great deal of flexibility in customizing their work, spreadsheets are unwieldy when they attempt to link the many ratemaking algorithms that feed output data into one another. Rarely, if ever, can all the algorithms be software-linked as they can be with a procedural computer language that would embody the proposed models. Because of the linkage limitation, such spreadsheet work does not have the capacity to readily test multiple alternative actuarial algorithms, readily weigh and combine alternative techniques, and facilitate complicated analyses. Executives, who typically have neither the time nor the skills to conduct spreadsheet sensitivity analyses, must rely on the ratemaker's judgment, or on a limited number of "What-If" printouts. Often too, the ratemaker has not articulated the ratemaking assumptions. With an eye on competition and premium volume, executives may second guess or overrule a ratemaker's actuarial recommendations, without full appreciation for the soundness of the implied underlying assumptions.

The proposed automobile ratemaking models would allow executives who don't have technical expertise to review the rate levels that are generated at the end points of each acceptable range of assumptions--for example: probable loss ratios and competitive rate comparisons for various loss cost trends, commissions and home office expenses, discounts, loss development factors, profit allowances, investment income yields and credibility standards.
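A review "at the end points of each acceptable range" amounts to a one-at-a-time sensitivity sweep: hold every assumption at its base value, push one assumption to each end of its range, and record the resulting indication. The sketch below shows the mechanic; the assumption names, base values and ranges are hypothetical.

```python
# Minimal "What-If" endpoint sweep over four hypothetical ratemaking
# assumptions. Each assumption is moved, one at a time, to the low and
# high ends of its acceptable range while the others stay at base.

BASE = {"loss_trend": 0.04, "ldf": 1.10,
        "expense_ratio": 0.28, "profit_allowance": 0.05}

RANGES = {"loss_trend": (0.02, 0.06),
          "ldf": (1.05, 1.15),
          "expense_ratio": (0.26, 0.30),
          "profit_allowance": (0.03, 0.07)}

def indication(a, base_loss_ratio=0.60, trend_years=2):
    """Indicated rate-level change under assumption set `a`."""
    permissible = 1.0 - a["expense_ratio"] - a["profit_allowance"]
    projected = (base_loss_ratio * a["ldf"]
                 * (1 + a["loss_trend"]) ** trend_years)
    return projected / permissible - 1.0

for name, (lo, hi) in RANGES.items():
    low = indication(dict(BASE, **{name: lo}))
    high = indication(dict(BASE, **{name: hi}))
    print(f"{name:>17}: {low:+.1%} to {high:+.1%}")
```

An executive reading this output needs no actuarial training: an assumption whose endpoints swing the indication widely is the one whose reasonableness deserves the closest scrutiny.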

Benefits for Regulators

State insurance regulators, too, have been impacted heavily by the evolution of ratemaking. Most regulators no longer have the staff or funds to examine the actuarial soundness of the thousands, if not tens of thousands, of risk classifications that are being defined and priced in today's rate filings. Furthermore, certain congressional leaders, in their pursuit of federal oversight, are arguing for more modernization, effectiveness and uniformity in the regulatory process.

The development of modern ratemaking models can go a long way in easing the regulators' workload, honoring the regulators' mission and addressing congressional criticism. Carriers would be uploading raw data to models that would process such data in preapproved ratemaking modules. Instead of being reactive, regulators would become proactive. Instead of wading through innumerable algorithmic techniques and black box output, regulators would be processing the raw data supplied to certified models via the Internet. The regulators would be saying in effect: "Let's see what the rate indications look like when submitted to preapproved models."

Such models should not stymie risk analyses, experimentation and growth. The models' primary focus should be on the standardization of alternative methods for the determination of overall statewide rate-level changes. The standardization and development of such methods should not be a controversial issue for carriers or regulators.

A secondary focus--actuarial support for the allocation of the overall rate-level indication to the risk classifications within a carrier's book of business--could become controversial. Standard ratemaking models that do not allow the introduction of sound statistical analytical techniques could undermine the benefits of competitive pricing. But each new ratemaking technique could undergo, in effect, a "patenting" process and, upon approval, be incorporated into the ratemaking models as an alternative technique.

Future Expectations

Executives and insurance regulators are losing command of the ratemaking process. The growth of hardware resources, programming software, algorithmic techniques and risk classifications has improved but complicated the ratemaking function. While insurance executives appreciate the leverage that pricing has on their loss ratios and premium volume, few have the technical knowledge or time to test and dictate the critical ratemaking assumptions that will shape their company's future; and while insurance regulators recognize their obligation to protect consumers from excess profits and unfair discrimination, few have the resources to examine all the new and innumerable risk classifications and test the scores of new and varied data-mining and rate-development techniques.

There is no sign that the evolution in ratemaking is abating. Furthermore, and ironically, certain members of Congress continue to argue that the insurance industry is not modernized enough and that federal oversight is needed. Computer models that provide more means of examination for the insurance regulators and more management controls for non-technical insurance executives can be developed. Such models would allow regulators to capture a carrier's underwriting experience and risk classes, and to process such data in preapproved algorithms. Additionally, the models could contain tools for sensitivity--or "What-If"--analyses that would allow both regulators and company executives to test the impact that the critical ratemaking assumptions are having on the indicated rate levels. Consequently, insurance cycles, in highly competitive lines of business, should smooth as insurance executives and regulators unveil and scrutinize the ratemaking assumptions that underlie rate filings.

Key Points

* Auto insurance ratemaking models could benefit insurers and regulators.

* The growth of hardware resources, programming software, algorithmic techniques and risk classifications has complicated ratemaking.

Joseph P. DiBella is the president & chief executive officer of General Management Systems Inc. in Miami Beach, Fla.
COPYRIGHT 2005 A.M. Best Company, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2005, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Publication: Best's Review
Date: Feb 1, 2005