
Benchmarking tools for public risk management programs.

Benchmarking can be an effective tool to improve the performance of risk management programs.

Editor's note: Reprinted and adapted from Public Risk, May/June 1998, with permission.

Many finance officers are confronted today with the challenge of improving risk management programs. To assess improvement in performance it is important to establish a baseline by measuring actual experience and setting targeted levels of performance. This article outlines benchmark techniques and their applications in the risk management area.

The term benchmarking study typically connotes a comparative analysis involving internal or external performance measurements. Historically, the term benchmark traces its origins to the British Ordnance Survey of 1791. The survey cut marks into rock to identify altitude and elevation; the horizontal line above the arrow was a slit in the rock that formed a temporary bench for a surveyor's leveling staff. Thus, the original benchmarks simply recorded a level of performance, in contrast to current usage, in which the benchmark is used to improve performance.

The performance to be improved might be general or specific. If a performance objective is to reduce the number of automobile incidents by 20 percent, it can be made more specific by limiting the objective to private passenger vehicles used by one arm of government (e.g., city officials). Or, it can be made more general by expanding the objective to the annual cost of all automobile incidents to the public entity.

A benchmarking study involves the following steps:

* set a benchmark(s) that represents management's expectation of performance;

* measure actual experience;

* compare the measurement to the benchmark; and

* identify and implement appropriate steps to improve performance.

The central challenge of risk management benchmarking is how to make the best use of available data to establish benchmarks, evaluate performance relative to them, and identify opportunities for improvement. But good studies have fringe benefits. By providing clear performance evaluations, benchmarks facilitate communication both within and among government entities - and they focus and clarify objectives.
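
As a rough illustration of the four steps listed above, the sketch below compares an assumed actual cost per claim to an assumed benchmark and flags a gap larger than a chosen tolerance. All of the figures, and the tolerance itself, are invented for illustration rather than drawn from any study.

```
# A minimal sketch of the four benchmarking steps, using invented numbers.
benchmark_cost_per_claim = 4200.00   # step 1: management's expected performance
actual_cost_per_claim = 4830.00      # step 2: measured actual experience

# Step 3: compare the measurement to the benchmark.
relative_difference = (actual_cost_per_claim - benchmark_cost_per_claim) / benchmark_cost_per_claim
print(f"Actual experience is {relative_difference:+.1%} relative to the benchmark.")

# Step 4: flag results outside a chosen tolerance for follow-up.
TOLERANCE = 0.10
if abs(relative_difference) > TOLERANCE:
    print("Identify the drivers of the difference and implement improvement steps.")
```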

Professional Interpretation

Successful use of benchmarks for public risk entities requires both professional risk management skill and familiarity with government operations. For example, suppose one were to compare the average cost per liability case in the United States to that of a specific public entity. If the entity is in a state with limitations on liability claims against governments, the national benchmark would be too high because it includes states without such limitations. If the entity is in a state without limitations, the national benchmark would be too low because it includes states with them. In either case, the national benchmark is not a reasonable basis for comparison. It is essential to know how a public entity compares to the population underlying a benchmark.
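
The arithmetic behind this point can be shown with invented figures: the national average is a blend of capped and uncapped states, so it falls between the two and is a fair yardstick for neither. The short sketch below, in which every number is assumed purely for illustration, makes the blending explicit.

```
# Hypothetical illustration (all figures invented) of a blended national average.
avg_cost_capped_states = 6000      # average cost per liability case in states with caps
avg_cost_uncapped_states = 15000   # average cost per case in states without caps
share_of_cases_capped = 0.40       # assumed share of national cases from capped states

national_benchmark = (share_of_cases_capped * avg_cost_capped_states
                      + (1 - share_of_cases_capped) * avg_cost_uncapped_states)
print(f"Blended national benchmark: {national_benchmark:,.0f}")
# Prints 11,400 -- above the capped-state average and below the uncapped-state
# average, so it overstates costs for one kind of entity and understates them for the other.
```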

Issues of professional interpretation in benchmarking studies arise in other ways. For example, a benchmark might be based on data spanning a different period of time than that spanned by data available for a public entity. Or, there might be isolated large cases contributing to the benchmark but not to the public entity's data, or conversely. Depending on circumstances, such complications might require the skills of a professional casualty actuary. Other times, the data provider for the benchmark will be able to provide sufficient documentation to enable evaluation of the benchmark.

The key point is that benchmarking studies require professional interpretation. One simply does not know whether a glass is half empty or half full without knowing whether someone is drinking from the glass or pouring beverage into the glass.

Classifying Benchmarks

Good benchmarking studies are characterized by a well-thought-out approach that pursues objectives consistent with the benchmark and the measurable data. Although it may seem obvious, it cannot be emphasized enough that managing risk, not the benchmarking study, is the goal. Perhaps the most flattering comment that a benchmarking expert can offer is that there is not enough loss data to establish a credible benchmark. By implication, the only way the benchmarking study in this example can attain credibility is for the risk management program to incur more claims - a totally inappropriate risk management goal.

There are many ways to classify benchmarks. Following are some classifications that may facilitate a better understanding of their range of use.

Empirical vs. Conceptual. Empirical benchmarks are used when actual experience is readily available. When it is not, benchmarks also can be set conceptually, by inference from theory rather than from observed results. A conceptual benchmark for engine timing in automobile manufacturing, for example, could be based on estimates from engineering theory.

Internal vs. External. Internal benchmarks are derived from a risk management program's history. The goal is to improve on past performance. The major danger of relying on internal benchmarks is possible misinterpretation when external factors significantly affect them. In a softening market, the annual cost of risk is likely to decrease. The market would be an external factor that would need to be taken into account when benchmarking. The major danger of misinterpretation for external benchmarks arises from dissimilarities between the studied entity and the experience used to set the benchmark.

System vs. Component. System benchmarks apply to the risk management program as a whole. Component benchmarks apply to particular aspects of a risk management program, such as time from injury to first report or opportunities for return-to-work studies. Care must be exercised in drawing system-level conclusions from component benchmarks. For instance, there can be a compounding effect, whereby being slightly worse than average on several components translates into being significantly worse than average on an overall basis.
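
To see how small component shortfalls can compound, suppose, purely for illustration, that a program runs 5 percent worse than average on each of five components and that the components combine multiplicatively. The overall result is roughly 28 percent worse than average, as the sketch below shows.

```
# Illustrative compounding of component benchmarks (assumed, multiplicative model).
import math

# Each entry is actual divided by benchmark for one component; 1.05 means 5% worse.
component_relativities = [1.05, 1.05, 1.05, 1.05, 1.05]

overall_relativity = math.prod(component_relativities)
print(f"Overall relativity: {overall_relativity:.2f}")   # about 1.28, i.e. 28% worse
```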

Individual vs. Group. Benchmarks that apply to an individual do not necessarily apply to a group. Professional football coaches often are judged by the percentage of games that they have won, but in any given season, professional football coaches as a group never average better than 50 percent, because every game one coach wins another coach loses. Likewise, in public risk management, it's rarely possible for a pool representing essentially all the local governments in a state to outperform statewide averages.

Olympic vs. Normal. Olympic athletes measure their performance against world records. The parallel in business is benchmarking studies that try to identify best practices as a target for performance. At times it is more appropriate to perform benchmarking studies that determine a normal value and interpret performance as being better or worse than the average represented by the normal value. Generally, approaching an Olympic benchmark bespeaks better performance than approximating a normal benchmark.

The nature of the benchmarks available to risk management determines the types of evaluations that can be made regarding the risk management program's performance.

Data Considerations

The more closely the data underlying a benchmark match the conditions of the risk management program being evaluated, the more straightforward the interpretation of results. Data should match, or be homogeneous with, the program in numerous risk characteristics, such as exposures, legal environment, and coverages. For example, a benchmark that includes the police department experience of some, but not all, of the communities contributing data might not be well-suited to a community that maintains a distinct police risk management program. A benchmark based on statewide averages might not be well-suited to claims settled in courts exclusively in either an urban or a rural area of a state. A benchmark for a state with unlimited liability might not be well-suited to a state with a cap on government liability. In general, benchmarking produces more accurate evaluations of performance when the benchmark data agree well with the data for the public entity under study.
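
A minimal sketch of this kind of matching appears below. The record layout, the filter criteria, and the costs are assumptions chosen to mirror the examples above, not a description of how any benchmark database is actually organized.

```
# Selecting only benchmark records that are homogeneous with the studied entity:
# here, a capped-liability state whose data, like the entity's, exclude a
# separately managed police program. All records and values are invented.
benchmark_records = [
    {"state": "A", "liability_cap": True,  "includes_police": True,  "cost": 3900},
    {"state": "A", "liability_cap": True,  "includes_police": False, "cost": 3200},
    {"state": "B", "liability_cap": False, "includes_police": False, "cost": 8700},
]

peer_records = [r for r in benchmark_records
                if r["liability_cap"] and not r["includes_police"]]

peer_benchmark = sum(r["cost"] for r in peer_records) / len(peer_records)
print(f"Peer benchmark cost per case: {peer_benchmark:,.0f}")
```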

In some benchmarking studies, certain factors may affect the data and need to be isolated. Benchmarking the risk management performance of government entities can be especially challenging, even to the benchmarking expert. Differing state regulations, demographics, physical size, and ability to collect data all contribute to the challenges facing public entity benchmarking. Knowledge of the intricacies of the particular public entity is critical to the choice of peers and to the design of benchmarking studies in general, which should be tailored and not purchased off the rack.

Sometimes tailoring can be accomplished by editing and adjusting data. Sometimes inconsistent data elements can be netted out. Sometimes judgmental interpretation is the most efficacious way to recognize differences in data. And sometimes combinations of factors must be considered. One municipality, for example, wanted to determine whether the lawyers' fees in its loss experience were reasonable; it chose as a peer a municipality in another state with similar risk characteristics, after comparing the judicial systems of the two states and their implications for lawyers' fees.

Benchmarking as a Basis

A benchmarking study does not end when the data comparisons are complete. Since the goal is to improve risk management, results must be interpreted and potential improvements must be identified.

The underlying reasons for differences from the benchmark need to be isolated and explained. Continuing a previous example, a municipality finding a difference in lawyers' fees relative to a benchmark also will want to compare claim characteristics, such as time to settlement, in order to interpret the comparison. The identification of claim characteristics associated with this difference can be critical to appropriate interpretation of the benchmark. A processing lag or a reluctance to negotiate with claimants offers immediate opportunities for improvement; a delay attributable to court calendars requires changes outside one's direct control.

Expert use of benchmarks in risk management includes identification of reasons for differing from benchmarks and assessment of opportunities to improve performance. A good benchmark will enable identification of both areas that need more support and areas that need redesign.

Maintaining Benchmarks

Cost is a major consideration in maintaining benchmarks. For a given liability claim it is theoretically possible to develop benchmark statistics relating to the claimant's background, choice of attorney, court and judge, related damages to family members, and so on. If this is the only claim to be handled, chances are that retaining an expert adjuster or attorney to handle the claim will prove more cost effective. If this is one of many claims, providing benchmark guidelines to adjusters can assist them in settling claims and help the public entity address future claim frequency. Design and maintenance of benchmarking studies should reflect the scope of the risk management program and the role of benchmarks in its evaluation.

In general, the effectiveness of benchmarking increases when:

* benchmarking is an ongoing process and not a one-time study, so that start-up costs are spread and users are familiar with the information;

* benchmarking is incorporated into the risk management information system and does not require special programming;

* the risk management information system captures data as part of processing information on claims and exposures and does not require a separate coding function; and

* costs can be shared with others - as when data is reported to a central database that generates benchmarking statistics.

Another issue to consider is adjusting historical benchmarks to current conditions. Satisfaction of last year's benchmarks does not imply satisfaction of today's. The four-minute mile is no longer a world record. Nonstop coast-to-coast flights are common. Modem speeds in excess of 9600 baud are routine. And reductions in risk management cost have become management's expectation. Historic benchmarks should not be assumed to apply in today's world. Correct adjustment of historic benchmarks and determination of appropriate new benchmarks are important issues in maintaining benchmarks.
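
One common adjustment is to restate an older benchmark at current cost levels with a trend factor. The sketch below assumes a 1995 benchmark, a 4 percent annual trend, and a 1998 valuation date purely for illustration; in practice, trend factors would be selected from relevant data, often with actuarial assistance.

```
# Restating a historical benchmark to current conditions (all inputs assumed).
historical_benchmark = 2500.0   # average cost per claim when the benchmark was set
annual_trend = 0.04             # assumed annual claim-cost inflation
years_elapsed = 3               # e.g., restating a 1995 benchmark to 1998

current_benchmark = historical_benchmark * (1 + annual_trend) ** years_elapsed
print(f"Benchmark restated to current conditions: {current_benchmark:,.0f}")
# About 2,812 -- meeting the old 2,500 target no longer implies meeting today's.
```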

Valuable Tool

Benchmark studies help public entities assess performance and identify problem areas. But valid, accurate benchmark studies that support risk management decisions can be complex, intricate exercises requiring expertise in data sampling and collection, claims handling and operations, actuarial and financial analysis, and matching to reliable external benchmarking data. By applying the preceding benchmarking concepts and examples from public risk management, public entities can overcome the challenges and pitfalls of benchmarking studies and gain a valuable tool for improving risk management performance.

ALFRED O. WELLER is principal and LISA SAYEGH is associate principal in information services at Insurance Services Office, Inc. (ISO), which provides insurance, actuarial consulting, data management, and property risk assessment expertise to the risk management market.
COPYRIGHT 1998 Government Finance Officers Association
No portion of this article can be reproduced without the express written permission from the copyright holder.
