
Quality counts: high-caliber exposure data is the key to effective catastrophe risk management.


Companies rely on catastrophe models to provide reliable estimates of loss, whether to manage risks over the long term or to understand their loss potential in real time as an actual event unfolds.

However, the reliability of model output is only as good as the quality of the exposure data used as input. Information on property valuation, location and building characteristics needs to be readily available and reliable. Exposure data challenges are present at all points on the risk-transfer value chain.

For example, in an effort to accelerate the underwriting process, insurers often don't require the collection of all the property information needed for catastrophe modeling. Even when the data are captured, a disconnect between underwriting and portfolio-management systems may prevent them from being used in a portfolio analysis. Additional challenges arise in the transfer of exposure data from one company to another, as different systems require different data standards. Furthermore, reinsurers who receive portfolios from cedents have limited options to assess or, if necessary, enhance data quality.

With the constant threat of increasingly large catastrophe losses, driven in part by an expanding concentration of property value in at-risk areas, the need for companies to reassess exposure data collection practices and put processes in place to enhance data quality is more important than ever.

Just as Hurricane Andrew was the catalyst for the widespread adoption of catastrophe modeling, Hurricane Katrina sparked an intensive focus on the quality of exposure data used in modeling. After Katrina, a thorough analysis of detailed claims data by AIR Worldwide revealed shortcomings in the quality and completeness of data on insured properties, especially on commercial policies.

From a rating agency perspective, A.M. Best is taking more interest in the exposure data-quality issue with the new requirements for its 2008 Supplemental Ratings Questionnaire. This has made high-quality exposure data no longer something simply to be wished for, but rather a necessity.

A.M. Best is trying to determine whether important elements--geocoded location, building replacement value, construction, occupancy, year built, total square footage and building height--are specified for individual locations. The implication is that companies without this detailed exposure data may find their ratings adversely impacted.

Flawed Exposure Data

An AIR Worldwide survey of insurers, reinsurers and reinsurance brokers indicated that exposure data quality is a significant issue for property insurers concerned with catastrophe risk. A majority of respondents attributed poor data quality to inadequate practices at the point of underwriting.

Chief among these is the pressure to conclude the sale quickly and be seen as "easy to do business with." While this may help companies win more business initially, basing underwriting and portfolio management decisions on poor-quality exposure data may place them at a competitive disadvantage down the road.

Even when insurers use quality exposure data at the point of underwriting, many policy processing and internal systems often fail to capture the information so that it can be effectively used for portfolio-level risk analyses.

Although insurers have strong new incentives to enhance exposure data, questions remain as to how to do so cost-effectively and efficiently. One option gaining popularity is the use of prefilled property information at the point of sale. Widely used in auto insurance, prefilled property-specific data allows brokers, agents and customer service representatives to confirm or update data, rather than collect it from scratch.
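The confirm-or-update workflow described above can be sketched in a few lines. This is an illustrative sketch only; the field names and the idea of tracking per-field provenance are assumptions, not part of any actual prefill product.

```python
# Illustrative sketch of a prefill workflow: start from prefilled
# public-record data and layer the agent's confirmed corrections on
# top, recording where each field came from. Field names are invented.

def apply_confirmations(prefill, corrections):
    """Merge agent corrections over prefilled record data.

    Returns the merged record plus the provenance of each field,
    so downstream users can see which values were independently
    confirmed and which came straight from public records.
    """
    merged = dict(prefill)
    merged.update(corrections)
    provenance = {
        field: ("agent" if field in corrections else "prefill")
        for field in merged
    }
    return merged, provenance
```

In this sketch, untouched prefill fields survive unchanged, so the agent only spends time on fields that look wrong.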

For residential properties, much of the data necessary for catastrophic risk assessment is available in public records, and therefore easily available as prefill. However, for commercial lines, public-records data alone is not sufficient for several reasons. These include multiple records for single buildings, no information for buildings with tax exemptions and inconsistencies in property characteristic collection across jurisdictions.

A more reliable approach to commercial property-specific information is data based on actual on-site building inspections. On-site building inspections, conducted by trained inspectors, already provide detailed building information on hazards of occupancy and fire-protection systems. Basic building data available today include construction type, building size, occupancy, fire protection, property replacement value and geocoded location.

Portfolio-Level Data

Ultimately, high-quality exposure data will be captured at the point of underwriting or during renewals, and passed through the risk-transfer value chain. However, this will take time, and presently the majority of premium dollars collected comes from properties already on the books. This makes it enormously important for companies to improve existing exposure data for properties currently in their portfolios. Reinsurers also need a consistent and reliable approach to assessing exposure data quality in their cedents' portfolios.

Options for companies to validate and augment portfolio-level exposure data quality have historically been limited. Many companies set up manual processes to validate exposure data, but these are typically labor-intensive and limited in scope. And while it's easy to detect missing data, it is much harder to identify incorrect or unreasonable data. When an error is found, typically the only option is to replace the incorrect data with industry averages, if available.

A better option is automated portfolio-level data validation, which requires the development of an extensive taxonomy of rules to identify unrealistic data (for example, a wood-frame building with seven stories). Once identified, the additional challenge is to determine which datum is wrong--in this example, it's either the building height or the construction type. Until now, the only solution has been to go back to the property owner--a time-consuming and expensive effort.
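A rule from such a taxonomy can be sketched as a simple consistency check. The rule set, field names and height limits below are hypothetical illustrations, not an actual vendor rule base; note that, as the text explains, the rule can flag the implausible combination but cannot by itself say which field is wrong.

```python
# Illustrative sketch of automated exposure-data validation.
# The rules, field names and limits are hypothetical examples.

# Plausible maximum story counts by construction type (invented values).
MAX_STORIES = {"wood frame": 6, "light metal": 4, "reinforced concrete": 100}

def validate_record(record):
    """Return a list of validation messages for one property record."""
    issues = []
    # Missing or "unknown" data is easy to detect.
    for field in ("construction", "stories", "replacement_value"):
        if record.get(field) in (None, "", "unknown"):
            issues.append(f"missing or unknown: {field}")
    # Unrealistic combinations are harder: flag both fields, since the
    # rule alone cannot tell which one is wrong.
    construction = record.get("construction")
    stories = record.get("stories")
    limit = MAX_STORIES.get(construction)
    if limit is not None and isinstance(stories, int) and stories > limit:
        issues.append(
            f"implausible: {stories}-story {construction} building "
            f"(check construction type or building height)"
        )
    return issues
```

A seven-story wood-frame record would be flagged by the second rule, while a record with complete, consistent fields passes cleanly.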

Companies need a way to efficiently validate and, if necessary, augment their portfolio-level exposure data in a way that aligns with their current underwriting work flow. Property-specific data can be applied at the portfolio level. An extensive database of property-specific commercial and residential building information has also been developed, including geocoded location, building replacement value and building characteristics such as construction, occupancy, year built and building height.

Soon, when data are identified as missing or incorrect during an automated, portfolio-level data validation process, they will be seamlessly augmented using this database of property-specific data for commercial and residential risks.

Does Yours Stack Up?

There may be cases in which insurers want to check the quality of their exposure data, or reinsurers want a simple way to compare the exposure data of their cedents. This can be accomplished through data benchmarking, which compares the exposure data contained in individual portfolios with industry averages.

For example, average replacement values can be assessed for deviation from industry averages by construction type for a specific geographic area. There may be good reasons for such a divergence, but it is useful to know that one exists and why. Even if corrective action is not warranted, this information can help communicate data-related matters to external stakeholders.

If, for instance, an insurer's average building values diverge from the industry's for good reason, a company can leverage this information in its interaction with reinsurers to help them get comfortable with the portfolio.
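A benchmarking comparison of the kind described above can be sketched as follows. The industry-average figures, field names and 15% tolerance are invented for illustration; a real benchmark would draw industry averages by construction type and geography from a curated database.

```python
# Illustrative benchmarking sketch: compare a portfolio's average
# replacement value per square foot, by construction type, against
# industry averages. All figures below are made-up placeholders.
from collections import defaultdict

INDUSTRY_AVG = {  # $/sq ft by construction type (hypothetical)
    "wood frame": 180.0,
    "reinforced concrete": 260.0,
}

def benchmark(portfolio, tolerance=0.15):
    """Flag construction types whose average $/sq ft deviates from the
    industry average by more than `tolerance` (as a fraction)."""
    totals = defaultdict(lambda: [0.0, 0])  # construction -> [sum, count]
    for prop in portfolio:
        entry = totals[prop["construction"]]
        entry[0] += prop["replacement_value"] / prop["sq_ft"]
        entry[1] += 1
    flags = {}
    for construction, (total, count) in totals.items():
        industry = INDUSTRY_AVG.get(construction)
        if industry is None:
            continue  # no benchmark available for this type
        deviation = (total / count - industry) / industry
        if abs(deviation) > tolerance:
            flags[construction] = round(deviation, 3)
    return flags
```

A flagged deviation is a prompt for investigation, not automatically an error: as noted above, there may be good reasons for the divergence, and knowing the size of the gap helps explain it to reinsurers.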

Catastrophe models are the industry-standard solution for catastrophe risk management. Companies now need to pay close attention to strategies that will enhance their model output. Improving the quality of exposure data input into the models should be at the top of the list.

Ensuring the use of detailed, property-specific exposure data, including building replacement value, geocoded location, construction, occupancy, size and other building characteristics will enhance catastrophe analysis at the individual- and portfolio-risk levels.

Whether an insurer needs to improve exposure data at the point of underwriting, at the portfolio level, or both, solutions already exist to provide property-specific building characteristics for the underwriting process. And new solutions are coming to augment portfolio-level data.

HOW HIGH A RISK?: Large catastrophe losses are driven by the growth of properties located in at-risk areas--such as beachfronts. Incomplete exposure data can cause unnecessary losses for insurers that don't use specific and highly detailed underwriting procedures.

* The Situation: Models are the industry-standard solution for catastrophe risk management and high-quality exposure data will enhance their output.

* What It Means: Information on property valuation, location and building characteristics needs to be readily available and reliable.

* What Needs to Happen: Insurers should carry property-specific building characteristics through the underwriting process.

Building Numbers

In addition to challenges collecting reliable property-specific building characteristics, the seemingly simple task of building identification can complicate catastrophe risk assessment. There are many cases where a single structure will have multiple street addresses, may be identified by a name in addition to an address, or have multiple buildings attached to a single address (a college campus, for example). This highlights the need to be able to identify a specific building by a unique identifier.

Once a building has been identified at the point of underwriting, it can be passed from risk manager to underwriter to portfolio manager to reinsurance broker to reinsurer, enabling every player in the risk-transfer chain to know which structure is being evaluated. Attaching building identification numbers to databases of property-specific information will link each building to the exposure data for that particular structure, enabling anyone evaluating that property to easily access the associated exposure data. The building identifier becomes the key to the exposure data for the property. Building identification numbers are being assigned to buildings throughout the United States and will soon be freely available for use by the insurance industry.
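The idea of a building identifier as the key to exposure data can be sketched with two simple lookups. The identifier scheme, addresses and records below are invented examples, not actual assigned building numbers.

```python
# Sketch of using a unique building identifier as the key to exposure
# data. IDs, addresses and records are invented for illustration.

# Several street addresses can map to one physical structure.
ADDRESS_TO_BUILDING_ID = {
    "100 Main St": "BLDG-0001",
    "102 Main St": "BLDG-0001",   # same structure, second address
    "1 Campus Way, Hall A": "BLDG-0002",
}

# Exposure data is stored once per structure, keyed by building ID.
EXPOSURE_DATA = {
    "BLDG-0001": {"construction": "reinforced concrete", "stories": 12},
    "BLDG-0002": {"construction": "wood frame", "stories": 3},
}

def lookup_exposure(address):
    """Resolve an address to its building ID, then to exposure data."""
    building_id = ADDRESS_TO_BUILDING_ID.get(address)
    if building_id is None:
        return None  # unrecognized address; no identifier assigned
    return EXPOSURE_DATA[building_id]
```

Because both "100 Main St" and "102 Main St" resolve to the same identifier, everyone in the risk-transfer chain retrieves the same exposure record for that structure.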

Additionally, the propagation of open-industry data standards is essential in the quest to enhance exposure data for catastrophe analysis. The need to manually review electronic information, check data quality and re-key information into different formats is a time-consuming and potentially error-producing process. ISO and AIR are working closely with ACORD to develop and support standards that will provide a more efficient and accurate flow of exposure data electronically across the industry.


Replacement Values: The Backbone of Modeling

Loss estimates in catastrophe models are tied directly to the building's replacement value. If a replacement value is too low, the loss estimates will be too low. If it's too high, the loss estimate will be too high.

A recent insurance-to-value analysis of a reinsurer's book of business showed underinsurance in fully 100% of the cedent portfolios analyzed. The percentage of underinsured properties varied widely by portfolio, ranging between 20% and 75%, with the magnitude of underinsurance ranging between 3% and 43%.

Exposure data files for catastrophe modeling often include building replacement values. However, just because the field is complete does not mean the estimate is reliable. Often, the replacement value is simply duplicated from the Coverage A limit and is not independently determined. Ensuring that the estimates themselves have integrity is crucial.
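A simple screen for the Coverage-A-duplication problem described above can be sketched as follows. The field names are illustrative; an exact match is only a heuristic signal that the value may never have been independently estimated, not proof of it.

```python
# Sketch of flagging replacement values that appear to be copied
# straight from the Coverage A limit rather than independently
# estimated. Field names are illustrative; an exact match is a
# heuristic red flag, not conclusive evidence.

def flag_copied_values(policies):
    """Return IDs of policies whose replacement value exactly equals
    the Coverage A limit."""
    return [
        p["policy_id"]
        for p in policies
        if p["replacement_value"] == p["coverage_a_limit"]
    ]
```

Records flagged this way are candidates for re-estimation with a component-based replacement-cost tool.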

On this matter, company executives must ask themselves:

* Are we using the latest component-based tools for estimating replacement costs? If not, the insurer could be using building values that don't reflect the reality of the insurance contract's promises.

* Are our replacement-cost estimates recalculated regularly? If not, the insurer runs the risk of undervaluation for policies that have been on the books for a long time.

The Importance of Building-Level Geocodes

Location should be geocoded--that is, tagged with a latitude and longitude--at the exact building location. Ideally, this will be a rooftop-level geocode, or alternatively, address-level. This is essential for reliable modeling results. Using inexact location information in the modeling process can result in geocoding to ZIP Code, city, or county centroid rather than to the property's actual location. The resulting error could place the property--from the model's perspective--one, five, 10 or more miles away from its actual location. In the case of hurricane risk, this can make a significant difference in modeled losses. A county centroid, for example, might be 10 miles inland from a waterfront property's actual location, in which case the actual risk from potential wind and storm surge will be much higher than the modeled risk. In the case of earthquake risk, failure to input a property's exact street address may result in incorrectly placing it much closer or farther from a known fault than it actually is. Both situations will lead to an incorrect estimation of risk.
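The size of a geocoding error can be quantified as the great-circle distance between the rooftop geocode and the coarser centroid geocode. The sketch below uses the standard haversine formula; the coordinate pair in the test is an invented example of a coastal Florida property versus a centroid placed roughly 10 miles inland.

```python
# Sketch of quantifying geocode error: the great-circle (haversine)
# distance between a rooftop geocode and a coarser centroid geocode.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius in statute miles

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))
```

Feeding a beachfront rooftop geocode and an inland centroid into this function makes the modeling error concrete: a property the model believes sits 10 miles from the coast faces a very different wind and storm-surge hazard than one on the waterfront.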

AIR performed a sensitivity analysis to quantify the potential impact on loss estimates when geocodes of varying resolution are used. The results showed that the differences can be significant. When the modeled average annual loss obtained using an exact location for one property in southern Florida was compared with the AAL for the same property placed at the city centroid, the difference in this instance exceeded 70%.

Why Property-Specific Detail Is Essential

To assess the potential magnitude of using unknown property characteristics in risk modeling, AIR analyzed the possible difference in average annual loss for a Florida commercial property having a replacement value and Coverage A limit of $1 million. As part of the analysis, the replacement value and property location were held constant, while the construction and occupancy information were allowed to vary.

When AIR modeled the risk of the property with construction and occupancy coded as Unknown, the AAL was estimated at about $9,050. However, when the property was coded as a food processing plant constructed of light metal, the AAL almost tripled to nearly $22,000. In this case, modeling the property using Unknown construction and occupancy attributes severely underestimated the risk. Conversely, when the property was coded as a medical building constructed of reinforced concrete, the estimated AAL was just above $3,000. Settling for Unknown, in this case, significantly overestimated the risk.
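The swing described above can be sketched as a lookup of vulnerability relativities applied to the Unknown-attributes baseline. The relativity factors below are invented to echo the article's example figures; a real model derives vulnerability from engineering damage functions, not a two-entry table.

```python
# Sketch of how modeled AAL can swing with construction and occupancy.
# The baseline echoes the $9,050 Unknown/Unknown example in the text;
# the relativity factors are invented for illustration only.

BASELINE_AAL = 9_050.0  # AAL with unknown construction and occupancy

# Hypothetical vulnerability relativities vs. the unknown baseline.
RELATIVITY = {
    ("light metal", "food processing"): 2.4,    # far more vulnerable
    ("reinforced concrete", "medical"): 0.35,   # far less vulnerable
}

def modeled_aal(construction, occupancy):
    """Scale the baseline AAL by the (construction, occupancy)
    relativity; unknown combinations fall back to the baseline."""
    return BASELINE_AAL * RELATIVITY.get((construction, occupancy), 1.0)
```

With these illustrative factors, the light-metal food plant lands near $22,000 and the reinforced-concrete medical building near $3,000, bracketing the Unknown estimate from both sides, as in the article's example.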

It should be noted that this example describes the impact on modeled losses of a single property. A portfolio full of Unknown values or other incomplete data could dramatically misrepresent actual risk as depicted in the chart below.

Knowing the correct building construction and occupancy can have significant impact on exposures. Using that information at the point of underwriting and capturing it in catastrophe modeling will help to more accurately assess risk.
Unknown Risk Characteristics Can Have a Significant Impact on Results

Florida portfolio containing 3,000 commercial properties.
Occurrence loss ($ millions) at 0.4% exceedance probability:

Reinforced Concrete Office Building       $267
Reinforced Concrete/Unknown Occupancy     $117
Office Building/Unknown Construction      $219
Unknown Construction and Occupancy        $267

Source: AIR Worldwide

Note: Table made from bar graph.

Contributors: George Davis is a vice president at AIR Worldwide, responsible for ISO's exposure data initiative. Bill Raichle, Ph.D., is a vice president at ISO, responsible for its Risk Decision Services business unit.
COPYRIGHT 2009 A.M. Best Company, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation:Property/Casualty: Catastrophe Models
Comment:Quality counts: high-caliber exposure data is the key to effective catastrophe risk management.(Property/Casualty: Catastrophe Models)
Author:Davis, George; Raichle, Bill
Publication:Best's Review
Geographic Code:1USA
Date:Sep 1, 2009

