
The State Comparative Performance Measurement Project: benchmarking organizations provide a forum for information exchanges on business practices, strategies, solutions to common problems, and innovative ideas.

Many governments benchmark, or compare, their performance data against data from similar organizations. Of course, the term "benchmarking" can be defined in different ways, and different definitions do not always yield the same results. While comparing data is the primary activity involved, organizations undertake benchmarking efforts so they can use those comparisons to improve performance. But simply collecting measures, compiling data, and publishing a report are not enough to accomplish that goal. Managers must be engaged; they need to analyze and interpret the information and use it to set policies or make informed decisions.

This is where participation in benchmarking organizations can be valuable as part of an overall performance management approach. Membership in a benchmarking organization not only ensures that measure definitions are consistent and data is collected uniformly to facilitate comparisons, but also provides a forum for exchanging information on business practices, strategies, solutions to common problems, and innovative ideas. Benchmarking organizations play a vital role by organizing and coordinating the exchange of information and ideas needed to improve results in key service areas.


To generate service improvement, benchmarking organizations fulfill several specific roles.

Defining Common Performance Measures. Comparable data requires that organizations use consistent data definitions and the same method of data collection. To be effective, performance measures also need to be defined in a way that provides useful information without being overly burdensome to the government. If an organization has too many measures, or measures that are too complex or difficult to collect, the data collection effort runs the risk of being costly, misunderstood, and easily manipulated. Defining performance measures therefore requires functional expertise in each subject matter and an understanding of the goals of each member organization.

Ensuring an Apples-to-Apples Comparison. Comparisons lose their value if there is a mismatch in data sets. Avoiding this problem requires vigilance: benchmarking organizations need to set collection rules carefully and verify that all information is collected in compliance with those rules.

Organizing and Publishing Data. After the data is collected, benchmarking organizations provide the valuable services of compiling, analyzing, and publishing it. Many benchmarking organizations use a shared services model, giving each member jurisdiction access to the same technology system for entering and displaying performance data. The same system then allows managers to access and review the data.

Bringing People Together to Share Ideas. In addition to reviewing the data, many benchmarking organizations hold annual conferences, training sessions, and other events that allow members from different organizations to network, share practices, and collaborate on solutions to common problems identified by the data. Sharing information is essential for successful benchmarking. Information sharing is more common in the public sector than in the private sector, and where services are similar, governments should pursue every opportunity to learn from the experiences of peer governments.

At the local government level, a few organizations have been successful at using comparative benchmarking to spur the sharing of practices, strategies, and ultimately ideas. The ICMA Center for Performance Measurement and the North Carolina Benchmarking Project are two of the best-known examples. Other successful groups include the Florida Benchmarking Consortium, the Ontario Municipal CAO's Benchmarking Initiative, the South Carolina Benchmarking Project, and the Southeastern Results Network. In fact, in a recent GFOA performance management survey, 31 percent of jurisdictions that said they use performance management indicated that they collect a common set of measures across the organization and share data with other organizations. (1) Still other organizations provide comparative data on select services, such as the Federal Bureau of Investigation's uniform crime statistics. However, the State Comparative Performance Measurement Project--a new initiative by The Council of State Governments (CSG) in partnership with the Urban Institute and the Alfred P. Sloan Foundation--represents the first comprehensive effort to compare state service outcomes across multiple service areas. (2)


The CSG recently released the first in what will be a series of annual reports providing comparative outcome data for key state services. The initial reports cover transportation, public assistance, and child welfare, but the scope will be expanded to cover additional services. Data for the reports was provided voluntarily by state governments and collected from federal agencies. The goal of the project is to give each state a way to compare its outcomes to those of other states, using information that can be considered reasonably comparable across states. Exhibit 1 shows an example from the transportation report. (3) While the most visible benefit of participating in a comparative benchmarking project like this one is the ability to assess an organization's performance against that of its peers, the most important benefit is the ability to use the information to start a discussion on improvement techniques.

By making this information available, the CSG has a number of aims. The reports are intended to help managers better respond to demands for accountability, support budget decisions, motivate employees, communicate better with the public, and, ultimately, encourage improvement of services. The data should trigger in-depth examinations of performance problems and corrective actions. And while ranking is unavoidable with benchmarking initiatives, it is important to remember that rankings are not always as straightforward as they might appear: in the public sector, organizations can have different priorities or face different external conditions.


The CSG notes that there are a number of caveats to be observed when interpreting performance information. (4) Any comparison of data should take into consideration the following points:

* Benchmarking comparisons help provide context for communicating performance measures, but they do not, by themselves, provide an indication of service quality relative to other organizations. Therefore, if possible, the ranking of governments should be avoided. Data should be shared so that managers can more easily identify issues and then work with peer organizations to develop effective solutions, not to determine which government is best.

* Benchmarking provides a way to understand organizational performance by comparing one jurisdiction's data to that of other organizations that have similar missions, scope, and responsibilities. Not all organizations strive to achieve the same results.

* The outcome information does not indicate why any state's outcomes are higher or lower than those of any other state. Before drawing conclusions about performance, users should seek more information and explanation, especially for particularly high or low performance levels. Each organization likely has somewhat different definitions and data collection procedures, which can account for a number of differences between states. This is true regardless of collection method.

* Because there are likely many other variables that go into the measurement outcome beyond the services an organization provides, comparison data should not be interpreted as a ranking of service quality.

* Because organizations are not able to measure everything, comparison data probably does not include all possible outcomes--intended and unintended--that might occur from particular services.


Despite the limitations, many governments have had successful experiences with comparative benchmarking. Both the data and idea exchange made possible by these collaborative efforts are useful to the organizations involved, which is why more and more governments are getting involved. New efforts such as the CSG's State Comparative Performance Measurement Project provide a further example of how managers can use comparative information to better understand performance results and to drive improvements in the services that governments offer.

Definition: The International Roughness Index is a measure required by the Federal Highway Administration and based on a sample of road segments. The index ranges from 0 to 220, with lower values denoting better road condition. Scores of less than 95 are deemed good; 95-170, fair; and more than 170, poor.

Comments: The International Roughness Index is commonly used as the most complete measure of road condition, but it has some limitations. The accuracy of measurements can be affected by the type of equipment used to obtain the rating, by variations in how the measurement techniques are applied from state to state, and by the conditions under which they are applied. The index values can also conflict with the ride quality perceived by vehicle occupants. Road condition itself is affected by many factors, including weather, traffic volume and vehicle type, the presence or absence of an effective preventive maintenance program, and population density.
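The thresholds in the definition above can be sketched as a simple classifier. This is an illustration only: the function name and sample values are invented, not taken from the CSG report, which uses its own sampling methodology.

```python
def classify_iri(iri: float) -> str:
    """Classify a road segment's International Roughness Index score
    using the thresholds stated in the definition above:
    under 95 is good, 95-170 is fair, over 170 is poor."""
    if iri < 95:
        return "good"
    elif iri <= 170:
        return "fair"
    return "poor"

# The measure reported in Exhibit 1 is the share of sampled segments
# rated fair or good. For a hypothetical sample of six IRI scores:
sample = [62, 88, 101, 143, 175, 210]
fair_or_good = sum(classify_iri(s) != "poor" for s in sample) / len(sample)
print(f"{fair_or_good:.0%}")  # 4 of 6 segments -> 67%
```

Note that the boundary values fall on the "fair" side under this reading of the definition: a score of exactly 95 or exactly 170 is fair, consistent with "less than 95" being good and "more than 170" being poor.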


(1.) Research for the GFOA State and Local Government Performance Management Project, conducted between 2006 and 2008.

(2.) For more information on the State Comparative Performance Project, visit

(3.) State Comparative Performance Measurement: Transportation, a national report from the Council of State Governments, 2009, is available at CPM_Transportation.pdf.

(4.) Adapted from State Comparative Performance Measurement Overview, a national report from the Council of State Governments, 2009. The overview is available at programs/policyprograms/documents/CPM_Overview.pdf.

MICHAEL J. MUCHA is a senior consultant/analyst in the GFOA's Research and Consulting Center in Chicago, Illinois.
Exhibit 1: Comparative Performance Measurement Project Transportation Data

Percentage of roads in fair or good condition (rather than poor), using the International Roughness Index.

                     % Fair or Good   (% State-Owned)

Georgia                  99.88%          (15.1%)
Alabama                  99.29%          (11.2%)
Nevada                   99.15%          (15.9%)
Montana                  99.04%          (14.7%)
Florida                  98.84%           (9.9%)
Kentucky                 98.33%          (35.1%)
South Carolina           97.53%          (62.5%)
Idaho                    97.43%          (10.2%)
Tennessee                97.22%          (15.2%)
Kansas                   97.05%           (7.4%)
Wyoming                  96.71%          (24.1%)
North Dakota             96.57%           (8.5%)
Oregon                   96.50%          (12.6%)
Virginia                 96.27%          (79.4%)
North Carolina           96.27%          (75.9%)
Utah                     96.12%          (13.2%)
West Virginia            96.04%          (89.4%)
Arizona                  95.09%          (11.2%)
New Mexico               94.99%          (17.5%)
Minnesota                94.96%           (8.6%)
Missouri                 94.19%          (26.1%)
Arkansas                 94.02%          (16.5%)
Nebraska                 93.98%          (10.7%)
Delaware                 93.81%          (85.1%)
Wisconsin                93.43%          (10.3%)
Indiana                  93.25%          (11.7%)
New Hampshire            93.11%          (25.2%)
Colorado                 92.84%          (10.3%)
Texas                    92.63%          (26.0%)
Ohio                     91.87%          (15.4%)
Maine                    91.58%          (37.4%)
Washington               90.87%           (8.4%)
Pennsylvania             90.86%          (32.8%)
Oklahoma                 88.33%          (10.9%)
Iowa                     88.18%           (7.8%)
Michigan                 87.66%           (8.0%)
Vermont                  86.53%          (18.3%)
Maryland                 85.67%          (16.5%)
Illinois                 85.63%          (11.5%)
Mississippi              85.03%          (14.7%)
South Dakota             84.97%           (9.4%)
Connecticut              84.74%          (17.5%)
New York                 84.73%          (13.2%)
Louisiana                84.13%          (27.3%)
Alaska                   82.05%          (39.1%)
Hawaii                   78.98%          (21.6%)
Massachusetts            75.96%           (7.9%)
California               75.27%           (8.9%)
New Jersey               69.85%           (6.0%)
Rhode Island             66.14%          (17.0%)

Values in parentheses indicate percent of roads owned
by the state.

IRI Data Source: Federal Highway Administration,
Highway Statistics 2007.

Note: Table made from bar graph.
COPYRIGHT 2010 Government Finance Officers Association

Article Details
Title Annotation: PM² Connections: Performance Measurement & Management
Author: Mucha, Michael J.
Publication: Government Finance Review
Geographic Code: 1USA
Date: Feb 1, 2010
