
Perception or reality: can we trust federal statistics?

COMPLAINTS ABOUT government statistics have become a part of our everyday lives. It has become quite fashionable, even for those who use data in daily business activities, to criticize government statistical series. The critics often have good reason to question the data. They wonder whether survey methods are up to date, whether the information collected is accurate and relevant, and whether the data are released in a timely and objective manner. And they have more specific criticisms too. For example, some people insist that the Consumer Price Index does not adequately adjust for quality change and, therefore, overstates the rate of inflation. Others question the employment numbers. When the Bureau of Labor Statistics revised data last year to show fewer payroll jobs than had previously been thought, many asserted that the series was seriously flawed. And still others, pointing to the undercount in the population census, question whether any data produced by the government can be trusted.

Some of the criticisms of government statistics are valid, but many are not. Our statistical system has been starved for research funds for over a decade now, and it has been difficult to keep up with all of the fast-changing concepts and methods. It is hard to define economic and social concepts on which data are collected when the country itself cannot seem to agree on definitions of many public policy issues. These are problems, but they are not the fault of the statistical system.

Some of the criticisms are of a different kind, however. The critics say that the statistical system resists change so that at times the data that are provided by the system are no longer relevant to the issue at hand. They think that some government statistical series must have many errors because they seem to be revised so frequently. And many of the critics express concern that the political officials of the government, who would like data interpreted to their own policy advantage, pressure the agencies to prepare releases of data to further their political objectives.


The statistical system did have problems with political interference in the early 1970s. The issue became public when the press briefings customarily held by the Bureau of Labor Statistics were canceled, and several of the BLS senior staff were moved to different positions. The changes were made at the BLS in the office responsible for the employment release. The difficulties had come to a head when, at a press conference, a BLS official described a slight drop in the unemployment rate as "insignificant" at a time when the Secretary of Labor was informing the press that the change was "very significant." This was a period of rising unemployment and inflation, and the White House staff wanted BLS to place more emphasis in its releases on the number of people who were working. The BLS staff continued to prepare its releases as it had in the past, however. Nevertheless, the Secretary of Labor himself began to attend the internal briefings before data were released.(1)

Upset at the cancellation of their briefings, the media argued that the data were being politicized. Congress held hearings, and the professional community established a committee to review the charges. Senator Proxmire, then chairman of the Joint Economic Committee (JEC) of Congress, insisted that the BLS Commissioner appear before the JEC to explain the data each month. These hearings have generally taken place on the first Friday of each month ever since.

In the early days of the Ford Administration, shortly after the resignation of President Nixon, the Council of Economic Advisers (CEA) considered how best to restore confidence in the objectivity of the government's release procedure. The driving force for this initiative was Sidney Jones. The solution was contained in an OMB Directive (Circular A-91) that prescribed new rules for release of the major economic indicators; with only minor additions, the procedure has remained the same for some twenty years. Once compiled, the data are provided by the statistical agency, on the afternoon before release, only to the CEA so that the President can be informed. No one else, not even the Secretary of the Department in which the data were compiled, receives the data from the statistical agency until the next morning, when they are provided to the press and to the public. In addition, no political policy official may comment on the data until they have been in the public domain for at least one hour.

This system for the major economic indicators has worked well, rarely having been broken even during the confusion of changes in administration. The specific review procedures vary among the agencies, but the system provides users with assurance that data releases are objective and free from policy influence. The hearings before the JEC have become a tradition of high quality discussion. Because the hearings are often televised, the general public often has the opportunity to listen to the objective testimony of the BLS personnel and then listen as well to the arguments of Committee members on both sides of the aisle.

This system established nearly twenty years ago has served to eliminate one area of criticism of government statistics. In spite of these arrangements, however, criticisms of federal statistics continue, and the ensuing lack of trust is becoming a very real problem for the entire federal statistical system. One can easily respond to the "resistance to change" issue mentioned above. Statistical agencies do have a bias against change, and it is important for those in the agency to fight that bias. Change is necessary to modernize survey design and to represent economic and social developments. But for time-series data, a trade-off between improvement and consistency over time must always be made.(2) And those who complain about revisions to statistical series frequently forget that data are sometimes issued in preliminary form (before all survey responses have been received) as a service to users who want an early reading on new trends.

All of these are important points, but the remainder of this article will focus on attacks in three important areas. The arguments run as follows: 1) the data collected do not adequately represent the shifts that have occurred in the economy, especially the increasing importance of services rather than goods; 2) concepts and definitions may not be relevant to the conditions the data purport to represent; and 3) survey designs and collection procedures may not be of high enough quality to produce data that are accurate enough for public policy use.


Many allege that the U.S. data system is skewed toward goods like iron ore and steel bars and that it does not adequately cover the fast-growing service sector. The system has failed to keep up with the extraordinary pace of economic change that we have experienced over recent decades. The major government survey programs cover the goods-producing sector much more comprehensively than they cover service industries. The problem has been called the "statistical-lag syndrome" (RSS News, March 1993), and those who work in the statistical agencies bear some of the responsibility for this situation. But services are not finite objects that can be picked up and counted easily, and the business establishments that produce services tend to be much smaller than those that produce goods. One can survey a small number of locations to get coverage of the automobile industry, but to have the same reliability for the eating and drinking industry would take thousands of respondents. In addition, as The Economist put it, "Unlike things you can drop on your toe, a unit of output in services is tricky to define and difficult to measure; and it is even harder to take account of its improving quality."
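The arithmetic of sampling makes this point concrete. The sketch below is a hypothetical back-of-the-envelope calculation (the establishment counts, size variability, and precision target are invented for illustration; they are not BLS survey parameters): it computes how many establishments a simple random sample would need in order to estimate an industry total to a given relative standard error.

```python
import math

def srs_sample_size(cv, target_rse, N):
    """Establishments to sample so that the estimated industry total has
    relative standard error <= target_rse under simple random sampling.
    cv is the coefficient of variation of establishment size; the
    finite-population correction keeps the answer at or below N."""
    n0 = (cv / target_rse) ** 2           # required size ignoring the fpc
    return math.ceil(n0 / (1 + n0 / N))   # apply the fpc

# Invented illustrative figures: a concentrated goods industry with a few
# hundred similarly sized plants vs. a fragmented service industry with
# hundreds of thousands of establishments of widely varying size.
autos  = srs_sample_size(cv=0.5, target_rse=0.02, N=300)       # → 203
eating = srs_sample_size(cv=2.0, target_rse=0.02, N=400_000)   # → 9757
```

Under these assumed figures, the same two-percent precision that a few hundred automobile plants can deliver requires many thousands of eating and drinking establishments -- the "thousands of respondents" described above.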

Some progress has been made in working out these sticky problems, but the resources needed to do the job well have not been made available, and the pace of modernization in this area is far too slow. An important part of the budget initiative proposed by CEA Chairman Michael Boskin was designed to speed the transition toward production of more detailed statistics on the service-producing sector, but only a small portion of the proposed funds has been made available.


Although most would agree that it is necessary to define what is to be collected before setting out to do a survey, disagreement frequently arises over the broad concepts to be represented in some government surveys. This problem is especially acute in the collection of social and economic data. It has become a central focus of criticism of the official definition of unemployment, which counts as unemployed many people without jobs who may not be suffering economic hardship and yet takes no account of the working poor. Disagreement arose over the treatment of antipollution devices in the BLS price indexes during the 1970s (Norwood 1990). And the controversy over the measurement of the price of owned homes in the consumer price index (CPI) centered on issues relating to the underlying concept of the CPI market basket. The CPI, which had included the investment aspects of homeownership, was changed to a cost-of-shelter concept, one more consistent with a market basket of goods and services purchased for consumption (BLS 1981).

These are all difficult issues. But they have become much harder in recent years because the American public has found it more difficult to reach consensus on many issues that involve value judgments. The federal statistical system does its best to stay out of public policy determination, but, all too often, the job of settling the concepts and definitions for the data to be collected has fallen to the statistical agencies. This is so even though these choices often shape how people think about a particular issue (Innes 1990). Under these circumstances, the federal statistical system needs to pay much more attention than it has in the past to research on the underlying conceptual issues in the formulation of its surveys; much more work is needed in this area.


The state of the statistical art keeps changing, and statistical agencies cannot stand still: an agency must either move forward or fall behind. But it is difficult, in a period of budget retrenchment, to adopt new techniques and make use of new technology. Careful testing must be done before changes can be made in important government surveys, and testing takes time and costs money. The fact that U.S. government rules often result in considerable delay in the acquisition of new computer hardware and software also contributes to the difficulty of bringing new, advanced computer-assisted techniques into government survey collection and compilation.

But state of the art techniques involve more than the manner in which survey data are collected and compiled. Sometimes the problem can be traced to inadequacies in the sampling frame itself -- the list that is used as the universe from which the survey sample is drawn. One example of this kind of problem can be seen quite dramatically in last year's revision to the Bureau of Labor Statistics survey of employer payrolls. The BLS found, upon reviewing the actual Unemployment Insurance Tax records that form the overall universe of the 6 million or so business establishments sampled for the payroll survey, that at the very beginning of 1991, it had overestimated job growth by about 640,000. Although the correction, when issued, amounted to only 0.6 percent of total payroll employment, it happened at a time when the numbers were intensely scrutinized because of the economic downturn and the need for the Federal Reserve Board to determine what action, if any, to take.
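The two figures quoted above are mutually consistent, as a quick, hypothetical sanity check shows: a 640,000-job correction that amounts to 0.6 percent implies a payroll employment total of roughly 107 million jobs, in line with U.S. nonfarm payrolls of the period.

```python
# Back-of-the-envelope check using only the figures quoted in the text.
revision = 640_000     # jobs the survey estimates overstated
share = 0.006          # "0.6 percent of total payroll employment"
implied_total = revision / share
print(f"{implied_total / 1e6:.1f} million jobs")  # → 106.7 million jobs
```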

In June 1992, BLS released its regular annual benchmark that compared the numbers from the sample survey of business establishments with the universe, the total of all establishments on the Unemployment Insurance Tax lists. This review showed that in one month -- from December 1990 to January 1991 -- the survey estimates were 640,000 higher than the universe. BLS mounted an intensive research program to find out what had happened and to take steps to prevent recurrence of the problem. The research, which has since been reviewed by an independent panel established under the auspices of the American Statistical Association, found that the revision was caused by a single month's noneconomic change in the universe listing resulting from a BLS program to improve the tax record universe. The problem had been exacerbated by the increase in company use of payroll processing firms. These firms had developed new software to eliminate overcounting from the tax reports and introduced it in January 1991 (ASA panel 1993).

This kind of problem is not restricted to the business and commercial sector; it happens with household lists as well. The list of households collected every ten years in the country's population census, along with the characteristics of the people residing in them, is used to select sample respondents for such important series as the labor force, health, and crime surveys. Population data from the Census also form the basis for expanding the sample estimates to cover the entire population. The fact that some groups were undercounted in the Census -- and that the undercount affected minority groups more than the rest of the population -- is important to the long-run accuracy of the household survey estimates.(3)

How good are government survey data? Quite good. Can they be made better? Of course they can. But in evaluating the criticisms that are made, we must remember that no set of data is perfect. The problem is that we use data as though they were. Embedded in our laws are many requirements to use exact numbers to determine issues of public policy. We use government statistics to escalate income, to allocate funds to states and localities, to adjust income brackets in calculating the income tax we owe, and to trigger government programs on or off. And in each case, point estimates are used. But every survey figure has a band of uncertainty surrounding it.
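That band of uncertainty can be made concrete with a textbook formula. The sketch below is a deliberately simplified, hypothetical illustration -- it treats the estimate as a proportion from a simple random sample, whereas real surveys such as the CPS use complex designs with their own variance estimation -- but it shows why a small movement in a point estimate may not be statistically meaningful.

```python
import math

def margin_of_error(p, n, z=1.645):
    """Approximate 90% margin of error for a proportion p estimated from
    a simple random sample of n people (textbook formula; real survey
    designs require more elaborate variance estimation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: a 7.0 percent unemployment rate estimated from a sample
# of 100,000 labor-force participants.
p, n = 0.07, 100_000
moe = margin_of_error(p, n)
print(f"+/- {100 * moe:.2f} percentage points")  # → +/- 0.13 percentage points
```

Under these assumptions, a month-to-month change of a tenth of a percentage point lies inside the uncertainty band -- exactly the kind of "insignificant" movement at issue in the episode recounted earlier.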

In fact, the very science of statistics is a science of uncertainty. What statisticians try to do is to measure, to reduce, and to explain the band of uncertainty that surrounds each estimate so that users will have the information necessary to interpret data efficiently. The professional staffs in the federal statistical system work hard at this task. But the environment in which they work does not always permit them to bring about the quality improvements that they would like to have. Support for the system, and for its budget requests, is often hard to come by. The country must learn to insist on statistical excellence and to support the resources that are needed to bring it about.

How close is the perception to the reality? Can we trust federal government statistics? On the whole, the answer is yes. In fact, given the constraints within which they operate, those who compile government statistics in this country do quite a good job. But statistics is a constantly changing field, and the data produced can always be improved. Our perception of the problem should be clear. The reality is that we need a comprehensive data base of high quality if we are to move forward to solve the public policy problems that face us.


1 The author, then Chief of the Consumer Price Index Division of the BLS, participated in these meetings. Never before had the Secretary -- or any other political official -- been involved in review of the CPI release before it was issued. The BLS staff held its ground in these meetings, but it is important to note that they took place.

2 When, during the Carter Administration, the BLS included the effect of the new model automobile discounts in the Producer Price Index, part of the criticism of that action came from the forecasting industry, which had been using the index time series.

3 Questions relating to the possibility of adjusting intercensal estimates used in the household surveys to take account of the undercount are currently being considered by the agencies in the federal statistical system that sponsor household surveys.


American Statistical Association, Panel to the Bureau of Labor Statistics, March 30, 1993, unpublished document.

Bureau of Labor Statistics, "Statement of Dr. Janet L. Norwood, Commissioner of Labor Statistics, Regarding Changes in the Consumer Price Index," Press Release USDL 81-506, October 27, 1981.

Bureau of Labor Statistics, Office of Employment and Unemployment Statistics, "Research Results: March 1991 Benchmark Revisions to Payroll Survey Employment Estimates," January 8, 1993, unpublished.

Innes, Judith Eleanor, Knowledge and Public Policy, 2nd edition, Transaction Publishers, 1990.

Norwood, Janet L., "Distinguished Lecture on Economics in Government: Data Quality and Public Policy," Journal of Economic Perspectives, vol. 4, no. 2, Spring 1990.

Royal Statistical Society, RSS News, vol. 20, no. 3, March 1993, quoting parts of an article from The Economist, December 26, 1992 - January 8, 1993.

Janet L. Norwood is a Senior Fellow at the Urban Institute, Washington, DC. From 1979 to 1991, she was U.S. Commissioner of Labor Statistics.
COPYRIGHT 1993 The National Association for Business Economists

Article Details
Author: Norwood, Janet L.
Publication: Business Economics
Date: Jul 1, 1993