
A critique of "Reinventing Government in the American States: Measuring and Explaining Administrative Reform". (A Difference of Opinion).

One of the least welcome tasks in the scholarly business is to criticize publicly the work of colleagues. The work in question is Jeffrey L. Brudney, F. Ted Hebert, and Deil S. Wright's (hereafter "the authors") article, "Reinventing Government in the American States: Measuring and Explaining Administrative Reform" (Brudney, Hebert, and Wright 1999, 19-30). The task is even more unpleasant when the article in question is not only multiauthored but has also received an award from the field's premier journal (PAR 2000, 292). It is surprising, however, that not one of the authors, editors, reviewers, or awarders noticed that the article is fundamentally flawed--in its data, methodology, and design. Commentary proceeds in that order, followed by a proposed exploratory hypothesis that sheds different light on the subject.

Reinvention of Government as Antagonist

But, first, it is all too easy to see why the article was so well received, despite its being too good to be true. At last, here was living proof of a triumphant victory over reinvention. Osborne and Gaebler (1992) authored the most widely read work on reinvention. Ever since, but especially during the past few years, Public Administration Review has published many articles about reinvention--nearly all unfavorable (Moe 1994; Terry 1998; deLeon and Denhardt 2000). Some of the articles are replete with intemperate language that characterizes them as editorials, but, then again, they were simply preaching to the already converted. The "principal conclusion" reached by the authors merely reconfirmed this overt antagonism: "... a concerted reinvention movement does not appear to be underway across the states" (19). (1)

The authors identify the survey as the "American State Administrators Project," but they say little about its objectives. In 1994-95, the authors mailed questionnaires to 3,365 agency heads or directors of 93 agencies in the 50 states. The stated--but soon to be challenged--overall return rate was 37 percent, or 1,229 respondents.

Questionable Respondent Return Data Rates

The authors make egregious errors in data reporting and analysis, ranging from inconsistency in usage to unevenness in the distribution of respondents. First, the inconsistencies: A major disagreement exists between the number of respondents the authors assert and the figures they actually report. The authors claim to have 1,229 respondents, yet the state subtotals add up to only 1,135 (24). The latter is 34 percent, not 37 percent.

In displaying these data, it is strange that the authors neglect to give a total number of respondents in their accompanying table, as is customary (that is, reporting an overall N). The authors offer no reason why they are remiss in reporting the total number of respondents or, apparently, why they discarded 94 respondents. Even assuming the authors can explain away this disparity over the return rate, their main conclusion becomes tainted if the lower figure is accurate. In particular, the authors employ two statistics--correlation and percentage--both of which would be influenced by a change in denominators. This commentary uses the N=1,135 figure in its calculations.
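The arithmetic behind this discrepancy is easy to verify; the sketch below uses only the figures reported in the article (3,365 surveys mailed, 1,229 claimed respondents, and a 1,135 sum of the per-state counts in table 1):

```python
# Return-rate check for the disputed respondent counts.
mailed = 3365     # questionnaires mailed to agency heads (article, p. 22)
claimed = 1229    # respondents the authors claim
subtotal = 1135   # sum of the per-state respondent counts (table 1)

claimed_rate = round(100 * claimed / mailed)    # the authors' stated 37 percent
subtotal_rate = round(100 * subtotal / mailed)  # the 34 percent implied by table 1
discarded = claimed - subtotal                  # the 94 unexplained respondents

print(claimed_rate, subtotal_rate, discarded)   # 37 34 94
```

Either rate is defensible only if the authors explain which count is correct; the three-point gap is exactly the 94 respondents the table silently drops.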

Large, Populous States Sorely Under-Represented in the Data

The second data problem is unevenness in the reported respondents--a disproportionate number of them came from smaller, less populous states. Indeed, a scant number of replies turned up from the most populated states. None of the five largest states had 30 or more agency chiefs return their surveys, yet, of the 11 states that did, only one is populous (see table 1). Combining the two largest states, California and Texas, yielded only 27 respondents, 12 and 15, respectively. The third-largest state, New York, had the least, with eight returned surveys. The next two largest states fare a little better, with Florida at 19 and Pennsylvania at 25. Total respondents from the five most populated states merely amounted to 79, or 6 percent. Statistically speaking, the study says absolutely nothing about California, Texas, and New York. If anything, the conclusion reached by the authors has relevance to small and not large states. Here's why.

Ten of the 11 less populous states contributed 30 or more surveys. They included Montana (38); Wisconsin (34); West Virginia (32); Maryland (32); Colorado (31); Missouri (31); Alaska (31); North Carolina (30); Mississippi (30); and Wyoming (30). Among states with higher populations, agency chiefs in Michigan alone returned 30 surveys (it ranks eighth, as table 1 shows). The subtotal for these 11 states is N=349, or 31 percent of respondents--five times the combined total of the five largest states. The anomalous nature of this situation should have leaped off the printouts. Moreover, in addition to the 11 high-return states, respondents from each of 22 other states equaled or exceeded the combined total of California and New York (N=20). These 22 mostly less populous states represented N=523, or 46 percent of the respondents. Taken together, the respondents from these 33 mainly smaller states--each with 20 or more agency chiefs returning surveys (N=872)--equaled 77 percent of the total.

Table 1 further clarifies this unevenness in the reported respondent data from larger and smaller states by reconfiguring the authors' principal table (1999, 24). The reconfigured table shows two rank orders. One order is by the number of respondents, and the other indicates the actual population rank based on the 2000 census--positions changed only slightly from 1990. A comparison between the two sets of rankings reveals an inversion between the distribution of respondents and state populations. Comparing the number of respondents from the 40 highest-returning states with the 10 lowest-respondent states clarifies the situation. That is, in addition to the 77 percent (33 states) with 20 or more respondents, the next seven states--with 17 to 19 respondents--added 11 percent, or N=128. Together, the total of these 40 states equals 88 percent of the respondents.

By contrast, the remaining 12 percent of total respondents came from 10 states returning 16 or fewer surveys. Of course, this situation would not be as problematic except, as already disclosed, this 12 percent contains the three largest states. It also includes the fifth and ninth most populated states, Illinois and New Jersey, respectively. In aggregate terms, these five states total about 34 percent of the nation's population--a figure nearly equivalent to respondents from the 11 high-return, lesser-populated states--a true inversion (U.S. Department of Commerce 2000).

The authors easily could have avoided these data problems by specifying a range for the return rate, that is, from X to Y percent. More forthrightly, the authors might have noted that state returns would run from a low in New York of less than X percent (probably 10 percent) to a high in Montana of more than Y percent (probably 50 percent). To give their discussion integrity when the differences are so large, some statistic of dispersion is necessary--otherwise, the unevenness between under-represented larger states and over-represented smaller ones remains unexplained. Failure to present a clearer picture of the imbalance is misleading.
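The dispersion statistic called for here can be computed directly from the respondent counts in table 1. (Per-state mailing totals are unreported, so return-rate percentages remain guesses; only the raw counts below come from the article.)

```python
import statistics

# Respondent counts for all 50 states, in table 1 order (Montana ... New York).
counts = [38, 34, 32, 32, 31, 31, 31, 30, 30, 30, 30,
          29, 29, 28, 28, 27, 27, 25, 25, 24, 24, 23, 23, 23,
          22, 22, 22, 21, 21, 20, 20, 20, 20,
          19, 19, 19, 18, 18, 18, 17,
          16, 16, 15, 15, 15, 15, 13, 12, 10, 8]

print(sum(counts))                          # 1135, the table subtotal
print(min(counts), max(counts))             # 8 (New York) to 38 (Montana)
print(round(statistics.mean(counts), 1))    # 22.7 respondents per state
print(round(statistics.pstdev(counts), 1))  # the spread the authors never report
```

Even this one-line summary would have disclosed that New York sits nearly five-fold below Montana, which is the imbalance the article leaves unexplained.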

Anonymity and Confidentiality Methodology Issues

The first question about methodology involves two issues--the anonymity and confidentiality of respondents. Routinely, and preferably, the identities of survey respondents are kept strictly private--that is, anonymous. Generally, this guarantee ensures greater reliability of responses. Government managers--especially in the more highly politicized populous states--would be realistically sensitive if their identities were known as part of responding to a mailed survey. In this study, anonymity did not exist. The authors had to know the identities of both respondents and nonrespondents, as they conducted two follow-up telephone inquiries of nonrespondents, though respondents may have been given a promise of confidentiality.

The two follow-up telephone calls made to nonrespondents might have appeared to be a form of intimidation for not replying. The first set of calls to nonrespondents (N=110) sought to determine whether they differed from respondents across what the authors termed "five personal attributes" (22). Of course, the authors found the two groups did not differ significantly (p < .10). The second set of phone calls to nonrespondent agency chiefs (N=35) questioned them about the influence of governors and legislatures on their attitudes toward reinvention. Again, no difference (p < .10) from respondents appears. What is wrong with this picture?

For one, the authors should not have known the identities of their working sample, particularly the nonrespondents. Unfortunately, they do not divulge how they learned those identities, whether openly or secretly. The latter is worse than the former, but both are questionable ethically and empirically. Confidentiality, unlike anonymity, offers much less assurance of the credibility of replies: in either the written replies or the telephone calls, agency chiefs easily could have telegraphed what the authors wanted to learn.

What is also problematic with the methodology is that, statistically speaking, even if the nonrespondents called had been randomly selected--which the authors do not tell us--the results are still spurious. Comparing groups of such unequal size--the respondent group is roughly 10 times larger than one nonrespondent sample and more than 30 times larger than the other--will nearly always yield nonsignificant results, because such small samples lack the statistical power to detect differences. Randomization does not matter a whit here; the authors could have chosen to telephone nonrespondents alphabetically and the results would have been the same. In any case, no explanation is given as to how or why the authors found out who did or did not reply.

Unexplained Derivation of Sample Design and Distribution of Mailings

Similarly, the authors neglect to communicate how the 3,365 agency chiefs were selected. This point is important, both as a matter of design and of interpretation of the data. The authors tell us (22) that the breakdown was by seven types of agencies, as follows: natural resources and transportation (26.0 percent); human services (22.3 percent); regulatory (12.2 percent); fiscal and nonfiscal staff (12.1 percent); economic development (8.2 percent); criminal justice (8.0 percent); and other agencies (11.3 percent). How were these percentages determined? Are these proportions representative of generally accepted state budget outlays or of some proportional staffing basis? Even assuming that all states allocate budget funds along the same lines, human services--not natural resources and transportation--receive the greatest share of state budgets after education programs. The authors also fail to specify whether the percentages of returned (or unreturned) surveys, by agency type, fall in line with the percentages mailed.

Another troubling issue regarding those seven agency types involves the criteria used to apportion them to the states. No explanation is given of the methods used to determine the proportion of surveys mailed to each state or, for that matter, of the proportions received from each state. What is now obvious is that the wide discrepancies between respondents from larger and smaller states would have been compounded in a distribution analysis across those seven agency types. A rendering of that data by the authors would have been helpful. Interpreting the data judiciously requires knowing the proportion of respondents by agency type, either in the aggregate or, better, disaggregated by state. It is significant that the authors report a "large agency effect" (27-28), that is, larger agencies "are more likely to have implemented reinvention reforms" (28). A similar effect might have appeared if the authors had cut the data by state respondent return rates. But then the findings would have unraveled.

Equally disturbing is the interpretation of the authors' extensive findings based on a multiple regression analysis relating well-known variables typifying state agencies. First, the authors concede the regression results "may seem modest" in that they account for only 18 percent ([R.sup.2]) of the variation in the scope of states' efforts to implement reinvention (27). Yet they do not clarify why far fewer respondents (N=853) are included in this analysis. This lower figure amounts to a 25 percent return rate--not the 37 percent or 34 percent discussed earlier. Second, since the five most populous states returned only 6 percent of the surveys, their participation in the regression analysis was likely diminished further still; New York was probably all but eliminated, with California and Texas running close behind. In sum, the reconfigured respondent totals reinforce the proposition that the study applies, at best, to the less populous states.

An Exploratory Hypothesis Proposed

This critique of problems in the data, methodology, and design raises questions about the tenability of the authors' main conclusion--that no evidence exists of a concerted effort by the states to implement reinvention. One would expect their findings from 1994-95 to foretell the demise of reinvention; it might, therefore, have been anticipated that by 2000, states would no longer express any interest in supporting reinvention projects. A little reality testing by the authors might have proven useful, so it is appropriate to do some exploratory research on the current situation. The authors maintain that state governors do not much influence their agency chiefs' decisions to undertake reinvention initiatives. This finding is worth further investigation as a hypothesis about the present.

A worst-case scenario can be hypothesized: Are the governors of the states whose agency chiefs the authors declare "least adopting" of reinvention still beating this supposedly dead horse? (The null hypothesis is that no differences prevail--no dimensions of reinvention will be found among these governors.) The three states with the lowest reinvention adoption scores are Idaho, New Mexico, and Alabama (24). (2) Examining the current Internet home pages for these states makes for a suggestive, but preliminary, test of this hypothesis.

Examining the Three "Least-Adopting" States

A search of Alabama's site (http://www.state.al.us/ [January 12, 2001]) for the word "reinvention" returns two hits. Clicking on "something similar" for one of these hits brings up 1,698 more documents, most of which have to do with the action plans and strategies of various agencies. Hints of reinvention linger here and there. As for what this state's governor wants people to know he is up to: Is he engaged in any reinventing these days? One of the governor's proudest moments on his Web site is the new "Report Card," issued to parents every nine weeks beginning in 1999, which documents (and compares) various relevant measures. A news release found on the governor's page, issued by the education commissioner, says: "... parents know little about the success or failure of their child's school." Is this reinvention's familiar "management by results"? It sounds that way.

What's happening with Idaho's governor? Unfortunately for reinvention's detractors, and dear to the hearts of its supporters, reinvention appears in the Idaho governor's year 2000 State of the State address (http://www.state.id.us/ [January 12, 2001]), in which he spells out how Idaho uses bonuses to reward public school teachers. In the program's second year, 90 teachers who passed the National Board Certification test are receiving a bonus of $2,000 a year for five years. In addition, Idaho's Innovative Grants program awards up to $500 for schools to enhance learning; nearly 200 schools have received such awards. The governor also proposed another awards program: "... to encourage our schools to break the molds ... to change the way schools do business in curriculums, in hours, in rules, and in other creative ways." The proposal is reminiscent of a more recent reinvention book that discusses ways to banish bureaucracy (Osborne and Plastrik 1997). (3)

Is New Mexico any different? To continue with education, its governor is most pleased to announce his proposal to create student school vouchers (http://www.state.nm.us/ [January 12, 2001]). He had proposed it previously, but in 2001 he increased the value of the voucher to $5,200, up from $3,200. In the governor's own words: "Accountability continues to be the cornerstone of education reform.... We will do all we can to see that the investment we are making actually educates our children and doesn't feed a huge bureaucracy with little or no results. I believe that we can ensure accountability by instilling competition into the system by offering every student a school voucher." (Twenty percent of the voucher returns to the school district if parents opt for a private school.) It would be hard for any public servant--let alone an agency chief--not to know the governor is unmistakably talking and doing the reinvention routine.

New Mexico's governor also articulates this language in dealing with public personnel issues. In his 2000 address, he says: "For the past five years [italics added], the State Personnel Office has made steady progress in this effort (`to bring greater economy and efficiency to the management of state affairs') by simplifying its rules and regulations; changing to recruitment by real-time vacancy; pushing decision making down to the agency level; and instituting pay-for-performance as the reward mechanism." The governor goes on to discuss a "comprehensive redesign of the classified employment system." Presumably, having been around at least since 1995, some of these results-oriented reinvention practices would have affected every state agency. In other words, agency chiefs had to have heard about them around the time they were completing their surveys. (4)

With different degrees of forcefulness, these three least-adopting states have governors who clearly seek to deploy distinct dimensions of reinvention. Currently, reinvention is far from gone in these worst-case states.

Reinvention's Demise Premature

The purpose of presenting this exploratory hypothesis examining governors' interest in reinvention initiatives has been twofold. First, it questions the authors' finding that reinvention was waning in the states in the mid-1990s; five years later, it appears to be flourishing. Announcements of its demise are premature. The benchmark of reinvention's success is that its principles and practices are now improving and reforming public administration--certainly as voiced by the governors of the three least-adopting states.

The second point of the exploration is to ask: How is the authors' finding about governors and agency chiefs possible--that is, how could chiefs ignore their governors' ideas and actions about reinvention? (The authors themselves seemed suspicious of this situation when they made one set of follow-up telephone calls (N=35) that verified a no-difference finding between respondents and nonrespondents--only, as already noted, to expose methodological lapses.) All may not be lost. The authors report (29) that they planned to collect data in 1998; it would be worthwhile for them to compare the old and new findings for the three least-adopting states. They can hardly fail to find differences in two respects: first, the circulation of reinvention principles and practices is active in these states; and second, if agency chiefs still fail to acknowledge any influence from their governors' activities, the authors need to find out why the chiefs are in denial.

It would be an unwelcome finding to learn that agency chiefs cannot admit, at least, to being aware of what their governors are doing. For example, in Alabama's case, the education commissioner's news release on the governor's Web page states that he, like other parents, will receive his child's school report card. It is safe to say that other agency chiefs will be similarly informed. It would bode poorly for public administration if these agency chiefs cannot own up to what the chief executives of their states are doing.
Table 1 Reconfiguration of Reinvention State Survey Data

State                Number of respondents   Census population
                          (N= 1,135)              rank *

 1. Montana                   38                    44
 2. Wisconsin                 34                    18
 3. West Virginia             32                    37
 4. Maryland                  32                    19
 5. Colorado                  31                    24
 6. Missouri                  31                    17
 7. Alaska                    31                    48
 8. Michigan                  30                     8
 9. North Carolina            30                    11
10. Mississippi               30                    31
11. Wyoming                   30                    50
12. Utah                      29                    34
13. Minnesota                 29                    21
14. Oregon                    28                    28
15. Ohio                      28                     7
16. Oklahoma                  27                    27
17. North Dakota              27                    47
18. Nebraska                  25                    38
19. Pennsylvania              25                     6
20. Hawaii                    24                    42
21. Kentucky                  24                    25
22. Rhode Island              23                    43
23. Vermont                   23                    49
24. New Hampshire             23                    41
25. Delaware                  22                    45
26. Indiana                   22                    14
27. Alabama                   22                    23
28. Arizona                   21                    20
29. Idaho                     21                    39
30. Nevada                    20                    35
31. Georgia                   20                    10
32. New Mexico                20                    36
33. Tennessee                 20                    16
34. Kansas                    19                    32
35. Florida                   19                     4
36. South Dakota              19                    46
37. South Carolina            18                    26
38. Maine                     18                    40
39. Iowa                      18                    30
40. Washington                17                    15
41. Arkansas                  16                    33
42. Illinois                  16                     5
43. Massachusetts             15                    13
44. Virginia                  15                    12
45. New Jersey                15                     9
46. Texas                     15                     2
47. Connecticut               13                    29
48. California                12                     1
49. Louisiana                 10                    22
50. New York                   8                     3

* Source: U.S. Department of Commerce, Bureau of the Census (2000).


Acknowledgment

I would like to thank my colleague, Dr. James D. Kent, for his helpful advice in writing on this very difficult subject.

Notes

(1.) It also should not go unnoticed that in the prior year, another article received an award that also questioned the long-term value of innovations ushered in by reinvention reforms (PAR 1999, iii).

(2.) The authors report a state mean reinvention score that takes in "the proportion of administrators in that state who report that their state has undertaken reinvention or similar reforms in the last four years" (24). The lowest percentage scores are for Idaho and New Mexico (0.2727) and for Alabama (0.1429).

(3.) Regarding searching for "reinvention" on Idaho's home page, it followed the Alabama pattern. A search brought up 35 documents, some of which portray reinvention activities. The seventh document, in particular, seeks to "update business process redesign of the food distribution program" for schools. It created "a commodity ordering reinvention team ... that is directed to significantly improve service to our customers ..." This effort sounds more like the language of re-engineering, which approximates the reinvention regimen.

(4.) At first pass, no "reinvention" search documents appeared for New Mexico. Instead, a prompt to email a reference librarian surfaced. An email was sent to the state library explaining this situation. A very speedy response from the librarian appeared. It is significant for two reasons. First, it captures the points made in this critique. The email expresses that "part of the problem may be that, while individual agencies or the state government as a whole have bought into particular concepts related to the idea of `Reinventing Government' (such as TQM, performance based management, or other types of government efficiency), the term `Reinventing Government' may not have been used" (January 19, 2001). Second, the librarian suggests I read the article to which this critique addresses itself and quotes the authors' main conclusion noted at the outset. It is critical that the correctives to this conclusion become known to avoid any lingering distortions about the state of reinvention in the states.

References

Brudney, Jeffrey L., F. Ted Hebert, and Deil S. Wright. 1999. Reinventing Government in the American States: Measuring and Explaining Administrative Reform. Public Administration Review 59(1): 19-30.

deLeon, Linda, and Robert B. Denhardt. 2000. The Political Theory of Reinvention. Public Administration Review 60(2): 89-97.

Moe, Terry M. 1994. The "Reinventing Government" Exercise: Misinterpreting the Problem, Misjudging the Consequences. Public Administration Review 54(2): 111-22.

Osborne, David, and Ted Gaebler. 1992. Reinventing Government. Reading, MA: Addison-Wesley.

Osborne, David, and Peter Plastrik. 1997. Banishing Bureaucracy: The Five Strategies For Reinventing Government. New York: Penguin.

Public Administration Review (PAR). 1999. Louis Brownlow Award. Public Administration Review 59(4): iii.

--. 2000. William and Frederick Mosher Award. Public Administration Review 60(4): 292.

Terry, Larry D. 1998. Administrative Leadership, New-Managerialism, and the Public Management Movement. Public Administration Review 58(3): 194-200.

U.S. Department of Commerce, Bureau of the Census. 2000. Resident Population of the 50 States. Available at http://www.home.doc.gov/. Accessed December 28, 2000.

Donald J. Calista is the director of the Graduate Center for Public Policy and teaches in the master of public administration program at Marist College. His articles have appeared in Administration and Society, Comparative Political Studies, Policy Studies Journal, Public Productivity Review, Policy Studies Review, Public Performance and Productivity Review, among other journals. He continues his research in implementation and transactions cost analysis in the public sector and more recently has presented papers on the emergence and impact of virtual organizations. Email: donald.calista@marist.edu.
COPYRIGHT 2002 Wiley Subscription Services, Inc.

 
Article Details
Title Annotation: response to article by Jeffrey L. Brudney, F. Ted Hebert, and Deil S. Wright, Public Administration Review, vol. 59, p. 19, 1999
Author: Calista, Donald J.
Publication: Public Administration Review
Date: May 1, 2002