
Public administration as a science of the artificial: a methodology for prescription.

Public administration, in the words of Herbert Simon (1969), is a science of the artificial. Unlike natural sciences, it is concerned not so much with how things are as how things might be. Simon's position is not so different from Max Weber's (1946) emphasis on the ideal type, in that both are seeking to illustrate or obtain some optimum level of performance. Consistent with these values, many believe that the study of public administration should focus on finding ways to improve administrative performance rather than seeking knowledge for its own sake. This view of research is part of the chasm that separates academics from practitioners in the field. Academics, while occasionally finding things with practical applications, focus on how things are, on the empirical description of the administrative world. Scholarly journals reinforce that emphasis.

We argue that the dominant methodology used by both scholars and practitioners - regression-based techniques - is partly at fault for the separation between academics and practitioners. Regression methods, by their very nature, tend to downplay the unusual and focus on the norm or the average. Recent developments in regression diagnostics (Belsley, Kuh, and Welsch, 1980; Rousseeuw and Leroy, 1987), although rarely used in public administration thus far, downplay the unusual cases even more. As a result, while these techniques add useful information, they widen the academic-practitioner gap. Our research introduces a new approach that we feel has the potential to transform the basic quantitative method of public administration (regression) from a tool that explains what is into a tool that can be used to search for what might be. We call this new approach substantively weighted least squares (SWLS).

We illustrate this method with an analysis of child-support enforcement at the state level. The key question is why some organizations learn to perform their functions faster than others. We first introduce the substantive case, child-support enforcement, and briefly present the regression findings of a traditional study. We then introduce some recent techniques of regression diagnostics and argue that these methods take us in entirely the wrong direction. Finally, we illustrate substantively weighted least squares, a technique that puts more weight on the highest performing agencies. The method shows that some variables are far more important for effective performance than normal regression techniques demonstrate.

Child-Support Enforcement

In 1975, the federal government required that state governments set up procedures to compel absent parents to support their dependent children. Although some states had long operated such programs, many had not. In the past 20 years, all states have gained substantial experience operating these programs, and academic research has identified some effective collection techniques (Klawitter and Garfinkel, 1991; Michalopoulos and Garfinkel, 1989). The relative effectiveness of these programs, however, varies a great deal. In 1991, Michigan collected about $74 per capita in child support while Arizona was able to collect only $9 per capita. Although this state-to-state variation is an interesting topic in itself (Keiser, forthcoming), our concern is with how well these agencies have improved their performance over time, something that we call "organizational learning." To measure learning, we took child-support collections (per 1,000 population) in constant 1991 dollars for all years from 1982 to 1991 (see Appendix A for sources of all data). We plotted these data and ran a regression. The slope of that regression is essentially the average annual improvement in real dollars collected per 1,000 population.(1)
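To make the learning measure concrete, the following minimal sketch computes such a slope for a single state's annual series. Python is our illustrative choice, not the authors' software, and the function and data names are hypothetical.

    import numpy as np

    def learning_rate(annual_collections):
        """Average annual improvement: the slope of collections (per 1,000
        population, constant 1991 dollars) regressed on year, 1982-1991."""
        years = np.arange(1982, 1992)
        slope, _intercept = np.polyfit(years, annual_collections, deg=1)
        return slope

    # Hypothetical series: collections growing by roughly $100 per 1,000
    # population each year produce a learning score near 100.
    print(learning_rate([1000 + 100 * t for t in range(10)]))  # ~100.0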

The Original Study

Seven variables have been hypothesized to affect organizational learning by the child-support bureaucracy.(2) These can be divided into three types - support by advocates, subject characteristics, and bureaucratic capacity. Support by advocates should increase learning because it puts pressure on the bureaucracy to improve and provides legitimacy for the bureaucracy's function (Sabatier and Jenkins-Smith, 1993). To measure advocate support, we included in the model the number of chapters in each state of the Association for Children for Enforcement of Support (ACES) per million population. ACES is the only client-based organization focused directly on child-support enforcement (Keiser, forthcoming).

Subject characteristics, that is, the nature of the bureaucracy's inputs, should also affect learning rates (Lebovic, 1995). Three subject characteristics are of interest here - instability, ambiguity, and levels of demand. Instability in the workload refers to high rates of change in caseload numbers. Instability can facilitate learning because it may spur the bureaucracy to innovate to deal with its changing environment. The standard deviation of the child-support bureaucracy's caseload measures instability. Like instability, ambiguity (or heterogeneity) - the mix in the types of cases the bureaucracy must deal with - may increase learning by increasing the innovation needed to deal with a variety of cases (for an alternate view, see Mazmanian and Sabatier [1989]). Child-support agencies deal with two types of clients: those receiving Aid to Families with Dependent Children (AFDC) and those who are not. AFDC clients are on welfare, and whether the noncustodial parent has the income to pay support is open to question (Alfasso and Chakmakas, 1983; McDonald and Moran, 1983). AFDC cases are more difficult to resolve than non-AFDC cases. The percentage of agency cases that are for AFDC clients (measured as an average from 1982 to 1991) is our measure of heterogeneity/ambiguity. The level of demand should also influence learning. More clients for the bureaucracy should increase the visibility of the problem and make it more likely that the bureaucracy will feel pressure to step up its activity. We measure the potential demand for service by the state's divorce rate.

Finally, bureaucratic capacity should also affect learning. Changes in personnel resources, organizational slack, and changes in bureaucratic monetary resources should all influence how fast the bureaucracy learns. Learning should be enhanced if an organization has ample resources to devote to experimentation or greater investment in the human resources skills of the agency. Against this logic is the notion that necessity is the mother of invention - agencies should learn faster if they are forced to because they lack the resources to do their current job. The measures are the change in agency personnel (1982 to 1991) per million population, the change in agency budgets (1982 to 1991) per thousand population, and the 1982-1991 change in agency slack (the number of employees per 1,000 active cases).

The results of an ordinary least squares (OLS) regression analysis reveal that learning is a function of both internal and external factors in an organization (Table 1). Of the external factors, only political support (ACES chapters) has a strong positive impact on organizational learning. Unstable work environments are positively related, but the coefficient does not meet the traditional academic criterion of statistical significance at the .05 level. Neither the divorce rate nor ambiguity affects the rate of learning.
Table 1

Determinants of Organizational Learning, 1982-1991

Independent Variable              Slope    Standard Error   T score

Support by Advocates

ACES chapters per million        284.795       99.831        2.85(*)

Subject Characteristics

Work load instability               .382         .206        1.85
Work load ambiguity                9.004       11.259         .80
Average divorce rate              -1.682       14.412         .12

Bureaucratic Capacity

Change in staff (per million)      4.617        1.758        2.63(*)
Average organization slack      -125.335       88.330        1.42
Average expenditure                 .294         .078        3.76(*)

R-squared .45
Adjusted R-squared .36
Standard error 649.34
F 5.00
N 50


Internally, the significant factors are expenditures and personnel. Agencies with larger budgets are able to increase their collections at a faster rate than other agencies, and agencies with a greater increase in employees also are able to improve their collection rates. Organizational slack is not significant at traditional levels but is still a relationship worth examining. The negative coefficient supports the necessity view of learning: as organizational slack decreases, the organization is more likely to learn. Although these findings are interesting in themselves, they are introduced merely to illustrate the hazards of recent developments in regression diagnostics and to set up the application of a new method of analysis for public administration.

Regression Diagnostics

Statisticians have long known that regression analysis has many limitations. It relies on the principle of minimizing squared error to fit a regression line to a set of data. Squared error, rather than average error or some other criterion, was chosen originally because it permitted relatively easy derivation of formulas to use in calculation. This ease was not without a downside, which can best be illustrated by reference to Table 2, which shows the actual values of learning for each of the 50 state agencies. Because these numbers represent changes in dollars collected per year per 1,000 people, they might best be understood by dividing them by 1,000. This means the state of Alabama was able to increase child-support collections by $1.14 per person per year from 1982 to 1991 (in constant dollars). The predicted value is the value that the regression equation predicts based on the independent variables. The residual is the amount by which the prediction misses the actual value. In other words, Alabama improved its collections by 56 cents less per person per year than would be expected for a state with its ACES membership, workload stability, workload ambiguity, divorce rate, expenditures, change in staff, and organizational slack. This information is useful to both the analyst and the practitioner. If one wants to visit a state agency that is performing much better than expected to see how it does it, then Wisconsin (residual = 2,001) and Ohio (1,552) are the places to visit; avoid California and Illinois.

The column labeled "R Student" in Table 2 is the studentized residual; it measures how far off the regression's prediction is for that case, in standardized units (Rousseeuw and Leroy, 1987: 226; Hamilton, 1992: 132).(3) A large studentized residual can indicate a case that is problematic in a regression. Minimizing squared error essentially means that a case that lies a distance of 2 units from the regression line (its residual) will count four times as much as a case that lies only 1 unit from the line. If such a case has the right characteristics, it can distort the regression line and produce misleading results. Because they are in standardized units, studentized residuals can be used to designate cases that potentially distort a regression. The general rule of thumb is that cases greater than 2 or less than -2 should be examined.

Whether a case with a large studentized residual actually distorts the regression line depends on the individual case. [Tabular data for Table 2 omitted.] If the agency is fairly similar to all other agencies in terms of its values on the independent variables, then it will have little influence on the overall regression. If the case is unusual relative to other cases, then it is more likely to distort the regression. Statisticians have developed several measures of this distortion. One such measure, Cook's D, gauges how much the regression slopes change when that individual case is dropped from the regression (Table 2) (Hamilton, 1992: 132; Rousseeuw and Leroy, 1987: 227-228). A glance at Table 2 reveals that Wisconsin had the largest residual (R student = 3.61) but had virtually no influence on the regression line (Cook's D = .082). Both Ohio and Delaware had smaller residuals but exerted greater influence on the overall regression (Ohio: R student = 3.16, D = .456; Delaware: R student = 2.46, D = .626).(4)
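These diagnostics are available in standard statistical packages (see note 3). As a minimal sketch, and with Python's statsmodels substituted for the packages the authors name, the quantities reported in Table 2 could be generated as follows; the data frame and column names are a synthetic stand-in, not the actual state data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in for the 50-state data set; names are hypothetical.
    rng = np.random.default_rng(0)
    n = 50
    df = pd.DataFrame({
        "aces": rng.uniform(0, 1.5, n),           # chapters per million
        "instability": rng.uniform(0, 4000, n),   # sd of caseload
        "staff_change": rng.normal(50, 60, n),    # staff change per million
        "slack": rng.uniform(0, 6, n),            # staff per 1,000 cases
        "expenditure": rng.uniform(0, 8000, n),   # dollars per 1,000 pop.
    })
    df["learning"] = (250 * df["aces"] + 0.3 * df["instability"]
                      + 4 * df["staff_change"] - 100 * df["slack"]
                      + 0.3 * df["expenditure"] + rng.normal(0, 600, n))

    X = sm.add_constant(df.drop(columns="learning"))
    ols = sm.OLS(df["learning"], X).fit()

    influence = ols.get_influence()
    r_student = influence.resid_studentized_external   # "R Student"
    cooks_d = influence.cooks_distance[0]               # Cook's D

    # Rules of thumb from the text: |R Student| > 2 and Cook's D > 4/n.
    # (With synthetic data, the flagged cases will of course differ from
    # those discussed in the article.)
    flags = pd.DataFrame({"r_student": r_student, "cooks_d": cooks_d})
    print(flags[(flags["r_student"].abs() > 2) | (flags["cooks_d"] > 4 / n)])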

Delaware illustrates the impact that a single case can have on a regression line. The unusual variable for Delaware was change in staff. Although the average agency increased its staffing by 55 persons per million population, Delaware reduced overall staff by 232 persons per million. If Delaware had actually cut staff at twice that rate, the regression coefficient for staff in the overall equation would have dropped 51 percent, from 4.617 to 2.24, and would have become statistically insignificant. Alternatively, if Delaware had simply kept its staffing at the same size as before, the regression slope for staff would have increased 57 percent to 6.553, and the slope for slack would have increased by about one-third and become statistically significant (results available from the authors on request). This illustrates that, under the right circumstances, modest changes in the values of individual variables can have a major influence on the results of a regression.

Robust Regression

Recent work in statistics has focused a great deal of attention on diagnostics and on the ability of individual cases to distort the results of regressions. A family of techniques called robust regression has been developed to ameliorate the problem (Western, 1995; Hamilton, 1992: 200-207). Essentially, the theory behind robust regression is that the concept of squared error is somewhat arbitrary. To keep extreme cases from distorting the regression, robust regression techniques systematically reduce the influence of the extreme cases by weighting them less in the overall regression.(5) The result is a regression equation that better represents the average cases.
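Robust regression of this kind is a one-line change in most packages. Continuing the sketch above (same stand-in data frame and design matrix), the Andrews sine-weighted estimator described in note 5 could be fit as follows; note that statsmodels iterates the weights to convergence rather than exactly three times.

    # Andrews' sine (wave) weights iteratively down-weight cases with large
    # residuals, so extreme agencies count less in the final fit.
    rlm = sm.RLM(df["learning"], X, M=sm.robust.norms.AndrewWave()).fit()
    print(rlm.params)            # robust slopes, analogous to Table 3
    print(rlm.weights.round(2))  # down-weighted cases have weights below 1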
Table 3

Robust Regression Results

Independent Variable                Slope   Standard Error   T score

Support by Advocates

ACES chapters per million         179.912        61.722      2.91(*)

Subject Characteristics

Work load instability                .233          .142      1.63
Work load ambiguity                 4.762         6.603       .72
Average divorce rate                2.220         8.075       .28

Bureaucratic Capacity

Change in staff (per million)       3.744         1.122      3.34(*)
Average organization slack       -174.228        52.877      3.29(*)
Average expenditure                  .306          .046      6.71(*)

R-squared .60
Adjusted R-squared .53
Standard error 338.77
F 8.89
N 50


The results of robust regression for the child-support data set are shown in Table 3. Both statistically and substantively, the results of Table 3 differ from Table 1. Although the ACES chapters variable remains statistically significant, the size (and thus the impact of the variable) dropped by about 37 percent. Staffing changes remained significant, but again, their overall impact dropped by about 19 percent. Expenditures remained positively related to learning and only changed moderately (+4 percent). The most important difference was that slack was now statistically significant and had increased its negative impact on learning by 39 percent. Faced with such findings, a consultant could well advise an agency that greater performance could be obtained by increasing resources, but only increasing them at a rate much slower than workload increases and thus reducing any organizational slack.

Despite the wide endorsement of regression diagnostics and robust regression, we feel that, used uncritically, these techniques are inappropriate for public administration. Adopting them is the philosophical equivalent of saying that one would like to improve organizational performance by looking at what the average agencies are doing. Agencies that are doing better than average (and those doing worse than average) are systematically down-weighted in the regression. By analogy, we might ask the U.S. Postal Service to study the Italian postal system, or U.S. military leaders to adopt the tactics of the Iraqi army. Such a strategy might well pay off for agencies that are poor performers, but it holds no benefits for those agencies that are doing well. This is essentially what academic research using regression has been recommending to practitioners. As regression skills improve and regression diagnostics and robust regression enter public administration, academic research will become even less valuable for practicing public administrators. Our concern in public administration is not to get average performance out of an agency given the level of resources that it receives but to get above-average performance from those resources.

Statistics for Optimum Performance

With some adjustment, regression analysis can be converted into a useful tool for both academics and practitioners. In public administration, we should be interested in the high-performing agencies and what they can tell us relative to those agencies that do not do so well. We are interested in those agencies with positive residuals, those whose learning curve grew faster than we would have expected given their environment and internal factors (Table 2). As a rule of thumb, we propose that cases with a studentized residual of +.7 or more be designated as the high-performing cases. This criterion will generally designate about 20 percent of the cases.(6)

Rather than counting these high-performing agencies less (as robust regression would do), they should be counted relatively more than the lower performing agencies. Our technique was to rerun the regression equations using substantively weighted least squares. In the first run, we weighted the average cases (those with studentized residuals of less than .7) to count as .9 cases and left the high-performing cases as they were. We eventually ran nine regressions, each time reducing the weight on the average cases by .1 until the final regression weighted the high performing agencies at 1.0 and the average performers at .1. These regressions gradually gave relatively more weight to the higher performing agencies, and, in the process, the analyst could see how the slopes changed to determine what the high-performing agencies did that the average ones did not (Appendix B).
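As a minimal sketch of a single substantively weighted run, and continuing the synthetic example from the diagnostics sketch above (the .7 cutoff and the weights follow the text; the variable names do not come from the authors), the final run weighting the average cases at .1 looks like this:

    # Designate high performers: studentized residual of +.7 or more
    # (roughly 20 percent of cases); weight the remaining cases at .1.
    high = influence.resid_studentized_external >= 0.7
    weights = np.where(high, 1.0, 0.1)

    swls = sm.WLS(df["learning"], X, weights=weights).fit()

    # Ratio of SWLS slopes to OLS slopes, as in the last row of Table 4.
    print((swls.params / ols.params).round(3))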

The change in slope coefficients for five of the independent variables - ACES, instability, staff, slack, and expenditures - are reported in Table 4. The other two variables were omitted because they were not statistically significant or close to it in either the original regression or in the robust regression. The data in the final row of the table show how much the coefficients changed when the high-performing agencies were weighted ten times that of the average agencies.(7) The actual regression coefficients for the last run are reported in Table 5.

Some of the findings were simply incremental adjustments. The slopes for ACES, staff, and expenditures increase 17 percent, 18 percent, and 16 percent, respectively. Because these figures were still held down somewhat by the average cases, we might think of them as a minimum level of distinction between the high-performing agencies and the rest. Substantively, this implies that higher performing child-support agencies got 17 percent more learning from ACES political support, 18 percent more learning from additional staff, and 16 percent more learning from expenditure increases, all other things being equal. Although these are not gigantic differences, they are clearly worth investigating. Any agency that can increase its performance by 18 percent more than another because of a staff increase is clearly doing something worthwhile. Exactly what that is can be determined by in-depth case analysis of these agencies, focusing on these variables.

The real differences, however, were for instability and organizational slack. The high-performing agencies clearly took better advantage of the instability in their environment; the slope for instability increased by 65 percent. If one were looking for major performance payoffs, these agencies should be examined for how they managed and adapted to workload fluctuations. During that process, practitioners should find the major factors that separate high-performing child-support collection agencies from mediocre ones. Equally striking is the relationship for organizational slack. While robust regression told us that slack was important to learning, SWLS showed that slack only affected the performance of the average agencies. Not only did it have little impact on the high-performing agencies, but its impact actually dropped to zero. High-performing child-support enforcement agencies were simply unaffected by the amount of slack resources in the organization. Therein lies the danger of even the best-informed regression techniques. The robust regression results would have encouraged practitioners to keep organizational slack as limited as possible to encourage better agency performance. This quite frankly is bad advice; organizational slack by itself has no impact on organizational learning among agencies that learn the quickest.
Table 4

Change in Slope Coefficients with Iterative Weighting

Weight     ACES     Instability     Staff     Slack     Expenditures

1.0        1.000       1.000        1.000     1.000        1.000
.9         1.003       1.053        1.002      .973        1.004
.8         1.007       1.112        1.003      .939        1.008
.7         1.010       1.178        1.005      .896        1.013
.6         1.012       1.253        1.008      .842        1.019
.5         1.015       1.339        1.012      .770        1.025
.4         1.019       1.437        1.019      .672        1.034
.3         1.029       1.545        1.033      .531        1.047
.2         1.059       1.649        1.068      .316        1.075
.1         1.172       1.654        1.179     -.045        1.162

Note. Figures are the value of the substantively weighted least
squares regression slopes divided by the ordinary least squares
slopes.
Table 5

Organizational Learning: The Ideal Regression

Independent Variable                Slope   Standard Error   T score

Support by Advocates

ACES chapters per million         333.752       117.107      2.85(*)

Subject Characteristics

Work load instability                .631          .242      2.61(*)
Work load ambiguity                 -.572        11.657       .05
Average divorce rate              -16.887        21.001       .80

Bureaucratic Capacity

Change in staff (per million)       5.445         2.248      2.42(*)
Average organization slack          5.579       117.953       .05
Average expenditure                  .342          .107      3.19

R-squared .68
Adjusted R-squared .63
Standard error 367.06
F 12.93
N 50


Some Caveats

We do not wish to imply that scholars should abandon ordinary least squares or regression diagnostics. These are valuable research tools. Ordinary least squares and robust regression are the preferred techniques used to generalize from a sample to a population. They demonstrate how things are. Substantively weighted least squares cannot be used to estimate relationships for a group of agencies; it is a technique used for performance isolation and recommendation. It demonstrates how things might be. We feel that both forms of analysis should be used and the results from both presented to the reader.

Although we have illustrated SWLS as a specific technique when applied to regression analysis, in reality it is a general quantitative tool. The basic principle of emphasizing high-performing cases can be used in conjunction with statistical methods other than regression.

Substantively Weighted Least Squares versus Best Practices

Although SWLS may superficially seem much like the best-practices literature, the two are significantly different. The best-practices literature seeks out high-performing organizations and attempts to find techniques in those organizations that can be transferred to other organizations (Osborne and Gaebler, 1992). Although both approaches seek to generalize to high-performing agencies, we feel that SWLS avoids the pitfalls of the best-practices literature (Overman and Boyd, 1994).

First, because our technique relies on regression and the need for comparable data, it forces the analyst to generalize to agencies that are performing the same task. The risk of applying inappropriate private sector techniques to public sector problems is avoided. Second, our technique does not rely on a subjective selection of the ideal case (Overman and Boyd, 1994: 69); we define optimum cases as those that perform better than expected given the variables that influence performance. Within a regression context, the subset of cases is highly constrained. Third, while best practices is practice driven, our technique is clearly research driven. This provides a restraint on the hero-worship tone that characterizes some of the best-practices literature (Lynn, 1987). Fourth, the best-practices literature is positive and prescriptive, whereas our technique is prescriptive but may well not be positive. That is, our technique is not inherently optimistic; the key variables discovered might well be beyond the control of management. Finally, while best-practices research is not theory-testing research (Overman and Boyd, 1994: 79), our approach starts with an effort to test theory and find relationships and only then shifts to prescription.

But So What?

Substantively weighted least squares is a general technique that we think can be used in all quantitative studies in public administration. It is akin to sensitivity analysis in that it reveals what factors affect the best agencies or the best programs and put them into that elite category. Our example of child support is simply for illustration purposes. The process can be used to compare individual units within an organization, units in different organizations within the same jurisdiction (e.g., personnel offices), and organizations in different jurisdictions. If good performance measures exist, then SWLS will provide far more useful information than regression alone. Both sets of results should be presented. The actual statistical process is relatively easy; anyone with the skills to use regression can do substantively weighted least squares.

Underlying our presentation is the contention that SWLS is a methodology that can bridge the gap between academics and practitioners. Practitioners are interested in what works best. That is inherently different from the academic interest in how something works. Regression deals with the academic question, but with some adjustments, it can also deal with what works best. It illustrates how policy might be, in addition to how it actually is. The substantively weighted regressions are only the first step in this process. The analysis identifies key variables that need to be examined in those organizations that perform the best. In this way, it structures the process evaluation of the case studies; it does not just say "look at agency X to see why it is doing well" but rather "look at agency X and examine its Y processes." Thus, it avoids the pitfalls of the best-practices approach.

The gap between academics and practitioners in public administration will not be completely closed. Many academics believe that they study public organizations to produce knowledge for its own sake rather than for any practical benefits. Such efforts provide valuable information to the profession even without obvious applications. We have outlined a methodology that is a bridge between academics and practitioners, is consistent with the reform tradition of public administration, and meets the scholarship standards of academic journals.

Appendix A

Learning - rate of change in child-support collections per 1,000 people in the population between 1982 and 1991 (Child Support Enforcement: Tenth, Thirteenth, and Sixteenth Annual Report to Congress).

Child-support bureaucracy expenditures - average dollars of total administrative expenditures per 1,000 people between 1982 and 1991 (Child Support Enforcement: Tenth, Thirteenth, and Sixteenth Annual Report to Congress).

Staff increase - staff in the child-support bureaucracy 1991 per 1 million population minus staff in 1982 per million (Child Support Enforcement: Tenth, Thirteenth, and Sixteenth Annual Report to Congress).

Slack resources - average of staff per caseload between 1982 and 1991 (Child Support Enforcement: Tenth, Thirteenth, and Sixteenth Annual Report to Congress).

Subject ambiguity - percentage of total caseload made up of non-AFDC cases (Child Support Enforcement: Tenth, Thirteenth and Sixteenth Annual Report to Congress).

Subject instability - standard deviation of the state bureaucracy's caseload, 1982 through 1991.

Demand - percent divorced in each state, 1983-1991 (Statistical Abstract of the United States).

ACES strength - number of Association for Children for Enforcement of Support (ACES) chapters per million in the population in 1991 (ACES).

Note. The different bases (1,000 population versus 1 million) are used to keep the regression coefficients relatively the same size. They do not affect the actual results of the analysis.

Appendix B. Methodological Concerns

Statistically, the core of our argument is that the relationship between two variables, say learning and uncertainty, is different for each agency studied. That is, some agencies deal with uncertainty better than others. If an analyst had sufficient annual data, 50 different estimates of this relationship could be generated, one for each agency. These estimates could then be used to find the high performers. To verify that substantively weighted least squares (SWLS) performs a similar function, we calculated learning as an annual measure and replicated the analysis in a pooled time series design from 1982 to 1991. We added a set of interactive variables to determine if the slopes for the high-performance agencies (as designated in this article) were the same as the low-performance agencies. We arrived at the same finding - that the relationships between the variables were different in the high-performing agencies. The estimates in the pooled study were in fact larger than the estimates with SWLS. SWLS is more conservative in its estimates because the average cases serve as an anchor for the slope estimates. This difference would also result, likely more so given the number of cases, if one just ran two regressions, one for average cases and one for high performers in the 50 case data set (although in that case the smaller sample sizes might be problematic). We opt for SWLS over the pooled procedure for practical reasons; SWLS is relatively easy to do whereas a correct pooled time series requires an extremely high level of technical skills.

We should stress that SWLS should be used in conjunction with ordinary least squares (OLS) and regression diagnostics; together they provide far more information. As Bert Kritzer has pointed out to us, our technique could be considered a form of regression diagnostics that simply has different criteria for what factors are important.

OLS coefficients, given data that fit the assumptions, are both unbiased (that is, sample slopes on average approximate the population slope) and efficient (they have the smallest standard errors). SWLS coefficients, by design, are neither. That is, we are looking for estimates that are biased in the direction of high performers. In the search process, we overstress the "unusual" cases and downgrade the "average" cases. This relative weighting also means that our estimates will be less efficient than OLS estimates. This is as it should be, because we are generalizing from relatively fewer cases and at a more extreme point on the curve. Although caution is important in both OLS and SWLS, it is more crucial in SWLS. This is also the reason we encourage in-depth case studies of the designated agencies. Quantitative methods are most useful when they are supplemented with substantive knowledge.

How to run SWLS

1. Run ordinary least squares with all variables unweighted and save the studentized residuals.

2. Create first weight variable (wgt1): if studentized residual > .7, then wgt1 = 1; if studentized residual < .7, then wgt1 = .9.

3. Run regression with wgt1 as weight and save results (most software packages contain procedures for running weighted least squares estimates).

4. Create second weight variable (wgt2): if studentized residual > .7, then wgt2 = 1; if studentized residual < .7, then wgt2 = .8.

5. Run regression with wgt2 as weight and save results.

6. Repeat step 5 seven more times, decreasing the weight by 1/10 each time for the units with studentized residuals below .7 and keeping the weight for units with studentized residuals above .7 equal to 1. In the last equation the weight should be 1 if the studentized residual is > .7 and .1 if the studentized residual is < .7.

7. Compare the ten results.
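These seven steps translate into a short weighted least squares loop in any package that supports weights. The sketch below is a minimal illustration in Python's statsmodels rather than the packages named in note 3; the data frame df, the learning outcome, and the predictor names are hypothetical stand-ins, not the authors' data or code.

    import numpy as np
    import statsmodels.api as sm

    def run_swls(df, outcome, predictors, cutoff=0.7):
        """Steps 1-7: fit OLS, flag cases whose studentized residual exceeds
        the cutoff, then refit with the average cases weighted .9 down to .1."""
        X = sm.add_constant(df[predictors])
        y = df[outcome]
        ols = sm.OLS(y, X).fit()                                   # step 1
        high = ols.get_influence().resid_studentized_external > cutoff

        results = {1.0: ols}                                       # unweighted run
        for w in [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]:    # steps 2-6
            weights = np.where(high, 1.0, w)
            results[w] = sm.WLS(y, X, weights=weights).fit()

        for w, res in results.items():                             # step 7: compare
            print(w, (res.params / ols.params).round(3).to_dict()) # Table 4 style
        return results

    # Hypothetical call, using the column names from the earlier sketches:
    # run_swls(df, "learning",
    #          ["aces", "instability", "staff_change", "slack", "expenditure"])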

Notes

We would like to thank Larry Bartels, Bert Kritzer, Gary King, and three anonymous reviewers for their comments. Inspiration for this article came from Bartels (1996), who proposed the use of weighting to determine if groups of different cases should be pooled, and Kritzer (1996), who proposed that we should play with data like musicians play with a score, stretching the limits of what the score can do. All data and documentation necessary to replicate this analysis are available from the authors.

1. We could have created a reasonably similar measure by simply subtracting 1982 collections from 1991 collections. The disadvantage of this simpler method is that it relies on only two of the ten years to calculate a measure of change. If either one of these years is unusual for any reason, the change measure will be biased. The use of a regression slope is simply a way to get more information into the calculation of the change measure.

2. For those interested in the substantive issues addressed here, including alternative specifications, these are discussed in depth in Keiser (1996).

3. These regression diagnostics can be generated by standard statistical packages such as SAS, SPSS, STATA, and NCSS. Of these, NCSS is probably the most useful because it provides significance tests along with the diagnostics and has built-in robust regression techniques.

4. The rule of thumb is that cases are influential if the Cook's D exceeds 4/n, where n is the number of cases; in this example, that would be .08. So technically Wisconsin does have some influence, but it is relatively minor compared to the Ohio and Delaware cases.

5. There are several robust regression weighting techniques that work better or worse depending on the distribution of the data; see Western (1995). We prefer Andrews' sine weights because they are relatively effective regardless of the data distribution and, when the data fit a multivariate normal distribution, produce estimates equal to ordinary least squares (see Andrews, 1974). This is an iterative procedure that gradually changes the weights until the regression coefficients stabilize; we iterated the equations three times.

6. The number of cases is a tradeoff. The fewer cases that are designated, the more the outcome will be the result of one or two cases, which may or may not be generalizable. The analyst wants sufficient cases to be able to say that the relationships hold in many agencies, but not so many cases that we generalize to the mediocre cases.

7. This is not as extreme as it sounds since there are four times as many average agencies. The high performers as a group, therefore, are weighted to contribute about 2.5 times what the average performers contribute to the regression as a group.

References

Alfasso, H. and J. Chakmakas, 1983. Who Are We Missing? A Study of the Non-Paying Absent Parent. Albany, NY: Bureau of Operations Analysis, Department of Social Services.

Andrews, David F., 1974. "A Robust Method for Multiple Linear Regression." Technometrics, vol. 16 (November), 523-531.

Bartels, Larry, 1996. "Pooling Disparate Observations." American Journal of Political Science, vol 40 (August), 905-942.

Belsley, David A., Edwin Kuh, and Roy E. Welsch, 1980. Regression Diagnostics. New York: John Wiley and Sons.

Child Support Enforcement: Tenth/Thirteenth/Sixteenth Annual Report to Congress, 1985, 1988, 1991. Washington, DC: Department of Health and Human Services.

Hamilton, Lawrence C., 1992. Regression with Graphics. Pacific Grove, CA: Brooks/Cole Publishing Company.

Klawitter, Marieka and Irwin Garfinkel, 1991. "The Effects of Routine Income Withholding of Child Support on AFDC Participation and Costs." Discussion paper no. 961-91. Madison, WI: Institute for Research on Poverty.

Keiser, Lael R., 1996. "Bureaucracy, Politics, and Public Policy: The Case of Child Support." Unpublished Ph.D. dissertation, University of Wisconsin-Milwaukee.

-----, forthcoming. "The Influence of Women's Political Power on Bureaucratic Output." British Journal of Political Science.

Kritzer, Herbert M., 1996. "The Data Puzzle: The Nature of Interpretation in Quantitative Research." American Journal of Political Science, vol. 40 (February), 1-33.

Lebovic, James H., 1995. "How Organizations Learn: U.S. Government Estimates of Foreign Military Spending." American Journal of Political Science, vol. 39 (November), 835-863.

Lynn, Lawrence E., 1987. "Public Management: 'What Do We Know? What Should We Know? and How Will We Know It?'" Journal of Policy Analysis and Management, vol. 7 (Fall), 178-187.

Mazmanian, Daniel A. and Paul A. Sabatier, 1989. Implementation and Public Policy. Lanham, MD: University Press of America.

McDonald, J. and J.R. Moran, 1983. Wisconsin Study of Absent Fathers: Ability to Pay Child Support. Madison, WI: Wisconsin Department of Health and Social Services and Institute for Research on Poverty.

Michalopoulos, Charles and Irwin Garfinkel, 1989. "Reducing Welfare Dependence and Poverty of Single Mothers by Means of Earnings and Child Support: Wishful Thinking and Realistic Possibilities." Discussion paper no. 882-89. Madison, WI: Institute for Research on Poverty.

Osborne, D. and T. Gaebler, 1992. Reinventing Government. Reading, MA: Addison-Wesley.

Overman, E. Sam and Kathy J. Boyd, 1994. "Best Practice Research and Postbureaucratic Reform." Journal of Public Administration Research and Theory, vol. 4 (October), 67-84.

Rousseeuw, Peter J. and Annick M. Leroy, 1987. Robust Regression and Outlier Detection. New York: John Wiley and Sons.

Sabatier, Paul A. and Hank C. Jenkins-Smith, 1993. Policy Change and Learning: An Advocacy Coalition Approach. Boulder, CO: Westview Press.

Simon, Herbert A., 1969. The Sciences of the Artificial. Cambridge, MA: MIT Press.

Statistical Abstract of the United States. Various Years. Washington, DC: Department of Commerce.

Weber, Max, 1946. From Max Weber: Essays in Sociology. H.H. Gerth and C. Wright Mills, trans. New York: Oxford University Press.

Western, Bruce, 1995. "Concepts and Suggestions for Robust Regression Analysis." American Journal of Political Science, vol. 39 (August), 786-817.

Kenneth J. Meier is a professor of political science at the University of Wisconsin-Milwaukee and currently editor of the American Journal of Political Science. His research focuses on the political side of government agencies, and he is currently working on a normative theory of bureaucracy.

Lael R. Keiser is an assistant professor of political science at the University of Missouri-Columbia. She is interested in the role bureaucracy plays in social policy. Current research projects include child support enforcement and the impact of welfare policy on crime.