
Performance budgeting in the U.S. federal government: history, status and future implications.

1. INTRODUCTION

Performance measurement and performance budgeting are topics that have long been a part of the public administration agenda in the United States (Eghtedari and Sherwood, 1960a; 1960b; Schick, 1971). Interest in performance budgeting developed in stages: in response to perceived abuses at the local government level as early as the 1870s in the U.S. (e.g., in New York City); at the beginning of the 20th century in the budget research of the New York City Bureau of Municipal Research, which eventually became the Brookings Institution; in response to the growth of government at all levels, and particularly of the federal government, under Franklin D. Roosevelt during the Depression of the 1930s; and through the mid-1940s and the post-World War II era, especially during the Eisenhower presidency (McCaffery and Jones, 2001: 43-67). The first wave of what we would identify as pure performance budgeting occurred during the 1950s under the guidance of the President's Bureau of the Budget (BOB). In the view of the leadership of the BOB in the 1950s, performance budgeting and performance measures were intended to emphasize efficiency and effectiveness. Then, as now, broader measures (outcomes) were desired but often were difficult to define and measure, particularly in human services programs. Nonetheless, performance budgeting in concept has proven persistent over time in the U.S. federal government, as well as in state and local government. Evidence of this trend is found in the huge expansion of the literature in this area in the past decade. It is interesting to note that until the 1990s, in the U.S. and internationally, performance data were intended for internal government use, primarily by budget and management control offices and line agency managers, and after 1993 by members of oversight committees of Congress. The external use of performance measures arose in part in response to the customer orientation of reform initiatives intended to make government more accessible and to improve service quality to citizens. Such initiatives include the National Performance Review in the U.S., Total Quality Management, and a range of initiatives that have been lumped together, described and analyzed as the New Public Management (see for example Borins, 1997; Schedler, 1997; Thompson, 1997; Lynn, 1997).

More recently for the U.S. federal government, the key measures that stimulated increased interest in performance and budgeting were the enactment by Congress in 1993 of the Government Performance and Results Act (GPRA) and in 1994 of the Government Management Reform Act (GMRA). The 1994 GMRA extended the provisions of the 1993 Act across the entire federal government. Pursuit of the National Performance Review under the Clinton administration (1993-2001) also stimulated interest in this topic (National Performance Review, 1993). Our study does not ignore this long history of interest in performance as part of the modern era of effort devoted to instilling "good government" and governance practices. Rather, informed by this history, it focuses on implementation of GPRA/GMRA and identifies some of the lessons learned from this initiative, its application and use, misuse and non-use. It also addresses selected impediments to reform as they relate to incentives for budgeting based on performance.

Before delving into the specifics of performance budgeting, let us note the tremendous increase in interest in performance measurement and management that characterizes the field presently and has done so for much of the past decade. Measurement of performance and efficiency in the public sector has become a sub-field unto itself (Ingraham, Joyce and Donahue, 2003; Robinson, 2007). Scholars are interested in, and are attempting to apply, various types of measurement, including tools such as CompStat/CitiStat (Behn, 2008a: 206-235) and techniques such as Laspeyres and Paasche index numbers, principal-components methods, the use of canonical analysis and eigenvalues to weight relevant variables, dynamic factor modeling, instrumental variables, so-called hedonic approaches that use regression coefficients to weight a range of variable types, and balanced scorecards and their linkage to organizational strategy (Kaplan and Norton, 1996; Kaplan and Norton, 2001; Simons, Davila and Kaplan, 1999), among other approaches (see for example Smith and Street, 2005: 401-417; Stone, 2002: 405-434; Rouse and Putterill, 2003: 791-805; Reichmann and Sommersguter-Reichmann, 2007). While our study focuses on the use of performance measures in budgeting, the question of how best to measure performance, or whether it can be measured well at all, remains an issue of considerable academic and practitioner interest.

2. HISTORICAL EVOLUTION OF PERFORMANCE BUDGETING

General acceptance of the concepts of performance budgeting may be dated from the recommendations of the Commission on Organization of the Executive Branch of the Government (commonly called the Hoover Commission) in 1949. Performance budgeting was initially mandated by amendments to the National Security Act in 1949. These amendments required the Department of Defense to install performance budgeting in the three military departments (63 Stat 412, 1949).

The federal government as a whole entered into performance budgeting as a consequence of the Budget and Accounting Procedures Act of 1950. This act required the head of each agency to support "... budget justifications by information on performance and program cost by organizational unit" (64 Stat 946, 1950). The first wave of experimentation with performance budgeting in the U.S. federal government occurred during the 1950s when the President's Bureau of the Budget recommended, and Congress accepted and adopted, a number of proxies for performance in budgets, primarily as a means for simplifying the task of budgeting (McCaffery and Jones, 2001; see also Hilton and Joyce, 2007). For purposes of definition at this time, performance budgets were intended generally to identify and emphasize activities performed and their costs, and to include various performance measures in the budget to document what was gained from what was spent. These measures typically included unit cost comparisons over time, or between jurisdictions when used at state and local government levels.

While the federal government was developing performance measures and moving toward performance budgeting, some state and local governments quickly adopted the concept. Early attempts included Detroit, MI, Kissimmee, FL, San Diego, CA, and various states including Oklahoma, California, and Maryland (Seckler-Hudson, 1953, 5-9). The City of Los Angeles provides another case study of this era from 1952 to 1958. In 1951, Los Angeles created the position of City Administrative Officer (CAO) and filled it with Samuel Leask, Jr. Just one year after his appointment, Leask had instituted a performance budget system throughout the city. The heart of the Los Angeles system was a performance contract. This contract was based on goals and targets developed in the budget process. The parties to this contract were the mayor, city council and the CAO, on the one hand, and department administrators on the other. These performance contracts were monitored by the CAO during budget execution to ensure goals were achieved (Eghtedari and Sherwood, 1960b: 83). Performance contracts were based upon work programs that became the starting point from which questions about timing, size, and nature of expenditures could be framed. Departmental appropriations were based on work programs, and a government-wide reporting system was used to compare units of work performed to man-hours expended in the budget execution process. This was the final check on actual versus proposed performance (Eghtedari and Sherwood, 1960b, 83).

A study of this system, conducted primarily in the Building, Safety, and Library Departments, found that the performance approach resulted in a strengthening of the executive budget, program planning, and central control of decisions going into the executive budget. Measurement of work in a governmental jurisdiction was found to be practical and feasible, with positive benefits gained from such measurements (Eghtedari and Sherwood, 1960b: 82-88). In this case, perhaps the major finding of the study concerned the value of creating and staffing the Office of the City Administrative Officer (CAO). The study found that the CAO had "... improved the quality of program planning, had brought about a higher degree of coordination among the essentially independent departments than ever before existed, and has made some contribution to the overall efficiency of municipal operations" (Eghtedari and Sherwood, 1960b). Additionally, the new budget process, through the performance contracts, increased the city administrator's control. Perhaps the most important result of the Los Angeles experiment was the demonstration that a performance-style budget in a governmental agency was feasible and beneficial.

Since the 1950s, and particularly beginning in the 1990s, the United States federal government has not been alone in its recognition of the potential of performance budgeting. Various U.S. state and local governments continued to experiment with versions of performance budgeting, and continue to do so to the present (Sanger, 2008). More than fifty countries implemented various aspects of performance budgets in the period from the late 1960s through 2000. Among the leaders in this endeavor were New Zealand, Australia, the United Kingdom, Sweden, Canada, and France. Most early attempts in foreign nations merely supplemented the traditional budget and were usually issued as separate documents (Axelrod, 1988, pp. 272-273). Meanwhile, the performance budget experiment in the U.S. federal government was integrated first into Program Budgeting in the 1950s and then into the Planning, Programming, Budgeting System (PPBS) during the 1960s. And while the government-wide PPBS experiment was terminated by President Richard Nixon in 1969, PPBS continues to be used by the Department of Defense to the present, although in the early 2000s it was renamed the Planning, Programming, Budgeting and Execution System (PPBES), indicating the importance of the entire budget cycle and not just the front end of budget preparation, negotiation and decision (Jones and McCaffery, 2008).

3. PERFORMANCE BUDGETING METHODOLOGY

Performance budgeting (PB) requires administrators to separate programs into the basic activities in which their agency engages, decide what performance measures best fit each activity, and develop budgets based on the costs for each measure. In this respect, PB has much in common with performance management (for more on this topic, see Behn, 2008a; 2008b). The typical performance budget has a narrative describing what the unit does, performance measures that indicate activities and trends, and a breakdown by typical budget category. A "pure" performance budget would consist of activity classifications, workload data, other measures of performance, unit costing data, and program goals. Other data typically found in budgets that are modeled after performance budgets consist of narratives discussing the activity or program, several years of data for comparisons, mission statements, and desired outcomes. It should be noted that most budgets called performance budgets are not of the pure format.

Once programs have been separated into activities, measurements for performance must be generated for program evaluation. There are five generic performance measures used with performance budgeting. These include input, workload, efficiency ("doing the thing right"), effectiveness ("doing the right thing"), and impact or outcome measures. Input measures describe the resources, time, and personnel used for a program. They typically appear in the budget as dollars for salary and supporting expenses. They also might be presented as staff training hours and number of person years expended on an activity.

Workload measures are volumetric measures of what an agency does. Such items as number of audits done, returns filed, checks issued, number of arrests made, or miles of highway constructed are typical workload measures. Workload measures are the lowest form of performance measurement. The trouble with workload measures is that there are a lot of them and they do not tell anyone very much without further analysis. While workload measures do describe the activities of a program, they do not define how well the program is accomplishing its mission (Jones & Thompson, 1999; 2007). To do this, workload measures must be converted into measures of efficiency, effectiveness, and outcome.

Efficiency measures take workload data and merge it with cost data to develop unit cost measures. Then efficiency can be gauged on such items as the cost per arrest made, the cost of issuing a check, or the cost of flying an aircraft per hour. Efficiency is a much better indicator of performance than simple workload data since it gives outputs a direct cost relationship. These costs per unit can then be compared over time or against other similar activities to gauge competitiveness or improvement. This is important since it allows administrators a simple way to keep track of complex programs. At higher levels, the legislature can track efficiency measures to keep costs down, and the public can be assured its taxes are being spent efficiently. Efficiency, however, does not necessarily indicate effectiveness.
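To make the arithmetic behind efficiency measures concrete, the following minimal sketch computes unit costs from workload and cost data and compares them across two fiscal years. The activities, workload counts, and dollar figures are hypothetical illustrations, not data from any actual budget.

```python
# Minimal sketch: deriving efficiency (unit-cost) measures from workload and cost data.
# All activities and figures are hypothetical and for illustration only.

workload = {  # units of work performed, by activity and fiscal year
    "arrests_made":  {"FY1": 12_500,  "FY2": 13_100},
    "checks_issued": {"FY1": 480_000, "FY2": 495_000},
}

cost = {  # total dollars spent on each activity, by fiscal year
    "arrests_made":  {"FY1": 31_250_000, "FY2": 34_060_000},
    "checks_issued": {"FY1": 1_200_000,  "FY2": 1_188_000},
}

for activity in workload:
    for fy in ("FY1", "FY2"):
        unit_cost = cost[activity][fy] / workload[activity][fy]
        print(f"{activity:14s} {fy}: ${unit_cost:,.2f} per unit")
    # Year-over-year change in unit cost gauges improvement or decline in efficiency.
    change = (cost[activity]["FY2"] / workload[activity]["FY2"]) \
             / (cost[activity]["FY1"] / workload[activity]["FY1"]) - 1
    print(f"{activity:14s} unit-cost change FY1 to FY2: {change:+.1%}")
```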

Effectiveness measures are used to mark output conformance to specified characteristics. Such items as quality, timeliness, and customer satisfaction fall into this category. These measures require managers to determine goals for the particular program activity, to identify who their customers are, and to determine what characteristics those customers want in the products or services delivered to them. Effectiveness measures then indicate how well the agency is satisfying these needs. Effectiveness measures are better than efficiency measures in that the primary focus of effectiveness is on the customers, whereas the primary focus of efficiency is the organization. If efficiency focuses on cost per unit, effectiveness measures tend to focus on rates of accomplishment, such as the percentage of satisfied clients or the ratio of clients helped to clients seen. Effectiveness is associated with the quality of service and includes such things as responsiveness, timeliness, accessibility, availability, participation, safety, and client satisfaction. If efficiency measures are largely internal, effectiveness measures connect the operator to his clientele, the citizen to his government.

Outcome measures are the most difficult level of measure used in a performance budget system. These are measures of outcome, impact, or result. They attempt to capture performance based on achieving what the program as a whole set out to do. Simply put, they ask whether the program achieved the mission it set out to accomplish from the start. Has the city become cleaner, the streets safer, students more knowledgeable, and customers more satisfied? In the 1970s, several cities took photographs of their city streets and used a standardized photograph rating scale to judge whether their streets were actually getting cleaner as a result of sanitation efforts. This information could then be compared against historical data or against a rating for a different neighborhood. After all, it is possible to collect many more tons of trash, and to have the unit cost drop, while the city streets get dirtier. The photo scale technique was meant to provide an outcome measure. As a practical matter, outcome measures have proved to be very difficult to develop and maintain. They tend to be particularly difficult in human service areas, such as education or public safety, where global statements are easy but precise measurement is difficult. Additionally, in the budget process, even where measurement is easy, it is sometimes difficult for policymakers to decide how many dollars it will take to move up an increment or two on a rating scale, and whether doing so is worth it. This further assumes more clarity in cause-and-effect relationships than may be possible in the real world. Constructing and evaluating measures adds technical difficulty to the budget process, which many participants view as complicated enough in terms of determination of cost and political preference.

Early versions of performance budgeting focused on measures of workload and efficiency. The indicators focused primarily on the agency itself and were primarily internal measures of what the agency did and what it cost. These were input and workload measures, like salary cost and tons of trash collected, with efficiency ratios developed to measure the change in cost of collecting a ton of trash from one year to the next. More recent attempts at performance budgeting have included measures of effectiveness and outcomes or results, and have a focus on the clients and customers of the agency, with some measures constructed to evaluate client or customer satisfaction or response time to customer demand. Whatever measures are used, they are compared between similar agencies or over several years to measure competitiveness or improvement.

This type of budget aims to assist managers in spending wisely so that maximum output is achieved with as little input as possible, with the focus shifting from objects of expenditure to program activities as the basis for budgeting. Therefore, instead of budgeting for salaries, utilities, and travel expenses, the manager would base the budget upon the activities his unit performs, and policymakers would judge which activities should be increased or decreased.

4. ADVANTAGES AND DISADVANTAGES OF PERFORMANCE BUDGETING

As with any budget type, performance budgets have advantages and disadvantages associated with them. The basic premise of PB is that it will instill incentives to improve the performance and productivity of agencies and their employees. Some additional potential advantages include improved planning, more effective administrative control, increased decentralization of decision making, improved public relations as a consequence of greater transparency of program and spending information, improved focus on the activities of the organization, and provision of more precise quantitative measures, which, if pertinent and feasible, are better than vague generalities for evaluating the organization against a set of established standards (Wildavsky and Jones, 1994).

Some potential disadvantages also may be noted. The first is that there is no empirical evidence in the literature of any case where PB actually has had significant influence on agency budgets or productivity (Gilmour and Lewis, 2006; Robinson and Brumby, 2005). However, there are cases where disincentives built into PB agency reviews have retarded performance, e.g., when budgets have been reduced. Thus, one disadvantage is that when PB review is performed and the results are positive, leading to increased funding, no productivity increases are evident. But, when the opposite occurs, performance can decrease. Secondly, performance budgeting is not equally applicable to all organizations or agencies. Many agencies do not do work or exhibit performance that is easily quantifiable (Schick, 2001). Thirdly, efficiency is not guaranteed by identifying and using unit cost data for comparison to performance measures. Legislators and administrators may use a performance budget to identify problem areas or wasteful agencies, but this by itself does not increase efficiency. Fourthly, in a political context, it may be difficult to define an appropriate set of measurements for workload or performance. In practice, many indicators have proven to be inappropriate, and agencies may have to go through several iterations before a satisfactory set of measures is found. Fifthly, the end product of many agencies may not be measurable by any known means. Measures of effectiveness and outcome or impact are extremely difficult to develop in some areas, e.g., social services and education. Lastly, this type of budgeting may not be practicable for relatively small agencies. The staffing time and costs associated with monitoring indicators year-round may inhibit or prevent smaller agencies from using performance budgets effectively.

Two attributes appear to distinguish current performance measurement and budgeting initiatives from the efforts of the 1950s. The reincarnation of performance measurement beginning in the 1990s in the U.S. federal government focused considerably on external relations, with customer satisfaction measures viewed as a major strength of new performance measurement systems. Additionally, when implementing performance measurement, a shared sense of vision from the top to the bottom of the organization was thought to be critical by advocates. However, although agencies now prepare and submit annual and long-term program plans to OMB and Congress in conformance with the requirements of law, as explained in the next section of this study, most of these plans still do not link directly with their budgets (Joyce, 1993). Thus, neither OMB nor Congress could analyze budgets linked to performance even if they wanted to, because they have not had the data to enable such an effort (Moynihan, 2006; see also Gilmour and Lewis, 2005).

5. ENACTMENT OF GPRA AND GMRA: IMPLICATIONS FOR PERFORMANCE BUDGETING

As noted in our introduction, on August 3, 1993, Congress passed P.L. 103-62, the Government Performance and Results Act of 1993--GPRA (P.L. 103-62, 1993). Approximately a year later, it passed the Government Management Reform Act (GMRA) that implemented the provisions of GPRA across all of the federal government. The purpose of the acts was to shift the focus of government management from inputs to outputs and outcomes, from process to results, from compliance to performance, and from management control to managerial initiative.

The significance of GPRA was made evident in 1994 when the Office of Management and Budget (OMB) issued its Circular No. A-11 Revision (Preparation and Submission of Budget Estimates for FY 1996). Under the guidelines of this directive, justification of programs and program funding henceforth would require the use of performance indicators and goals as set forth by the GPRA. In issuing Circular A-11 in 1994 to instruct and control the preparation of the FY 1996 budget, OMB indicated that without performance indicators, performance goals, or some other type of performance data, agency requests for significant funding to continue or increase an ongoing program would be difficult to justify (OMB, 1994, 122). For the FY 1996 President's Budget submission, and subsequently through 2000, OMB asked agencies to use output- and outcome-based performance measures in the budget decision-making process and budget justification statements. These guidelines conformed to the general provisions of the GPRA.

The long-term goal of GPRA was and continues to be to implement performance measurement in federal government management and, to some extent experimentally, in budgeting, as a means of improving resource planning, decision making, allocation and execution for federal agencies (McCaffery and Jones, 2001; McNab and Melese, 2003). Performance budget pilot projects were conducted for FY 1998 and FY 1999 under the auspices of GPRA. Regrettably, the results from these pilot projects were never provided externally, and consequently only the agencies and OMB learned from these experiments. Then in 2002, under the administration of George W. Bush, OMB developed and integrated performance measurement, but not performance budgeting per se, into all programmatic and budget submissions and reviews under the Program Assessment Rating Tool (PART) system.

6. THE PROGRAM ASSESSMENT RATING TOOL AND LINKAGE TO PERFORMANCE BUDGETING

A budget reform initiative was announced by President Bush in the FY 2003 President's Budget delivered to Congress in February 2002 (Jones, 2002a). The budget introduced "performance-based budget review" to link funding to performance measures and accomplishments for federal departments and agencies. As a result of a Presidential initiative in August 2001, the Office of Management and Budget already had targeted review to improve performance in five areas of management: human resources management productivity, competitive sourcing (i.e., contracting out), financial management, e-government, and integration of performance measurement and budgets. However, the change in 2002 went beyond these objectives in establishing a Program Assessment Rating Tool (PART) to be used in budget review. What is provided here is not a comprehensive evaluation of the success of performance-based budget review under the Bush administration, but merely a critique.

There is some evidence available to assess the efforts and degree of success of OMB with this approach, given that by 2003 more than 20% of federal programs and virtually all departments and agencies were complying with OMB budget submission requirements. For FY 2004, 231 programs were graded by OMB using the Program Assessment Rating Tool system (OMB, 2003). However, whether and to what extent PART review improved department efficiency and effectiveness is uncertain (Gilmour and Lewis, 2006). What can be said is that OMB used PART to attempt to reduce budgets in some instances. This is hardly surprising given that the role of OMB is, in part, to cut budgets, and this is standard practice as an element of most management and performance review techniques employed by executive budget control agencies (Jones and Euske, 1991; McCaffery and Jones, 2001: 203-224, 281-320; Jones, 2001a; Jensen, 2001; Wanna et al., 2003; Guthrie et al., 2004). Performance assessment using PART continued in preparation of the FY 2003 through FY 2009 President's Budgets under the Bush administration.

Performance review under the Bush administration may be viewed as a continuation of a trend begun in the 1990s under OMB and at the direction of Congress (Rodriquez, 1996; GAO, 1997a; 1997b; 1998; 1999; 2000a; 2000b; GPRA, 1993; GMRA, 1994). As explained earlier in this study, budgets have long been reported to and analyzed by OMB using performance-based criteria to link funding to performance measures and accomplishments for federal programs within departments and agencies. The Program Assessment Rating Tool was used by OMB in analysis of 67 programs included in the FY 2003 President's Budget. PART was employed to score performance in 231 programs (about 20% of total on-budget federal programs) for the President's FY 2004 Budget. An additional 20% were reviewed annually by OMB in preparation of the FY 2005 through FY 2009 President's Budgets (OMB, 2008).

PART scored programs using a multi-variate set of criteria consisting of approximately 30 variables that initially (for the FY 2003 budget) culminated in what was characterized as a "stop light" system: red for failing performance, yellow for marginal performance, and green for good performance. For FY 2004 the system expanded the range of grading options to five categories: effective, moderately effective, adequate, results not demonstrated, and ineffective. For FY 2004, 14 programs were rated as effective, 54 moderately effective, 34 adequate, 11 ineffective and 118 results not demonstrated. The large number in this last category indicated that many programs had not attempted or had been unable to develop useful measures of performance. Reporting of the results was provided in a separate volume of the President's budget (OMB, 2003). Programs were rated in four areas of performance: program purpose and design, strategic planning, program management, and program results.
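As a purely illustrative sketch of how a PART-style rating could be assembled from the four assessment areas, the code below aggregates yes/no answers into a weighted score and maps it to the five rating categories. The area weights, question sets, and category thresholds shown are assumptions for illustration; they are not a reproduction of OMB's actual PART methodology.

```python
# Hypothetical illustration of a PART-style rating aggregation.
# The weights, questions, and thresholds below are assumptions, not OMB's actual scheme.

AREA_WEIGHTS = {
    "program_purpose_and_design": 0.20,
    "strategic_planning":         0.10,
    "program_management":         0.20,
    "program_results":            0.50,
}

def area_score(answers):
    """Fraction of 'yes' answers in one assessment area (answers: list of bools)."""
    return sum(answers) / len(answers)

def overall_rating(area_answers, results_demonstrated=True):
    """Weighted score across areas, mapped to one of the five rating categories."""
    score = sum(AREA_WEIGHTS[area] * area_score(ans) for area, ans in area_answers.items())
    if not results_demonstrated:
        return score, "results not demonstrated"
    if score >= 0.85:
        category = "effective"
    elif score >= 0.70:
        category = "moderately effective"
    elif score >= 0.50:
        category = "adequate"
    else:
        category = "ineffective"
    return score, category

# Example: a program that answers mostly "yes" except in the results area.
answers = {
    "program_purpose_and_design": [True, True, True, True],
    "strategic_planning":         [True, True, False, True],
    "program_management":         [True, True, True, False],
    "program_results":            [True, False, False, False],
}
print(overall_rating(answers))         # weighted score and rating category
print(overall_rating(answers, False))  # program reports no usable outcome data
```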

7. PART GOALS AND PERFORMANCE IMPROVEMENT

The overall objectives of PART were (a) to measure and diagnose program performance, (b) to evaluate programs in a systematic, consistent, and transparent manner, (c) to inform agency and OMB decisions for management, legislative or regulatory improvements and budget decisions, and (d) to focus on program improvements and measure progress against prior year ratings. OMB extended the application of PART to all programs in the budget as part of budget review. Doing so proved to be a time-consuming effort for the reviewers, who were mostly line budget examiners. Budget analysts were tasked to review specific programs, i.e., they assumed some degree of ownership of the budgets they examined (McCaffery and Jones, 2001: 203-224; Wildavsky, 1964: 38, 40, 160). Consequently, there was a problem of consistency in the application of evaluative criteria in OMB review of budgets and performance.

Given this problem, the executives and budget staff of programs and agencies under review were wise in taking Wildavsky's advice (Wildavsky, 1964, pp. 20-31) to be sensitive to the signals about priorities provided to them by budget analysts. Typically, after one or two budget reviews by the same analyst, agency budget officials became attuned to the preferences of the analyst and the administration served. To fail to read such feedback was to lose competitive advantage in the budget game. The advantage of the PART system over previous methods of budget review was that it provided more feedback, i.e., more signals on how to achieve a higher rating. Indeed, by 2003, PART had become sufficiently institutionalized that consulting firms inside the Washington, D.C. beltway were offering courses to teach program staff how to improve their scores. With this much feedback and assistance, it is surprising how many programs were rated for the FY 2004 Budget in the ineffective or results not demonstrated categories (129, or roughly 56% of the 231 programs reviewed). However, from FY 2004 to FY 2006, many agency performance ratings improved.

For FY 2003 through FY 2008, many programs received failing PART scores--but improvements in some programs continued to be reported (OMB, 2008). Departments and agencies invested staff time and energy in achieving improved ratings in an attempt to be rewarded in the President's Budget. The key incentive supporting the PART system was the intent of OMB directors and staff to integrate performance scoring with OMB budget review. Presumably, programs that improved their ratings were rewarded in the budget. Thus, the advantages of the PART approach appear to be twofold. First, the scoring was relatively easy to understand because it was simple--there were only five categories. Second, PART scores were scaled relative to a set of variables that represented the strategic and annual planning, management and execution performance of programs and agencies, according to data developed and reported to OMB by these agencies. OMB did not provide the data for PART reviews. Still, the opportunity was ever present for programs to score better if they wanted to, based in part on whether they were able to measure and quantify results.

It may be observed that at least two biases are built into any performance rating system, in addition to the inevitable issues concerning inter-rater reliability noted above. First, some program performance results (or outputs) are easier to measure and report than others. The second is uncertainty about the relationship between achievement on measures and budget decisions. For example, if programs solved some problems faced by their clientele groups (Wildavsky's term) but failed to solve others, should their budgets be increased or reduced? On one hand, the argument is to reward improvement; on the other, if clientele needs have been satisfied, does this indicate a decrease in demand for program services and therefore a budget cut?

Review of the PART system by departments and agencies that were rated by OMB initially indicated several recurrent criticisms (Jones, 2002a; 2002b; 2003). The PART questionnaire instrument required yes or no answers to a number of questions about performance. It has been suggested that a better system would have departments and agencies rate their answers on a scale, e.g., 1 (lowest) to 5 (highest). Scaled data are more amenable to analysis than yes/no responses. A second criticism concerned the way OMB defined the units of analysis - as programs instead of departmental or agency administrative entities. Some programs defined by OMB were not administered as such by departments and agencies (many programs cross agency jurisdictions), thus making performance reporting more difficult. In this regard, there seemed to be some incompatibility between PART and the Government Performance and Results Act. PART evaluated programs while GPRA assessed agencies--and these entities are defined in different ways. This was confusing to those under evaluation. A third criticism was that while OMB provided some feedback on its assessment of questionnaire responses and desired improvements in program performance, more information of this type was needed. Further, some program officials indicated they wished to collect more data but were prevented from doing so by various rules, including the requirements of the Paperwork Reduction Act, and by OMB insistence that they cut down the number of different data elements to be measured and reported. Finally, program staff reported the appearance of an inverse relationship between effectiveness ratings and budget decisions, i.e., better was not necessarily richer.

8. SUPPORT FOR PERFORMANCE-BASED REVIEW

Testimony to Congress by David Walker (Walker, 2001; 2002), the Comptroller General of the U.S. government, and comments by representatives of the General Accounting Office, the Offices of the Inspectors General and members of Congress indicate that important institutional observers, including the key oversight committees of Congress, have reviewed OMB assessment of executive programs and management practices for the FY 2003 and 2004 budgets. Until 2006, numerous entities, including the U.S. Comptroller General, were cautiously supportive of Bush administration efforts.

The Government Accountability Office (GAO), the auditor for Congress, was very specific in stating that it had reviewed favorably the criteria supporting PART and OMB evaluation of department and agency performance. As noted, in 2002, Christopher Mihm of GAO stated that in his view the approach and its execution were methodologically sound (Mihm, 2002). GAO reviews of performance management from the late 1990s through 2002 were supportive (GAO, 1996a; 1997a; 1997b; 1998; 1999; 2000a; 2000b; Mihm, 2002a; 2002b; Posner, 2002).

GAO has favored performance measurement to the extent that it recommended in 2002 that Congress adopt a "Performance Resolution" process to measure and report annually on executive agency progress. This approach would function in a manner similar to the Budget Resolution process (Posner, 2002). Such support for performance budget review (as distinct from broad-scale performance budgeting) may change, but it is clear that virtually everyone in the nation's capital took serious notice of, and responded to, the Bush administration OMB initiatives with performance measurement and results reporting linked to budgets. And it may be anticipated that Congress and the Executive branch will continue to be concerned with implementation of the Government Performance and Results Act. However, positive reviews of PART began to change when it became evident after 2006 that OMB was using PART merely to cut budgets. Eventually this became the Bush budget legacy: a perversion of the purposes of performance budgeting and growth of large annual and cumulative budget deficits.

9. U.S. FEDERAL BUDGET REFORM: IS THERE A ROLE FOR PERFORMANCE BUDGETING?

To put the initiatives of the Bush and Obama administrations into the larger context of the status of federal budgeting in 2010-2011, observers of the congressional budget process have expressed the view that performance assessment is not the most significant problem. Rather, without caps on spending, and without the other restraints from the Budget Enforcement Act of 1993 that expired in 2000, including Pay-Go (finance before providing the benefit) for entitlement programs to control increases in the huge non-discretionary accounts (approximately 70% of total federal spending annually), including Social Security and Medicare, the federal budget process was deemed to be "broken" and in need of reform (Joyce, 2002; Meyers, 2002). Further evidence abounds that federal government budgeting is in trouble, both procedurally and substantively. After four years of annual surpluses, from 1998 to 2001, the federal budget moved back into deficit: $160 billion in FY 2002 and approximately $480 billion in FY 2003. Adding to future deficits, Congress passed additional tax cuts requested by President Bush and approved legislation in 2003 adding $475 billion in Medicare spending. Congress also approved additional funding of $79 billion in the spring of 2003 for the war on terrorism, and in September 2003, President Bush requested another $87 billion in the next budget (FY 2004) for pacification and rebuilding of Iraq and Afghanistan. Large supplemental budgets have been approved routinely by Congress and the President for national defense and security from the period following the 9/11 terrorist attack on the U.S., continuing into 2010 under the Obama administration.

The annual federal budgets for fiscal years 2010 and 2011 are projected to be in deficit by more than $1 trillion, and total federal debt is approximately $13 trillion and projected to increase through at least 2018 (Congressional Budget Office, 2010). Additionally, with the focus of the government now on health care reform implementation, job creation, the state of the economy, and armed conflicts abroad, administrative reform, except in the areas of regulation of financial institutions, national security and transportation safety, has not been high on the agenda of the Obama administration or Congress. What the Democratic Congress and President Obama will do with respect to controlling spending and reducing deficits over the longer term is uncertain, although Obama has repeatedly promised to make such reductions. Whether Congress will invest more energy into performance review of the budget also is impossible to forecast. Recent evidence suggests that Congress will attempt to reduce spending and the deficit through its Budget Resolution process (Congressional Quarterly, 2010, p. 1). The Budget Resolution is, in essence, the annual congressional spending plan. However, the spending and other targets established in the Budget Resolution are not binding on appropriations committees, and Congress in fact is not required to pass a Resolution (McCaffery and Jones, 2001, pp. 109-112). As of May 2010, Congress was unable to negotiate a Budget Resolution for FY 2011.

While the expressed interest of the Obama administration in assessing the performance of federal government agencies is evident, PART as employed by the Bush administration has been abandoned, and there is no evidence to suggest that the administration will attempt to implement performance budgeting per se as a means to accomplish this end. Rather, performance is now reviewed by OMB less formally than under PART, as one set of inputs to budget decision making, which is perhaps the best that advocates of performance budgeting can hope for in the medium term. What it would take to interest federal government decision makers, particularly members of Congress, in accepting performance budgeting is unknowable. Congress demonstrated such interest in the 1980s in passing the Gramm-Rudman-Hollings budget deficit legislation and in the 1990s in approving GPRA and GMRA. However, in the 2000s, regardless of which party has been in the majority, spending control has not been a priority. And for a variety of reasons this also has been the case for both the Bush and Obama administrations in this period.

Implementation of performance budgeting seems unlikely in the short term because Congress cannot even manage the process it is supposed to follow to control spending, even when one party controls both Houses of Congress. Despite the GAO recommendation that Congress use a performance resolution (Posner, 2002), it seems unlikely that Congress has the institutional capacity or the inclination to apply performance measurement in budgeting. Appropriations committees, which hold the most budget power in Congress, shun this approach because they believe it will reduce their discretion over spending. Few members of Congress want "budgeting by formula," as performance budgeting often is perceived.

10. THE FUTURE OF PERFORMANCE-BASED BUDGETING IN THE U.S. FEDERAL GOVERNMENT

Whether Congress will ever take action to further implement performance measurement and management, strategic planning and budgeting, in conformance with the Government Performance and Results Act of 1993 and the Government Management Reform Act of 1994, is uncertain. Given the likelihood of little or no action by Congress to place greater emphasis on performance in budget analysis and decision making in the near-term future, we may ask how much difference it makes whether Congress uses or ignores performance budgeting and performance review. To some extent, Congress has routinely used performance measures to review and enact budgets since the 1950s and before. A vast number of performance proxy measures are built into federal budgets for Executive branch programs in virtually all departments, ranging from the National Forest Service and Bureau of Land Management to the Department of Defense. By using formulas that translate into dollars, performance budgeting may reduce complexity in congressional budgeting - and reducing complexity is a necessity in federal budgeting, as Wildavsky explained years ago (Wildavsky, 1964, pp. 11, 14-15, 147-152; Wildavsky, 1988, pp. 79-80, 412). Can Congress be expected to do more?

With respect to executive branch budget reform, it probably does not matter much how Congress budgets. The executive branch can institute the types of reform it deems fit and useful. Again, this is nothing new. Most of the major budget reforms in the post-WW II period have been developed and implemented in the executive branch, e.g., program budgeting in the 1950s, PPBS in the 1960s, management by objectives--and budgeting by objectives--and zero-based budgeting in the 1970s, top-down budgeting in the early 1980s, fixed-ceiling budgeting in the mid-1980s and 1990s after passage of the Gramm-Rudman-Hollings Acts, the Budget Enforcement Acts of 1990 and 1993 and other similar measures, and the performance/results orientation post-2000 (McCaffery and Jones, 2001). While it may be argued that GPRA was a congressional initiative, enforcement and implementation have been managed by the President's Office of Management and Budget. Thus the conclusion may be drawn that if performance budgeting is to be implemented at some time in the future in the U.S. federal government, it probably must come at the initiative of the President and the Office of Management and Budget.

President Barack Obama has articulated the importance of performance evaluation in public pronouncements. However, as of 2010, OMB does not appear to be moving any further towards implementing a performance budgeting system--but it definitely continues to emphasize use of performance information in the budget process, as explained below. While most federal agencies have embraced GPRA and the idea of performance measurement, some still are treating it as a short-term phenomenon and waiting for it to go away, as did other budget reforms including Zero Based Budgeting (ZBB) and Management by Objectives (MBO), so as to allow them to get on with what they perceive as the "real" work of their agencies. Performance measurement and performance budgeting consume a significant amount of staff time and energy. Unless decision makers are willing to use the information produced from performance measurement, staff and other observers of the process wonder whether the cost is worth the effort and benefit. In particular, if Congress is no longer interested in the GPRA approach to performance measurement, they wonder why OMB continues to push it. This perception was magnified when performance measurement and evaluation were viewed as exclusively of interest to the Executive branch for use in cutting budgets, as occurred during the period 2006 to 2008 (and for fiscal year 2009 budget preparation under Bush).

If Congress appears to be largely ignoring the results of the GPRA and GMRA laws it passed, then it remains to be seen whether the Obama administration and the Executive branch will press forward on the emphasis on performance measurement in budgeting and management in 2010 and beyond. It may be observed that some members of Congress remain interested in performance review. On May 13, 2010, Congressional Representative Henry Cuellar (D-TX) presented a draft bill to the House Oversight and Government Reform subcommittee on Government Management, Organization and Procurement intended to put performance "at the heart of how federal government does business". Cuellar introduced similar legislation in 2009 intended to make PART law, but Congress did not take action on this legislative proposal. The new legislation drops PART and focuses instead on performance goals and more modern, crosscutting thinking about performance measurement and management. However, this legislation does not call for performance budgeting (Kohli, 2010).

The Obama administration is interested in assessing the performance of federal government agencies. In spring 2010, OMB announced that the President's fiscal year 2011 budget included a new approach to ensure that executive branch departments and agencies focus on performance. The Budget established 128 High Priority Performance Goals (HPPGs) across federal departments and agencies that define its priorities (OMB, 2010). According to OMB sources (who declined to be cited), the Obama administration wants agencies to integrate performance planning into their operational plans more systematically, with "less bureaucratic reporting and review" by OMB than was the practice using PART under the Bush administration. We have noted that much of this has already been accomplished by a number of agencies in implementation of GPRA. Further, it is unclear how HPPG performance review by OMB will be integrated into budget decision making, or whether it is intended to be integrated at all. A reasonable expectation is that high priority areas will receive increased funding in Presidential budgets. If this is so, it is unclear what performance results will need to be demonstrated, and to whom, for agencies to justify continuation of funding increases.

Despite this interest in careful assessment of the performance of federal agencies, there is no evidence to suggest that the Obama administration will attempt to implement performance budgeting. Thus, the longer-term issue for the Obama administration, and administrations to follow, is to determine whether performance assessment of the type they desire can and should be done using performance budgets.

11. CONCLUSIONS AND RECOMMENDATIONS

Based on our review of performance budgeting and performance-oriented budgeting initiatives at the federal government level in the U.S., what advice may be given to practitioners involved in developing performance measurement tied to budgeting, regardless of the level of government at which they work? In our view, practitioners should consider the recommendations provided below. Finally, after enumerating and articulating our recommendations, we provide an overall conclusion with respect to U.S. federal government application of performance budgeting.

1. Agencies should first identify the primary mission they perform and the most important services they provide. It is critical to define and use performance indicators that measure and report (a) the most important work elements the agency is responsible for performing, and (b) evidence of value added and changes in productivity. Definition of agency mission, strategy and prioritization of services needs to be integrated and linked to performance plans and budgets. This in turn helps in identifying the measures of performance to be used in gauging these service activities. Deciding which services are the primary activities of the agency will clarify which measures to develop to measure performance (Jones and Thompson, 2007; Grizzle and Pettijohn, 2002).

2. Agencies should keep performance budgeting plans as simple as possible. Simplicity has seemed to be the best way to approach GPRA performance planning. Large, convoluted measures that are difficult for outside administrators or legislators to understand are not likely to be beneficial in gauging performance. Moreover, verbose explanations of future implementation plans, or of items not directly related to the measures themselves, add little value to plans. The plans should state the mission and vision of the organization. Specific measures, their targets, baseline data if available, and long-term goals should be identified (an illustrative sketch of such a plan entry appears after this list of recommendations). Clear definitions of the measures used should be developed and examined closely for clarity before they are reported outside the agency. Finally, a means for validating the measures, as well as their relationship to the budget, is highly desirable and required by law in the GPRA.

3. Agencies should expect evolution of measures. Several aspects of performance planning will take considerable time to work out. Dialogue about the adequacy of measures needs to take place within the agency, and between agencies and budget reviewers and others who monitor performance externally, e.g., congressional oversight committees. Such dialogue is needed to improve measures, measurement and linkage to budgets. Subsequent performance plans may not look anything like the initial ones developed by an agency. Federal agencies changed many of their performance measures after realizing the initial measures were not appropriate. Additionally, the size of initial plans often is significantly reduced over the first several years of implementation (General Accounting Office, 2001; see also General Accounting Office, 1996; 1997a & b; 1998; 1999; 2000 a & b). Moreover, outcomes may not be measurable on an annual basis. Some outcomes take years to achieve (e.g., in the area of social services or education), depending on the service mission, orientation and goals of the agency.

4. Agencies should concentrate on measures of efficiency, effectiveness and cost reduction. Outcome or output measures should be included if possible in performance budgets. However, it may be more feasible for agencies to focus on efficiency and effectiveness measures that are more realistically attainable. Even simple measures of output and performance are likely to be perceived as more useful by both agency officials and external budget examiners if accompanied by efficiency, effectiveness and cost reduction initiatives. In the current environment of budget deficits and reductions, performance measurement and reporting that includes means for improving agency operations while cutting costs are more likely to be noticed and supported.

5. Agencies should realize their measures may not be interpreted as expected. Outcomes are by far the most difficult to capture. Few agencies can include true outcome measures in their GPRA plans. An outcome measure at one level could be an input measure at another level. Input measures are not required by GPRA and usually are not of interest to outside stakeholders, but they are often used by control agencies in budget examination and reduction. Experience at the state government level in the U.S. has demonstrated that some of the consequences of use of performance measurement and budgeting may have a broader range of effects than initially anticipated (Melkers and Willoughby, 2005; see also Willoughby, 2004).

6. Good accounting and performance measurement systems are required to implement performance budgeting in an effective manner. Agencies with good service and cost information systems in place have a significant advantage in developing performance measurement and budgeting systems. Few agencies had good information systems when GPRA was passed, but as of 2010, most federal agencies have such systems, albeit with varying degrees of sophistication and accuracy.

7. Agencies should link together reform initiatives where synergism is possible. GPRA fit rather neatly into other reform initiatives in the 1990s, including Total Quality Management and the National Performance Review. Most of the pilot projects used GPRA as a means to enhance initiatives already in progress in their organizations. Many of the tools of TQM appeared to have benefited managers as they attempted to create performance indicators and plans. In addition, GPRA requirements matched reasonably well with the Program Assessment Rating Tool (PART).

8. Agencies should determine how performance plans can be linked to resource allocation. Perhaps the most difficult task to complete in all of performance budgeting and in GPRA implementation is to define whether and how various measures on hand may be tied to costs and resource allocation. As noted above, defining measures and relating them to costs requires accurate and reliable databases and information systems. Most federal agencies have improved their accounting systems in efforts to comply with the requirements of the Chief Financial Officers Act of 1990. However, better accounting for expenses and costs is only part of what is needed for performance budgeting. Measures of service performance must be developed and matched to expenses and costs.
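As an illustration of recommendations 2 and 8, the minimal sketch below records simple performance plan entries with a baseline, an annual target, and a long-term goal, and relates one measure to budgeted cost to give a crude link between the plan and resources. The PerformanceMeasure structure, measure names, and all figures are hypothetical assumptions, not drawn from any agency plan or from GPRA guidance.

```python
# Hypothetical sketch of a simple performance plan entry and a crude link to resources.
# The structure, measure names, and all figures are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str              # clear, externally understandable definition
    unit: str
    baseline: float        # most recent actual value, if available
    annual_target: float
    long_term_goal: float
    annual_cost: float     # dollars budgeted for the activity behind this measure

plan = {
    "mission": "Process benefit claims accurately and promptly.",
    "measures": [
        PerformanceMeasure("Claims processed per month", "claims", 9_500, 10_000, 12_000, 4_800_000),
        PerformanceMeasure("Average days to process a claim", "days", 42, 35, 21, 1_500_000),
        PerformanceMeasure("Clients reporting satisfaction", "percent", 78, 82, 90, 250_000),
    ],
}

print(plan["mission"])
for m in plan["measures"]:
    print(f"- {m.name}: baseline {m.baseline} {m.unit}, "
          f"annual target {m.annual_target}, long-term goal {m.long_term_goal}, "
          f"budgeted cost ${m.annual_cost:,.0f}")

# Crude budget linkage, meaningful mainly for volume (workload) measures:
claims = plan["measures"][0]
print(f"Budgeted cost per targeted claim: ${claims.annual_cost / claims.annual_target:,.2f}")
```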

Over the past decade, some federal agencies have achieved significant progress relating performance measures to costs, while others have not made any real progress, according to OMB (OMB, 2010). Initially, a few agencies began the performance measurement process by establishing performance contracts between their field activity managers and the corporate office (General Accounting Office, 2001, pp. 8-9). However, by the early 2000s, most U.S. federal agencies, and many of the other national governments that had used extensive performance contracting in the past (e.g., New Zealand), had abandoned them. A primary stumbling block with the performance contract approach is the object-of-expenditure and appropriation base used in the U.S. federal budget process, which has no performance orientation. For other nations, performance contracts have been eliminated because they were not found to produce the results desired, especially given the significant effort consumed in preparation and reporting. Still, performance measures are used to allocate budgets to some extent as of 2010 in the UK, Switzerland, Australia, New Zealand and other developed nations, and in the U.S. federal government, as noted previously in this study. Some developing nations are deeply involved in similar efforts (Wescott et al., 2009). Still, such use typically does not qualify as pure performance budgeting. Rather, it is resource-allocation decision making informed by some performance indicators and trends. Only in developing nations does it appear that pressure is applied presently to implement full-scale performance budgets. While this pressure is applied within national governments by their treasury departments and central budget control agencies, often it is directed more rigorously to local governments and is driven most directly by international development assistance institutions including the World Bank and Asian Development Bank (Srithongrung, 2009; Punyaratabandhu and Unger, 2009; Kim, 2009; Netra and Craig, 2009; Taliercio, 2009, p. 196; Wescott, 2009).

In summary, the extent to which longer-term U.S. federal government experimentation with performance-based budget reform will continue remains in doubt. This is particularly the case as a result of the extensive role of the federal government in assuming a large amount of additional debt in its efforts to stabilize the economy, beginning in the last quarter of 2008. Rescuing the economy has dominated the U.S. federal government monetary and fiscal policy agenda to the extent that performance budgeting or performance-oriented budgeting, even as employed during the period 2001-2008, has been pushed off the political landscape. Still, given that it has been around for more than fifty years, we can expect performance budgeting to rise again, at some point, on the budget reform agenda of the U.S. federal government.

REFERENCES

Anthony, R. N. (2002) "Federal Accounting Standards Have Failed", International Public Management Journal, 5(3), pp. 297-312.

Axelrod, D. (1988) Budgeting for Modern Government. New York: St. Martin's Press, Inc.

Behn, R. D. (2008a) "Designing PerformanceStat Or What Are the Key Strategic Choices that a Jurisdiction or Agency Must Make When Adapting the CompStat/CitiStat Class of Performance Strategies?" Public Performance & Management Review, 32(2), pp. 206-235.

Behn, R. D. (2008b) "The Seven Big Errors of Performance Measurement". Working paper. Cambridge, MA: John F. Kennedy School of Government, February.

Bloom, T. R. (2003) Statement by the Director, Defense Finance and Accounting Service to the Subcommittee on National Security, Emerging Threats, and International Relations of the House Government Reform Committee, U.S. House of Representatives, March 31.

Borins, S. (1997) "What the New Public Management is Achieving: A Survey of Commonwealth Experience", in L. R. Jones and K. Schedler, eds., International Perspectives on the New Public Management. Greenwich, CT and London, UK: JAI Press, 1997, pp. 49-70.

Congressional Budget Office (2010) An Analysis of the President's Budgetary Proposals for Fiscal Year 2011, March 24, accessed April 22, 2010, at http://www.cbo.gov/budget/budproj.shtml.

Congressional Quarterly (2010) "Conrad's Budget Hits His 3 Percent Target", Budget Tracker Newsletter, Washington, DC: Congressional Quarterly, April 21, p. 1.

Eghtedari, A. G. and F. Sherwood (1960a) "An Analysis of Performance Budgeting in the City of Los Angeles", Public Administration Review, 20(1), pp. 76-92.

Eghtedari, A. G. and F. Sherwood, eds. (1960b) Performance Measurement and Performance Budgeting in the United States. Los Angeles, CA: University of Southern California.

General Accounting Office (1996) "Managing for Results: Achieving GPRA's Objectives Requires Strong Congressional Role". Testimony, GAO/T-GGD-96-79, March 6, p. 5.

General Accounting Office (1997a) Managing for Results: Analytic Challenges in Measuring Performance, HEHS/GGD-97-138, Washington, DC: GAO, May 30.

General Accounting Office (1997b) The Government Performance and Results Act: 1997 Government-wide Implementation Uneven, Washington, DC: GAO, June 2.

General Accounting Office (1998) Managing for Results: Measuring Program Results Under Limited Federal Control, GGD-99-16, Washington, DC: GAO, December 11.

General Accounting Office (1999) Managing for Results: Opportunities for Continued Improvements in Agencies' Performance Plans, GGD/AIMD-99-215, Washington, DC: GAO, July 20.

General Accounting Office (2000a) Managing for Results: Views on Ensuring the Usefulness of Agency Performance Information to Congress, GGD-00-35, Washington, DC: GAO, January 26.

General Accounting Office (2000b) Managing for Results: Challenges Agencies Face in Producing Credible Performance Information, GGD-00-52, Washington, DC: GAO, February 4.

General Accounting Office (2001) Managing for Results. GAO-01-592, Washington, DC: GAO, May.

Gilmour, J. B. and D. E. Lewis (2005) "Assessing Performance Budgeting at OMB: The Influence of Politics, Performance, and Program Size", Journal of Public Administration Research and Theory, 16(2), pp. 169-186.

Government Management Reform Act of 1994. www.npr.gov/npr/library/misc/s2170.html

Government Performance and Results Act of 1993. www.doi.gov/gpra

Greenspan, A. (2002) Testimony to the Senate Budget Committee, U. S. Senate, Washington, D.C., February 5.

Grizzle, G. and C. Pettijohn (2002) "Implementing Performance-Based Program Budgeting: A System Dynamics Perspective", Public Administration Review, 62(2), pp. 51-62.

Guthrie, J., C. Humphrey, L. R. Jones, and O. Olson, eds. (2005) International Public Financial Management Reform: Progress, Contradictions, and Challenges. Greenwich, CT: Information Age Publishing.

Hilton, R. M. and P. G. Joyce (2007) "Performance Information and Budgeting in Historical and Comparative Perspective", in B. G. Peters and J. Pierre, eds., Handbook of Public Administration. New York: Sage Publications Ltd, pp. 404-412.

Ingraham, P., P. Joyce and A. Donahue (2003) Government Performance: Why Management Matters. Washington, D.C.: Johns Hopkins Press.

Jensen, L. (2001) "Constructing the Image of Accountability in Danish Public Sector Reform", in L. R. Jones, J. Guthrie and P. Steane, eds., Learning From International Public Management Reform. New York: JAI-Elsevier Press, pp. 479-498.

Jones, L. R. (2001a) "UK Treasury Use of Performance Measures". Interview with UK Treasury official, Rome, Italy, December 12.

Jones, L. R. (2001b) "Management Control Origins". Interview with R. N. Anthony, North Conway, New Hampshire, November 17.

Jones, L. R. (2002a) "An Update on Budget Reform in the U.S.". IPMN Newsletter No. 2, February 7, p. 1. http://www.ipmn.net/index.php?option=com_content&task=view&id=36&Itemid=32

Jones, L. R. (2002b) "IPMN Symposium on Performance Budgeting and the Politics of Reform: Analysis of Bush Reforms in the U.S.". International Public Management Review, 3(2), pp. 25-41.

Jones, L. R. (2003) "IPMN Symposium on Performance Budgeting and the Politics of Reform". International Public Management Journal, 6(2), pp. 219-235.

Jones, L. R. and K. Euske (1991) "Strategic Misrepresentation in Budgeting". Journal of Public Administration Research and Theory, 3(3), pp. 37-52.

Jones, L. R. and F. Thompson (1999) Public Management: Institutional Renewal for the 21st Century. New York: Elsevier Science.

Jones, L. R. and F. Thompson (2007) From Bureaucracy to Hyperarchy in Netcentric and Quick Learning Organizations. Charlotte, NC: Information Age Publishing.

Jones, L. R., J. Guthrie and P. Steane (2001) "Learning From International Public Management Reform Experience", in L. R. Jones, J. Guthrie, and P. Steane, eds., Learning From International Public Management Reform. New York: Elsevier, pp. 1-26.

Joyce, P. G. (1993) "Using Performance Measures for Federal Budgeting: Proposals and Prospects", Public Budgeting and Finance, 13(4), pp. 3-17.

Joyce, P. G. (2002) Federal Budgeting After September 11th: A Whole New Ballgame or Deja Vu All Over Again? Paper presented at the conference of the Association for Budgeting and Financial Management, Kansas City, MO, October 10.

Kaplan, R. S. and D. P. Norton (1996) The Balanced Scorecard: Translating Strategy into Action. Cambridge, MA: Harvard University Press.

Kaplan, R. S. and D. P. Norton (2001) The Strategy-focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment. Cambridge, MA: Harvard University Press.

Kim, S. (2009) "Do Leadership and Management for Results Matter? A Case Study of Local E-Government Performance in South Korea", in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 307-334.

Kohli, J. (2010) "Congress Must Ensure the Executive Branch Performs at Its Best", Washington, DC: The Center for American Progress, May 12. http://www.americanprogress.org/issues/2010/05/defining_goals.html

Lynn, L. E. Jr. (1997) "The New Public Management as an International Phenomenon", in L. R. Jones and K. Schedler, eds., International Perspectives on the New Public Management. Greenwich, CT and London, UK: JAI Press, 1997, pp. 105-124.

McCaffery, J. L., and L. R. Jones (2001) Budgeting and Financial Management in the Federal Government. Greenwich, CT: Information Age Publishing.

McNab, R. M. and F. Melese (2003) "Implementing the GPRA: Examining the Prospects for Performance Budgeting in the Federal Government", Public Budgeting and Finance, 23(2), pp. 73-95.

Melkers, J. and K. Willoughby (2005) "Models of Performance Measurement Use in Local Governments: Understanding Budgeting, Communication and Lasting Effects", Public Administration Review, 65(2), pp. 180-190.

Mihm, C. J. (2002a) Testimony to the House Committee on Government Reform, Subcommittee on Government Management, Information and Technology, U. S. House of Representatives, Washington, DC: General Accounting Office, February 5.

Mihm, C. J. (2000b) Testimony to the House Committee on Government Reform, Subcommittee on Government Management, Information and Technology, U. S. House of Representatives, Washington, DC: General Accounting Office, July 20.

Moynihan, D. P. (2006) "What Do We Talk About When We Talk About Performance? Dialogue Theory and Performance Budgeting", Journal of Public Administration Research and Theory, 16(2), pp. 151-168.

National Performance Review (1993), Washington, DC: Office of the Vice President of the U.S. http://www.npr.gov/npr.html.

Netra, E. and D. Craig (2009) "Could a Decentralized Human Resource Management System in Cambodia Strengthen Performance and Accountability?" in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 335-360.

Office of Management and Budget (2003) Performance and Management Assessments, Budget of the United States Government, Fiscal Year 2004. http://www.whitehouse.gov/omb/budget/fy2004/pma.html.

Office of Management and Budget (2008) Performance and Management Assessments, Budget of the United States Government, Fiscal Year 2008. http://www.whitehouse.gov/omb/budget/fy2008/pma.html.

Office of Management and Budget (2010) Assessing Program Performance http://www.whitehouse.gov/omb/performance_default.

Posner, P. (2002) "Performance-Based Budgeting: Current Developments and New Prospects", Paper presented at the conference of the Association for Budgeting and Financial Management, Kansas City, MO, October 10.

Punyaratabandhu, S. and D. H. Unger (2009) "Managing Performance in a Context of Political Clientelism: The Case of Thailand", in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 279-306.

Reichmann, G. and M. Sommersguter-Reichmann (2007) "Efficiency Measures and Productivity Indexes in the Context of University Library Benchmarking", Applied Economics, 9(1), pp. 1-13.

Robinson, M. (2007) Performance Budgeting. New York: Palgrave MacMillan.

Robinson, M. and J. Brumby (2005) "Does Performance Budgeting Work? An Analytical Review of the Empirical Literature", Washington, DC: International Monetary Fund.

Rodriquez, J. (1996) "Connecting Resources with Results", Budget and Finance, 16(4), pp. 2-4.

Rouse, P. and M. Putterill (2003) "An Integral Framework for Performance Measurement", Management Decision, 41(8), pp. 791-805.

Sanger, M. B. (2008) "From Measurement to Management: Breaking through the Barriers to State and Local Performance", Public Administration Review, Supplement to Vol. 68, pp. 570-585.

Schedler, K. (1997) "Legitimization as Granted by the Client: Reflections on the Compatibility of New Public Management and Direct Democracy", in L. R. Jones and K. Schedler, eds., International Perspectives on the New Public Management. Greenwich, CT and London, UK: JAI Press, 1997, pp. 145-168.

Schick, A. (1971) Budget Innovation in the States. Washington, DC: The Brookings Institution.

Schick, A. (2001) "Getting Performance Measures to Measure Up", in D. W. Forsythe, ed., Quicker, Better, Cheaper? Managing Performance in American Government. New York, NY: Rockefeller Institute Press, pp. 30-60.

Seckler-Hudson, C. (1953) Bibliography on Public Administration: Annotated. Washington, DC: American University Press.

Simons, R., A. Davila and R. S. Kaplan (1999) Performance Measurement and Control Systems for Implementing Strategy. Cambridge, MA: Harvard University Press.

Smith, P. C. and A. Street (2005) "Measuring the Efficiency of Public Services: the Limits of Analysis", Journal of the Royal Statistical Society: Series A (Statistics in Society), 168(2), pp. 401-417.

Srithongrung, A. (2009) "The Causal Dynamic Effects of a Performance-Based Budget on Thai Public Spending: A Reexamination", in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 247-278.

Stone, M. (2002) "How Not to Measure the Efficiency of Public Services", Journal of the Royal Statistical Society: Series A (Statistics in Society), 165(3), pp. 405-434.

Taliercio, R. (2009) "Unlocking Capacity and Revisiting Political Will: Cambodia's Public Financial Management Reforms, 2002-2007", in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 175-206.

Thompson, F. (1997) "Defining the New Public Management", in L. R. Jones and K. Schedler, eds., International Perspectives on the New Public Management. Greenwich, CT and London, UK: JAI Press, 1997, pp. 1-14.

Walker, D. (2001) Testimony by the Comptroller General to the Subcommittee on National Security, Veterans Affairs, and International Relations of the House Government Reform Committee, U. S. House of Representatives, March 7.

Walker, D. (2002) Testimony by the Comptroller General to the House Committee on Government Reform, Subcommittee on Government Management, Information and Technology, U. S. House of Representatives, February 7.

Wanna, J., L. Jensen and J. de Vries, eds. (2003) Controlling Public Expenditure. Northampton, MA: Edward Elgar.

Wescott, C. (2009) "Assessing World Bank Support for Public Financial Management and Procurement", in C. Wescott, B. Bowornwathana and L. R. Jones, eds., The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing, 2009, pp. 157-174.

Wescott, C., B. Bowornwathana and L. R. Jones, eds. (2009) The Many Faces of Public Management Reform in the Asia-Pacific Region. Oxford, UK: Emerald Publishing.

Wildavsky, A. (1961) "Political Implications of Budget Reform", Public Administration Review, 21(4), pp. 183-190.

Wildavsky, A. (1964) The Politics of the Budgetary Process. Boston: Little, Brown.

Wildavsky, A. (1988) The New Politics of the Budgetary Process. Glenview, IL: Scott, Foresman.

Wildavsky, A. and L. R. Jones (1994) "Budgetary Control in a Decentralized System: Meeting the Criteria for Fiscal Stability in the European Union", Public Budgeting & Finance, 14(4), pp. 7-22.

Willoughby, K. (2004) "Performance Budgeting and Budget Balancing: State Government Perspective", Public Budgeting and Finance, 24(2), pp. 21-39.

L. R. Jones
Wagner Professor of Public Management
Graduate School of Business and Public Policy, Naval Postgraduate School
Monterey, California, USA

Jerry L. McCaffery
Professor of Public Budgeting
Graduate School of Business and Public Policy, Naval Postgraduate School
Monterey, California, USA