
From Global Rationing to Target Setting in the U.K.

Since its founding in 1948, Britain's National Health Service has worked on the basis of an implicit concordat between the state and the medical profession. The state has allocated capped, global budgets to health authorities, which, in turn, have adopted the same method in setting the spending limits for individual hospitals. In exchange, the British medical profession has enjoyed far greater freedom from scrutiny than doctors in the United States. The bargain was clear: the state achieved firm control over health care spending in total; the medical profession retained clinical autonomy. The state could not determine the way in which resources were used at the point of service delivery; doctors had to accept the budgetary limits set for them. For almost forty years everyone seemed satisfied. But as the NHS passed its fortieth anniversary in 1988, there were clear signs that the concordat was breaking down. The state was becoming ever more determined to influence the activities of the NHS, which meant telling doctors what they should be doing; and the medical profession was beginning to regret what it increasingly saw as a Faustian bargain, whereby it was left to doctors to ration inadequate resources in the name of clinical decision-making. When the government published its review of the NHS in January 1989--its manifesto, as it were, for health care delivery in Britain in the twenty-first century--it was clear that a new contract between the state and the medical profession was in the making.[1]

The basis of resource allocation in the NHS is simple. It is a tax-financed service and the central government annually determines the global budgetary allocation to the NHS. This tends to be a highly politicized process, and passions run high both inside and outside the Cabinet. Indeed, the system gives every incentive to those working in the NHS to advertise its shortcomings to attract more resources. Once the global allocation has been made, however, the process of distribution is much more technical. Since the 1970s the Resource Allocation Working Party (RAWP) formula has been used to allocate funds to the regions to achieve distributional equity across the country. The formula weights allocations by demographic characteristics and relative mortality--crude, and much-criticized, proxies for need. The regions, in turn, allocate the funds to the district health authorities (DHAs) on the same principle, although here the process becomes more untidy. Finally, the DHAs fix the budgets for each hospital or health care delivery unit.

The system has two main characteristics. First, the Treasury exercises absolute control over total expenditure, with the exception of spending on primary health care, where the allocation process is somewhat different and more demand-led. Second, this control is exercised over the global sum, not the activities. That is, it was designed as a system of financial control, not as a way of controlling the outputs of the NHS. In the 1970s there were indeed attempts by central government to change the pattern of spending, by pushing more resources to the least-favored services, such as those for the elderly and mentally ill.[2] But essentially the machinery of resource allocation remained unchanged.

Turning to the other side of the balance sheet, the system of resource control meant that over the decades successive governments of all political persuasions were content to leave it to those who delivered services to decide what they did within their budgets. There were a number of reasons for this. The creation of the NHS involved a deal with the medical profession, and part of that deal was that the NHS would be allowed to run its own affairs.[3] British governments, almost uniquely, did not have to worry about health care cost explosions: the NHS was the international model of parsimony--and even today less than 6 percent of the national income is spent on health care. Conversely, however, the fact of tight financial control inevitably meant rationing and waiting lists, although, in contrast to the United States, the NHS does not ration by exclusion, a point often overlooked when making comparisons.[4] And it was obviously in the interest of politicians that rationing decisions should be disguised as clinical decisions, as in the case of end-stage renal disease treatment, for example.[5] Whatever the reasons, the consequences were clear. The medical profession was left to get on with its own business, free from scrutiny or examination. Although individual hospital departments might set up their own peer reviews, there was (and is) no general, systematic audit system, no way of checking on either the competence or the productivity of individual clinicians or clinical units.

Matters began to change in the 1980s. For one thing, Mrs. Thatcher's government was determined to cut public expenditure and, although the NHS budget continued to increase, the rate of growth fell sharply. For another, the customers of the NHS were becoming more demanding, as might be expected in an increasingly consumer-oriented society. Even though an increasing proportion exited into the private sector--notably for quality-of-life elective surgery--this did not affect their loyalty to the NHS or their use of the public sector for coping with life-threatening conditions. Politically, the government could not afford to be seen to neglect the NHS. To deal with increasing demands while yet restraining spending, the government therefore prepared to take on the medical profession. It switched the emphasis of policy from the control of expenditure (inputs) to the control of activities (outputs). Its response to those who accused it of starving the NHS of funding--a coalition that included not only the opposition parties but also just about everyone working in the NHS--was to point to the increase in the service's activities: the rise in the number of patients treated, operations performed, and so on. Finally, to achieve its policy objective of squeezing more value for money out of the NHS's budget, the government took a series of policy decisions through the 1980s designed to achieve greater control over the way in which resources are used by service providers.

Only some of the main milestones in this process can be noted here. In the first half of the 1980s, the government introduced a new managerial system in the NHS designed to tighten control over activity. At the same time it introduced a battery of performance indicators (PIs) designed to identify the relative performance, financial and clinical, of individual health authorities and hospitals. These provide information about, for example, cost per case, lengths of stay, throughput per bed, and waiting lists.

In the second half of the 1980s, the government moved toward setting explicit targets of performance; resource allocation became linked (as part of a wider change in British public management) to objectives. So, in 1986, specific targets were set in terms of the number of operations to be carried out by 1990: 17,000 artery bypass grafts, 50,000 hip replacement operations, 70,000 cataract operations, and so on. The implication was clear. The new-style managers would have to make sure that the clinicians delivered the goods. If setting objectives was not a direct attack on clinical autonomy in the allocation of resources to individual patients, it certainly represented a threat to the way in which British doctors had traditionally thought about their role. For the first time ever, they might be called to account not just for exceeding their budget but for how they used the money in it.

The threat became explicit in the 1989 Review of the NHS.[6] This made it clear that the medical profession would have to become much more accountable for its use of resources. It proposed that medical audit should become compulsory and universal--that there should be "a systematic, critical analysis of the quality of medical care, including the procedures used for diagnosis and treatment, the use of resources, and the resulting outcome for the patient." It introduced the notion of a much tighter contract for hospital consultants that in the future is to specify "their responsibility for the quality of their work, their use of resources, the extent of the services they provide for NHS patients and the time they devote to the NHS," and that will be reviewable annually. It even challenged the way in which distinction or merit awards are handed out to consultants, one of the cornerstones of the deal made by Nye Bevan with the medical profession in 1948. At present these financial bonuses are for life and are allocated by the medical profession itself. The government intends that in the future the selection committees will include managers, that the awards themselves will be reviewed every five years, and that "consultants must demonstrate not only their clinical skills but also a commitment to the management and development of the service." Changes of a different kind, but equally bold, are proposed in the case of general practitioners, who operate under rather different rules.

Overall, then, Britain appears to be teetering on the edge of a new era--although months if not years of bargaining between the government and the medical profession lie ahead. The implications are likely to be profound. If doctors are to become more accountable for the way in which they use public resources, who is going to determine what counts as good performance?[7] How will any conflict between quantity and quality be resolved, since it cannot be assumed that (given a finite budget) quality must or should be pursued to the nth degree even if this means sacrificing other policy aims? If the government moves increasingly toward fixing output targets, how will these be determined? The targets fixed hitherto are largely an off-the-cuff ministerial response to political process, but is this a satisfactory process if setting objectives in terms of outputs becomes the norm in decisions about resource allocation? And if resource allocation becomes increasingly tied to particular objectives, what will be the relationship between political and clinical decisions in rationing? Will doctors remain content to accept as much responsibility for rationing as in the past if they are no longer guaranteed immunity from scrutiny in return? The list of questions could be extended. It is long enough, however, to suggest that the NHS faces a period of intellectual--as well as perhaps political--turmoil unprecedented in its history.

References

[1] Patricia Day and Rudolf Klein, "NHS Review: The Broad Picture," British Medical Journal, 11 February 1989, 339-40.
[2] Department of Health and Social Security, Priorities for Health and Personal Social Services in England (London: HMSO, 1976).
[3] Rudolf Klein, The Politics of the National Health Service, 2nd ed. (London: Longman, 1989).
[4] Henry J. Aaron and William B. Schwartz, The Painful Prescription (Washington, DC: The Brookings Institution, 1984).
[5] Thomas Halper, "Life and Death in a Welfare State: End-Stage Renal Disease in the United Kingdom," Milbank Memorial Fund Quarterly 63:1 (1985), 52-94.
[6] Secretary of State for Health, Working for Patients (London: HMSO, 1989), Command 555.
[7] Patricia Day and Rudolf Klein, Accountabilities (London: Tavistock, 1987).

Rudolf Klein is professor of social policy at the Centre for the Analysis of Social Policy of the University of Bath, England.
COPYRIGHT 1989 Hastings Center

Article Details
Title annotation: health care and national health insurance in Great Britain
Author: Klein, Rudolf
Publication: The Hastings Center Report
Date: July 1, 1989