Promoting patient safety: an ethical basis for policy deliberation.
This is the final report of a two-year Hastings Center research project that was launched in response to the landmark 1999 report from the Institute of Medicine, To Err Is Human, and the extraordinary attention that policymakers at the federal, state, regulatory, and institutional levels are devoting to patient safety. It seeks to foster clearer and better discussion of the ethical concerns that are integral to the development and implementation of sound and effective policies to address the problem of medical error. It is intended for policymakers, patient safety advocates, health care administrators, clinicians, lawyers, ethicists, educators, and others involved in designing and maintaining safety policies and practices within health care institutions.
Among the topics discussed in the report:
* the values, principles, and perceived obligations underlying patient safety efforts;
* the historical and continuing tensions between "individual" and "system" accountability, between error "reporting" to oversight agencies and error "disclosure" to patients and families, and between aggregate safety improvement and the rights and welfare of individual patients;
* the practical implications for patient safety of defining "responsibility" retrospectively, as praise or blame for past events, or prospectively, as it relates to professional obligations and goals for the future;
* the shortcomings of tort liability as a means of building institutional cultures of safety, learning from error, supporting truth telling as a professional obligation, or adequately compensating patients and families, contrasted with alternative models of dispute resolution, including mediation and no-fault liability;
* the needs of patients, families, and clinicians affected by harmful errors and how these needs may be addressed within systems approaches to patient safety; and
* the potential conflicts between the protection of patient privacy required by the Health Insurance Portability and Accountability Act and efforts to use patient data for the purposes of safety improvement, and how these conflicts may be resolved.
Although this report is the work of the project's principal investigator, not a statement of consensus, it draws from the insights of the interdisciplinary group of experts convened by The Hastings Center to make sense of the complex phenomenon of patient safety reform. Working group members brought their experience as people who had suffered from devastating medical harms and as institutional leaders galvanized to reform by tragic events in their own health care institutions. They brought expertise as clinicians, chaplains, and risk managers working to deliver health care, confront its problems, and make it safer for patients. They brought familiarity with the systems thinking deployed in air traffic control and in the military. And they brought critical insight from medical history and sociology, economics, health care purchasing, health policy, law, philosophy, and religious studies.
The research project was made possible through a major grant from the Patrick and Catherine Weldon Donaghue Medical Research Foundation.
On the cover: Hospital, by Frank Moore, 1992. Oil on wood with frame and attachments. 49" x 58" overall. Private Collection, Italy. Courtesy Sperone Westwater, New York.
PROMOTING PATIENT SAFETY
AN ETHICAL BASIS FOR POLICY DELIBERATION
by Virginia A. Sharpe
Over the last three years, patient safety and the reduction of medical error have come to the fore as significant and pressing matters for policy reform in U.S. health care. In 2000, the Institute of Medicine's report, To Err Is Human: Building a Safer Health System, presented the most comprehensive set of public policy recommendations on medical error and patient safety ever to have been proposed in the United States. (1) Prompted by three large insurance industry-sponsored studies on the frequency and severity of preventable adverse events, as well as by a host of media reports on harmful medical errors, the report offered an array of proposals to address at the policy level what is being identified as a new "vital statistic," namely that as many as 98,000 Americans die each year as a result of medical error--a figure higher than deaths due to motor vehicle accidents, breast cancer, or AIDS. And this figure does not include those medical harms that are serious but non-fatal.
The IOM recommendations resulted in a surge of media attention on the issue of medical error and swift bipartisan action by President Clinton and the 106th and then the 107th Congress. Shortly after the report was issued, President Clinton lent his full support to efforts aimed at reducing medical error by 50 percent over five years. In Congress, the report prompted hearings and the introduction of a host of bills including the SAFE (Stop All Frequent Errors) Act of 2000 (S. 2378), the "Medication Errors Reduction Act of 2001" (S. 824 and H.R. 3292), and, recently, the "Patient Safety and Quality Improvement Act of 2002" (S. 2590) and the "Patient Safety Improvement Act of 2002" (H.R. 4889). Although none of these bills has made it into law, each represents ongoing debate about the recommendations in the IOM report.
Since the IOM recommendations have been either a catalyst or a touchstone for all subsequent patient safety reform proposals--whether by regulation or by institutions hoping to escape regulatory mandates--they must be part of the context of any policy-relevant discussion of the ethical basis of patient safety.
The Institute of Medicine Report
The Institute of Medicine report is a public policy document. That is, it proposes the need for government intervention to address a problem of serious concern to public health and health care financing. Although there was an immediate flurry of resistance to the report's statistics on the number of deaths associated with preventable medical error--a key premise in the argument establishing the scope and significance of the problem--these challenges have been effectively silenced by the preponderance of evidence that the rate of harmful medical error, with its enormous human and financial consequences in death, disability, lost income, lost household production, and health care costs, is unacceptable.
The report observes that health care has lagged behind other industries in safety and error prevention in part because, unlike aviation or occupational safety, medicine has no designated agency to set and communicate priorities or to reward performance for safety. As a result, the IOM's keystone recommendation is the establishment of a center for patient safety to be housed at the Agency for Healthcare Research and Quality under the auspices of the Department of Health and Human Services. The center's charge is to set and oversee national goals for patient safety. In order to track national and institutional performance, and to hold institutions accountable for harm, the IOM also proposes mandatory, public, standardized reporting of serious adverse events. In addition to mandatory reporting, the IOM advocates efforts to encourage voluntary reporting. To motivate participation in a voluntary reporting system, the IOM recommends legislation to extend peer review protections, that is, confidentiality, to data collected in health care quality improvement and safety efforts.
To complement the national initiative, the IOM recommends that patient safety be included as a performance measure for individual and institutional health care providers and that institutions and professional societies commit themselves to sustained, formal attention to continuous improvement on patient safety. Finally, regarding medication safety, the IOM recommends that the FDA and health care organizations pay more attention to identifying and addressing latent errors in the production, distribution, and use of drugs and devices.
A unifying theme in the report is the role that systems play in the occurrence of medical mistakes. Over the last few decades, research conducted on error in medicine and other high-risk, high-variability industries has revealed that most quality failures in these industries result not from poor, incompetent, or purposefully harmful individual performance but from the very complexity of systems. In the hospital setting, systems of drug dissemination or infection control, for example, can be designed either to prevent or to facilitate error by individual providers. Recognizing the system dimensions of the problem, the IOM recommendations promote human factors research--which examines the interface between humans and machines in complex work environments--to get at the root causes of error and adverse events. The report encourages non-punitive, voluntary reporting as an essential ingredient in understanding lesser injuries and "near misses"--that is, those errors that have the potential to cause harm, but have not yet caused harm.
Although the IOM acknowledges the role that professional ethics and norms play in motivating health care quality, it bases its recommendations on the premise that internal motivations are insufficient to assure quality and patient safety consistently throughout the health care system. Thus, the IOM's aim is to establish external regulatory and economic structures that will create both a level playing field and "sufficient pressure to make errors so costly ... that [health care] organizations must take action." (2)
Given its aims as a comprehensive policy document, it is understandable that the IOM places only minimal emphasis on professional norms or the moral motivation of health care providers as the principal catalyst for change. The scope of the change proposed requires a uniform set of incentives and accountabilities. Further, if systems rather than individuals are the most appropriate targets for improvement, then appeals to individual virtue would seem to be the wrong focus. We will come back to the relationship between individuals and systems, but, for the moment, it is enough to point out that the role of ethics in public policy goes well beyond the question of moral motivation. Ethics also plays an essential role in the justification of public policy and the critique of policies already in place.
Underlying all public policy deliberations are specific social values and assumptions about how these values should be weighed and balanced or prioritized. In order to understand and assess the legitimacy of proposed policies in a democratic society, therefore, those underlying values and assumptions must be made explicit and subjected to critical appraisal.
This report takes up this large task. It begins by elucidating the ethical values and concepts underlying the IOM recommendations. The central sections of the report are devoted to a careful unpacking of the notion of accountability. The report argues that accountability requires a sophisticated understanding of the causal explanation for errors--an account of errors not merely as causes of harm but as themselves caused by complex systems. The notion of accountability itself can also be explicated in different ways; this report argues that accountability should be understood not merely in a retrospective and fundamentally retributive way, but also in a forward-looking or prospective sense oriented to the deliberative and practical processes involved in setting and meeting goals--such as improved patient safety. Both senses of accountability must be borne in mind in assessing the pros and cons of the different possible ways of compensating patients for adverse events. The demands of justice and safety improvement, which sometimes conflict and must be balanced against each other, argue for compensation schemes based on no-fault liability or mediation. Traditional tort liability is the worst way of achieving these two goals.
Ethical Values and Issues at Stake in Proposed Patient Safety Reforms
"Do No Harm." The guiding value of patient safety can be understood to derive from two longstanding principles of health care ethics: beneficence, the positive obligation to prevent and remove harm, and nonmaleficence, the negative obligation to refrain from inflicting harm. As far as medical error is concerned, the principle of beneficence establishes a moral argument against errors of omission such as a misdiagnosis or failure to provide required treatments. The principle of nonmaleficence establishes an argument against errors of commission, such as surgical slips, drug administration to the wrong patient, or the transmission of nosocomial infection. Together, these two principles constitute the obligation to "do no harm." (3)
Traditionally, the relationship between the clinician and the patient has been regarded as a fiduciary relationship. That is, the power disparity between doctor and patient, the patient's vulnerability, and the doctor's offer to help are understood to place special obligations on health care providers, as professionals, to promote a patient's health interests, to respect the patient's autonomy, and to hold his or her good "in trust." (4)
Medical errors and injuries happen to identifiable individuals. From the point of view of fiduciary ethics, that is, professionalism, the individual patient is the focus of the obligation to do no harm. (5) This patient-centered focus is acknowledged by the IOM in its definition of safety as "freedom from accidental injury." This definition, says the report, "recognizes that this is the primary safety goal from the patient's perspective." (6)
The principle of utility. The goal of patient safety can also be justified by the principle of utility, understood in the simplest terms as the achievement of the greatest good for the greatest number or the net aggregate benefit across a population. For example, policy recommendations are aimed at patient safety as a public health problem--a problem requiring strategies to improve overall safety in the health care system. As such, they are based on the principle of utility. From the point of view of public health ethics, the patient population in the aggregate is the normative focus, and safety improvements are measured in terms of population-based or epidemiologic statistics such as the IOM's target goal of a "fifty percent reduction in errors over five years." (7)
The value of patient safety is also understood to derive from its economic utility. As the IOM report states on the second page of its executive summary, the total national cost of preventable medical error is between seventeen and twenty-nine billion dollars a year. The assumption behind the report's recommendations is that efforts to reduce error by the target of 50 percent over five years will be justified by the reduction of associated costs.
Utilitarian and fiduciary justifications for patient safety can come into tension. For example, although it is possible that the incentive to reduce the extra costs associated with preventable error will coincide with the imperative to protect patients from harmful outcomes, such a coincidence is by no means assured. One can easily imagine a cost-conscious hospital deciding against certain strategies to improve safety because the up-front costs are prohibitive. Likewise, without a clear prioritization of the fiduciary justification for safety--which gives priority to patient welfare as a policy objective--it is easy to imagine safety proposals being reduced to their economic value. Under such circumstances, policy makers might suppose that economic considerations alone will justify certain safety trade-offs. (8)
One of the biggest ethical challenges for patient safety reform will be in confronting the fact that strategies to improve overall patient safety have the potential to compromise obligations to individual patients. For example, the IOM recommends mandatory reporting of serious adverse events and voluntary reporting of lesser harms and near misses. To the extent that institutions direct their resources to meeting the standards for mandatory reporting, they may de-emphasize voluntary reporting and the follow-up necessitated by it. This could have the paradoxical effect of making safety improvement activities contingent on a patient having been seriously harmed.
To appreciate fully what is at stake here, we need to grapple with the complex issue of accountability. Accountability for harmful medical error is expressed in the IOM's call for a nationwide mandatory system for reporting serious adverse events and in its call for performance standards on patient safety and quality improvement for health care organizations. (9) Accountability is grounded, in the report, in the public's right to know about and be protected from hazards. It also derives from the principle of fairness.
From a regulatory perspective, hazards in the health care setting are matters of public safety. The IOM's recommendations regarding mandatory reporting are designed to generate standardized information that can be used to understand and track known hazards and to take preventive action. As the report states: "The public has the right to expect health care organizations to respond to evidence of safety hazards by taking whatever steps are necessary to make it difficult or impossible for a similar event to occur in the future. The public also has a right to be informed about unsafe conditions." (10)
The principle of fairness operates on two levels in the mandatory reporting proposal. First, mandatory reporting is intended to level the playing field for health care institutions so that none is exempt from data collection on safety, or from penalties or civil liability in the case of serious patient harms. Second, mandatory reporting to oversight bodies is intended to provide an avenue for harmed patients to gain access to information regarding the circumstances surrounding an injury and use it to seek justice for negligent harm associated with care. (11)
Although a number of states currently mandate external reporting of serious adverse events--usually to the state health department--in most cases the information collected is intended to be protected by law from potential claimants. (12) Many state programs fail to provide public access to the information and most require subpoena or court order for release of information. By contrast, the IOM proposes meaningful public access to information about serious harms; it states that "requests by providers for confidentiality and protection from liability seem inappropriate in this context." (13)
A conceptual distinction between reporting and disclosure is important. Reporting refers to the provision of information to oversight bodies such as state agencies, or the proposed Center for Patient Safety. Disclosure, by contrast, refers to the provision of information to patients and their families. It is important to point out that the IOM's emphasis on accountability and the public's right to know in the context of mandatory reporting has nothing to do with active disclosure of information by health care institutions to harmed parties. Although the mandatory reporting of serious or fatal adverse events would, in principle, trigger meaningful investigation and administrative action, it does not automatically direct that information to the patients who have been harmed. The "right to know" invoked by the IOM is thus not an endorsement of the individual's right to know or of the obligation of respect for the autonomy of individuals. In this way, the IOM's understanding of accountability is extremely narrow and points up one of the ways in which a public health or safety approach overlooks obligations to specific individuals.
Although it is not a feature of the IOM's recommendations, the need for disclosure, understood as a prima facie obligation of professionalism, (14) is being addressed on other fronts in the patient safety movement. For example, in 2001, the JCAHO put into effect a disclosure standard that requires hospitals and physicians to inform patients (and families) about "unanticipated outcomes associated with their care." (15) This requirement is included in the JCAHO's Patient Rights and Organizational Ethics standards. Likewise, a number of forward-looking health care institutions, such as the Veterans Affairs (VA) Medical Center in Lexington, Kentucky, (16) have embraced disclosure as an institutional obligation that has the added advantage, from a consequentialist point of view, of not resulting in a negative financial impact on the hospital. As Steve Kraman of the Lexington VA hospital says, "We didn't start doing this to try to limit payments; we did it because we decided we weren't going to sit on or hide evidence that we had harmed a patient just because the patient didn't know it.... We started doing it because it was the right thing to do, and after a decade of doing it decided to look back to see what the experience had been. The indication that it's costing us less money was really unexpected." (17) Implicit in Kraman's remark is an endorsement of disclosure as an obligation of professionalism, as "the right thing to do."
The IOM also calls for accountability of health care institutions to performance standards regarding continuous improvement in safety and quality. The emphasis here is on pressure that will be applied by regulators, accreditors, and purchasers to evaluate and compare hospitals according to their demonstrated commitment to safety. Given its public policy focus, the IOM report focuses on accountability of organizations, not of individuals. If we look at the history of medicine, however, we see that it is individuals--specifically physicians--who have historically been regarded as the locus of health care quality and who have been held responsible for it. (18)
These assumptions have shaped medical culture to the extent that a rethinking of accountability must be central to the "culture change" that is the rallying cry of reform. If, as safety experts both within and outside medicine maintain, (19) it is flaws in a system, rather than in individual character or performance, that produce the vast majority of preventable errors--a premise this essay accepts--then the dominant strategy of blaming individuals will continue to be ineffectual and counterproductive in improving safety. This point was made early by leaders of the patient safety movement: "A new understanding of accountability that moves beyond blaming individuals when they make mistakes must be established if progress is to be made." (20) The dynamic between institutional and individual accountability is one of the most important and complex issues at the heart of patient safety reform. We analyze this concept and its practical implications later in this essay.
In addition to its recommendations regarding a nationwide mandatory reporting system, the IOM also recommends that voluntary, confidential reporting systems be implemented within health care institutions and encouraged through accrediting bodies. In this context, confidentiality refers specifically to the restriction of public access to information on the quality and safety of health care delivery--also known as "peer review protection." Ordinarily, when we speak of "confidentiality" in health care, we are referring to the confidentiality of patient information and restricted access to that information except by patient consent. (21) Such voluntary reporting systems, many of which are already in place in health care and other high-risk industries, are essential to safety improvement efforts, insofar as they can encourage providers to supply information needed to identify and take action to address hazardous conditions. As many observers of high-risk industries have noted, it is the information about near misses that provides the richest resource for safety improvement efforts. (22) In its distinction between thresholds for mandatory and voluntary reporting, the IOM combines, under the voluntary reporting system, near misses and errors that have caused minor or moderate injuries.
In order for voluntary reporting to be workable, the IOM states, providers need to be assured that the information they report will not be used against them in the context of malpractice litigation. As such, the IOM recommends that "Congress pass legislation to extend peer review protections to data related to patient safety and quality improvement that are... collected or shared with others solely for purposes of improving safety and quality." Although the guarantee of secrecy has a political purpose (to gain participation from clinicians who would otherwise fear exposure to liability), from an ethical point of view, the guarantee of peer review protection is justified by the principle of utility. A reduction in harmful errors across the patient population can be achieved only if front-line health care professionals are willing to supply information regarding specific health care delivery problems. The free flow of this information to create an epidemiology of error can occur only if secrecy regarding the information is assured.
This proposal has been taken up in legislation: the "Patient Safety and Quality Improvement Act," introduced in the Senate on 5 June 2002, and the "Patient Safety Improvement Act of 2002," introduced in the House on 6 June 2002. According to the bills, all information collected for the purpose of patient safety and quality improvement will be confidential and protected from subpoena, legal discovery, Freedom of Information Act requests, and other potential disclosures. (23)
There are a number of ethical problems with this approach. First, the proposed legislation allows information about adverse medical events (which it calls "lesser injuries") to be concealed from harmed parties. It is not clear how the legislation squares with accreditation requirements for disclosure that are mandated by the JCAHO or that may be part of a hospital's institutional policy. Second, peer review protections formalize and reinforce the conflict between the provider's interest in self-protection and patients' legitimate interest in information about their care. In so doing, the restriction of access to information about adverse events undercuts fiduciary obligations and patients' right to know about information pertinent to their care. Third, the enhancement of peer review protection is premised on the assumption of the status quo with regard to the current malpractice system. Peer review protection is made to do all of the heavy lifting to circumvent what Troyen Brennan has called "the dead weight of the litigation system." (24) Brennan is critical of the IOM recommendations and other reform proposals that fail to address the ways in which the current malpractice system is ethically and practically counterproductive as a response to medical harms. The structures and incentives of the tort system are inconsistent with accountability for truth telling and safety improvement (a point taken up again below). (25)
As we have pointed out, the notion of accountability is central to patient safety reform. It guides our expectations and judgments regarding the performance of health care providers. More fundamentally, the causal story now being told about medical errors from the systems perspective challenges those conventional expectations and judgments; that is, it challenges the assumption of individual accountability that forms the fabric of medicine and law. So, in order to hold health care providers accountable under a systems approach, we have to reinvent not only our understanding of accountability, but also the structures of accountability institutionalized in our legal and cultural approaches to medical error.
Getting clear on the notion of accountability requires that we sort out and appraise two different causal explanations for medical error. Further, in examining one of these explanations--the story of complex causation in a systems approach to error--we will need to distinguish between two different senses of accountability--a backward-looking sense and a forward-looking sense--and consider the implications of each for both how we compensate those who have been harmed and for safety improvement.
Two Causal Stories
With the emergence of the systems approach to patient safety, a paradigm shift has occurred in the causal story of why errors occur and how they can be prevented. According to the conventional story, medical error, and specifically harmful medical error, is the result of individual actors and their individual actions--the slip of a scalpel, a wrong diagnosis, a failure to wash one's hands, the failure to check a hematocrit. As far as responsibility for such errors is concerned, the earliest modern codes of medical ethics, by Thomas Percival in 1803 and by the AMA in 1847, state that the doctor's conscience is the "only tribunal" and his responsibility is to learn from his mistake and to make sure it does not recur. (26) As Kenneth De Ville has observed, after the late 1800s, when medical malpractice emerged as a new public "tribunal," this causal story became the basis for negligence claims against physicians. (27) Tort law remains the dominant narrative of responsibility in the arena of medical error, and it operates on the basis of a notion of simple causation. Poor or unsafe care is attributable to the actions or inactions of individual health care providers who are cast as "bad apples." (28) The shadow of liability reflects and reinforces a "shame and blame culture" within which people hide their mistakes.
Starting about four decades ago, W. Edwards Deming and J.M. Juran's work in human factors research and industrial engineering, Charles Perrow's book Normal Accidents, and James Reason's Human Error all offered a new causal story about quality and quality failure. That story, which has been told in the medical context by Donald Berwick, Lucian Leape, and the National Patient Safety Foundation, (29) among others, is that human error should not be regarded narrowly as the cause of harm; it should be regarded as the effect of complex causation. Why? Because the majority of errors do not produce harm, but they have the potential to reveal latent errors or potentially harmful failures within a complex system. Unless we look in greater detail at the causal web, we will be ignorant of the weaknesses in the system and powerless to prevent their causing future harm.
The lesson of human factors research and cognitive psychology is that to understand error causation it is not enough to examine one's own actions or to look for the "smoking gun" or proximate cause of the active error; we must also examine the interrelationships between humans, technology, and the environment in which we work. (30) Applying this research to accidents involving the leaking of radioactive material at Three Mile Island and the explosion of the space shuttle Challenger, psychologist James Reason determined that most accidents were caused by mismatches between the design of complex systems and the ways humans process information. In the medical context, a system failure in drug administration, for example, might involve look-alike packaging or sound-alike drug names--situations that are literally "accidents waiting to happen."
According to safety experts in aerospace, atomic energy, and other complex, technology-based industries, the most constructive approach to error reduction is the creation of a blame-free environment that sees every error as "a treasure." (31) There are at least two justifications for this counterintuitive approach to responsibility or accountability (which for the purposes of this report are interchangeable terms). The first is, again, that error-prevention depends on information that will be forthcoming only if individuals feel free enough from liability concerns to provide it. The second is based on the principle of justice. As Merry and McCall Smith point out in their book Errors, Medicine, and the Law, errors are by definition exculpatory because they are involuntary. (32) So, holding individuals responsible for errors is wrong on two counts. First, a true accident, whether it is an act or an omission, is not blameworthy because it is not intentional, and its result was either unforeseeable or could not have reasonably been prevented. Second, most errors cannot be causally attributed solely to an individual actor.
"Don't Blame Me, It Was a System Problem"
This new causal story has understandably given rise to a number of concerns about accountability for harmful mistakes.
The first worry is that a systems explanation gives people permission to pass the buck by saying that their own actions were so controlled by "the system" that they simply were not free to do otherwise. In this sense, appeals to the "system" provide a convenient pretext for moral shirkers. In its most extreme form, this is the problem of free will and determinism in a new context. Appealing to the "system" in the broadest metaphysical sense, one's actions are seen to be determined by forces outside of all human agency. Responsibility is located outside the individual actor. But this sort of defense against responsibility is not really plausible in the case of health care practitioners, whose self-understanding includes the ability to influence the course of illness. As long as freedom of the will provides one of the guiding justifications of their work, they cannot also reject it whenever they make a mistake. That said, however, the literature on the history and sociology of medical law indicates that a fatalistic belief in divine providence was one of the key exculpating factors in medical harm until the early nineteenth century and that it continues to be an important, if sometimes disingenuous, one today. (33) In her book Wrongful Death, Sandra Gilbert, whose husband died as the result of a medical error, recounts a story about the benefactor of a Catholic hospital whose wife's doctors repeatedly assured him that it was "God's will" that she was comatose and later died after routine surgery. Her husband sued to find out what everyone had "known all along," namely, that the patient's coma was the result of an identifiable error. (34)
A related worry about a systems approach is the "Dilbert problem." Unlike the metaphysical problem of determinism that implicates the human condition, the "Dilbert problem" implicates the conditions under which humans work and is implicit in the problem of learned helplessness. (35) The worry is that the systems approach so minimizes the role of individual agency that it will choke off the motivation to sustain high-quality performance, encourage poor performance, and lead to an erosion of the trustworthiness of health professionals. (36)
This worry is based on the assumption that individual actors are morally and practically disempowered within such a system, or that individuals can step "outside" a system and claim moral immunity. As we shall see, however, the kind of responsibility envisioned within the systems approach is based on the empowerment of individuals to contribute to system improvement.
Another, more practical concern about the systems approach to medical error is that it will make assigning responsibility for preventable adverse events difficult if not impossible. This worry about the loss of an identifiable target of blame is fostered, in part, by the very human desire for vengeance. (37) The invocation of a "system" renders faceless and anonymous the perpetrator of harm, and victims are left powerless. Also at play here is the assumption that justice to harmed parties requires being able to point to a wrongdoer. This is an assumption fostered by the evidentiary requirements of malpractice, which link compensable negligence to an identifiable lapse in the standard of care. If a wrongdoer is able to take refuge in the "system," then harmed parties may be denied access to compensation.
This concern is directly linked to a worry that the practical demands of the systems approach--that is, the need to collect information about errors and adverse events--will be possible only at the expense of the patient's right to know. If protections against subpoena and legal discovery are extended to information regarding harmful quality failures, then accountability to individuals will be subordinated to the ostensible aims of safety improvement.
Two Notions of Accountability
We may allay both these speculative and practical concerns by distinguishing two different ways in which we think about accountability. Ascribing responsibility depends for its sense on the purposes or ends to which we put it and the information that we take or do not take to be directly relevant. Put differently, when we talk about responsibility we need to be clear not only about the information that we take to be relevant, or not, but also about what we hope to accomplish in assigning responsibility. With that in mind, we can make a distinction between two types of responsibility ascription: responsibility in the backward-looking or retrospective sense, and responsibility in the forward-looking or prospective sense. (38)
In the backward-looking sense, responsibility is linked to practices of praising and blaming and is typically captured in expressions such as "she was responsible for harming the patient" or "he made a mistake and he should be held responsible for it." When we speak of "holding someone accountable" we tend to be using this phrase after some action has gone awry.
The forward-looking or prospective sense of responsibility is linked to goal-setting and moral deliberation. It is expressed in phrases such as "as parents, we are responsible for the welfare of our child," or "democratic citizenship involves both rights and responsibilities." Responsibility in this sense is about the particular roles that a person may occupy, the obligations they entail, and how those obligations are best fulfilled. But whereas responsibility in the retrospective sense focuses on outcomes, prospective responsibility is oriented to the deliberative and practical processes involved in setting and meeting goals. (39)
Currently, the dominant view of responsibility regarding medical error is grounded in tort liability, that is, malpractice. The aim of responsibility ascription in this context is compensation to harmed parties and deterrence of further malpractice. Through the lens of malpractice, error is germane only as the cause of harm, and information about errors that do not cause harm is irrelevant. Responsibility ascription in this context is retrospective; its point is the assignment of blame.
A systems approach to error emphasizes responsibility in the prospective sense. It is taken for granted that errors will occur in complex, high-risk environments, and participants in that system are responsible for active, committed attention to that fact. Responsibility takes the form of preventive steps to design for safety, to improve on poor system design, to provide information about potential problems, to investigate causes, and to create an environment where it is safe to discuss and analyze error.
Although there is much disagreement in the medical ethics literature about the source of moral norms in medicine, (40) it is generally accepted that, at minimum, health care is guided by the imperative "to help, or at least to do no harm." (41) Traditionally, this role responsibility has been associated exclusively with clinicians--those who have a direct relationship with patients. In part, this stems from the historical origins of healing, which until the emergence of the modern hospital was largely the domain of solitary practitioners. It also reflects the ethical standards established to legitimate professional self-regulation. Given the complexity of both the financing and the delivery of health care in the United States today, a strong case can be made that this role responsibility should also be extended to those who have indirect but significant control over decisionmaking that affects patient welfare. This includes health care managers and administrators who have not traditionally been held accountable to standards of medical professionalism.
Since prospective responsibility is linked to practices and roles, it applies to collectives as well as to individuals. To the extent that a group of people contributes to a practice and the goals that define it, they can be said to have "collective responsibility"--in the prospective sense. In health care, helping and avoiding harm is one of the primary bases on which physicians, nurses, and other health care providers find solidarity in their work. Collective responsibility in this uncontroversial sense has been largely overlooked because, like most discussions of responsibility in the philosophical and legal literature, discussions of collective responsibility have focused almost exclusively on the retrospective question of blame and whether and how collectives can properly be held accountable for harmful events. (42)
An emphasis on prospective responsibility is helpful because it forces us to re-examine, in light of the complexities of institutionally delivered health care, the content and scope of responsibility. This is something we have lost sight of in our narrow reliance on the malpractice paradigm as an explanatory framework for medical error. We need new structures to account for what we now know about the occurrence of error in complex systems.
In the context of health care delivery, the aim of prospective responsibility ascription is to orient everyone who has an effect on patient care (including clinicians, health care administrators, hospital managers and boards, technicians, and computer data specialists) toward safety improvement. Through the lens of patient safety, error is germane as an indicator of vulnerabilities in a system and as an opportunity to prevent harm. The point of forward-looking responsibility ascription is to specify the obligations entailed in creating a safer health care environment. Given a systems approach to error, these obligations entail a high degree of transparency about errors, analysis of errors to determine their causes, and the implementation of systemic improvements. To the extent that current structures prevent health care providers from meeting these responsibilities, the structures are inconsistent with the ethics of professionalism.
But what is the patient's own responsibility for safety? If, as Leape and others have argued, a system is "an interdependent group of items, people or processes with a common purpose," (43) and responsibility in the prospective sense belongs to all who contribute to the healing enterprise, isn't it reasonable to include patients in this collective?
For some, the suggestion is offensive because it can very easily shade into blaming the victim. If the patient is responsible for assuring safety, and she does not ask about a medication she knows to be unfamiliar, will we say that she somehow failed? (44) On the other hand, if patients supply information and insights essential to their care--and indeed they must provide information regarding their history--then should they not be considered as members of the team?
We can all agree that patients are de facto central to their care. The sticking point is whether this centrality implies that they are morally responsible for the safety or quality of their health care. (45) Unlike clinicians and others who deliver health care, patients have not committed themselves to the practice of health care delivery and the goals that define it. Most people do not freely choose to become patients. That said, the rise of the patient advocacy movement has been based on the call for patients to become more active in their care. Patient safety advocate Roxanne Goeltz, whose brother Mike died as a result of a medical error, has argued forcefully that patients and their families should take active measures to assure that their care is delivered safely. This includes having a friend or family member with the hospitalized patient twenty-four hours a day, seven days a week. (46) Bryan Liang has also argued that patients are responsible at least for supplying health care providers with personal information that is as complete and accurate as possible. (47)
An axiom of responsibility ascription is "ought implies can." In order to say that someone is responsible, he or she has to be in a position to act on that obligation. In the case of patients, taking responsibility for the quality or safety of their care will often be out of the question. For those patients who can be actively involved, their positive contribution to their health care delivery should be facilitated and commended, but required only in the provision of information that is as accurate and complete as possible and in following, as much as possible, the treatment regimen. The onus of responsibility for patient involvement is on institutional and individual health care providers. (48) Respect for patient self-determination requires that providers involve patients in their care, and the lessons of safety improvement indicate that including patients (or their families) as members of the health care team (by asking them to confirm their surgical site, by paying attention to their reports on themselves) may be one of the most effective and commonsensical ways of improving care.
If we find that most preventable harms are caused by complex factors involving latent failures at the managerial level, system defects, unsafe acts, and psychological precursors, and if we agree that an essential moral responsibility of health care providers is "to help or at least to do no harm," then meeting that responsibility will require conditions under which these causal factors can be brought to light, assessed, and improved. Currently, the system of liability for medical harms makes meeting that responsibility possible only through exceptional acts of courage. (49) Likewise, it makes respect for patients through disclosure almost impossible because it discourages honesty and openness on the part of health care professionals.
Prospective accountability means creating safe conditions for patient care. Retrospective accountability means achieving justice for harmed parties. As a policy matter, both forms of accountability must be understood in light of the ethical pros and cons of compensation schemes for adverse patient outcomes: tort liability, no-fault liability, and mediation. No-fault and mediation seem likeliest to meet the demands of justice without inhibiting safety improvement. Traditional tort liability is the least ethically viable means of achieving these two goals, although it is the most deeply entrenched system, politically speaking.
Tort liability is a fault-based system of compensation for those who sustain injury as a result of their medical care. To qualify for payment, the injured party must prove that his or her injury was the result of negligence on the part of the health care provider. A second goal of tort liability is deterrence. The expectation is that the threat of legal action will keep providers from straying from standards of due care.
As David Studdert, Edward Dauer, and Bryan Liang each argue, tort liability not only fails with respect to both compensation and deterrence, but also inhibits safety improvement. (50) They point out that malpractice law falls short in at least six ways. First, it is a haphazard compensation mechanism. According to findings from the Harvard Medical Practice Study, one of the largest insurance industry-sponsored studies of medical error, only one in seven patients who are negligently harmed ever gains access to the malpractice system, with those who are older and poorer disproportionately excluded from access. (51) For those patients who do sue, the severity of the injury appears to be a more powerful predictor of compensation than the fact of negligence. (52) And because of that, physicians believe that liability correlates not with the quality of the care they provide, but with outcomes over which they have little control. As a result, "risk management" has become an effort to avoid liability rather than error.
A second problem with malpractice law is that it delivers compensation inefficiently. Administrative costs account for more than 50 percent of total system costs, (53) and a successful plaintiff recoups only one dollar of every $2.50 spent in legal and processing costs. (54) Third, malpractice claims offer only a monetary outcome, ignoring the harmed party's need for noneconomic remediation, such as a guarantee of corrective action, an apology, or an expression of regret and concern. Fourth, the negligence standard, because it is embedded in an adversarial process, is inconsistent with attempts to learn from errors and improve quality. Malpractice claims, including pre-trial discovery, are shrouded in secrecy, with legal rules governing disclosure and protection of information. This means that institutions and individual providers typically forego opportunities to learn from the problems that lawsuits can sometimes help illuminate.
Fifth, as Dauer points out, the adversarial process is based on the belief that the presentation of relentless, one-sided arguments to an impartial judge or jury is the best way to discern the truth. This process necessarily rules out the prospect of collectively analyzing information to discern what happened. The malpractice system thus "externalizes" responsibility for truth by selectively taking information out of the hands of involved parties--a process that is emotionally brutal for patients and families trying to reconstruct their lives after medical harm. (55) Finally, regarding its deterrence function, evidence indicates that malpractice stimulates defensive medicine rather than high quality care, (56) and that the stress and isolation that physicians experience while subject to malpractice claims can impair their performance. (57)
These shortcomings reveal the moral flaws of tort liability. With regard to the claims of justice, the tort system fails to deliver compensation in a fair and timely way to harmed parties. Those with lesser claims are kept out of a prohibitively expensive malpractice system; those who are compensated may spend years obtaining this result; those who are old and poor may be excluded from the system altogether. For Sandra Gilbert, who settled under the shadow of malpractice, the adversarial process guaranteed that the plaintiffs would never know the case's full details and would never receive an apology or recognition from the defendant. The tort system creates incentives against truth telling on the part of health care providers. Also, with regard to justice for clinicians, the tort system overlooks the system dimensions of error and thus may unfairly target individual providers for acts, omissions, and outcomes for which they cannot fairly be held culpable. When it comes to harm prevention, the tort system stifles safety improvement, and, by externalizing responsibility for truth, engenders a defensive rather than a constructive posture toward error prevention. Viewed from the perspective of utility, the tort process is inefficient.
No-fault liability is a compensation scheme that does not base the award of damages on proof of provider fault. As Studdert observes, "to qualify for compensation in these schemes, claimants must still prove that they suffered an injury and that it was caused by an accident in a specific domain, such as the workplace, road, or hospital, but it is not necessary to demonstrate that the party who caused the accident acted negligently." (58) No-fault liability is consistent with the prospective assignment of responsibility. It is predicated on a high risk of hazard in a particular industry and assigns absolute liability in advance regardless of contributory fault. In other words, no-fault liability is based on the presumption that harms will occur in a particular setting, and it incorporates provisions for compensation.
Studdert cites empirical research indicating that no-fault has led to increases in average monetary compensation for injured workers as well as gains in worker safety. Although more evidence will be needed, Studdert and others are optimistic that similar benefits would be obtained by implementing no-fault in health care. No-fault has a number of potential moral advantages. First, since it suspends the fault requirement, no-fault could remove incentives to conceal information, thereby supporting fiduciary obligations of disclosure and creating the conditions for the collection and analysis of error information. Second, no-fault could overcome some of the inequities in access to compensation under malpractice law. Unlike the tort system, which distributes compensation haphazardly, no-fault, as an administrative scheme, could determine remedies in advance and distribute them according to the severity of injury. One potential problem, however, is in the calculation of loss. If a person's loss is determined by the person's salary, for example (as it was for victims of the September 11 attacks), then age-based, gender-based, or income-based inequities could be repeated in a no-fault scheme.
This weakness is also related to the health care financing system that we have in this country. As Haavi Morreim points out, countries where no-fault schemes for medical harm have been implemented also offer their citizens universal health care coverage and other social welfare programs, so that ongoing health care and other needs are already covered and need not be obtained through no-fault compensation. (59) Without this and other social welfare programs to support the needs of the injured and infirm, the efficiencies of no-fault will quite likely not be realized.
Nonetheless, the potential for no-fault to remove barriers to information access both for patients and for safety improvement, along with its potential for fairer distribution of compensation, make it a promising context in which justice, fiduciary responsibility to patients, and safety improvement can thrive.
Interest-based mediation is a means of opening direct communication between parties in a dispute. Its aim is to address the parties' actual interests and needs rather than the inflated interests and needs evoked by the adversarial arrangement of malpractice law. Empirical research indicates that patients who suffer injury often have non-economic motivations--such as a desire for information and communication--in bringing a claim. (60) Likewise, it has been argued that what physicians want out of litigation (whether that means winning a malpractice suit or a subsequent defamation claim that they have brought as plaintiff) is not monetary repair, but repair of reputation. (61) Mediation is a means of addressing these interests in a "restorative" way that is impossible within the context of traditional tort litigation.
Another potential advantage of mediation is that, although it takes place within the existing fault-based system, its confidentiality is ostensibly assured through statutory legal privilege in almost every state. (62) Although the degree to which legal privilege does actually guarantee a "safe harbor" against subsequent litigation has been questioned, (63) mediation has the advantage of "internalizing" responsibility for the resolution so that the parties are able to communicate directly rather than through legal intermediaries. As a result, the parties may all benefit from the resolution. Health care providers can avoid a costly lawsuit, consequent reporting to the National Practitioner Data Bank, and loss of reputation, while patients and families can make a human connection following a loss, and patients can be brought into the peer review process by requesting follow-up or remedial actions in lieu of or in addition to monetary damages. Although mediation does not offer a direct avenue to information collection about adverse events and errors, it may create a less adversarial context in which safety, rather than money, can be pursued as a mutual goal and the patient's experience can be explicitly used to improve care.
Mediation can also provide a much-needed context that supports truth-telling as an avenue to justice. Patients are routinely excluded from rituals of forgiveness in the medical context. In Charles Bosk's description of forgiveness for the technical and moral errors committed by surgical residents, (64) analogs of "confession" and "repentance" take place in the "hair shirt" ritual of the morbidity and mortality conference. Here, physicians report to peers and superiors on the circumstances surrounding their involvement in an adverse event, and forgiveness is conferred by the superior. A second ritual involves peer support for clinicians confronting the emotional trauma of harmful errors. Absent from all of these contexts is the patient. All of these rituals serve important purposes; justice to specific patients is not one of them.
In her work on religious and cultural perspectives on error and forgiveness, Nancy Berlinger argues that such rituals are incomplete. (65) In the Jewish and Christian traditions that have helped to shape Western cultural norms, argues Berlinger, the possibility of forgiveness or reconciliation in the service of justice to harmed parties--in this case, patients--involves repairing one's relationship with the patient, not with one's superordinates or peers. Repairing the relationship requires appropriate actions of confession and repentance. Practices that could be described as confession in the Jewish and Christian traditions would include (to list only a few possibilities Berlinger mentions) promptly acknowledging error and disclosing to the patient a cogent and complete narrative of what happened; accepting personal accountability even in cases of systems error, bearing in mind that some patients may always understand error as an individual rather than a systemic failure; and giving clinicians opportunities to process incidents and receive counseling in an environment that is neither punitive nor demoralizing. Practices that could be described as repentance could include (again listing only a few examples) apologizing and expressing remorse to an injured patient (and allowing oneself to feel remorseful); offering injured patients and family members pastoral care or other counseling services; and covering the cost of treating injuries resulting from error. Berlinger also details practices that might promote forgiveness or reconciliation. 
For example, forgiveness might be promoted by inviting patients to be part of the hospital's quality improvement process, to allow them, if they wish, to take an active role in working with clinicians and administrators to create a patient-centered culture of safety by sharing their experiences of medical harm and their perspectives on hospital culture (although injured patients are not to be made to feel that they ought to participate in QI).
Berlinger also notes that forgiveness might be promoted by challenging aspects of institutional culture that deny the fallibility, and therefore the humanity, of clinical staff, or that work against truth-telling, accountability, compassion, and justice in dealing with medical error and promoting patient safety.
It is important to remember that the IOM report includes both errors that cause no harm (near misses) and errors that cause "lesser injuries" within its recommendation for voluntary reporting. (66) The recommendation should not be regarded as a substitute for the established professional obligation for disclosure of harmful errors, be they serious, moderate, or minor. Regardless of the policy recommendations, the ethical obligation for disclosure of harmful error stands. The challenge, therefore, will be to create a context in which this obligation can be honored despite seemingly contradictory policy proposals.
As Berlinger's recommendations about disclosure make clear, delivering justice to harmed parties entails the institutionalization of new norms and practices of disclosure. The greater openness potentially afforded by no-fault or mediation and voluntary compensation in the context of existing tort liability may provide environments in which such norms and practices can take hold and harmonize with the long-established fiduciary obligations of disclosure.
The chief premise of a systems approach to error is that overall safety improvement requires that old forms of individual interrogation (shame and blame) be replaced by new forms of "system interrogation" (that is, root cause analysis). Another premise of a systems approach is that success depends on the collection and analysis of information gleaned from real life health care delivery. The IOM report recommends that information about error not associated with serious harm be protected from all uses not connected with safety improvement, including uses requiring access to information by such methods as subpoena, legal discovery, and the Freedom of Information Act.
As we have just noted, the recommended protection of information about "lesser harms" is incompatible with professional obligations of disclosure. Equally if not more disturbing, both the IOM recommendations and ensuing legislation (the "Patient Safety and Quality Improvement Act" in the Senate, and the "Patient Safety Improvement Act of 2002" in the House (67)) make safety improvement contingent on patients being harmed--even though the harms in question can be of "lesser" severity. The effort to protect information that is part of a voluntary reporting scheme is a "workaround" in the malpractice status quo. It pits the value of safety improvement against the values of nonmaleficence and truth-telling. As Brennan points out, the IOM sought to assure accountability through its proposed mandatory reporting of serious, preventable adverse events. Not surprisingly, however, the dominance of malpractice has made this recommendation politically untenable. (68) Thus, reconsideration of the malpractice system itself, in favor of no-fault and mediation, may be necessary to overcome the antagonism between safety improvement and the values of nonmaleficence and truth-telling, as well as to achieve accountability in the prospective as well as the retrospective sense.
The recently finalized privacy rule under the Health Insurance Portability and Accountability Act (HIPAA) has also given rise to concerns about the extent to which data collection for safety improvement will be hampered by HIPAA provisions to safeguard the privacy of patient records. At issue is whether patient records--primarily intended to support the health care needs of the patient--can also be used for the secondary purpose of improving safety or quality. As Bryan Liang points out, HIPAA was not designed with safety improvement research in mind and may present some obstacles to the use of patient information in this arena. (69) In the original version of the regulation, before it was modified in August 2002, patient consent was required for the release of personal, identifiable information that could be used for safety improvement. The modifications eliminate the consent requirement for the disclosure of personal information for "health care operations," which may include quality improvement activities. Under the rubric of "quality," data collection for safety without patient consent appears to be allowable in the final rule. But if quality- or safety-improvement rises to the level of "research"--if it involves the production of "generalizable knowledge"--the activities will fall under the requirements of human subjects protection requiring Institutional Review Board approval or HIPAA authorization. The modifications to HIPAA also allow for researchers to have access, without patient consent, to a "limited data set," that is, to information that has been partially de-identified. It is not clear whether this limited information will be useful in fine-grained safety improvement work.
The final privacy rule goes some way towards harmonizing patient privacy and the promotion of safety-improvement activities. Still, safety improvement activities ought not to be conducted on the basis of information to which harmed patients themselves are denied access, either because of the structure of peer review protections or because providers are reluctant to disclose due to liability fears. (70) No-fault liability offers one way around this conflict. Under such a system, existing obstacles to patient access to information about the delivery of their health care would be largely removed, and this secondary use of health information would not be contingent on depriving patients of their rights to know about problems associated with their health care. Although the HIPAA privacy provisions have been finalized and compliance is now required, it is likely that definitive answers to questions regarding privacy and "research" will be obtained only as the rule is tested or as advocates seek amendments to it.
The chief goal of this report has been to explore and clarify both the ethical considerations that enter into patient safety reform and the ethical implications of various reform proposals at the federal, state, and institutional levels. Elucidating the ethical basis of policy deliberation leads to several important recommendations:
* Federal officials, privacy advocates, and advocates of safety improvement should work together to clarify the implications of the HIPAA privacy rule for the collection of safety data.
* Policymakers should look for alternatives to the tort system to serve the purposes of compensation and safety improvement.
* Institutional change depends on understanding how a cultural context shapes perceptions about why errors happen and how actors within a culture learn to think about and deal with them. Institutional leaders in health care will need to examine more self-consciously the "hidden curriculum" in medical and nursing education--that is, the practices that are taught and rewarded through example rather than conveyed in the official curriculum.
* Errors cannot be eliminated. We can, however, reduce them, learn from them, improve the way we handle them, and deal more justly with all those (including clinicians) touched by them.
(1.) L.T. Kohn, J.M. Corrigan, and M.S. Donaldson, eds., To Err Is Human: Building a Safer Health System (Washington, D.C.: National Academy Press, 2000).
(2.) Ibid., p. 18.
(3.) V.A. Sharpe and A.I. Faden, Medical Harm: Historical, Conceptual and Ethical Dimensions of Iatrogenic Illness (New York: Cambridge University Press, 1998); T.L. Beauchamp and J.F. Childress, Principles of Biomedical Ethics, 4th ed. (New York: Oxford University Press, 1994).
(4.) E.D. Pellegrino, "Toward a Reconstruction of Medical Morality: The Primacy of the Act of Profession and the Fact of Illness," Journal of Medicine and Philosophy 4 (1979):32-55; E.D. Pellegrino and D.C. Thomasma, For the Patient's Good: The Restoration of Beneficence in Health Care (New York: Oxford University Press, 1988).
(5.) E.D. Pellegrino, "Prevention of Medical Error: Where Professional and Organizational Ethics Meet," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, DC: Georgetown University Press, in press).
(6.) Kohn et al., To Err Is Human, p. 4.
(7.) Kohn et al., To Err Is Human, p. 4.
(8.) See T. Brennan, "The Institute of Medicine Report on Medical Errors--Could it do Harm?" New England Journal of Medicine 342 (2000):1123-1125.
(9.) Kohn et al., To Err Is Human, pp. 87-88, 133.
(10.) Kohn, et al., To Err is Human, p. 102.
(11.) It is well known that the threat of medical malpractice has created a culture of silence in medicine, discouraging health care providers from telling patients about problems associated with their care. Even claimants who settle a lawsuit may never know the events surrounding an injury. See S. Gilbert, Wrongful Death (New York: W.W. Norton, 1997).
(12.) Liang has indicated the multiple ways in which such confidentiality can, in fact, be breached by legal maneuvers. See Bryan Liang, "Error Disclosure for Quality Improvement: Authenticating a Team of Patients and Providers to Promote Patient Safety," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(13.) Kohn, et al., To Err is Human, p. 102.
(14.) F. Rosner, J.T. Berger, P. Kark, J. Potash, and A.J. Bennett, "Disclosure and Prevention of Medical Error," Archives of Internal Medicine 160 (2000):2089-2092; American Medical Association, Council on Ethical and Judicial Affairs, Code of Medical Ethics: Current Opinions with Annotations (Chicago: AMA, 1997), sec. 8.12:125.
(15.) Joint Commission on Accreditation of Health Care Organizations, 2002 Comprehensive Accreditation Manual for Hospitals: The Official Handbook (Oakbrook Terrace, Ill.: JCAHO, 2001). See standard RI.1.2.2: "Patients and, when appropriate, their families are informed about the outcomes of care, including unanticipated outcomes."
(16.) S.S. Kraman and G. Hamm, "Risk Management: Extreme Honesty May Be the Best Policy," Annals of Internal Medicine 131 (1999):963-967.
(17.) N. Osterweil, "Truth or Consequences: Does Disclosure Reduce Risk Exposure?: Admitting Errors Makes Process Less Adversarial, MDs, Lawyers Agree," WebMD Medical News, 20 December 1999. http://my.webmd.com/content/article/1728.53548.
(18.) L.L. Leape, "Error in Medicine," Journal of the American Medical Association 272 (1994):1851-7.
(19.) D. Maurino, J. Reason, and R. Lee, Beyond Aviation Human Factors (Aldershot, UK: Avery Press, 1995); J. Reason, Human Error (New York: Cambridge University Press, 1990); J. Reason, "Human Error: Models and Management," British Medical Journal 320 (2000):768-70; J. Reason, Managing the Risks of Organizational Accidents (Aldershot, UK: Ashgate, 1998).
(20.) L.L. Leape, D.D. Woods, M.J. Hatlie, K. W. Kizer, S.A. Schroeder, G.D. Lundberg, "Promoting Patient Safety by Preventing Medical Error," Journal of the American Medical Association 280 (1998):1444-1447.
(21.) Thanks to Janlori Goldman for pointing out this important ambiguity.
(22.) W.E. Deming, Out of the Crisis (Cambridge, Mass.: MIT Center for Advanced Engineering Study, 1986); D.M. Berwick, "Continuous Improvement as an Ideal in Health Care," New England Journal of Medicine 320 (1989):53-56; J. Reason, Human Error.
(23.) See the text of the bills, S. 2590, and H.R. 4889 on Thomas, the federal government's legislative information site on the Internet, http://thomas.loc.gov/
(24.) T. Brennan, "The Institute of Medicine Report on Medical Errors--Could it do Harm?" New England Journal of Medicine 342 (2000): 1123-1125.
(25.) E.A. Dauer, "Ethical Misfits: Mediation and Medical Malpractice Litigation," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(26.) T. Percival, Medical Ethics or A Code of Institutes and Precepts adapted to the Professional Conduct of Physicians and Surgeons (Manchester: S. Russell, 1803).
(27.) K.A. De Ville, "God, Science, and History: The Cultural Origins of Medical Error," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, DC: Georgetown University Press, in press); K.A. De Ville, Medical Malpractice in Nineteenth-Century America: Origins and Legacy (New York: NYU Press, 1990).
(28.) D.M. Berwick, "Continuous Improvement as an Ideal in Health Care"; J. Reason, Human Error.
(29.) R.I. Cook, D.D. Woods, and C. Miller, "A Tale of Two Stories: Contrasting Views of Patient Safety. Report from a Workshop on Assembling the Scientific Basis for Progress on Patient Safety" (Chicago: National Patient Safety Foundation, 1998), http://www.npsf.org/exec/front.html.
(30.) J. Reason, Managing the Risks of Organizational Accidents (Aldershot, UK: Ashgate, 1998), p. 208.
(31.) D. Blumenthal, "Making Medical Errors into 'Medical Treasures,'" Journal of the American Medical Association 272 (1994):1867-68; K.E. Weick and K.M. Sutcliffe, Managing the Unexpected: Assuring High Performance in an Age of Complexity (San Francisco: Jossey-Bass, 2001).
(32.) A. Merry and A.M. Smith, Errors, Medicine and the Law (Cambridge, UK: Cambridge University Press, 2001). It is also worth noting that in his Nicomachean Ethics, Aristotle observes that responsibility is properly ascribed only to actions that are voluntary; see Nicomachean Ethics 1110a ff.
(33.) K.A. De Ville, "God, Science, and History: The Cultural Origins of Medical Error."
(34.) S. Gilbert, Wrongful Death (New York: W.W. Norton, 1997), pp. 218-19.
(35.) J. Reason. Managing the Risks of Organizational Accidents, p. 192.
(36.) Edmund Pellegrino, "Prevention of Medical Error: Where Professional and Organizational Ethics Meet."
(37.) Edward A. Dauer, "Ethical Misfits: Mediation and Medical Malpractice Litigation," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(38.) The discussion of this distinction is drawn from V.A. Sharpe, "Taking Responsibility for Medical Mistakes," in S. Rubin and L. Zoloth, eds., Margin of Error: The Ethics of Mistakes in the Practice of Medicine (Hagerstown, Md.: University Publishing Group, 2000): 183-94.
(39.) Of course, failures of prospective responsibility often result in holding someone responsible retrospectively. The systems approach is an attempt to expand the scope of prospective responsibility so that concerted steps toward safety can be taken and rewarded before there are specific outcomes to be assessed.
(40.) For example, are moral norms inherent to medicine, residing in the fiduciary nature of the healing relationship? Are they grounded in a pragmatic concern to produce "patient satisfaction?" Are they based in theories of democratic citizenship? Or is medicine simply like other market transactions that are based on contracts stipulating specific expectations and obligations?
(41.) Hippocrates, Epidemics I, in Hippocrates, trans. W.H.S. Jones, Loeb Classical Library (Cambridge, Mass.: Harvard University Press, 1923-1988): 165.
(42.) L. May and S. Hoffman, Collective Responsibility: Five Decades of Debate in Theoretical and Applied Ethics (Savage, Md.: Rowman and Littlefield, 1991).
(43.) L.L. Leape, D.W. Bates, D.J. Cullen, et al., for the ADE Prevention Study Group, "Systems Analysis of Adverse Drug Events," Journal of the American Medical Association 274 (1995):35-43.
(44.) E. Pellegrino, "Prevention of Medical Error: Where Professional and Organizational Ethics Meet."
(45.) There is a large literature on the extent to which people are responsible for their health and their health behaviors. Our question is much narrower and concerns only whether patients are responsible for the safety and quality of health care delivery.
(46.) R. Goeltz, "In Memory of My Brother, Mike," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(47.) B. Liang, "Error Disclosure for Quality Improvement: Authenticating a Team of Patients and Providers to Promote Patient Safety," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(48.) This point is reflected in the Joint Commission's 2002 standard 3.7 on patient education. Joint Commission on Accreditation of Health Care Organizations, 2002 Comprehensive Accreditation Manual for Hospitals: The Official Handbook (Oakbrook Terrace, Ill.: JCAHO, 2002).
(49.) D. Hilfiker, "Facing Our Mistakes," New England Journal of Medicine 310 (1984):118-122.
(50.) D. Studdert, "On Selling 'No-Fault,'" in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press); Edward A. Dauer, "Ethical Misfits: Mediation and Medical Malpractice Litigation," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press); Bryan Liang, "Error Disclosure for Quality Improvement: Authenticating a Team of Patients and Providers to Promote Patient Safety."
(51.) F.A. Sloan and C.R. Hsieh, "Variability in Medical Malpractice Payments: Is the Compensation Fair?" Law and Society Review 24 (1990):997-1039; N. Vidmar, Medical Malpractice and the American Jury: Confronting the Myths About Jury Incompetence, Deep Pockets, and Outrageous Damage Awards (Ann Arbor: University of Michigan Press, 1995); P.C. Weiler, H.H. Hiatt, J.P. Newhouse, et al., A Measure of Malpractice: Medical Injury, Malpractice Litigation and Patient Compensation (Cambridge, Mass.: Harvard University Press, 1993); H.R. Burstin, W.G. Johnson, S.R. Lipsitz, and T.A. Brennan, "Do the Poor Sue More? A Case-Control Study of Malpractice Claims and Socioeconomic Status," Journal of the American Medical Association 270 (1993):1697-1701.
(52.) T.A. Brennan, C.A. Sox, H.R. Burstin, "Relation Between Negligent Adverse Events and the Outcomes of Medical Malpractice Litigation," New England Journal of Medicine 335 (1996): 1963-1967.
(53.) J.S. Kakalik and N.M. Pace, Costs and Compensation Paid in Tort Litigation (Santa Monica, CA: RAND, 1986 (R-3391-ICJ)).
(54.) P. Weiler, et al., A Measure of Malpractice (Cambridge, Mass.: Harvard University Press, 1993).
(55.) C. Levine. "Life But No Limb: The Aftermath of Medical Error." Health Affairs 21 (2002):237-41. Reprinted in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(56.) D. Kessler and M. McClellan. "Do Doctors Practice Defensive Medicine?" Quarterly Journal of Economics 111 (1996):353-390.
(57.) S.C. Charles, "Sued and Non-Sued Physicians' Self-Reported Reactions to Malpractice Litigation," American Journal of Psychiatry 142 (1985):437-440; T. Passineau, "Why Burned-Out Doctors Get Sued More Often," Medical Economics 75 (1998):210-218; B.A. Liang, "The Effectiveness of Physician Risk Management: Potential Problems for Patient Safety," Risk Decision Policy 5 (2000):183-202.
(58.) D. Studdert, "On Selling 'No-Fault.'"
(59.) H. Morreim, "Medical Errors: Pinning the Blame versus Blaming the System," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(60.) E.A. Dauer and L.J. Marcus, "Adapting Mediation to Link Resolution of Medical Malpractice Disputes with Health Care Quality Improvement," Law and Contemporary Problems 60 (1997):185-218; W. Levinson, "Physician-Patient Communication. A Key to Malpractice Prevention," Journal of the American Medical Association. 272 (1994):1619-20; W. Levinson, D.L. Roter, J.P. Mullooly, V.T. Dull, R.M. Frankel, "Physician-Patient Communication. The Relationship with Malpractice Claims Among Primary Care Physicians And Surgeons," Journal of the American Medical Association. 277 (1997):553-9.
(61.) W.M. Sage, "Reputation, Malpractice Liability, and Medical Error," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press); J. Soloski and R.P. Bezanson. Reforming Libel Law. (New York: Guilford Press, 1992).
(62.) E.A. Dauer, L.J. Marcus, and S.M. Payne, "Prometheus and the Litigators: A Mediation Odyssey," Journal of Legal Medicine 21 (2000):159-186.
(63.) B. Liang, "Error Disclosure for Quality Improvement: Authenticating a Team of Patients and Providers to Promote Patient Safety."
(64.) C.L. Bosk, Forgive and Remember: Managing Medical Failure (Chicago: University of Chicago Press, 1979).
(65.) N.S. Berlinger, "'Missing the Mark': Medical Error, Forgiveness, and Justice," in Promoting Patient Safety: An Ethical Basis for Policy Reform, ed. V.A. Sharpe (Washington, D.C.: Georgetown University Press, in press).
(66.) Kohn et al., To Err Is Human, pp. 101, 110.
(67.) These bills can be found on Thomas, the federal government's legislative information site on the Internet, http://thomas.loc.gov/
(68.) Troyen Brennan, "The Institute of Medicine Report on Medical Errors--Could it do Harm?"
(69.) Bryan Liang, "Error Disclosure for Quality Improvement: Authenticating a Team of Patients and Providers to Promote Patient Safety"; Troyen A. Brennan and Michelle M. Mello, "Patient Safety and Medical Malpractice: A Case Study," Annals of Internal Medicine 139 (2003): 267-273.
(70.) T. A. Brennan, "The Ethics of Confidentiality: The Special Case of Quality Assurance Research," Clinical Research 38 (1990):551-557.
Virginia A. Sharpe, "Promoting Patient Safety: An Ethical Basis for Policy Deliberation," Hastings Center Report Special Supplement 33, No. 5 (2003), S1-S20.