
Phantom risk: scientific inference and the law.

Phantom risk, arguably, encompasses miscarriage resulting from work at video display terminals, birth defects in children caused by the mother's use of the drug Bendectin, cancer from magnetic fields, lung cancer produced by the very slight levels of airborne asbestos in buildings with intact asbestos insulation, and cancer resulting from a slip and fall in a grocery store.

In using the term phantom risk we do not question the existence of human tragedies or imply lack of sympathy for the affected individuals. A family that has a child with a major birth defect has suffered real tragedy, whatever its cause. The problem we address is whether its cause could be identified and how to resolve the resulting legal claims that would inevitably arise in our litigious society.

Phantom risk can arise from chance observations in everyday life. Breast cancer developing after serious trauma to the chest, the birth of a child with a major defect following use of the drug Bendectin by the mother, miscarriage in a user of a video display terminal, the illness of a child after a vaccination, or the appearance of essentially any cancer in a person exposed to a pesticide might understandably appear as causal sequences to the people concerned. Such incidents, which in retrospect might be regarded as chance occurrences, helped to fuel the scientific and legal controversies involved in phantom risks.

Phantom risk may also arise from errors and ambiguity in science itself. Often in science, the first studies in a new line of investigation are preliminary in design, crude in execution or flawed in concept. Follow-up studies are often better designed, more focused, better executed and less ambiguous in interpretation. Phantom risk is a manifestation of the confusion and uncertainty that invariably accompany risk research, a science that in all respects is in the early stages of its development.

At least three factors must be considered when assessing the likelihood that a person was harmed by exposure to an agent (which is, after all, the central question in personal injury suits). The first factor is the existence of a hazard; the second is the exposure the person received; and the third is the level of risk associated with that exposure. Public discourse (and media coverage) about subtle environmental hazards is overly focused on the first of these factors (e.g., whether radiation causes cancer); the other two, however, need to be considered as well.

The most direct way to identify a hazard is to observe human populations. Epidemiology does just that, trying to identify statistical associations between exposure and incidence of a disease by observing human populations. Epidemiology provides information that is directly relevant to health and disease in humans. However, this must be weighed against the frequent difficulty in interpreting the evidence and the frequent inconsistency of epidemiologic findings. The need to examine the whole body of evidence about a suspected hazard, and judge what can be concluded reliably in spite of inconsistency and confusion, cannot be overemphasized.


In pointing to the need to weigh conflicting scientific evidence and to debate interpretations of data, we often return to two major themes. The first is the difficulty of inferring cause-and-effect relationships from epidemiologic evidence. The second is the difficulty of inferring risks to humans from high-dose animal experiments.

Many arguments about phantom risk revolve around small relative risks, that is, weak statistical associations. A relative risk close to 1.0 implies that the exposure is, at best, only one of perhaps many factors that contribute to the development of a disease. Furthermore, because of sampling error, the results of an epidemiologic study are surrounded by a penumbra of uncertainty. A study might report a relative risk greater than 1.0 (indicating an increase in risk), but with a margin of sampling error that includes a relative risk of 1.0. Thus, the increase is not statistically significant, meaning that it has an uncomfortably high probability of arising from sampling error. This creates obvious problems for laypeople on juries who must interpret such evidence.
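The interplay between a relative risk and its margin of sampling error can be made concrete. The following sketch uses the standard log-normal approximation for a relative risk's confidence interval; the counts are hypothetical illustrations, not data from any study discussed in this article:

```python
import math

def relative_risk_ci(cases_exp, total_exp, cases_unexp, total_unexp, z=1.96):
    """Relative risk and approximate 95% confidence interval,
    using the standard log-normal approximation."""
    rr = (cases_exp / total_exp) / (cases_unexp / total_unexp)
    se = math.sqrt(1 / cases_exp - 1 / total_exp
                   + 1 / cases_unexp - 1 / total_unexp)
    log_rr = math.log(rr)
    return rr, math.exp(log_rr - z * se), math.exp(log_rr + z * se)

# Hypothetical study: 30 cases among 1,000 exposed subjects,
# 25 cases among 1,000 unexposed subjects.
rr, lo, hi = relative_risk_ci(30, 1000, 25, 1000)
# rr is 1.2, an apparent 20 percent increase in risk, but the 95 percent
# interval (roughly 0.71 to 2.03) includes 1.0, so the increase is not
# statistically significant.
```

With these numbers a study could truthfully report an elevated relative risk, even though the result is entirely compatible with no effect at all, which is precisely the kind of evidence juries are asked to weigh.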

Many risks are reported on the basis of very limited data. For example, the claim (often repeated in the lay media) that 10 percent to 15 percent of all childhood cancers result from exposure to magnetic fields derives from data from a group of 27 households in Denver. In statistical jargon, such results are unstable, that is, highly uncertain because of the small number of subjects.

Epidemiologic studies are also susceptible to many potential errors that are not statistical in nature. For example, James Mills from the National Institute of Child Health and Human Development in Bethesda, Maryland, suggests that early studies linking spermicides and birth defects were flawed and seriously misleading. Renate Kimbrough, of the Washington, D.C.-based Institute for Evaluating Health Risks, has described major flaws in early studies that reported adverse health effects of polychlorinated biphenyls (PCBs). Michael Gough from the U.S. Congress Office of Technology Assessment has examined studies that are inconsistent with (and probably invalidate) the earliest studies that linked low-level exposure to dioxin and cancer. In all of these cases, the early positive findings were not confirmed by later investigation and are presumably wrong.

Unfortunately, the irreducible uncertainties in epidemiology (both random and non-random) are large enough to be legally significant.

In the realm of science, a relative risk of 2.0 is considered "small," in most cases near or below the limits of reliable detection. However, in the legal arena, a doubling of risk might be interpreted as satisfying the "more likely than not" standard of civil litigation. This is not to say that no reliable inferences about subtle hazards can be drawn, but rather that they must be drawn despite noise and inconsistencies in the data.
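The link between a relative risk of 2.0 and the "more likely than not" standard comes from the standard epidemiologic notion of the attributable fraction, (RR - 1) / RR: the share of cases among the exposed that is statistically attributable to the exposure. A minimal sketch:

```python
def attributable_fraction(relative_risk):
    """Fraction of cases among the exposed attributable to the exposure,
    the standard epidemiologic quantity (RR - 1) / RR."""
    return (relative_risk - 1.0) / relative_risk

# A relative risk of exactly 2.0 puts attribution at exactly 50 percent,
# the knife edge of the "more likely than not" standard.
af_at_2 = attributable_fraction(2.0)    # 0.5
af_at_1_5 = attributable_fraction(1.5)  # about 0.33, below the threshold
```

Hence the legal significance of a number that scientists regard as barely detectable: any relative risk above 2.0 pushes the attributable fraction past one half.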

There are great disparities in perspective about subtle risks, both between scientists and laypeople, and to a lesser extent among scientists themselves. Compared with other organic solvents, for example, trichloroethylene (TCE) is rather benign, with obvious toxic effects only at high levels of exposure (compared to typical environmental levels). But, in massive doses, it produces cancer in animals. Can one infer that TCE causes cancer in humans at very low levels of exposure? Probably not: the great disparity in doses and the biological differences between the test animals and humans make the relevance of the animal data very uncertain. And though the epidemiologic studies on TCE and cancer risk are inconsistent, they are considered, as a whole, negative.

The evidence regarding PCBs raises other kinds of problems. Commercially used PCBs were mixtures of chemicals of varying (but generally moderate) toxicity. In massive doses (compared with typical environmental exposures) some PCB mixtures produce cancer in laboratory animals. However, there is no convincing evidence that PCBs caused any human illness. Some well-publicized incidents (e.g., the tragic poisoning of people in Japan and Taiwan who had ingested rice oil contaminated with PCBs and other chemicals) were later traced to contaminants other than PCBs. In the plenitude of toxic substances, PCBs have received excessive notoriety.

The exposure of most people to PCBs, asbestos (in buildings with intact asbestos insulation) and other toxic substances outside of the workplace is too low to be a major (or even detectable) source of illness. In 1989, Dr. Gough reviewed the Environmental Protection Agency's (EPA) estimates of cancer risk from environmental pollution, and identified a total of 6,000 to 11,000 cancer deaths that might be associated with environmental exposure to manmade carcinogens - or 1.5 percent to 3.0 percent of all cancer deaths. By the EPA's own estimates, the expenditure of large resources to further reduce public exposure to these chemicals is unlikely to have a measurable effect on public health.

Yet, the perspective of someone who has fallen victim to disease is vastly different. The small risk of developing a disease no longer matters; the victim has already lost that particular gamble. The question becomes whether some external exposure was responsible. Perhaps a strong case can be made, or perhaps not. The scientific literature has so many reports (some are probably wrong) of association between exposure to toxic substances and disease that anyone who searches through it will find bits of inconsistent and often disquieting evidence. Anybody who insists on unambiguous proof of safety will never receive a satisfactory answer from science.


The law frequently demands positive statements by scientists that a risk does, or does not, exist, which often leads to unnecessary confusion and controversy. For example, consider the statement, "Smoking causes lung cancer." We know this to be true with a high - but not absolute - level of certainty. This claim is supported by decades of epidemiologic and laboratory studies. The former show that smoking dramatically increases the risk of lung cancer; the latter have begun to trace the causal chain in the process.

But the claim that smoking causes lung cancer is difficult to prove. The tobacco industry has for many years cynically argued that smoking is an unproven cause of lung cancer.

Nevertheless, the epidemiologic evidence and laboratory evidence, taken together, make a case of overwhelming strength. Ironically, personal injury suits by lung cancer victims against tobacco companies have been uniformly unsuccessful.

Consider, by contrast, the statement, "Sudden trauma does not cause cancer." Such a negative proposition is unprovable and, in some philosophical sense, meaningless. No study could prove the absence of any association between trauma and cancer; at best one might place an upper limit on the risk.

In fact, most authorities at the beginning of the 20th century did believe that simple trauma might cause cancer. Eventually, after physicians developed objective criteria to identify trauma-induced cancer, the number of such cases declined. The theory is now dead; or rather, most of its advocates are dead. But for many years it figured prominently in claims for compensation, and it is still resurrected occasionally in lawsuits.

Another questionable theory is the view of clinical ecologists that traces of environmental pollutants damage the immune system. The methods and conclusions of clinical ecology have been widely criticized by mainstream physicians and scientists. Nevertheless, some clinical ecologists have been prominent as expert witnesses in personal injury suits, offering alarming (and grossly inappropriate) diagnoses such as "chemically induced AIDS" in support of claimants' cases.

Given time and resources, science will ultimately clear up such muddles. By now, the overwhelming weight of scientific evidence is that there is no detectable link between Bendectin or spermicides and birth defects (despite early studies that seemed to indicate a link), or between use of video display terminals and miscarriage (despite media reports of "clusters" of miscarriages, and at least one positive study).

Ultimately, truth in science is established by its ability to make successful predictions.

However, this process is slow, and it constantly generates still other questions. The controversy about miscarriage and use of video display terminals took 10 years to resolve. The clusters of miscarriages were never adequately explained and may have had no connection with the terminals. But once the question was opened, a dozen studies were required to put the matter to rest; some critics may still argue that the matter remains unresolved. It took centuries to dispel the theory that simple trauma is a sufficient cause of cancer.


The problem does not lie with science but with the use of science in human affairs and the law.

Tort law approaches the question of risk quite differently than science does. A store owner defending himself against a claim that a slip and fall caused breast cancer need not prove generally that trauma does not cause breast cancer; he need only show that this trauma probably did not cause that cancer, whether or not other traumas cause other cancers elsewhere. Conversely, the plaintiff needs only to sell a diagnosis: this breast cancer was caused by that fall. The jury is supposed to decide in favor of the side that has established its claim by the "preponderance of evidence."

Ending baseless claims - that trauma caused a cancer, or the drug Bendectin caused a birth defect - is a far more difficult proposition. The individual claimant may lose, and the case will then be closed. But for legal purposes the issue remains open, to be raised again any number of times by others elsewhere. Everyone is entitled to at least one day in court, and the same question can be litigated indefinitely.

In a mass market, a manufacturer is potentially exposed to many lawsuits if something goes wrong, and even questionable claims may result in endless litigation. If 30 million women used Bendectin, and 900,000 of them bore children with birth defects (as expected from the 3 percent incidence of major birth defects in the population at large), then 900,000 different juries could (in principle) be asked to determine whether Bendectin causes birth defects.

Many Bendectin lawsuits were filed. The first thousand claims in a giant class action suit were resolved in line with mainstream science, though not before Merrell (the drug's manufacturer) was impelled to offer $120 million to escape from the legal quagmire. The plaintiffs' lawyers declined the offer, and the cases went to trial. Merrell ultimately won most of the individual trials. Most juries made no award, but some returned verdicts ranging from $20,000 to $95 million. Most of these verdicts were overturned on appeal, but some survived that additional test. The average award (the total awarded in all the trials, divided by the number of trials) was close to $100,000. The legal costs became so high that Merrell withdrew the drug, depriving women of the only effective drug for morning sickness on the market.


Litigation often turns on fear. A legal theory now held by some academics and jurists links legal rights not just to the actuality of risk but also to the public's widely shared fears. Some courts permit recovery only if a claimant's anxiety about rabies or cancer, for example, is scientifically reasonable, and would be shared by a knowledgeable doctor or scientist in the same position. But others ask only whether the public at large shares the fear; the factual basis for the fear is irrelevant. Real fear is easier to prove than real hazard.

Legal risk has been increased by rulings, in a growing number of courts, that a plaintiff need not prove actual injury to collect in some toxic tort suits. If the plaintiff claims that a defendant's product or technology causes negative health effects, the plaintiff (in these courts) need not present any evidence that the product actually hurt him or her, but only that the product caused a fear of potential negative health effects.

In some jurisdictions, this liberal standard of recovery is mitigated somewhat by a rule that the plaintiff must prove through competent scientific evidence that the fear is reasonable. But in other jurisdictions the plaintiff need show only that he or she suffers from fear and that the fear is widespread in the community. The latter rule has been widely adopted in power line cases. In cases in which that rule is applied, legal risk has nothing to do with scientific risk. Rather, the results of litigation turn on the success of those who oppose the product or technology in persuading the public that the product or technology should be feared.


The reader is likely to come away with two strong impressions. The first is that much confusion, error and ambiguity surround risk research. This is particularly true when the risks are small and perhaps non-existent. The second is that this creates much controversy and expense in a legal system that sometimes raises more questions than it settles.

These problems are connected, of course. The legal controversy arises, in part, from the difficulty that science has in proving cause and effect relationships in individual cases. We are all surrounded by carcinogens, natural and synthetic. Many of us develop cancer at some time in our lives. Yet rarely can science identify the specific cause of any person's tumor. Some of us will bear children with grave defects yet rarely can science identify the cause. We all get sick and die - yet the cause of most chronic diseases is unknown. Epidemiology can often only hint at the answers, and many caveats are needed when interpreting such evidence. The courtroom is not well suited to resolve such issues.

Phantom risk litigation is, in part, the cost of a legal system that grants broad access to the courts, and which gives attorneys financial incentives to file speculative litigation. It is also, in part, an unavoidable result of scientific uncertainty. Probably the best that legal reformers can do is to suggest ways to help improve the quality of the scientific evidence that is presented in court. The goal is not to raise standards of proof to levels so high that no plaintiff could hope to win, but rather to ensure that scientific results that are presented to juries have probative value, and that scientific opinions are within the spectrum of mainstream scientific thought.

In late 1992, the U.S. Supreme Court agreed to hear Daubert vs. Merrell Dow Pharmaceuticals Inc., one of the many unsuccessful Bendectin cases. The technical legal issue is the survival of the "Frye" rule, named after a 1923 Federal Court ruling that held that novel scientific evidence is admissible only when it has received "general acceptance" by the scientific community. The rule forced judges to look to the broader scientific community in determining whether to admit scientific testimony. Without Frye, judges presented with conflicting scientific evidence would tend to split the difference and allow both sides' testimony to be introduced no matter how implausible their basis. This occurred to a large extent starting in 1975, when the Federal Rules of Evidence, which make no mention of Frye, were promulgated. After a series of embarrassing rulings, a countertrend has emerged in many courts, and Frye is slowly becoming the majority rule once again. Frye should be affirmed.

Courts should also explore more fully the probative value of scientific evidence. Much phantom risk litigation turns on the interpretation of immune-system tests and other scientific evidence whose relation to clinical illness is remote and speculative. The use of high-dose animal studies as evidence in court, outside the context of a careful risk assessment, is a gross misuse of scientific data. Much of the litigation concerning health effects of electromagnetic fields involves in vitro or in vivo scientific studies that have no apparent connection with risk assessment. The legal community as a whole, and judges in particular, need to become more sophisticated about the scientific basis of risk assessment.

This is an article about science and the law, but it touches on other important issues as well. One is the social benefit of toxic tort litigation.

Lawyers often argue that litigation is an effective instrument for controlling behavior that creates risk. When risks are obvious, and the legal system performs quickly and predictably, this is probably true: the fear of a lawsuit encourages the shopkeeper to clean the ice from his doorstep. But tort law does not perform a useful or desirable social function when the link between exposure and injury is as remote and questionable as with the issues considered here - the litigation did not demonstrably reduce risk; it simply imposed costs.

Many of the ostensible social benefits of well-functioning tort litigation can be more efficiently obtained through other social mechanisms. Compensation for cancer, birth defects or other health problems can be provided more fairly and reliably through health insurance or other contractual arrangements. But, while better health insurance might reduce the incentive to litigate, recall that many phantom risk litigants do not allege any injury at all, only the possibility of injury at some time in the future.

The regulatory arena is far better suited than tort law for controlling subtle risks. For example, the Environmental Protection Agency attempts to regulate carcinogens to a level such that less than one excess death occurs in a million people over their lifetimes, which corresponds to less than three excess deaths per year in the entire country. Risks of this magnitude are far too small to be detectable by any conceivable scientific study, and thus far too small to lead to any tort litigation that meets conventional standards of legal proof.
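The arithmetic behind that nationwide figure is straightforward. The population and lifespan used below are round assumptions chosen to roughly match the early-1990s United States, not numbers taken from the article:

```python
# Converting a one-in-a-million lifetime cancer risk into expected
# excess deaths per year, nationwide. Population and lifespan are
# assumed round numbers for illustration.
population = 250_000_000   # approximate U.S. population, early 1990s
lifetime_risk = 1e-6       # regulatory target: one in a million per lifetime
lifespan_years = 75        # assumed average lifetime

excess_deaths_per_year = population * lifetime_risk / lifespan_years
# on the order of three excess deaths per year in the entire country
```

A handful of deaths per year, spread across a quarter-billion people, is indeed far below what any epidemiologic study could hope to detect.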

The regulatory system is also far better than the courtroom at resolving contradictory scientific evidence. In the United States at least, the regulatory process is subject to extensive scientific review and public comment, which pushes it towards consensus solutions. Ultimately, the questions of how much safety a society should purchase for itself, to whose benefit and at whose cost, are decisions that must be resolved through the political process. The goal is not to avoid all risk (which is impossible in any event) but to reduce risk to levels that society is willing to accept.

The issues discussed here were very expensive, in several senses. First, there are the costs of the research itself. A good epidemiologic or animal screening study might cost $1 million, about what it costs to treat a handful of cancer patients. On the scale of the national health budget, these costs are tiny.

At a different level are the opportunity costs.

The capacity of the scientific research establishment is limited, and scientific efforts might be more productively directed towards other health issues. Some 70,000 chemicals are used in commerce, for example, and as of the mid-1980s only 2 percent had been extensively tested for human health effects; no health data existed for over 70 percent of the rest. No doubt some of these chemicals will be found to cause otherwise preventable illness, most likely from occupational exposures.

The greatest costs are to society itself. By one estimate the issue of possible health hazards from electromagnetic fields now costs the American public $1 billion per year, through increased costs to utilities, litigation and ad hoc steps taken by many individuals and industries to reduce exposure. Considered as a health investment, that money is being very badly spent indeed. A billion dollars spent on prenatal and pediatric health care to inner city populations, for example, would produce important and easily demonstrable health benefits; the health benefits of electricity are incalculable, but obviously very high.

Finally, risks that are imposed on people involuntarily or unknowingly deserve special attention. But the evidence does not make a strong case that PCBs, dioxin, TCE, low-level electromagnetic fields and the other things that caused so much public and legal controversy are very risky, or risky at all, at least at typical exposure levels. Their risks (or non-risks) loom larger in the public's mind than the very much larger everyday risks that are under a person's voluntary control. A person who drives without seat belts, eats a rotten diet, smokes or drinks too much has real risks to worry about but can effectively reduce them.

In the end, phantom risk is a problem of the law and not science. Future historians might look back on late 20th century America and regard the excesses of toxic tort litigation as a bizarre aberration that reflects an essential failure of the law when the link between cause and effect is murky. Whatever the eventual solution to this problem may be, phantom risk remains a diversion that is too expensive for even our wealthy society.
COPYRIGHT 1993 Risk Management Society Publishing, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Author: Foster, Kenneth; Bernstein, David; Huber, Peter
Publication: Risk Management
Article Type: Cover Story
Date: Apr 1, 1993