
Sense and "sensitivity": epistemic and instrumental approaches to statistical evidence.

I.   A Critical Review of Proposed Solutions to the Statistical
     A. Exogenous-Factor Claims
     B. About-Relation Claims
     C. The Causal-Connection Argument
     D. Defendant-Specific Claims
     E. Justice-in-the-Particular-Case Claims
     F. Autonomy Considerations
     G. Social-Acceptability Considerations
     H. The Guaranteed-False-Conviction Argument
     A. Justifying Beliefs or Actions?: We Require an Epistemic,
        Not a Practical, Framework
     B. Some Epistemology: Introducing "Sensitivity"
     C. From Epistemology to More Practical Concerns: Should
        the Law Care About Sensitivity (or Knowledge)?
        1. The remaining puzzle: why care about knowledge?
        2. Sanchirico on character evidence
        3. Solution: the instrumental significance of being
     A. DNA Evidence
     B. Propensity-for-Crime Evidence at the Guilt Phase of Trial
     C. Incriminating Versus Exonerating Statistical Evidence
     D. Admissibility Versus Sufficiency of Statistical Evidence


"For nearly twenty years, law journals have been the forum for a bitter debate about the use at trial of overtly probabilistic evidence and methods," wrote Jonathan J. Koehler and Daniel N. Shaviro in 1990. (1) More than two decades have passed since then, but these words still hold true. Despite the voluminous body of literature dedicated to the issue of statistical evidence, it continues to generate great controversy in evidence law scholarship. Questions regarding the admissibility and sufficiency of statistical evidence arise in court with ever-growing frequency, with seemingly inconsistent treatment in the case law. (2) The aim of this Article is to dispel some of the confusion surrounding the use of statistical evidence in the legal arena by connecting the statistical evidence debate to broader epistemic discussions and by highlighting "Sensitivity"--that is, the requirement that a belief be counterfactually sensitive to the truth--as a way of epistemically explaining the suspicion toward statistical evidence. After exposing the epistemic distinctions between statistical and individualized evidence, the Article turns to examining their implications for the legal arena. We will use the epistemic discourse on Sensitivity as well as an instrumental analysis to address the descriptive and prescriptive challenges that statistical evidence poses.

One starting point for the statistical evidence debate is the classic Blue Bus hypothetical, (3) which is a variant of Smith v. Rapid Transit, Inc., (4) a seminal case in modern evidence law. The hypothetical consists of two cases. In both cases, a runaway bus injures the plaintiff, and the case goes to trial against the eponymous bus company. In the first case, the evidence includes eyewitness testimony that one of the Blue Bus Company's buses caused the injury. The witness, however, is imperfectly reliable. To illustrate, let us assume her to be 70% reliable in such circumstances. In the second case, however, there is no eyewitness to the accident. Instead, the plaintiff seeks to introduce evidence about the Blue Bus Company's market share in the area where the accident took place. The uncontested market share data show that the Blue Bus Company owns 70% of the buses in the relevant area. This, the plaintiff argues, shows that it is more likely that one of the Blue Bus Company's vehicles was involved in the accident, because Blue Bus is the largest bus company in the area with the greatest number of buses on the road.

Even though the evidence in both cases may be of equal probative value, our responses to the two cases are very different. Most people (lawyers and laypeople alike) find nothing problematic in basing a finding on the eyewitness evidence in the first case but are very reluctant to ground a finding on the market share evidence in the second case. (5) And many courts seem to agree. (6) Indeed, the second case closely resembles the Smith case mentioned above, in which the court rejected market share evidence as the basis for finding that the defendant's bus was responsible for the accident. (7)

But this now presents a problem. The Blue Bus hypothetical was carefully designed to keep all other things equal. This means, for instance, that relying on the two kinds of evidence will yield, over the long run, the same rate of mistaken decisions: in the words of the hypothetical, the percent chance that the witness mistook the color of the bus in the first case is equal to the market share of all other bus companies in the second case. And yet despite these similarities, almost everyone--judges and legal scholars, lawyers and laypeople alike--seems to draw a sharp distinction between eyewitness testimony and market share evidence, (8) even when all other things are held equal. How can such a discrepancy be explained? Why do we treat these kinds of evidence so differently? And can this intuitive distinction be vindicated? In other words, should we treat market share evidence and eyewitness testimony differently? If so, why? (9)
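The claimed equivalence can be checked directly. The following sketch is our construction, not part of the hypothetical itself: the 70% figures come from the hypothetical, but the simulation model, including the assumption that the witness's 70% reliability is independent of the true color of the bus, is ours. It compares the long-run error rate of a court that always follows the market-share evidence with that of a court that always follows the witness.

```python
import random

random.seed(0)
TRIALS = 100_000

def simulate(trials=TRIALS):
    stat_errors = wit_errors = 0
    for _ in range(trials):
        # True culprit: a Blue Bus Company bus with probability 0.7
        # (the company's market share in the relevant area).
        blue_at_fault = random.random() < 0.70

        # Market-share rule: always find against Blue Bus, the company
        # with 70% of the buses. The finding errs whenever the bus was
        # not in fact one of Blue Bus's.
        if not blue_at_fault:
            stat_errors += 1

        # Eyewitness rule: the witness reports the true color 70% of
        # the time; the court finds against whichever company she
        # names. The finding errs whenever the witness is wrong.
        witness_says_blue = (blue_at_fault if random.random() < 0.70
                             else not blue_at_fault)
        if witness_says_blue != blue_at_fault:
            wit_errors += 1
    return stat_errors / trials, wit_errors / trials

stat_rate, wit_rate = simulate()
print(f"market-share error rate: {stat_rate:.3f}")  # ~0.30
print(f"eyewitness error rate:   {wit_rate:.3f}")   # ~0.30
```

Both decision rules misfire in roughly 30% of cases over the long run, which is the sense in which the hypothetical holds "all other things equal."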

Consider the analogous gatecrasher hypothetical, (10) in which it is uncontested that of 1000 people attending a stadium event, only 10 purchased tickets. If any individual attending the event is sued for crashing the gate or, even more clearly, is prosecuted for such gatecrashing, a finding against him or her merely on the strength of the statistical evidence seems inappropriate. On the other hand, conviction based on a probabilistically similar piece of direct, individualized evidence, such as a videotape, seems perfectly fine.

The puzzle concerning the differential treatment of probabilistically equivalent statistical and individualized evidence, exemplified by these hypotheticals, surfaces in many different scenarios. This puzzle can appear in civil or criminal trials, arise with different levels of probability, relate to past or future events, and so on. (11)

Legal doctrine has failed to systematically resolve questions regarding the use of statistical evidence in court. In the first half of the twentieth century, when statistical evidence first began to appear in court, many judges responded antagonistically, deeming it inadmissible and devoid of any probative value. (12) To this day, courts continue to exhibit a general preference for individualized evidence and to reject base-rate evidence despite its potential to promote accuracy in legal factfinding. (13)

The doctrinal picture, however, is not clear-cut. First, the legal doctrine regarding the use of statistical evidence for imposing liability is incoherent, and there is conflicting case law on the matter. (14) Although statistical evidence is often considered irrelevant and thus inadmissible in court, (15) courts sometimes treat it as admissible when it is presented as supplementary evidence. (16) In yet other instances, statistical evidence has even been treated as sufficient for a finding of liability. (17) For example, in contrast to the ruling in Smith, the appellate court in Kaminsky v. Hertz Corp. (18) ruled that the plaintiff had established a rebuttable presumption of ownership when the primary evidence brought forth against the defendant, Hertz, was unchallenged testimony that the yellow truck that had caused the accident at issue in the case bore a Hertz logo and that Hertz owned ninety percent of such trucks. (19)

Second, not only are there seemingly random inconsistencies in the court rulings on statistical evidence in the general class of cases, there are also exceptions to the general approach of inadmissibility in certain categories of cases. Such is the case, for example, with DNA profiling, which is explicitly statistical in nature. The standard DNA profile is extracted from only a small portion of the donor's entire DNA. So even under the assumption that a unique set of DNA characterizes each and every individual, two or more people can still share an identical DNA profile. (20) Despite its statistical nature, DNA evidence is increasingly relied on by the courts. (21) To date, most American courts admit DNA evidence despite its apparent similarities to other, inadmissible types of statistical evidence. (22) In light of the considerable inconsistency in the case law regarding the admissibility or sufficiency of statistical evidence, the extrapolation of the legal doctrine on this matter has been deemed "part science and part art." (23)

In addition to the inconsistencies in the legal doctrine, the legal scholarship is also fraught with discrepancies on this topic. The decision in Smith triggered a heated, ongoing debate in legal academia (24) around (1) whether judicial fact-finding ought to be grounded on standard probability logic and (2) whether statistical evidence regarding the base rate for liability is sufficient ground for a ruling in criminal or civil trials. (25) Koehler and Shaviro, for instance, question courts' refusal to ground verdicts in favor of plaintiffs or prosecutors on statistical evidence. (26) Shaviro claims that the objective of verdict accuracy requires that courts hold in favor of the party whose case seems more likely, (27) and that the only relevant misgivings about imposing liability are doubts related to risk of error, not those related to overtness. (28) But the evidence law scholars who advocate the use of statistical evidence are outnumbered by other legal academics who tend to strongly oppose statistical evidence and object to its submission in trial for reasons of probative deficiency, reasons of moral deficiency, or other policy reasons. (29)

The inconsistent treatment of statistical evidence in both legal doctrine and the evidence law literature has created a need for an overarching theory. The objective of this Article is to provide such a theoretical framework and to dispel some of the incoherence associated with the treatment of statistical evidence in trial.

The theoretical framework we offer in this Article views statistical evidence through the prism of epistemology: it connects statistical evidence to a broader epistemic discussion of similar phenomena, and it uses this epistemic discourse to highlight Sensitivity--the requirement that a belief be counterfactually sensitive to the truth--as a way of epistemically explaining the legal suspicion of statistical evidence.

The theory also claims that the epistemic distinction cannot satisfactorily vindicate the reluctance to rely on statistical evidence. Knowledge, Sensitivity, and epistemology--we claim--carry little, if any, legal value. Instead of the epistemic story, we therefore offer an incentive-based story, vindicating the suspicion toward statistical evidence. However, as we show in this Article, the epistemic story and the incentive-based story are closely knit and interestingly related in light of their similar structures and ramifications in the legal arena. Using these theoretical foundations, we expose the intuitions underlying the prevailing differential treatment of statistical evidence in the doctrine and explain why some types of statistical evidence are regarded by the courts as admissible and sufficient for substantiating liability, while others are not. In addition, the Article highlights the prescriptive contribution of our theoretical framework by providing criteria for legal reform and revising the treatment of statistical evidence in certain contexts. (30)

The Article proceeds as follows. In Part I, we review some of the existing theoretical endeavors to explain the distinction between statistical evidence and individualized evidence and point out their shortcomings. Part II presents an alternative theoretical framing of the statistical evidence debate. Part III then applies this theoretical framework to the legal sphere, showing its capacity to solve the existing doctrinal puzzles and guide legal reform. Part IV concludes the discussion.


A good way to appreciate the depth of a problem is to explore the attempts that have been made to tackle it. The literature on statistical evidence is extensive, and it contains various attempts to justify the distinction between statistical evidence and individualized evidence. (31) We will begin by mapping out the most influential suggestions in the literature and highlighting their shortcomings. This will both underscore the gravity of the problem and enable an appreciation of the distinctive features (and advantages) of the account that we propose. (32)

A. Exogenous-Factor Claims

Richard Posner claims that a resort to statistical evidence is in itself proof that no other evidence could be found, which in turn indicates the weakness of the plaintiff's (or the prosecution's) case. (33) If this is, indeed, the case, statistical evidence should be accorded less weight simply because it tends to be submitted in circumstances in which the case of the party presenting the evidence is weaker.

It may be that plaintiffs and prosecutors seek to include statistical evidence more often in weak cases, though much empirical work would be necessary to substantiate the argument. But regardless of whether such a correlation exists, Posner's argument fails to explain why the intuitive distinction between the two types of evidence persists even when all else, including general case strength, is held equal. So explanations of this kind will not suffice.

B. About-Relation Claims

Another claim stated in the literature is that there is an important distinction to be made between evidence that is genuinely about the relevant defendant and merely statistical evidence, which is considered to be unrelated to the defendant's matter. (34) Following this line of reasoning, in the Blue Bus hypothetical, the eyewitness testimony is about the Blue Bus Company, whereas the market share evidence is not; the latter is in no way relevant to determining what happened in the specific case. This about-relation claim does not help. (35) In the context of evidence, the only "about" that is relevant is the epistemic "about," the "about" of indication. And with both the individual and the statistical pieces of evidence, the relevant evidence indicates that the bus was blue. In this sense, the statistical evidence, too, is "about" the Blue Bus Company. Now, there may be nothing objectionable about using such about-talk to capture the intuitive distinction between statistical and individualized evidence. But doing so without giving considerably more details regarding the about-relation amounts not to an explanation or a vindication of the distinction but merely to its renaming.

C. The Causal-Connection Argument

Judith Jarvis Thomson suggests that the difference between statistical and individualized evidence should be understood causally. (36) Individualized evidence, she claims, is causally linked in an appropriate way to the thing for which it is taken as evidence. (37) In the Blue Bus case, it is the fact that the bus that inflicted the harm was blue that resulted in the eyewitness testimony, and (so the argument goes) that fact brought about the testimony in an appropriate way. In the case of statistical evidence, however, no similar appropriate causal link is present. That the relevant bus was blue in no way, apparently, caused the market share evidence. Thomson holds such causal links with evidence to be a necessary condition for knowledge. (38) She also holds that they are necessary for justifiable legal factfinding, (39) in part because she believes that knowledge is a necessary condition for justifiable legal factfinding (at least in criminal cases).

Yet the causal mechanism does not capture the legal distinction between statistical evidence and individualized evidence. For instance, courts may sometimes need to accept evidence (expert witness testimony, for example) regarding certain mathematical truths. It is very hard to see how the causal requirement can be met here, given that mathematical truths are, arguably, causally inert. Also, causal links, even appropriate ones, can be notoriously complicated. Cases can easily be constructed--cases with multiple causes, independent causal chains, different facts that suffice causally only together, different facts each of which suffices causally alone, etc.--where it is not clear what follows from a causal theory and where, to the extent that it is clear, the implications are intuitively unacceptable. For instance, in some versions of the gatecrasher case, the fact (if it is a fact) that the relevant person crashed the gate is partly causally relevant to the precise nature of the statistical evidence. (Had he purchased a ticket, the statistics would have been slightly different.) Still, if the only relevant evidence against him is the statistical evidence, presumably we do not know that he crashed the gate. In order to do justice to the causal theory's underlying intuition, some further restrictions on the nature of the causal relation are needed.

Lastly, consider a certainty case where, for instance, no one at the stadium in the gatecrasher hypothetical purchased a ticket. There, the evidence intuitively is still statistical (100% is also a probability, isn't it?), but it nonetheless seems sufficient for conviction. It is not clear, however, how a causal theory can accommodate this result. After all, there is no appropriate causal link between no one's having purchased a ticket and, say, John's gatecrashing. Thomson addresses the certainty case, but instead of showing how her theory can accommodate its desired result, she "bypass[es]" it. (40) This is objectionably ad hoc. At the very least, a theory that could account for the desired result in the certainty case as a natural particular instance (rather than as an ad hoc exception) would be the better for it.

D. Defendant-Specific Claims

Another argument raised in the literature to defend the difference between the two types of evidence focuses on the specific defendant. The claim made is that the defendant ought not be punished for being a member of a reference class. (41) True, there is indeed something problematic about convicting a defendant for gatecrashing based purely on the percentage of gatecrashers among those at the stadium; this, after all, is just a repetition of the intuitive suspicion toward statistical evidence. But it is highly misleading to say that in such a case the defendant will have been convicted for his membership in the relevant reference class. If we end up punishing the defendant, it will be for crashing the stadium gates. But since we do not have omniscient knowledge of the facts, we must determine, by relying on evidence, whether the defendant did in fact gatecrash. In making this determination, the statistical evidence seems relevant--and, if it is not, the fact that it is not still remains to be shown. This point is underscored by the fact that there is something statistical about individualized evidence as well. (42) Indeed, it is precisely in this context that it becomes tempting to insist--as some have (43)--that in actuality, at bottom, all evidence is statistical evidence. But this presumably does not show that in the eyewitness scenario we are punishing someone for being a member of the class of people who would be recognized by the eyewitness. In cases of both statistical and individualized evidence, we punish for the offense by relying on evidence.

E. Justice-in-the-Particular-Case Claims

Relatedly, it is sometimes maintained that since a court's primary duty is to do justice in the specific case before it, justice in that case cannot be compromised in order to achieve a more efficient result in the overall class of cases or the result that is likely to minimize the global risk of error. (44) This argument, however, is also insufficient to validate the distinction between statistical and individualized evidence, for the court that is instructed to ignore all global effects and to strive solely to do justice in the case at hand still has to resort to evidence--some evidence--to determine what justice in the specific case demands. And thus far, no compelling claim has been made showing statistical evidence to be any less appropriate for this purpose than individualized evidence. (45)

F. Autonomy Considerations

Yet another argument in the literature is that relying on statistical evidence violates the relevant party's autonomy and individuality, and perhaps even her free will and agency. (46) By relying on statistical evidence to convict a gatecrasher, for example, are we not in effect saying that she was bound to crash? Or perhaps we are being disrespectful of her full autonomous individuality, treating her as simply a member of a statistical group and not as a genuine person. If so, are we not, by relying on statistical evidence, in some sense degrading her? Is this not cause to reject such evidence?

This line of argument can also be rejected. While there is nothing wrong with excluding degrading evidence, even when it is acknowledged as genuinely probative, this reasoning cannot justify the distinction between individual and statistical evidence. This is firstly because it does not plausibly generalize to all the relevant cases and cannot explain, for instance, cases like the Blue Bus hypothetical. The Blue Bus Company, after all, does not possess dignity in the same sense real persons do, and so its treatment is not restricted by autonomy considerations in the same way the treatment of real people is. Secondly, and more importantly, this claim confuses epistemology and metaphysics: Statistical evidence is relevant only as evidence. By using a statistic to give reason to believe that the defendant gatecrashed, we are not expressing any belief that he or she has always been bound to do so. Nor are we implying that he or she is anything less than a fully autonomous individual. We are just taking one thing as an indicator of another.

G. Social-Acceptability Considerations

Another attempt at rationalizing the distinction between the two types of evidence is Charles Nesson's well-known claim that verdicts based on statistical evidence are socially unacceptable. (47) Nesson maintains that the court's legitimacy is contingent on the public's perception of the verdict as a statement about the actual event. (48) Statistical evidence transforms the message conveyed by the court from one of certainty to one of risk calculation. In doing so, it expressly states the risk of error underlying the judicial verdict and may thereby weaken the system's legitimacy in the eyes of the public. (49)

This claim can also be rejected. First, its empirical basis is unpersuasive, as it is nearly impossible to delineate the boundaries of what would be acceptable to the public. Second, even assuming the empirical problem away, there is room to question whether public trust is even a goal that ought to be attained. True, it is arguably important that the legal system enjoy some public confidence, though questions may be raised as to the soundness of this as an intrinsic aim, independent of whether the legal system merits public trust. We are even willing to assume for the sake of argument (because whether this is so is very unclear) that securing public trust can sometimes justify catering to the prejudices of the masses. Still, if there is no other way to justify the traditional skepticism toward statistical evidence, then this feature of public opinion is indeed a prejudice, which renders the call to accommodate it suspicious. Furthermore, for our purposes here, we can simply assume away the problem with the premise that the public is going to form an accurate opinion about statistical evidence. In this (perhaps hypothetical) case, nothing about public opinion and trust can justify an otherwise unjustified approach to statistical evidence. Of course, justified public opinion could supplement any other justification for the traditional approach, but it would be the other justification that provides the primary rationale--not the fact that it is an opinion generally held by the public. Here, too, then, the public opinion argument can be safely dismissed. (50)

H. The Guaranteed-False-Conviction Argument

One final explanation for differentiating between the two types of evidence, which may initially seem plausible but must ultimately be rejected, goes as follows: To return to the gatecrasher hypothetical, were we to prosecute each and every person who exited the stadium and use the statistical evidence to convict, each and every one of them would be guaranteed to be found guilty, including the ten innocent people who had purchased tickets. In nonstatistical cases, although the probability of finding an innocent party guilty might be higher than in each of the gatecrasher trials, there would be no certainty that an innocent person would be convicted. Since a guaranteed wrongful conviction is something we as a society seek to avoid, the conclusion is that only nonstatistical evidence ought to be accepted. But this reasoning cannot justify the full extent of the distinction, nor explain it. The following two points demonstrate why: First, in any criminal legal system, some innocent defendants are virtually guaranteed to be convicted. Against the background of inherent uncertainty, the only way to avoid wrongful convictions is to never convict, thereby abolishing the criminal justice system altogether. The second point can be illustrated with a variant of the gatecrasher hypothetical in which it is possible to indict only one person because, for instance, all the other attendees fled the stadium before the police arrived. Relying on statistical evidence under such circumstances would not guarantee the conviction of an innocent defendant, but the intuitive reluctance to convict on the basis of statistical evidence would still be present.

We should emphasize that this quick critical survey is not intended to be either conclusive or comprehensive in scope. It does not cover all attempts that have been made to vindicate the distinction. (51) Still, we hope it succeeds in giving a sense of the depth of the problem, portraying the theoretical attempts to contend with it, and conveying the need for a new resolution.


A. Justifying Beliefs or Actions?: We Require an Epistemic, Not a Practical, Framework

In this Part, we propose our explanation for the suspicion of statistical evidence. But before delving into this issue, another preliminary issue must be addressed: In comparing statistical and individualized evidence, we must begin with the question of whether we are concerned with justifying our beliefs, our actions, or both. That is, in the context of cases like the Blue Bus Company hypothetical, we may wonder what we should believe about the identity of the harm-causing bus; or we may wonder what we should do in this regard, and how we should proceed. While the two kinds of questions may be interestingly related, the precise nature of the relation between the epistemic questions having to do primarily with the justifications of beliefs and the practical ones having to do primarily with the justifications of actions is neither obvious nor uncontroversial. (53) We will revisit these questions in some detail below.

Looking for an epistemic vindication of the distinction between statistical and individualized evidence amounts to insisting on some positive epistemic status that the belief that the harming bus belonged to the Blue Bus Company has in the eyewitness version of the hypothetical but that is absent in the market share version. Perhaps, for instance, the belief that the bus belonged to the Blue Bus Company is justified on the basis of eyewitness evidence, but not on statistical evidence; or perhaps it's somehow--despite the equal probabilities--more justified in the former than in the latter; or perhaps the belief can amount to knowledge when supported by eyewitness testimony, but not when supported by statistical evidence. There are other possible epistemic distinctions here, and different attempts at an epistemic vindication of the distinction will endorse different epistemic distinctions.

Another very different way to proceed in explaining the distinction between statistical and individualized evidence is not epistemic, but practical. When looking for practical distinctions, we may concede, for the sake of argument, that when it comes to justifying the relevant belief statistical and individualized evidence are exactly alike. Nonetheless, we could still insist that we should proceed differently in practice in these cases. Perhaps, for instance, reliance on statistical evidence and reliance on individualized evidence create different incentive structures, (54) or have different implications in terms of administrative costs. If so, we should treat these pieces of evidence differently for practical purposes, despite their epistemic similarities.

Importantly, then, if we seek epistemic answers, the main distinction between statistical and individualized evidence will concern the justification of the relevant beliefs; differences in the justification of the relevant actions can be expected to fall outside the epistemic distinction, in the practical realm. Perhaps, for instance, we should find against the Blue Bus Company in the eyewitness case and not in the market share case because only in the former, but not in the latter, can our belief that it was that company's bus amount to knowledge.

We can distinguish between epistemic and practical strategies for vindicating the distinction between statistical and individualized evidence. (55) And crucially, we should notice that they differ in scope. Practical justifications are highly sensitive to the relevant practical circumstances, such as the existence of incentive structures or the costs of making a decision on a certain basis, whereas epistemic distinctions seemingly apply wherever there are beliefs. Thus, practical justifications--incentive-based stories, differential administrative costs, and the like--will tend to vindicate the distinction only in legal contexts (and indeed, perhaps only in a subset of those). So if it can be shown that the problem extends beyond the legal context--that a vindication of something similar to the distinction between statistical and individualized evidence also is needed in nonlegal, and indeed in nonpractical, contexts--then the scope of practical vindications seems ill suited for the task. Consequently, what we would need is an epistemic vindication, one that applies in all contexts in which the problem arises.

As we will show below, the problem does seem to be general, and therefore an epistemic solution is needed.

B. Some Epistemology (56): Introducing "Sensitivity"

To demonstrate the general nature of the problem and the solution that the epistemic perspective may offer, what we need is a nonlegal, and indeed nonpractical, case where something like the statistical-individual distinction seems to be doing serious epistemic work. A version of the lottery paradox from the epistemic literature serves this purpose well. (57) Once again, we have two versions.

In the first version, you buy a ticket to a one-in-a-million lottery. You know the probabilities, of course, and perhaps on that basis you believe that your ticket is not a winning ticket. Prior to receiving any indication of the lottery results, do you know that your ticket is not a winning ticket? The almost unanimous answer here would be no. Under these circumstances, you do not possess knowledge that your ticket is not a winning ticket.

In the second version of the lottery paradox, the odds of winning the lottery to which you buy a ticket are somewhat better--one in a thousand. You purchase the ticket and wait for a day. The winning ticket is then declared, and the number made public in the newspaper does not match yours. However, a mistake in the newspaper--while unlikely--is not an impossible scenario. Suppose that with all the information made available to you taken into account (namely, the initial odds of the lottery and the fact that the number in the newspaper is not that of your ticket), the chances of your ticket still being a winning ticket are exactly one in a million. Now, having looked in the newspaper, do you know that your ticket is not a winning ticket? The almost unanimous response here would be yes. Under these circumstances, most people would hold that you do possess knowledge that your ticket is not the winning ticket.
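For readers who wonder how the stipulated figure could arise, an ordinary Bayesian update produces it. The following sketch is our construction: the newspaper's error model, and in particular its error rate, are our assumptions, chosen so that the posterior probability comes out at exactly the one in a million the hypothetical stipulates.

```python
PRIOR = 1 / 1000  # initial odds that yours is the winning ticket

def posterior_win(error_rate: float) -> float:
    """Posterior probability of holding the winning ticket, given that
    the newspaper printed a number other than yours.

    Assumed model (ours, not the Article's): if your ticket won, the
    paper reports some other number only by misprint, with probability
    error_rate; if your ticket lost, the paper almost surely reports a
    number other than yours.
    """
    p_report_given_win = error_rate
    p_report_given_loss = 1.0
    num = p_report_given_win * PRIOR
    return num / (num + p_report_given_loss * (1 - PRIOR))

# An error rate of roughly one in a thousand yields the hypothetical's
# one-in-a-million posterior.
e = 999 / 999_999
print(posterior_win(e))  # ~1e-6
```

The point of the stipulation is thus arithmetically unproblematic: a quite ordinary misprint rate leaves the evidence in the newspaper case exactly as probable as the bare statistic in the first case.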

The two cases were described in such a way that the probability of the ticket being a winning one is constant (one in a million). Yet in the first version, it seems that you do not possess knowledge (that your ticket is not a winning one), while in the second version, you do possess such knowledge. What can possibly explain this difference?

The intuitive difference between the two cases seems to be the following: In the newspaper case, it is no accident that your belief is true. It is true because in forming your belief, you're tracking the newspaper report, which in turn tracks the truth. You form your belief in the way in which a responsible believer would. And for these reasons, your belief is adequately sensitive to the truth. That is, had your ticket been the winning one, in all likelihood you would have believed that it was, and you wouldn't have continued to believe that it wasn't the winning ticket (because you still would have followed the newspaper report, which, in all likelihood, would have correctly reported that your ticket was the winning one). But in the first case, in which you base your belief (that yours is not a winning ticket) on just the statistics, none of this is true. In this case, if your belief ends up being true, it merely happens to be true. It's a fluke. You base your belief on the statistics alone, not on anything--such as the newspaper report--that tracks the truth in the individual case. You don't seem to be forming your belief in a responsible manner. And for these reasons, your belief is not sensitive to the truth: even had your ticket been the winning one, you would have still believed that it wasn't, because you would have based your belief on the one-in-a-million statistic, which would still be in place.

What the lottery hypothetical seems to indicate is that one of the conditions necessary for knowledge (or at least an important epistemic condition) is Sensitivity.

Sensitivity. S's belief that p is sensitive =df Had it not been the case that p, S would (most probably (58)) not have believed that p. (59)

The belief that your ticket is not a winning one satisfies Sensitivity in the newspaper case, but not in the mere statistics case, and this seems to explain (at least in part) their very different epistemic statuses.

Something very similar--both in terms of the underlying intuitions and in terms of the somewhat more precise Sensitivity conditions--can be said about the Blue Bus hypothetical and the gatecrasher hypothetical. (60) When we find against the Blue Bus Company based on the reasonably reliable eyewitness, we ground our finding in the testimony, which in turn tracks the truth (imperfectly, but reasonably reliably). In other words, we show sensitivity to the truth, and accordingly, Sensitivity is satisfied: had the damaging bus not been one of the Blue Bus Company's fleet, we would have most probably not found against the Blue Bus Company (because the eyewitness would be unlikely to testify that it was a Blue Bus bus). If, however, we base a finding against the Blue Bus Company just on the basis of the market share evidence, then whether or not the finding matches the facts seems to be a matter of luck; we do not base our finding on anything that tracks the truth. And accordingly, Sensitivity is not satisfied: even had the bus not been the Blue Bus Company's, we would have still found that it was (because the market share evidence would remain the same under this scenario). The same is true for the gatecrasher case: If we rely merely on the high percentage of gatecrashers among those attending the stadium event in order to convict, we render our conviction insensitive. Even had the accused not crashed the gates, the statistics would have been highly similar, and so we would have still convicted him.

We can now revisit a diagnostic point from the previous Subpart. There we noted that practical, instrumental vindications of the distinction between statistical and individual evidence will tend to be law-specific: they will tend to apply only to the practical circumstances that are relevant to the specific legal arrangement. What the lottery cases teach us is that such solutions cannot be fully adequate. The problem, as these cases demonstrate, is a general one, and it applies more broadly than to just the law. And so, ideally, the solution to look for is similarly general. Put another way, because the very same problem arises in purely epistemic cases, (61) where no action is at stake, the kind of solution to look for is epistemic as well. And this is what Sensitivity amounts to--a purely epistemic, independently motivated way of distinguishing between statistical and individualized evidence.

Before getting back to the law, we want to describe another possible way, drawn from the epistemic literature, of vindicating the distinction between statistical and individualized evidence. This approach starts from the intuitive distinction between mistakes that do and mistakes that do not call for explanation, (62) and so we will refer to it as the explanatory test. While it is close in certain respects to the Sensitivity approach, it is still distinct from it and will prove useful in the application to legal doctrines later in this Article.

Good evidence, we all know, sometimes misleads; what renders it good is not the fact that it never misleads, but rather that it doesn't mislead often. Importantly, not all cases of misleading evidence are alike. In some cases in which a piece of evidence misleads us, there seems to be nothing more to say, except to note that the evidence is usually good and rarely misleading, and that this time we were unlucky. This, it seems, would be the right attitude to have if we rely on the statistical evidence in the Blue Bus Company or the gatecrasher cases and then find out that it misled us. But in other cases, the fact that evidence misled us calls for explanation. This is the case with eyewitness evidence, for instance. If we rely on an eyewitness and then find out that the testimony misled us, this seems to call for explanation: Why is it, we are tempted to ask, that the witness was mistaken on this occasion? The question makes sense and calls for an informative answer (the lighting was poor, the other company's bus looked very similar, the witness had a motive to lie, etc.). And so we have another epistemic way of distinguishing between statistical and individualized evidence. Individualized evidence is the kind of evidence that, when it misleads, calls for explanation. Misleading statistical evidence does not call for a similar explanation. (63)

Of course, for this thought to be fully developed, many more details need to be filled in. In particular, more needs to be said about what does and what does not call for explanation. Furthermore, it would be interesting to pursue the relationship between this explanatory test and Sensitivity. Given some plausible thoughts about the relationship between explanations and counterfactuals (perhaps, for instance, if one thing explains another, then had it not been for the former, the latter wouldn't have happened), some close relation between the two ways of vindicating the distinction may not be too much to hope for. (64) But even without these further details, it is clear that this explanatory distinction--between mistakes that do and mistakes that do not call for explanation--captures something that is both intuitively important and that gets the cases we already mentioned right. When we return to discussion of legal doctrine, it will prove helpful from time to time to utilize this explanatory test as well.

C. From Epistemology to More Practical Concerns: Should the Law Care About Sensitivity (or Knowledge)? (65)

In this Subpart we ask whether the law should care about the epistemic considerations just discussed, and, in particular, about Sensitivity. The answer that will emerge, perhaps surprisingly, is negative: whether a belief or a finding is sensitive should not be a matter of legal concern--unlike whether a belief or a finding is reliable or accurate. Recall the natural thought we mentioned above about the relationship between an epistemic vindication of the distinction between statistical and individualized evidence and its practical implications: we claimed it was reasonable to suppose that the epistemic verdict will be relevant practically as well. But it is now time to question this plausible hypothesis, at least when it comes to the law.

Assume we're right in everything thus far said. Assume, in other words, that the problem in the Blue Bus and gatecrasher cases is similar to the problem in the lottery paradox. Assume also that what is needed is an epistemic and not merely a practical vindication of the distinction that each illustrates. And assume, finally, that the relevant epistemic story involves Sensitivity, or perhaps an analysis of which mistakes call for explanation. We can still ask why any of this matters when it comes to the law. Why should it make a legal difference whether a certain belief is sensitive to the truth or qualifies as knowledge? Why does it matter, from a legal perspective, whether or not some evidence is such that if it were to mislead us, it would call for explanation? To put it bluntly: Why think that the law should care about epistemology at all? In what follows, we're going to essentially concede that it should not. But Sensitivity is going to emerge as practically important nonetheless (as will related epistemic conditions, such as the ones employed by the explanatory test). In Part II.C.1, we start by stating the puzzle more clearly and precisely. In Part II.C.2, we then proceed to present Chris Sanchirico's theory of character evidence. This theory has nothing directly to do with the epistemology and is grounded in instrumental reasoning about incentives. In Part II.C.3, we generalize Sanchirico's account and apply it to statistical evidence. Thus, we will present a practical, incentive-based vindication of the distinction between statistical and individual evidence. But that story, as we will also show, is not independent of the epistemic story we've been telling so far. Interestingly, our epistemic story and Sanchirico's incentive-based story are intimately related; counterfactuals that are particular instances of the general Sensitivity condition and the facts depicted by them play an important part in both stories.

1. The remaining puzzle: why care about knowledge?

Should it make a legal difference, then, whether a finding satisfies Sensitivity?

One way to pump intuitions about which factors should be legally significant is to construct a thought experiment in which you have to choose the legal system under which your children will live. We describe two options, A and B, and make sure that they differ only in one way--the feature whose legal significance we want to explore. Then we can ask whether you should prefer A or B, or indeed, whether you should be willing to give up other important things in order to assure that your children live under legal system A rather than B.

If we apply this test to the accuracy or reliability of judicial decisions, it seems hard to deny that these are things the law should care about. If judicial mistakes are somewhat less common in system A than they are in B, this gives some reason to prefer A over B as the system under which your children will live. (66) There may not be complete consensus as to how important it is that courts not err, which mistakes it is more important to avoid, (67) or whether error avoidance is more or less important than other social goals. (68) But it is universally agreed that courts ought not to make too many "big" mistakes. Whatever the functions of the law, whatever good it helps achieve, its ability to do so depends on factfinding accuracy. (69) Furthermore, parties seem entitled to court procedures that will render mistakes that infringe on their rights sufficiently improbable. (70) But good statistical evidence actually promotes accuracy. Because statistical evidence, in the cases we've been focusing on, is probabilistically good, over the long run, excluding statistical evidence is bound to lead to less accuracy (just like other cases of ignoring genuinely probative evidence). Why should we be willing to pay this price?

A simple thought experiment suggests that we should not. Assume that the epistemic stories from above are along the right lines, so that statistical evidence is epistemically inferior compared to individualized evidence. If so, by excluding statistical evidence and relying exclusively on individual evidence, the law will render its findings more epistemically respectable; perhaps, for instance, more of them will rely on what amounts to knowledge. Still, by excluding statistical evidence, the law will render its findings overall less accurate, as compared to a policy of including all probative evidence. Should the law trade some accuracy or reliability in return for more epistemic respectability? We can apply our test again. Suppose that legal system A is the more epistemically respectable one. Perhaps, for instance, in A (but not in B) courts only base their judgments on what they know, on beliefs that are sensitive to the truth. But suppose that it is system B that is somewhat more accurate. Which system do you choose for your children to live under? Is it more important for you, say, to minimize the risk of your children being wrongly convicted, or to assure that they will not be convicted unless a jury's belief fulfills the epistemic requirement of Sensitivity? Indeed, how much risk of a mistake are you willing to allow in order to assure the epistemic respectability of the legal system? The answer that seems plausible to us is none at all. To be willing to pay a price in accuracy in order to secure some epistemic respectability of the legal system looks to us like an objectionable kind of epistemic fetishism.

The point is quite general, and it applies much more widely than just to Sensitivity. Whatever the relevant epistemic factors--Sensitivity, the explanatory test, etc.--it still seems like epistemic fetishism to be willing to pay a price in accuracy in order to secure these factors. But this is precisely what excluding statistical evidence on account of its epistemic deficiency amounts to.

The point is not that the law--not even evidence law--should "care" only about accuracy. Other considerations (having to do with dignity, security, privacy, the inviolability of certain relationships, or the opportunity costs of the litigation process) may, at times, trump accuracy. (71) This is true in general, (72) and it may very well be true in our context as well. Perhaps, in other words, there are some cases involving statistical evidence in which other considerations trump accuracy. This may be the case with respect to certain profiling cases--where human dignity trumps accuracy. (73) Our point is merely that epistemic considerations alone never seem to justifiably defeat considerations of accuracy when it comes to legal policy.

In this way the story of Sensitivity as an epistemically relevant condition may be thought of not as a vindication of the distinction between statistical and individualized evidence, but as a diagnosis of the common intuitions that are suspicious of statistical evidence and perhaps even the beginnings of a debunking of these intuitions. This story helps to articulate what these intuitions track, which is something like a desire for evidence that can support knowledge. But now that we know that the law of evidence should not care about what these intuitions track, we should perhaps discard them, at least when it comes to the law. The Sensitivity-based epistemic story may render the relevant intuitions understandable but not defensible as a cornerstone of legal policy. A different story is going to have to be told if the distinction between statistical and individualized evidence is to be vindicated. But that story, we will argue, is very closely related to the knowledge story. For in this story, though knowledge has no legal value, it will end up being indirectly relevant after all.

2. Sanchirico on character evidence

In order to articulate the instrumental story we need first to consider Sanchirico's work on character evidence, (74) a type of proof that features similarities to statistical evidence (75) and is subject to equally ambivalent treatment by courts. Character evidence is typically admitted in criminal cases at the sentencing phase (76) but is inadmissible, in most contexts, at the guilt phase. (77) This is in spite of the underlying suspicion that this type of evidence has probative potential to facilitate a more accurate decision at the guilt phase. (78) If character evidence is deemed inadmissible at one phase of the criminal trial, what is the justification for admitting it at a later stage of the same proceeding? Alternatively, why ban evidence of such probative value when deciding on the crucial question of guilt? Sanchirico addresses this question.

His core argument is that the rule prohibiting the use of character evidence for propensity reasons can be convincingly explained and justified only by the broader scheme underlying evidence law--namely, the creation of incentives for proper out-of-court conduct. (79) While character evidence has predictive (and, therefore, probative) value, claims Sanchirico, it has no incentive value: its presence dampens the incentives for previously convicted persons to refrain from the proscribed acts. The reason for this is that at the juncture most relevant for incentives--when an agent is deliberating whether and how to break the law--the relevant character evidence is already a given and can be used to his detriment whether or not he chooses to engage in the misconduct. This leads to a decrease in the marginal cost of engagement in the criminal activity ex ante. Ideally, in order to generate efficient incentives, we would want the actor to know that the likelihood of his being punished strongly depends on whether or not he decides to break the law here and now. The weaker this dependence, the weaker the incentive provided by the law to not engage in this specific criminal behavior. Thus, admitting character evidence at the trial stage would be counterproductive in terms of incentives. (80) The prohibition on character evidence promotes deterrence by avoiding a decrease in the marginal cost of engaging in criminal behavior. (81)

Sanchirico's argument underscores an important purpose of evidence law: evidence law should be construed as being also (perhaps primarily) about supplying good incentives for primary behavior--behavior of agents outside the courtroom and outside the legal process more generally. (82) Of course, Sanchirico's claim need not be construed as asserting that giving the right incentives to primary behavior is the only normative consideration governing the rules on character evidence. But even if other underlying rationales do apply, Sanchirico has succeeded in drawing attention to another kind of consideration, one that it would be foolish for a legal system to ignore.

Sanchirico's article is devoted to character evidence, not statistical evidence. The similarities and distinctions between the two types of evidence will be further pursued below. At this stage of the discussion, however, the relevant point is that his general strategy can be easily applied to statistical evidence as well. Think, for instance, about John, the potential gatecrasher who is now deliberating--weighing the options of purchasing a ticket, gatecrashing, or going home and doing something else altogether. We are assuming, of course, that John has no influence on the behavior of the other people at and near the stadium. This means that he has almost no influence on the relevant statistical evidence--the percentage of those who enter the stadium without a ticket. For all intents and purposes, he should think of it as already a given. If so, though, our willingness to rely on statistical evidence almost entirely eliminates whatever incentive the substantive criminal law can give John not to break the law. For if the statistical evidence is strongly against him--say, because ninety-eight percent of those at the stadium are gatecrashers--John already knows that he will be convicted, regardless of whether he buys a ticket. And if the statistical evidence is not strongly against him, he knows that it will constitute strong exonerating evidence whether or not he is guilty of gatecrashing. Either way, then, he might as well go ahead and gatecrash; whatever he decides will have negligible effect on the likelihood of his being punished.
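The incentive collapse John faces can be put in simple expected-value terms. The probabilities below are our illustrative assumptions, not figures from the hypothetical:

```python
def marginal_deterrent(p_punished_if_offend, p_punished_if_comply):
    """Deterrence turns on the *difference* in punishment probability
    that the agent's own choice makes, not on either value alone."""
    return p_punished_if_offend - p_punished_if_comply

# If courts convict on the bare statistic that 98% of attendees
# gatecrashed, John faces (roughly) the same conviction probability
# whether or not he buys a ticket -- zero marginal deterrence.
statistics_only = marginal_deterrent(0.98, 0.98)

# Individualized evidence (a ticket stub, an eyewitness) makes
# punishment track John's actual choice at the moment of deliberation.
individualized = marginal_deterrent(0.90, 0.05)
```

On these assumed numbers, statistics_only is zero while individualized is about 0.85; only the latter regime gives John any reason, when deliberating, to buy the ticket.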

Sanchirico's analysis can also be employed in the Blue Bus context: if statistical evidence regarding the 70% market share of the Blue Bus Company were admissible at trial, deterrence would be undermined. This is because the Blue Bus Company's expected cost of engaging in negligent behavior is a function of the difference between the probability that liability will be imposed given negligence and the probability that liability will be imposed given engagement in the socially desirable behavior. Admitting the market share statistical evidence would enhance the probability of liability in the latter type of cases. In other words, introducing statistical evidence at trial (ex post) would lower the marginal cost of negligent behavior for the Blue Bus Company, thereby dampening its incentives to take the necessary precautions or to engage in the desirable level of activity (ex ante). At the same time, the Red Bus Company--holding only 30% of the market share--will also be disincentivized to adopt the socially optimal precautions or activity level so as to prevent the occurrence of negligent accidents, because introduction of the statistical evidence will lower the prospects that it will be held liable for such accidents. (83)
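The civil-side version of this argument can be sketched the same way: the expected cost of negligence is the damages, weighted by the liability-probability difference that the company's own conduct makes. All figures below are our illustrative assumptions:

```python
def expected_cost_of_negligence(damages, p_liable_if_negligent,
                                p_liable_if_careful):
    """A company's expected cost of *choosing* negligence over care:
    the damages, weighted by the liability-probability difference
    that the choice itself makes."""
    return damages * (p_liable_if_negligent - p_liable_if_careful)

# With individualized proof (eyewitnesses, maintenance records),
# liability tracks fault, so negligence is expensive at the margin.
individualized = expected_cost_of_negligence(1_000_000, 0.85, 0.05)

# If the 70% market share alone suffices, the Blue Bus Company is
# held liable at roughly the same rate whether or not it was
# negligent, and the marginal cost of negligence collapses to zero.
market_share_only = expected_cost_of_negligence(1_000_000, 0.70, 0.70)
```

On these assumed figures, the individualized regime prices negligence at roughly $800,000 in expectation, while the market-share regime prices it at zero for the Blue Bus Company (and symmetrically understates the Red Bus Company's exposure, weakening its precaution incentives as well).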

3. Solution: the instrumental significance of being sensitive

At this stage, we find ourselves in the following predicament: the scope of the resistance to relying on statistical evidence is much wider than its appearance in the law of evidence, applying even in more purely epistemic settings, where nothing resembling the instrumental considerations relevant to the law is present. An epistemic explanation is thus called for, and we tried to formulate one in terms of Sensitivity. But the Sensitivity-based vindication is not germane to the law, certainly not in a way that could justify tolerating a higher rate of inaccuracy due to the inadmissibility of accurate (though insensitive) statistical evidence. In the legal context, what is needed is an instrumental account, in line with Sanchirico's writing on character evidence. But of course, the instrumental story cannot assist with the lottery paradox or other nonlegal cases where talk of incentives seems out of place. Is there no way out? Furthermore, is it mere coincidence that the epistemic and instrumental considerations align so neatly, at least when it comes to the law?

The answer to these questions is no. Think about incentives as in the case of John, who is deliberating about whether or not to purchase a ticket. He is now thinking in terms of conditionals: "If I gatecrash the stadium, they will punish me. If I don't, they won't." Suppose that John proceeds to gatecrash. Then his conditional "If I don't gatecrash the stadium, they won't punish me" picks out the same fact that can later (perhaps when John is on trial) be captured by the counterfactual "Had he not gatecrashed, we would not have punished him." This counterfactual should sound familiar: it is the relevant instance of Sensitivity. The punishment and the beliefs on which it is based are sensitive if and only if this counterfactual is true. In other words, though the epistemic story is not itself of legal value, and though the instrumental story that is of legal value is not itself epistemologically respectable, both of them nonetheless stem from the same source--Sensitivity-style counterfactuals. Such counterfactuals are necessary both for knowledge (or are in some other closely related way epistemically relevant) and for a reasonably efficient incentive structure. While the epistemic story and the instrumental story do not depend on one another, they are not totally independent of each other either, for both are contingent on Sensitivity and related counterfactuals. (84)

What we end up with is the following: There is a need for an epistemic story, one that will treat lottery cases and legal cases alike. Sensitivity and its epistemic significance do so. There is also a need for a practical, most probably instrumental story, one that will vindicate the legal significance of the distinction between statistical and individualized evidence without resorting to knowledge fetishism. The generalization of Sanchirico's account does so. (85) But the incentive-based account derived from Sanchirico's argument also relies on the truth of relevant counterfactuals, indeed the very same counterfactuals the epistemic account relies on. Thus, Sensitivity is a part of the answer to both the epistemic and the practical questions.

Note, however, that what is relevant for policy purposes is the incentive story rather than the epistemic one. (Otherwise, we really would have a case of knowledge fetishism.) If there are cases, then, where the instrumental payoffs on which the incentive account relies are not in place, or if they are in place but are outweighed by other instrumental considerations, then even if relying on the relevant piece of evidence would violate Sensitivity, we do not see a practical reason not to rely on it. (86) In what follows, we will apply this theoretical structure to the legal doctrine--to demonstrate its capacity for solving some doctrinal puzzles--and offer prescriptions for legal reform.
COPYRIGHT 2015 Stanford Law School

Article Details
Title Annotation: Introduction through II. The Theoretical Framework, p. 557-585
Author: Enoch, David; Fisher, Talia
Publication: Stanford Law Review
Date: Mar 1, 2015
