
The rule of probabilities: a practical approach for applying Bayes' rule to the analysis of DNA evidence.

AN APPRECIATION OF RICHARD CRASWELL
INTRODUCTION
I.   THE ISLAND OF EDEN
II.  BAYES FOR AN ERA OF BIG DATA
III. COMPARATIVE STATICS
     A. Trawling a Larger Database
     B. Tom Versus Tom, Dick, and Harry
IV.  APPLICATION TO PEOPLE V. COLLINS
V.   APPLICATION TO PEOPLE V. PUCKETT
     A. Minimal Priors
     B. A Model for Calculating Priors
     C. Application to Puckett
VI.  EMPIRICIZING THE ALIBI AND PRIOR PROBABILITIES
     A. Large Database Trawls
        1. Empiricizing the alibi probability
        2. Empiricizing the prior probability
     B. Small Database Trawls
VII. ADMISSIBILITY
CONCLUSION


AN APPRECIATION OF RICHARD CRASWELL

It is entirely fitting for an Article on the legal implications of information economics to honor Dick Craswell. (1) Large swaths of Dick's writings are careful workings-out of the ways that imperfect information can impact private behavior and constrain judicial decisionmaking. (2) Dick's analysis of how consumers might mistakenly update their prior beliefs after a corrective advertising order (3) is a close analogy to our claim that unguided juries are likely to mistakenly update in response to incomplete statistical DNA evidence.

INTRODUCTION

With the recent Supreme Court decision allowing the collection of DNA samples from any person arrested and detained for a serious offense, (4) it seems inevitable that the justice system will collect and use large DNA databases. Indeed, DNA databases are already widely used. As of April 2015, the Combined DNA Index System (CODIS) maintained by the Federal Bureau of Investigation (FBI) had more than "283,440 hits [and had assisted] in more than 270,211 investigations." (5) There is concern that as database size increases, so too will the rate of false positives, and thus innocent people will be convicted when their DNA matches evidence left at a crime scene. (6) This concern has led courts to a convoluted and misguided use of multiple lenses to evaluate DNA evidence.

In this Article, we argue that there is a single right answer for how to incorporate the use of DNA evidence. That answer is the application of Bayes' rule, a 250-year-old formula for updating a starting probability estimate for a hypothesis given additional evidence. (7) Applying Bayes' rule, we argue that triers of fact evaluating DNA evidence should be presented with what we call the "source probability": the probability that a defendant whose DNA matches the DNA found at the crime scene was the true source of that evidence. As we discuss below, the source probability is not the same as the chance of a random DNA match and does not equal the probability of guilt; even if the defendant was the source of the forensic DNA, the defendant might not have committed the crime. (8)

Our primary contribution will be to show that the source probability may turn crucially on the values of two variables that have not been introduced (or relied upon by experts) in DNA matching cases: (i) the initial or prior probability that the source of the DNA is included in the database, and (ii) the relevant or adjusted size of the DNA database, a calculation that takes into account the demographic information known about the criminal and the probability that a nonsource in the DNA database would have an alibi.

Experts have shied away from helping jurors form baseline beliefs, which are more formally called prior probabilities, and from then helping them convert those priors into a conclusion. The problem is that, absent priors, it is not clear how to coherently employ the expert information. As we discuss in our analysis of People v. Puckett, (9) an expert might well conclude that certain evidence makes it 100 times more likely that the suspect was at the scene of the crime. But 100 times more likely than what? The starting point or prior for a suspect who is identified from a large database trawl might well be less than 1 in 1000. In that case, 100-to-1 evidence is not persuasive. If the suspect was related to the victim and had motive and opportunity, then 100-to-1 would be much more convincing.

We will argue that there are practical means of estimating the prior probabilities and the relevant database size and that, as a legal matter, these parameters as well as the final source probability are admissible. In particular, changing the focus from a question about the prior probability that the defendant was the source to the prior probability that the "database is guilty"--that is, the probability that someone in the database is the source of the forensic evidence--not only is analytically and empirically more tractable, but also avoids the evidentiary limitations concerning a particular defendant's prior bad acts.

In People v. Johnson, a California Court of Appeal panel, in reviewing different types of DNA statistics, emphasized that '"the database is not on trial. Only the defendant is.' Thus, the question of how probable it is that the defendant, not the database, is the source of the crime scene DNA remains relevant." (10) But to apply Bayes' rule, the probability that the database contains the source of the forensic DNA, assessed prior to any consideration of whether an individual in the database actually matches, becomes a crucial input in determining the (posterior) likelihood that a particular matching defendant is the source of the forensic DNA. Contrary to Johnson, assessing the prior probability that the database includes the source--colloquially, "the probability that the database is guilty"--provides at once a readier means of estimation and a stronger argument for admissibility.

At the end of the day, we will acquit or convict a defendant, not a database. The problem is that it is very hard to directly estimate a starting point or prior probability for the likelihood that a specific defendant committed a crime. For example, what is the chance that some "John Doe" committed a crime before we have any evidence about Mr. Doe? In contrast, it is more coherent to ask the chance that a class of individuals, for example, convicted felons, would include the perpetrator of a crime. (11) For example, if half of rapes are committed by convicted felons, then the starting point would be fifty percent, assuming that the database contains all convicted felons. If jurors are to properly understand the implications of finding a match from a large database trawl, the size and characteristics of that database are relevant information.

Some legal analysts have been dismayed by the ways in which evidence of a DNA match tends to eclipse any role for adversarial engagement--turning litigants into little more than potted plants. (12) But appropriate application of Bayes' rule, far from preempting the factfinding process and the adversarial process, can guide advocates to engage with the important aspects of the evidence that are still likely to be open to contestation. We will show how estimation of both the prior probability and relevant database size can be assessed under alternative assumptions that are appropriately open to literal and figurative cross-examination to assure the robustness of the bottom-line conclusion: the defendant was or was not the true source of the crime scene evidence.

For more than forty years, scholars have been debating the appropriate use of probabilities and Bayesian inference in the courtroom. (13) Among the criticisms leveled at Bayesian reasoning are that jurors will be unable to integrate probabilistic and nonprobabilistic evidence, that the occurrence or nonoccurrence of a past event does not admit of intermediate probabilities, and that Bayesian inference is incompatible with the presumption of innocence and proof beyond a reasonable doubt. (14) Rather than engaging with the question of whether probabilistic evidence can or should elicit valid inferences of defendant guilt, we focus on a practicable way to present powerful evidence of the posterior probability that the defendant was the source of forensic evidence. (15)

In Part I, we discuss a motivating example that illuminates three conflicting approaches to statistical inference. Part II then lays out our notation and model applying Bayes' rule and explains why the two variables that we emphasize need to be accounted for. Part III analyzes the comparative statics of our model--how the source probability is affected by changes in five underlying parameters. Parts IV and V apply our Bayesian model to the cases of People v. Collins (16) and People v. Puckett, respectively. Part VI explains how the underlying parameters in our model can be empirically estimated. Part VII discusses whether our approach is compatible with current rules of evidence.

I. THE ISLAND OF EDEN

Imagine that a singular crime has been committed on the otherwise idyllic island of Eden. (17) This island has a population of 51,295, and no one has come or gone since the crime. Thus, we know with certainty that the criminal is one of these 51,295 individuals. Moreover, given the nature of the crime, it is not possible to rule out anyone on the island as a potential suspect. But there is one clue: the criminal has left behind a trace of DNA at the scene. Based on the distribution of DNA, there is a one-in-a-million chance that a random individual in the Eden population would match the DNA. (18) The people of Eden are upset about this crime, and they agree that each individual will be required to provide a DNA sample to be tested. The elders of Eden are able to collect samples from 51,294 individuals, all but Mr. Baker. Unfortunately, Mr. Baker was killed in a fishing accident. This accident occurred after the crime but prior to the decision to collect DNA. Mr. Baker was given the traditional burial at sea; therefore, his corpse is not available, and there are no personal items from which a DNA sample could be retrieved. There is no evidence to suggest that the tragic accident was in any way related to the crime or the subsequent investigation.

After collecting all of the DNA samples, it turns out that there was one--precisely one--match, a Mr. Fisher. Being trained in probability and statistics, Mr. Fisher was able to provide the police with the following argument and analysis:
      The test is too imprecise to use. What if Mr. Baker is the guilty
   party and thus all 51,294 of the remaining islanders are innocent?
   Because so many tests are done, there is a greater than five
   percent chance (19) that there will be at least one match just by
   dumb luck. In this case, I was the unlucky fellow. But whether it
   was me or someone else, when everyone is innocent the chance is too
   high that someone will match. To state this more precisely, if the
   Eden court employs the test that a person is guilty if there is a
   match, then when Mr. Baker is the guilty party, the court will
   nonetheless convict an innocent person more than five percent of
   the time.


Eden's prosecutor was not convinced. Her reply was as follows:
      I want to be able to sleep at night. The question that would keep
   me up is the thought of convicting an innocent person. It is true
   that if Mr. Baker is the guilty party, then there would be a five
   percent chance of making a mistake. But the chance Mr. Baker is
   guilty (before doing any DNA testing) is only 1 in 51,295. For
   there to be a miscarriage of justice, two things have to go wrong:
   the rare 1-in-51,295 event that Mr. Baker was the culprit and the
   event that (at least) one of the 51,294 living folks is an unlucky
   DNA match. Thus the procedure is fine. Before any DNA evidence was
   collected, the chance that some innocent person would end up being
   a match was below one in a million! (20)


The judge took a third, slightly different view:
      The question we have in front of us is the following: We have a
   specific person accused of a crime. What is the chance that Mr.
   Fisher is guilty? Is it beyond a reasonable doubt or not? That is
   the standard in Eden.

      The probability that Mr. Fisher is the guilty party given that
   his DNA is the only match among the 51,294 samples can be determined
   as follows: At this point, there are only two possible suspects, Mr.
   Fisher and Mr. Baker. All of the other people in the population
   have been ruled out by the lack of a DNA match. Prior to the test,
   both Mr. Baker and Mr. Fisher were equally likely to have been the
   criminal. While we do not have additional information regarding Mr.
   Baker, we do know that Mr. Fisher had a match. The chance of this
   is one hundred percent if he is guilty and one in a million if he
   is innocent. Thus the odds are 1,000,000:1 in favor of his being
   the guilty party. We multiply this by the original odds of 1:1, and
   so the conclusion is that the revised odds are 1,000,000:1.
   Converting this to a probability, the chance that Mr. Fisher is
   guilty is 99.9999%.


The judge could also have reached this result using Bayes' rule. (21)
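The judge's arithmetic can be replicated directly. A minimal sketch in Python, using the figures from the hypothetical (prior odds of 1:1 between Mr. Fisher and Mr. Baker, and a one-in-a-million random match probability):

```python
# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
prior_odds = 1.0  # before testing, Fisher and Baker are equally likely suspects

# P(match | Fisher is the source) = 1; P(match | innocent) = one in a million.
likelihood_ratio = 1.0 / 1e-6

posterior_odds = prior_odds * likelihood_ratio        # 1,000,000 : 1
posterior_prob = posterior_odds / (posterior_odds + 1)

print(f"{posterior_prob:.6%}")  # 99.999900%
```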

The prosecutor's and the judge's calculations look at nearly opposite sides of the same coin. The prosecutor is worried about implementing a rule that could end up convicting an innocent person. But the prosecutor's calculation is made before knowing whether or not a match will be found. In the present case, a match is very likely to be found (since the guilty party is very likely to be someone other than Mr. Baker). The judge is looking at the facts as they are, not as they might have been.

To see the folly of Mr. Fisher's argument, imagine that Mr. Baker was not lost at sea and that his DNA was included with everyone else's. And in this new scenario, Mr. Fisher remains the only match. Now we know with certainty that Mr. Fisher is the guilty party. Every other possible suspect has a perfect alibi: his or her DNA does not match. It is still the case that there is a five percent chance of finding a match by chance if everyone is innocent. But we know that not everyone is innocent: a crime was committed. If we would not reject the procedure in the case where all 51,295 DNA samples are collected, should we decide to abandon ship if only one sample is missing? We can no longer be certain that we have the guilty party in our pool of suspects, but the likelihood that we do is over 99.998%.

As the reader might have suspected, the population size of Eden was not chosen at random. Had the population been above 51,294, a test with a one-in-a-million chance of a false positive would lead to more than a five percent chance that at least one person would match even when everyone is innocent. Thus, once the population or database is above this size, there is little statistical power in the test of seeing a DNA match. If one employs a 95% confidence test, then Mr. Fisher (and classical statistics) argues that one should not engage in a trawl of a database larger than 51,294 (given the one-in-a-million random match probability). (22)
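The population figure can be checked numerically. A short sketch, assuming independent draws with a random match probability of one in a million:

```python
r = 1e-6  # random match probability

def chance_of_false_match(n, r=r):
    """Probability that at least one of n innocent samples matches by chance."""
    return 1 - (1 - r) ** n

# The five percent threshold is crossed between 51,293 and 51,294 samples.
print(chance_of_false_match(51_293) < 0.05)  # True
print(chance_of_false_match(51_294) > 0.05)  # True
```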

Our view is quite different. Whether the trawl is appropriate or not depends not just on the number of people in the database, but also on the chance the database contains the guilty party. Thus, if the entire population is 51,295, then it is entirely appropriate to trawl a database of 51,295 because the guilty party is in the database. Similarly, it is entirely appropriate to trawl a database of 1,000,000 if the entire population is 1,000,000. (23) What matters is the probability that the guilty party is in the database.

Returning to Eden, we do not have certainty, as the database is 51,294 out of 51,295, but it is close enough and would be for many numbers below 51,294 as well. That said, we should be explicit about a key assumption implicit in this statement: there is no special connection between the person who is missing and the crime.

The fact that the person is not in the database is not evidence of that person's innocence or guilt. (24) For example, if a remorseful Mr. Baker were to have committed suicide by drowning, then Mr. Baker's "fishing accident" wouldn't really have been an accident. His premature death would have been an indication that he was the guilty party.

Moving from Eden and back to reality, these three arguments illustrate three potentially different answers in response to three different questions: First, how likely are we to find a match if everyone is innocent? Second, how likely are we to find a single innocent match? And third, how likely is it that a single match is actually innocent? Our view is that courts should answer the third question, which is the judge's approach, although this is something courts have been reluctant to do, since it may require coming to an initial view about the defendant's chance of being guilty prior to hearing the evidence of the case. (25) In DNA cases where the suspect has been identified through other means, the prior can be thought of as the chance the defendant is guilty based on factors such as eyewitness accounts, motive, and opportunity. (26) But in cases where the suspect has been identified via a DNA trawl, the prior may be based on factors unrelated to the case at hand. What matters is not just the size of the database but the probability it contains the source of the forensic DNA. This is a critical ingredient to answer the ultimate question of whether the person in front of the court is guilty or not.

II. BAYES FOR AN ERA OF BIG DATA

Court cases introducing DNA evidence have traditionally focused on three different numbers:

1. The random match probability: the probability that a randomly selected person will be a DNA match.

2. The database match probability: the probability that the database contains at least one match. In practice, courts use the expected number of innocent persons in the database who will be DNA matches. (27)

3. The Bayesian posterior probability: the probability that the defendant is the true source of the DNA found at the crime scene given that the defendant is a DNA match. (28)

As the courts have recognized, these different numbers answer different questions. (29) But our view is that only one question is relevant, namely, what is the Bayesian posterior probability? This question is answered by combining the prior beliefs with the information value of a DNA match in connecting a suspect to a crime. (30) The random match probability and the database probability are both inputs to that equation, but they are not useful on their own.

The fact that random match probability and database match probability are not useful on their own does not make these numbers irrelevant. As essential inputs, they should be examined for their accuracy. If the inputs are inaccurate, then the final result will also be inaccurate. Even when there is some uncertainty about the inputs, we can still reach a conclusion if we can be confident that the inputs are above some level. This Article describes how, in practical terms, to convert the inputs into the number the trier of fact should care about.

The easiest and best way to clear up the confusion relating to how courts should use evidence concerning DNA is to demonstrate the correct way to proceed. In the process, one can see the role played by the supporting statistics.

It will be of enormous help to introduce some notation:
   S will stand for the result that the defendant is the source of DNA
   found at the crime scene. (31) This is simply shorthand. Instead of
   saying "What is the probability that the defendant is the source of
   DNA found at the crime scene?" we say "What is Probability(S)?" or
   even "P(S)."

   ~S will stand for the event that the defendant is not the source of
   the forensic evidence.

   M' represents the number of matches found in the database. In most
   cases considered, there was only one person with a DNA match found.
   But a trawl of a large DNA database with tens of millions of
   samples might well turn up more than a single match. (32) To be as
   general as possible, we will not restrict M' to be one. After M'
   matches have been identified, some of the people found may have an
   airtight alibi. Thus we introduce the related variable M.

   M is the number of matches found where the person does not have an
   airtight alibi.

   a is the probability that a random innocent person in the database
   has an airtight alibi. Examples include the following: the person
   was not born at the time of the crime, was the wrong gender, or was
   incarcerated at the time. The alibi screen can be applied either
   before or after the DNA screening is done. For example,
   the DNA lab could screen all samples and then afterward look to see
   if the match found has an airtight alibi. Alternatively, the
   database could be scrubbed in advance to eliminate all people in
   the set who have an airtight alibi. In practice, we expect that
   some of both approaches will be taken. The dataset might be reduced
   to Caucasian males prior to the screening, and then, postscreening,
   any matches might be checked for birthdates and incarceration
   records that provide an airtight alibi. Fortunately, for
   mathematical purposes, we need not care when the screening is done.
   It makes no difference to the results if the screening is done
   prior to DNA matching, afterward, or a combination of the two.

   r is the "random match probability." (33) It represents the
   accuracy of the DNA match. We measure accuracy by the chance that a
   random innocent person will have a positive match. More formally,
   it is the probability of match conditional on a random individual
   not being the source of the forensic evidence. Thus, lower values
   of r arise with more accurate tests. Experts may argue about the
   value of r in a specific case. In People v. Puckett, where there
   were five and a half loci matches, r was in the vicinity of one in
   1.1 million for U.S. Caucasians. (34) In another case, with
   thirteen loci matches, r could be as low as one in 4.3 quadrillion.
   (35) The specific value of r is adjusted for other factors that
   might be known about the assailant, such as race and gender. Note
   that we assume that if the DNA sample was actually present at the
   crime scene, then the probability of a match is 100%. (36)

   D is the size of the database against which the DNA sample is run.
   If there is just a single suspect whose DNA is run, then D = 1.
   When the DNA sample is compared to a larger database, D will
   represent the size of that database. The expected number of
   innocent matches in the database is rD. (37) In the literature,
   rD is sometimes referred to as the "database match probability."
   (38) In fact, rD is not a probability but is an upper bound on
   the probability of finding one or more matches. (39)

   p is the chance that the source of the forensic evidence is in the
   database. (40) For example, if all we know about the database is
   that it contains a single person picked at random from the planet
   Earth, then p might be one in twelve billion. (41) (The guilty
   person might be dead and thus not one of the living seven billion
   people.) If, on the other hand, we know that the guilty party is a
   Caucasian male who was alive last week and the crime was committed
   in Los Angeles, then this can be used to help refine p quite
   considerably. For example, in such a case, if the DNA database
   consisted entirely of Asian women, then the probability that the
   guilty party is in the database would be zero, p = 0.
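The claim in the definition of D, that rD is an upper bound on (and, when rD is small, a close approximation of) the probability of finding one or more innocent matches, can be illustrated with a quick computation (the parameter values are ours):

```python
r = 1e-6     # random match probability
D = 100_000  # database size

exact = 1 - (1 - r) ** D  # exact probability of at least one innocent match
upper_bound = r * D       # the "database match probability" rD

print(exact <= upper_bound)  # True: rD slightly overstates the true probability
```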


At this point, we are ready to present the main finding. Any probability result is conditional on what we know. We know that a crime occurred, that DNA was found at the crime scene, and that if the source of the DNA is in the database, then there will be at least one match. We want to know the probability that someone in the database is the source of forensic DNA, given that we observe a certain number of matches:

Posterior Source Probability = Probability [S | M matches] (1)

We start with the aggregate probability that someone in the database is the source. This probability will tell us a great deal about the posterior source probability with regard to every individual in the database. Because we are assuming no false negatives, the posterior source probability of all unmatched individuals is zero. (42) This means that if there are no matches in the database, then the updated or posterior database source probability and the updated source probability of every individual will be zero. If there is a single unalibied match, then the posterior database source probability will be entirely focused on that matching individual (and the remaining unmatching individuals in the database will have a zero source probability). Finally, if there are multiple M unalibied matches in the database, then the posterior source probability of these matching individuals will just be the database source probability divided by M.

To derive this posterior database source probability, we deploy Bayes' rule, (43) which tells us that the posterior odds of an event occurring will equal the prior odds (not conditioned on evidence of any matches) of someone in the database being the source multiplied by the relative likelihood ratio of observing M matches:

Posterior Odds = Prior Odds x Relative Likelihood of Observing M Matches (2)

Odds are the ratio of two probabilities, one based on the event being true and the other based on the event being false. If the chance that the true match is in the database is p, the odds are p : (1 - p), which we will write as the ratio:

p/(1-p) (3)

Note that if the odds of a true positive are A : B, then the probability of a true positive is A/(A + B). As expected, p/(p + 1 - p) = p, the prior probability of a database match.

Once we have new information, Bayes' rule provides the updated odds:

P(S | M)/P(~S | M) = [P(S)/P(~S)] x [P(M | S)/P(M | ~S)] (4)

The odds start with our prior, namely, p : (1-p). But then we receive new information--there are M' matches of which M do not have alibis--and this allows us to update our odds.

The odds of observing M unalibied matches is the ratio of the probability this would happen when the source is in the database versus the probability this would happen when the source is not in the database (and the M unalibied matches occur by chance). Using the binomial distribution, (44) we show that the odds of observing M unalibied matches can be expressed in terms of M, D, r, and a (45):

M : rD(1 - a) (5)

This likelihood ratio can be restated simply as the ratio of the actual number of unalibied matches relative to the expected number of unalibied, nonsource matches (46):

M : E[M] (6)

This likelihood ratio indicates how strong the new information is in terms of changing the prior opinion. For example, if the likelihood ratio in equation (6) is 10:1, then it is ten times more likely that the M matches observed are the result of the true match being in the database than all being there by luck. If our initial view was that it was twice as likely that the database did not contain the true match (prior odds are 1:2), then Bayes' rule tells us (via equation (4)) that putting these together means the new odds are 5:1 in favor of the database containing the true match.
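The updating step just described can be written out in a few lines, using the numbers from the example in the text:

```python
prior_odds = 1 / 2          # 1:2 against the database containing the true match
likelihood_ratio = 10 / 1   # matches are ten times more likely if the source is in the database

posterior_odds = prior_odds * likelihood_ratio          # 5.0, i.e., odds of 5:1
posterior_prob = posterior_odds / (posterior_odds + 1)  # 5/6, about 83%

print(posterior_odds)  # 5.0
```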

Bayes' rule says that to derive the updated, posterior odds of the source being in the dataset, all we need to do is simply multiply the prior odds by the likelihood ratio of equation (6). Thus, by using Bayes' rule, the initial odds become the updated odds of:

[p/(1 - p)] x [M/(rD(1 - a))] = [p/(1 - p)] x [M/E[M]]

These posterior odds can be converted into the database source probability. If the odds are A : B, then the associated probability is A/(A + B). In the present case:

Database Source Probability [S | M] = pM/[pM + (1 - p)rD(1 - a)] = p/[p + (1 - p)E[M]/M] (7)

This database source probability is the posterior probability conditional on observing M unalibied matches that the source of the DNA is in the database (i.e., one of the M matching persons). The probability is a function of five underlying variables: M, r, D, a, and p.
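Equation (7) translates directly into code. A minimal sketch (the function name and example parameter values are ours):

```python
def database_source_probability(p, r, D, a, M):
    """Posterior probability that the DNA source is among the M unalibied
    matches in the database -- equation (7).

    p: prior probability that the source is in the database
    r: random match probability
    D: database size
    a: probability that a random innocent person has an airtight alibi
    M: number of unalibied matches observed
    """
    expected_innocent = r * D * (1 - a)  # E[M], expected unalibied innocent matches
    return (p * M) / (p * M + (1 - p) * expected_innocent)

# If p = 1, any unalibied match leaves the posterior at 100%.
print(database_source_probability(p=1.0, r=1e-6, D=50_000, a=0.1, M=1))  # 1.0
```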

A central takeaway of our analysis will be that courts and litigants should focus on the appropriate estimates for each variable. While the number of matches (M) and the random match probability (r) have standard means of estimate, we will argue that the remaining three variables--the size of the database (D), the probability that a matching nonsource will have an alibi (a), and particularly the prior probability that the database includes the source (p)--have not been adequately theorized or empirically established in many court contexts.

The intermediate expression of equation (7) in terms of all five underlying parameters may look intimidating, but the expression on the far right has a more intuitive explanation. The posterior probability that the source was in the database is influenced by the ratio (E[M]/M) of how many innocent, unalibied matches are expected in the database (E[M] = rD(1 - a)) relative to how many unalibied matches are actually observed in the database (M). To the extent that the predicted number of unalibied, innocent matches is small relative to the actual number of matches, we conclude that the match is likely to come from the true source. If one expected three innocent, unalibied matches and only observed one match, then the probability that the source is in the dataset will be lower than if one expected 0.01 innocent, unalibied matches but found one match.

Before looking at the full generality of this formula, let us begin with some simple cases. If p = 1, then equation (7) implies the database source probability is also equal to 1 as long as at least one person in the database is an unalibied match. This makes perfect sense. The case of p = 1 means that the prior probability of the database including the DNA source is 100%, and as long as DNA testing uncovers at least one unalibied match, the posterior source probability remains at 100%. (47)

Another simple case arises when the expected number of unalibied, innocent matches (E[M]) is the same as the number of unalibied matches that are observed (M). In that case, the posterior database source probability remains unchanged by conditioning on the DNA evidence:

Posterior Database Source Probability = Prior Source Probability = p

We started out with a p chance that the source was in the database, and we end up in the very same spot. We expected (given our prior) to find a certain number of innocent matches, and that is precisely the number we have found. If we had found more matches than we had expected, that would have increased the chance our original pool included the DNA source. If we had found fewer matches than expected, that would have decreased the posterior likelihood that our original pool included the DNA source. But if the observed number of matches is just equal to what we had predicted would arise by chance, then it is as if we had received no new information, and so our original (prior) estimate remains unchanged.

In many cases, when there is only one match (M = 1), the database source probability can be thought of as a function of just two variables, the prior probability that the source was in the database (p) and the expected number of innocent, unalibied matches (E[M]). If there is a 50% chance that the DNA source is in the database, and we estimate that the expected number of innocent, unalibied matches is 0.01, then the posterior probability that the source is in the database becomes more than 99%. (48) Intuitively, when Bayesians expect to find only 0.01 unalibied, innocent (nonsource) matches in the dataset, but then find one match, they update their prior belief, increasing the database source likelihood from 50% to 99%.

When the prior probability is (or is deemed as a matter of law to be) a very small fraction, equation (7) suggests that the odds of seeing M matches must be in the millions- or even billions-to-one range to have a good deal of confidence that the true match is in the database. For example, if a 100,000-sample database (49) were a random selection of the U.S. population, then p would be on the order of 0.03% (100,000/300,000,000), and the requisite odds of observing one innocent, unalibied match would need to be at least 300,000 to 1 to produce a posterior database source probability of 99%. And to get odds of 300,000:1 with a database of 100,000, one needs a random match probability of less than one in 300,000 x 100,000, which equals 1 in 30 billion.
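The back-of-the-envelope figures in the preceding paragraph can be reproduced directly. The sketch below uses the article's illustrative assumptions (a 100,000-sample database drawn at random from a population of 300 million); the variable names are ours:

```python
# Reproducing the text's back-of-the-envelope numbers (all figures are
# the article's illustrative assumptions, not empirical estimates).
database_size = 100_000
population = 300_000_000

p = database_size / population          # prior that the source is in the database
print(f"{p:.4%}")                       # about 0.03%

# To reach a 99% posterior, the posterior odds must be 99:1, so the
# likelihood ratio must be roughly 99 * (1 - p) / p -- on the order of
# 300,000 to 1.
required_odds = 99 * (1 - p) / p
print(f"{required_odds:,.0f}")          # roughly 300,000

# With 100,000 innocent profiles each matching with probability r, odds
# of 300,000:1 require r below 1 / (300,000 * 100,000).
r_max = 1 / (300_000 * database_size)
print(r_max)                            # i.e., 1 in 30 billion
```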

Of course, the probability that the source is in the database is different from the probability that a particular individual is the source of the forensic data. Equation (8) provides the final step to convert to the probability that a particular person who matches is the source of the forensic evidence:

Source Probability [S | M] = [1/M] Database Source Probability (8)

Equation (8) captures the simple point that when there are multiple matches, the DNA test evidence by itself cannot distinguish among the M people whose DNA matches the forensic DNA evidence. Thus, the source probability is divided equally between them, 1/M to each. If there is only one match, then the entire database source probability becomes the source probability for that matching individual. Thus, if we conclude there is greater than a 99% posterior probability that the source is in the dataset, and there is only one match, then there is a greater than 99% probability that the matching person was the source because all the other members of the database have been excluded.
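Equation (8)'s equal split can be sketched in a few lines (the function name is ours, for illustration):

```python
def split_source_probability(database_source_probability, m):
    """Equation (8): with m indistinguishable matches, the database
    source probability is split equally among them, 1/m to each."""
    if m == 0:
        return 0.0
    return database_source_probability / m

print(split_source_probability(0.99, 1))  # 0.99: a single match inherits it all
print(split_source_probability(0.99, 2))  # 0.495: two matches split it evenly
```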

III. COMPARATIVE STATICS

This Part explores how the source probability of equation (8) changes as we change the four underlying variables (a, r, D, and p) while holding M constant. (50) We also speculate on how these variables are likely to change over time. To begin, we rewrite the source probability formula, setting M to 1:

Source Probability [S | M = 1] = p/[p + (1 - p)rD(1 - a)] (9)

We use this formula to show how the source probability changes with respect to r, a, and p (51):

1. Increasing the random match probability, r, while holding everything else equal decreases our confidence that the matching individual was the source of the forensic DNA. This makes intuitive sense. The larger the chance that an innocent person will match, the lower the likelihood that a matching person will be the source of the forensic evidence.

2. If the chance of a randomly selected person in the database having an alibi increases, then, ceteris paribus, the chance of the matching person being the DNA source increases as well. As the alibi probability (a) increases, the chance of an unalibied, innocent match decreases and therefore the likelihood of an unalibied match being the source increases.

3. If all that changes is an increase in our prior, p, that the database includes the source of the forensic DNA, then once again it follows directly from the formula that the chance the source of the match is true increases. As intuition would suggest, a higher initial belief that the database includes the forensic source tends to increase our posterior belief that matching unalibied individuals are the true source.
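Each of the three comparative statics above can be confirmed numerically from equation (9). The sketch below uses parameter values of our own choosing, purely for illustration:

```python
def source_probability(p, r, D, a):
    """Equation (9): posterior source probability with a single match (M = 1)."""
    return p / (p + (1 - p) * r * D * (1 - a))

# Illustrative baseline values (ours, not the article's).
base = dict(p=0.5, r=1e-6, D=100_000, a=0.5)

# 1. Raising the random match probability r lowers the source probability.
assert source_probability(**{**base, "r": 2e-6}) < source_probability(**base)

# 2. Raising the alibi probability a raises the source probability.
assert source_probability(**{**base, "a": 0.9}) > source_probability(**base)

# 3. Raising the prior p raises the source probability.
assert source_probability(**{**base, "p": 0.8}) > source_probability(**base)
```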

Technology combined with law is likely to increase the probability that innocents will have alibis. Nonsources increasingly leave technological footprints of their whereabouts at particular times (for example, via iPhones and surveillance cameras). (52) This likely increase in the parameter a will consequently also tend to increase the database source probability.

The implication of a match from a larger database has been controversial. (53) The seemingly simple question is whether matches from a larger database produce a higher or lower source probability. (54) While this question can be easily answered if we apply our earlier assumption that all else is held constant, the problem is that increasing the size of the database also increases the probability that the source is part of the database. There are two opposing effects: a larger database is more likely to contain the guilty party, but it is also more likely to lead to a match of an innocent person. (55)

Which of these two effects is more important depends on whether the new people added to the database were as likely as the original people to be the source of a match. The easy case is one in which everyone in the database has the same prior chance of being the source match, so that p is proportional to D. In that case, we show that the effect of the increase in p is (slightly) bigger than the reduction due to D, and thus we can be (slightly) more confident when a suspect is identified from a larger database.

A. Trawling a Larger Database

The question that often arises with a database trawl is how to adjust for the size of the database. (56) Is it more convincing to find a single match when trawling a database of size 1000 than when trawling a database of size 100,000?

To answer this question, we first assume that the two databases are each composed of individuals who, from a Bayesian perspective, have identical prior likelihoods of being the source of the forensic DNA. In other words, the larger database has the same average quality as the smaller one in terms of finding matches. For example, two equal-sized cities might both have a database of convicted felons, but one city has been collecting data longer than the other. (57)

In this case, a match is more convincing from a database of 100,000 than from a database of 1000, but typically only by a small amount. (58) As it turns out, there are two forces that almost exactly cancel each other out.

On the one hand, when a single match is found in a database of size 1000, it is 100 times more informative than when it is found in a database of size 100,000. That is because the 100,000-size database should have 100 times as many random matches as a database of size 1000. Thus, the odds are 100 times more in favor of guilt when we observe a match from a database of size 1000 than from one of size 100,000.

On the other hand, our prior belief goes almost exactly in the other direction. If we have 100,000 persons in the database, the probability that the true match is in the database is 100 times greater than if we only have 1000 persons.

Mathematically, one can show that the two effects don't perfectly cancel out: the prior probability is proportional to the size of the database, while the odds ratio is inversely proportional to the size of the database. (59)

If everyone in the database has the same prior chance [alpha] of being the source, then p = [alpha]D, and substituting this into equation (9) yields:

Source Probability [S | M = 1]

= [alpha]D/[[alpha]D + (1 - [alpha]D)rD(1 - a)] (10)

= [alpha]/[[alpha] + (1 - [alpha]D)r(1 - a)]

Note that the D terms all factor out, except for the (1 - [alpha]D) in the denominator. As D gets bigger, the denominator falls, and this means the final probability that the match is the source goes up. This proves the result.

However, the effect is very small until [alpha]D is close to 1. For example, if [alpha] = 1/T so that individuals are random draws of the population, and T is the population of California (37 million), then the right-hand side of equation (10) is only reduced by one percent in going from a database of size 1 to 370,000. In contrast, if the database includes all 37 million people, then the right-hand side goes to zero, and so the probability that the database match is the source goes to one. Provided that the database is a small fraction of the population and the database grows with constant "quality," then it is just slightly more convincing to find a single match in a large database. As the database becomes a large fraction of the total population (as was the case in Eden), then a match becomes much more convincing.
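The California example above can be checked with equation (10). In the sketch below, [alpha] = 1/37 million follows the text's assumption of random draws from California's population; the values of r and a are our own placeholders:

```python
def source_probability_equal_priors(alpha, D, r, a):
    """Equation (10): source probability when p = alpha * D, i.e., every
    database member has the same per-person prior alpha."""
    return alpha / (alpha + (1 - alpha * D) * r * (1 - a))

# alpha = 1/37 million per the text; r and a are illustrative placeholders.
alpha = 1 / 37_000_000
r, a = 1e-9, 0.5

small = source_probability_equal_priors(alpha, 1, r, a)
large = source_probability_equal_priors(alpha, 370_000, r, a)
full  = source_probability_equal_priors(alpha, 37_000_000, r, a)

assert small < large < full      # a bigger database is (slightly) more convincing
assert abs(full - 1.0) < 1e-9    # an all-inclusive database yields certainty
print(small, large, full)
```

The gap between `small` and `large` is tiny, confirming that the effect only becomes substantial once [alpha]D approaches 1.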

At the other extreme, consider the case where we add someone to the database who we know to be innocent but the courts do not. In this case, the chance that the source match is in the database is unchanged at p, but the expected number of innocent matches has gone up by r. Now the result of a trawl from a larger database is less convincing:

Source Probability [S | M = 1] = p/[p + (1 - p)r(D + 1)(1 - a)] (11)

An increase in D to D + 1, holding p constant, increases the denominator and reduces the posterior source probability.

Therefore, whether a match from a larger database is more or less convincing depends on whether the increased odds that the larger database contains the true source are enough to offset the increase in the expected number of innocent matches (and thus the reduced odds that any given match is the true source). The problem with increasing the size of the database is that a larger database will often be of lower quality when it comes to potential suspects. For example, adding random individuals to a database of convicted felons is likely to increase D proportionally more than it increases p. Of course, if enough people were added to the database so that it eventually became all-inclusive, then p would converge to 1 and the match would be perfect evidence.
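The known-innocent case of equation (11) is easy to confirm numerically. In the sketch below, the prior p is held fixed while D grows by one; the parameter values are our own placeholders:

```python
def source_probability_fixed_prior(p, r, D, a):
    """Posterior source probability when the prior p is held fixed
    while the database size D varies (the logic behind equation (11))."""
    return p / (p + (1 - p) * r * D * (1 - a))

# Illustrative placeholder values (ours, not the article's).
p, r, D, a = 0.5, 1e-6, 100_000, 0.5

# Adding a known-innocent person (D -> D + 1, p unchanged) enlarges the
# denominator and so lowers the posterior source probability.
before = source_probability_fixed_prior(p, r, D, a)
after = source_probability_fixed_prior(p, r, D + 1, a)
assert after < before
```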

B. Tom Versus Tom, Dick, and Harry

David Balding and Peter Donnelly make an argument that it is always better to observe a single match from a large database than a small one. (60) The intuition they present is as follows: Say that you tell me that Tom is a match. There is some chance that Tom is the true source. But it might also be Dick or Harry. Thus, consider the new result if we expand the database from just Tom to Tom plus Dick plus Harry. If the lack of a DNA match rules out Dick or Harry, then there are fewer other people who might be the true source. In this case, this evidence now points more toward Tom.

This argument is one hundred percent correct, but it doesn't fully answer the question of whether a match from a larger database is more or less convincing than a match from a smaller one.

The Balding and Donnelly result takes the view that the suspect (Tom) who has matched has some prior chance of being the true source and that prior probability does not depend on the size of the database. This seems like a fine assumption when Tom has been identified through other evidence, such as by an eyewitness. But if the only evidence against Tom is the DNA match from a database trawl, then the prior chance Tom is the true source may be influenced by the size of the database. Indeed, in the model for calculating priors that we present in Part V, the prior probability for Tom (and all others in the database) is the chance the database contains the true source divided by the relevant number of suspects in the database. And--this is critical--the chance the database contains the true match does not directly depend on the size of the database. (It depends on the nature of the criminal behavior at issue, which determines the lifetime chance a criminal will enter the database.) Thus, the prior probability for Tom is inversely proportional to the size of the database.

Here is another way of putting the difference. Start with the case where Tom is a felon and his DNA is a match. It is always true that the power of the match goes up if we test all other felons and none of their DNAs are a match. But the power of the match has to be applied to our starting point. What is the chance Tom will be a true match before the results of the test are known? This is Tom's prior. If Tom is related to the victim or has some motive, then the number of other felons might be irrelevant to the prior beliefs. But if we know almost nothing at all about the suspect (as in People v. Puckett, discussed in Part V below), then all relevant felons in the database (those with the right demographics and no alibi) are equally good suspects. But how good they are depends on the database size.

It is possible that a larger database has proportionally more good suspects, as might be the case if the larger database is more complete than the smaller one. Here the prior for any one suspect from a large database could be the same as the prior for any suspect from a smaller one. (61) But there are other situations where the prior beliefs for the database as a whole don't increase with size, and in that case the priors are less favorable when the suspect is a random member of a larger database.

The difference in answers is based on a difference in experiments. If we start with a suspect and a fixed prior belief about the suspect, then our beliefs become more certain to the extent that we can rule out a greater number of other individuals. More nonmatches is evidence that the match is correct. But if we don't start with a specific suspect and just ask how we feel about a match that comes from a large database trawl, then we cannot say that a match from a larger database increases our confidence that the match is truly from the source.

We want our approach to work for large databases as well as for very targeted searches. DNA testing of known suspects, also referred to as "confirmation" testing, presents a trawl of a very small database, often a single suspect. (62) We will have very different priors about the chance that the source is included in a test of five suspects than about the chance that the source is included in a database trawl of 300,000 individuals who are as yet unconnected to the crime. We take up the question of how to implement the source probability equation for testing of a small database (that is, a case in which there is already a suspect whose DNA is being compared to the forensic sample) below in Part VI.B.

IV. APPLICATION TO PEOPLE V. COLLINS

Our analysis of DNA evidence can be usefully compared to the use of eyewitness evidence in the famous People v. Collins case. In 1968, the California Supreme Court overturned a jury verdict in which Malcolm Collins and his wife Janet were convicted of second-degree robbery. (63) The primary evidence against the couple was similar to a DNA match. The robbers were identified as an interracial couple, where the woman was blond with a ponytail, the man had a mustache and beard, and the couple drove a yellow car. Assuming that these features were all independent, the prosecutor's expert found a one-in-12-million chance that a couple would match this description. Many have commented on the implausibility of the independence assumption (64) and the challenge in finding the correct size of the relevant population. (65) The California Supreme Court also properly critiqued the lower court for not distinguishing between the random match probability (i.e., one in 12 million) and the probability of guilt. The Court calculated that the chance of finding two or more matches was 41.8%, which created reasonable doubt of the Collinses' guilt.

We focus on two points not previously considered by the courts or commentators that flow from our analysis. The first point is that even the California Supreme Court did not pose the correct probability question. The court focused on the probability that there would be a second couple that matched given that the police had identified one couple that matched. (66) If there was another such couple, that would indicate that the Collinses might not be guilty. However, it is better to ask the right question directly: What is the probability that this particular couple present in court, which matches the characteristics and has no alibi, is the guilty party? (67)

In the case of a DNA match, we calculate the probability that the DNA-matching defendant is the true source of the forensic evidence. The translation to Collins is whether the characteristics-matching defendants (i.e., the defendants whose characteristics match the eyewitness reports and who have no alibi) were the same couple that was seen at the crime. We calculate that probability below using the court's numbers as well as more reasonable estimates.

The calculation in Collins and any eyewitness match is different from a typical DNA trawl with respect to one very important feature. In DNA trawls, the police evaluate the entire database for matches. In contrast, with Collins, there is no practical opportunity to search the entire population of all couples in Los Angeles to see how many matches are to be found.

The probability of guilt in Collins thus depends on how far the police had gone in their search to find matching couples. If the police had searched the entire population of possible couples and found that the defendants were the only match, then we would know that the couple is guilty. The more the police have searched and only turned up one match, the more likely it is that the Collinses are guilty. The best case for the Collinses is that the police have not actually searched at all to see how many other couples fit the characteristics. We will proceed under this assumption to find a lower bound on the chance that the Collinses are guilty.

If there are T potential couples in the population and all of them are considered to be equally likely to be the guilty party, then our prior odds are 1 : (T - 1). (This is the equal-prior assumption.) The evidence value (likelihood ratio, in our earlier parlance) of a match when the defendant has no alibi is 1 : r(1 - a). Thus, the updated or posterior odds are 1 : r(T - 1)(1 - a). If T is large, this expression can be closely approximated by the odds 1 : rT(1 - a), which implies that the (posterior) probability of guilt is:

1/[1 + rT(1 - a)]

The numbers considered by the court were r = 1/12 million and T = 12 million. Assuming that the probability that an innocent person in the database has an airtight alibi is a = 1/2, we get rT(1 - a) = 1/2. We thus would expect to see 0.5 innocent matches and 1 guilty match, and so the chance of guilt would be 1/1.5 = 67%. The court erred in never calculating this posterior probability that the defendants were the same people seen at the crime scene.

Moreover, the numbers used by the court significantly underestimated the expected number of innocent couples that would match and so overestimated the probability of guilt. We can use demographic information to illustrate the problem. In the mid-1960s, there were roughly 100,000 black males in the relevant age group in greater Los Angeles. (68) This offers a better starting point for the prior--namely, 1/100,000--given that the male robber was black. Suppose that we assume that (i) each black male knows 3 white women well enough to commit a robbery together, (ii) only 1 of 30 white women is blond with a ponytail, (iii) 1/10 of cars are yellow, (iv) 1/10 of black men have a beard and mustache, and (v) 1/2 of the innocent population would not have an airtight alibi. Given these assumptions, the information value of a match falls to r = 3 x 1/30 x 1/10 x 1/10 x 1/2 = 1/2000. Even if these characteristics were all independent, we would still expect to find 100 couples that match the full description, 50 of whom do not have an alibi. Thus, the expected number of innocent couples that match would be closer to 50 than to 0.50. With an equal prior, the chance the Collinses are guilty would be 1/(1 + 100,000/2000) = 1/51 or around 2%.
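Both Collins calculations — the 67% figure from the court's numbers and the roughly 2% figure from the revised estimates — can be reproduced from the guilt formula above. In the sketch below, the function name is ours; the second call folds the text's composite r of 1/2000 into a match probability of 1/1000 together with the 50% alibi factor, which is arithmetically equivalent:

```python
def collins_guilt_probability(r, T, a):
    """Probability of guilt with an equal prior over T couples:
    1 / (1 + r * T * (1 - a)), per the approximation in the text."""
    return 1 / (1 + r * T * (1 - a))

# The court's numbers: r = 1 in 12 million, T = 12 million couples,
# and an assumed 50% chance an innocent couple has an airtight alibi.
court = collins_guilt_probability(1 / 12_000_000, 12_000_000, 0.5)
print(round(court, 2))    # 0.67

# The revised numbers: T = 100,000, with an effective match probability
# of 1/1000 (3 * 1/30 * 1/10 * 1/10) and the same 50% alibi assumption.
revised = collins_guilt_probability(1 / 1000, 100_000, 0.5)
print(round(revised, 3))  # 0.02, i.e., about 1 in 51
```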

Our second point is that an equal prior is not the most reasonable starting point given all the information available to the factfinder. Even prior to the matching information, the Collinses seemed much more likely than an average couple to have committed the purse snatching. They had less than $12 to their names when they were married two weeks before the crime, and yet Malcolm Collins, who was unemployed, was able to pay $35 in traffic fines the day after the robbery (in which somewhere between $35 and $40 was stolen). (69) This shows the importance of using other information in forming the relevant prior or in updating the prior separate from the matching information.

One should start with the right question: What is the probability of guilt if a defendant matches the relevant characteristics and has no alibi? The answer depends on both the information from the match and the prior. The experts can do a far better job in presenting the relevant random match probabilities than was seen in Collins. With those probabilities in hand, calculating the chance of guilt is straightforward. In Collins, the correct calculation would have given the defense a strong argument that the probability of guilt was tiny, around 2%, unless one moves far away from a uniform or equal prior.

At the same time, experts helping jurors figure out an appropriate prior might still leave a jury with the confidence to convict. The Collinses were far from an average couple. The fact that they were dead broke just prior to the robbery and yet had unexplained spending right after the robbery should factor into the equation. Thus, enlightened application of Bayes' rule leaves ample room for advocates to engage on the most relevant issues regarding a defendant's innocence or guilt.
COPYRIGHT 2015 Stanford Law School

Article Details
Title Annotation: Introduction through IV. Application to People v. Collins, p. 1447-1475; Symposium: Festschrift in Honor of Richard Craswell
Author: Ayres, Ian; Nalebuff, Barry
Publication: Stanford Law Review
Date: Jun 1, 2015