More than zero: accounting for error in latent fingerprint identification.

LOUISE: I never would have guessed that he was selling fake insurance.

CANEWELL: That's what the whole idea was ... he didn't want you to guess it. If you could have guessed, then he couldn't have sold nobody no insurance.

--August Wilson, Seven Guitars (1996)

INTRODUCTION

The year 2004 witnessed what was probably the most highly publicized fingerprint error ever exposed: the case of Brandon Mayfield, an Oregon attorney and Muslim convert who was held for two weeks as a material witness in the Madrid bombing of March 11, 2004, a terrorist attack in which 191 people were killed. Mayfield, who claimed not to have left the United States in ten years and did not have a passport, was implicated in this attack almost solely on the basis of a latent fingerprint found on a bag in Madrid containing detonators and explosives in the aftermath of the bombing. Unable to identify the source of the print, the Spanish National Police emailed it to other police agencies. Federal Bureau of Investigation (FBI) Senior Fingerprint Examiner Terry Green identified Mayfield as the source of the latent print. (1) Mayfield's print was in the database because of a 1984 arrest for burglary and because of his military service. The government's affidavit stated that Green "considers the match to be a 100% identification" of Mayfield. (2) Green's identification was "verified" by Supervisory Fingerprint Specialist Michael Wieners, Unit Chief of the Latent Print Unit, and by John T. Massey, a retired FBI fingerprint examiner with over thirty years of experience.

Kenneth Moses, a well-known independent fingerprint examiner widely considered a leader in the profession, subsequently testified in a closed hearing that, although the comparison was "quite difficult," the Madrid print "is the left index finger of Mr. Mayfield." (3) A few weeks later the FBI retracted the identification altogether and issued a rare apology to Mayfield. (4) The Spanish National Police had attributed the latent print to Ouhnane Daoud, an Algerian national living in Spain.

The error occurred at a time when the accuracy of latent print identification was already the subject of intense debate. Because the Mayfield case was the first publicly exposed error committed by FBI latent print examiners, and because the examiners involved were highly qualified, it was particularly sensational.

But the Mayfield case was not the first high-profile fingerprint misattribution to be exposed in 2004. (5) In January, Stephan Cowans was freed after serving six and a half years of a 30- to 45-year sentence for shooting and wounding a police officer. (6) Cowans had been convicted solely on fingerprint and eyewitness evidence, but post-conviction DNA testing showed that Cowans was not the perpetrator. (7) The Boston Police Department then admitted that the fingerprint evidence was erroneous, (8) making Cowans the first person to be convicted by fingerprint evidence and exonerated by DNA evidence. (9) As with the Mayfield case, the Cowans misattribution involved multiple experts, including defense experts. (10)

Latent print examiners have long claimed that fingerprint identification is "infallible." (11) The claim is widely believed by the general public, as evidenced by the publicity generated by the Mayfield and Cowans cases, with newspaper headlines like "Despite Its Reputation, Fingerprint Evidence Isn't Really Infallible." (12) Curiously, the claim even appears to survive exposed cases of error, which would seem to puncture the claim of infallibility. (13) Such cases have been known since as early as 1920 and have not disturbed the myth of infallibility. (14) Today, latent print examiners continue to defend the claim of infallibility, even in the wake of the Mayfield case. (15) For example, Agent Massey commented in a story on the Mayfield case, "I'll preach fingerprints till I die. They're infallible." (16) Another examiner declared, in a discussion of the Mayfield case, "Fingerprints are absolute and infallible." (17)

The question of the "error rate" of forensic fingerprint identification has become a topic of considerable legal debate in recent years. "Error rate" has been enshrined as one of the non-definitive criteria for admissible scientific evidence under the United States Supreme Court's decision in Daubert v. Merrell Dow Pharmaceuticals. (18) In discussing how trial judges should exercise their "gatekeeping" responsibility to ensure that "any and all scientific testimony or evidence admitted is not only relevant, but reliable," (19) the Court noted that "in the case of a particular scientific technique, the court ordinarily should consider the known or potential rate of error." (20) In Kumho Tire v. Carmichael, (21) the Court decided that the "gatekeeping" responsibility extended to non-scientific expert evidence and reiterated the same non-definitive checklist it enumerated in Daubert. (22) Though courts have found that latent print identification is non-scientific expert evidence, Kumho prevents such a determination from becoming a loophole through which latent print identification could evade Daubert's requirement that judges ensure its reliability. Indeed, the Court specifically noted that even in the case of "experience-based" testimony--which, presumably, is what latent print identification is, if it is not science--it is relevant to know the error rate. (23) Although the Supreme Court was careful to note that its proposed checklist was merely illustrative, courts frequently treat it as a de facto litmus test for admissibility. Since criminal defendants began challenging the admissibility of forensic fingerprint evidence in 1999, (24) the error rate of fingerprint evidence has been extensively discussed and litigated.

Curiously, it would appear that the Court's inclusion of error rate in Daubert/Kumho, rather than having the palliative effect of encouraging latent print examiners to measure their error rate, has had the unintended consequence of tempting them to make even less sustainable claims. (25) Thus, in response to a challenge to the admissibility of latent print evidence under Daubert/Kumho, the government and latent print examiners advanced the "breathtaking" (26) claim that the error rate of forensic fingerprint identification is zero. (27)

As with infallibility, latent print examiners defend the claim of a zero error rate even when confronted with known cases of misattribution in actual casework. In a 60 Minutes interview about the Jackson case, Agent Meagher demonstrated an identification to reporter Lesley Stahl:

MEAGHER: The latent print is, in fact, identical with the known exemplar.

STAHL: It's identical?

MEAGHER: Yes.

STAHL: You can tell that?

MEAGHER: Yes.

STAHL: What are the chances that it's still not the right person?

MEAGHER: Zero.

STAHL: Zero.

MEAGHER: It's a positive identification. (28)

How can a process commit errors and yet be considered infallible? How can the "error rate" of any technique, let alone one that has been known to commit errors, be considered zero? In this article, I will argue that the coexistence of these two contradictory notions is not merely a product of simple "doublethink." (29) Rather, I will show that it is the product of a rhetorical strategy to isolate, minimize, and otherwise dismiss all exposed cases of error as "special cases," or "one-offs," (30) and therefore as irrelevant.

After a brief legal and technical background discussion in Part I, Part II of this paper discusses what we do know about the error rate of latent print identification. Part II.A catalogs twenty-two cases of fingerprint misattribution that have been reported in the public record. An analysis of these cases shows that they are most likely only the tip of the proverbial iceberg of actual cases of fingerprint misattribution. Part II.B discusses the results of proficiency testing of latent print examiners. These tests also show a non-zero error rate. In Part III, I discuss what might be called "the rhetoric of error." This Part analyzes rhetorical efforts by fingerprint advocates and courts to minimize, dismiss, and explain away the evidence of error revealed in Part II. Fingerprint practitioners seek to create an error-free aura around fingerprint identification that has the potential to dangerously mislead finders of fact. At the end of Part III, I discuss some more defensible ways of conceptualizing fingerprint error. Far from being "one-offs," I suggest that the cases of error are more likely the product of routine practice. Whatever special circumstances exist in the misattribution cases are more likely to account for the exposure of the misattribution than the misattribution itself. I conclude by arguing that it is necessary to confront, analyze, and understand error if we ever hope to reduce it.

I. BACKGROUND

A. LATENT PRINT IDENTIFICATION

Latent print identification is a process of source attribution. (31) Latent print examiners compare "latent" prints taken from crime scenes to prints of known origin. Although prints taken from suspects using ink or scanners are typically of good quality--and can be re-taken if they are not--latent prints are typically partial, smudged, or otherwise distorted. It is the poor quality of many latent prints that makes latent print identification problematic. The most valuable aspect of the latent print testimony in criminal justice proceedings is the attribution of the latent print to the defendant. Although latent print testimony is often phrased as claiming that the latent print and the known print of the defendant are "identical," this is not strictly true; all fingerprint impressions, including those taken from the same finger, are in some way unique. (32) The true import of latent print testimony is not that the unknown print and the known print are "identical" but rather that they derive from a common source. (33) Since the source of the known print is known to be the defendant (because someone in the chain of custody took them from the defendant), the unknown print is then attributed to the defendant. The defendant is said to be the source of the latent print.

1. Conclusions

In the above respects, latent print identification is similar to many other areas of forensic analysis. But latent print evidence differs crucially from most other types of forensic evidence in the manner in which source attributions are phrased. In forensic DNA analysis, for example, the analyst typically testifies that the defendant may be the source of a DNA sample. This statement is then accompanied by a random match probability which indicates the frequency with which randomly chosen individuals with the same racial or ethnic background would also be consistent with the unknown DNA sample. When latent print examiners make a "match," however, they always testify that the defendant is the source of the latent print to the exclusion of all other possible sources in the universe. Latent print examiners are, in fact, ethically bound to only testify to source attributions; they are banned from offering probabilistic opinions in court. (34) Latent print examiners are the only forensic expert witnesses who are so restricted. Latent print examiners are permitted by the (largely unenforceable) rules of their profession to offer only three possible conclusions (35) from any comparison of a known and unknown set of prints:

1. Individualization

2. Inconclusive

3. Exclusion (36)

Many of the press reports about the Mayfield case reported with apparent surprise the FBI's characterization of an attribution that would turn out to be erroneous as "a 100 percent positive identification." (37) The reporters were apparently unaware that all latent print attributions are supposed to be characterized with such an inflated degree of certainty. (38)

2. Individualization

Latent print examiners reach conclusions of "individualization" (39) by finding corresponding "ridge characteristics" (40) between the unknown and known prints. Any "unexplainable dissimilarity" results in a conclusion of exclusion. (41) Insufficient correspondences result in a conclusion of "inconclusive." (42) "Sufficient" correspondences result in a conclusion of "individualization," or source attribution. (43) A crucial question is, of course, where the boundary lies between insufficient and sufficient correspondences. The latent print community has been unable to answer this question with any precision or consistency other than to posit a circular answer, which simply rests upon the analyst's subjective measure of "sufficiency," such as the following: "Sufficiency is the examiner's determination that adequate unique details of the friction skin source area are revealed in the impression." (44)
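To make the structure of this decision rule concrete, the sketch below renders it in code. It is an illustration only, not any agency's actual procedure: the function name is invented, and the numeric threshold is hypothetical, since in actual practice "sufficiency" is precisely the examiner's subjective judgment rather than a fixed number.

```python
# A minimal sketch of the three-conclusion decision logic described above.
# The function name and numeric threshold are hypothetical: in practice,
# "sufficiency" is the examiner's subjective judgment, not a fixed number.

def compare_prints(corresponding_points: int,
                   unexplainable_dissimilarity: bool,
                   sufficiency_threshold: int = 12) -> str:
    """Return one of the three conclusions an examiner may report."""
    if unexplainable_dissimilarity:
        return "exclusion"          # any unexplainable dissimilarity excludes
    if corresponding_points < sufficiency_threshold:
        return "inconclusive"       # insufficient corresponding detail
    return "individualization"      # categorical source attribution

print(compare_prints(15, False))    # individualization
print(compare_prints(8, False))     # inconclusive
print(compare_prints(15, True))     # exclusion
```

The circularity noted above lives in the default value: nothing in the profession's rules fixes `sufficiency_threshold`, so different examiners effectively apply different values.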

3. Methodology

Recently, latent print examiners have taken to describing their "methodology" as "ACE-V" (Analyze, Compare, Evaluate-Verify). (45) For our purposes, the important thing to note is the "verification" component in which a second examiner "ratifies" the conclusions of the initial examiner. The latent print community has until recently resisted any pressure to conduct "blind" verification--that is, to prevent the "verifier" from knowing what conclusion the initial examiner has reached, or even whether the initial examiner has reached a conclusion. (46) An FBI report on the Mayfield case, however, has now endorsed blind verification in "designated" cases. (47)

4. Qualifications

There are no qualifications necessary to render an individual a "latent print expert"; whether to let an individual testify as such is entirely up to the court. (48) There is, however, a certification program, administered by a professional organization, the International Association for Identification (IAI). (49) Upon creating the certification program, the IAI specifically stated that lack of certification should not be construed as rendering a purported expert unqualified to testify as an expert witness. (50)

B. FINGERPRINT ERROR RATES

Although I will criticize below the parsing of error into different "types," there are some legitimate distinctions to be made when talking about forensic error. First is the distinction between false positives and false negatives. These are also sometimes called Type I and Type II errors. (This distinction, unlike some of those discussed below, is well recognized in numerous fields of science.) In the context of fingerprint identification, a false positive would consist of reporting that an individual is the source of an impression when in fact she is not. A false negative would consist of reporting that an individual is not the source of an impression when in fact she is. These errors can be of differing importance depending on the context. For example, in criminal law the classic formulation of this asymmetry is "Blackstone's maxim," which states that it is better to let ten guilty people go free than to falsely convict one innocent person. (51) This would suggest that false positives are ten times more catastrophic than false negatives. (52)
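These definitions can be stated as a small worked example. The sketch below is illustrative only: the counts are invented, and the computation is the generic textbook one rather than anything specific to fingerprint practice.

```python
# Illustration only: computing the two error rates from comparisons whose
# ground truth is known. The counts below are invented for the example.

def error_rates(tp: int, fp: int, tn: int, fn: int):
    """False positive rate: share of true non-sources wrongly identified
    (Type I). False negative rate: share of true sources wrongly excluded
    (Type II)."""
    return fp / (fp + tn), fn / (fn + tp)

fpr, fnr = error_rates(tp=95, fp=2, tn=98, fn=5)
print(f"false positive rate: {fpr:.3f}")  # 0.020
print(f"false negative rate: {fnr:.3f}")  # 0.050
```

On this reading, Blackstone's maxim amounts to weighting the first rate roughly ten times more heavily than the second when judging a system's performance.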

In addition, there are some distinctions that may be made among false positives based on the stage of the criminal justice process at which the error is detected. Presumably, some false positives are detected and corrected within the crime laboratory itself. An analyst may take a second look at the evidence and change her mind. Alternatively, another analyst may disagree with the initial analyst's conclusion. In current fingerprint parlance, this process is known as "verification." The dispute would be resolved within the laboratory and reported as "inconclusive" or an exclusion. No one outside the laboratory would know that there had been an "error." We know very little about these types of errors. They are unlikely to generate media attention, officially published reports, or legal records, our primary sources for learning about fingerprint errors. In all likelihood the disagreement is resolved quietly within the laboratory.

There is legitimate reason to distinguish between errors that are detected in the laboratory and errors that are not detected until after a laboratory has in some way input its conclusions into the criminal justice system, leading to arrest, indictment, trial, or conviction. In the former case, it may reasonably be argued that whatever safeguards the laboratory has in place (such as "verification") functioned correctly, detected the error, and prevented false information from being offered into evidence. It might reasonably be said, "the system worked." In the latter case, whether the error is ultimately detected before conviction or after conviction, the error is nonetheless far more serious. Once the laboratory inputs a conclusion into the criminal justice system, it has effectively terminated whatever processes it has in place to detect errors. At this point, responsibility for exposure of the error rests with other actors, such as the prosecutor, judge, jury, or, most important, the defense expert, if there is one.

Thus, it would be an oversimplification to speak of "an error rate" of forensic fingerprint identification. Are we interested in the rate of false positives, false negatives, or the sum of the two? How expansive is our definition of an "error"? Are we interested in errors exposed within the laboratory, errors exposed after they leave the laboratory, or are we interested in estimating the prevalence of all actual errors, whether or not they are exposed? In this article, my focus will be on false positives that leave the laboratory. I will not discuss false negatives or errors that are detected within the laboratory. Estimating the number of errors that are detected within laboratories would be a nearly impossible task for a laboratory outsider. The latent print community itself could, if it wanted, produce data about the occurrence of errors within the laboratory. So, for example, the two false positives committed by Agent Massey back in the 1970s that were detected within the FBI laboratory are not included in my data set. (53) I omit discussing false negatives because no one disputes that false negatives occur. The rate and occurrence of false positives, however, are more controversial.

II. WHAT DO WE KNOW ABOUT ERROR RATES IN LATENT PRINT IDENTIFICATION?

There are two basic ways of going about calculating an error rate, neither of which is entirely satisfactory. The ideal method would be to divide the number of actual cases of error by the number of cases in which fingerprint evidence was used, thus yielding an error rate (or rates--false positives and false negatives). This approach has a fundamental problem: we do not know the "ground truth." In casework we do not know whether the suspect is or is not, in fact, the source of the unknown print. Therefore, any error rate calculated from casework is inherently untrustworthy. A second approach would be to run a simulation. In a simulation, the researcher can control the materials that are submitted to the process or technique and thus know the ground truth. The drawback to simulations is that they usually differ in significant ways from the real-world practice to which their error rates will be extrapolated. Therefore, the extrapolation of an error rate from simulation to the real world can always potentially be contested. Indeed, in scientific controversies, the extrapolation from a simulation to the "real world" will almost inevitably be contested. (54) Therefore, we should expect that any declared error rate for latent print identification will be contested by one party or the other (or perhaps both). An accepted error rate will not simply emerge from some academic study. That is, however, no reason not to try to assess the likelihood of error.
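The contrast between the two approaches can be made concrete with a schematic simulation. Everything numeric below is an invented placeholder, not an estimate of any real rate; the point is only that a simulation knows its own ground truth, which casework never does.

```python
# Schematic contrast of the two approaches described above. The built-in
# error probabilities are invented placeholders, not real estimates.
import random

random.seed(1)

def simulate(n_trials: int = 10_000,
             p_false_pos: float = 0.01,
             p_false_neg: float = 0.05):
    """Ground truth is known by construction, so errors can be tallied."""
    fp = fn = n_same = 0
    for _ in range(n_trials):
        same_source = random.random() < 0.5   # half the pairs truly match
        n_same += same_source
        if same_source:
            fn += random.random() < p_false_neg
        else:
            fp += random.random() < p_false_pos
    return fp / (n_trials - n_same), fn / n_same

print(simulate())  # roughly (0.01, 0.05): the built-in rates are recovered
```

In casework, by contrast, `same_source` is precisely the unknown, so no analogous tally is possible; and whether a simulation's materials resemble real casework closely enough is exactly the point on which any extrapolation will be contested.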

A. FINGERPRINT MISATTRIBUTIONS

1. Case selection

In this section, I use archival analysis of reported cases of misattribution to attempt to estimate the error rate of latent print identification. Any effort to calculate the false positive rate of forensic fingerprint identification from known cases of misattribution is hampered by the fact that there is no central repository of knowledge about such cases. No mechanism for recording, compiling, reviewing, or analyzing cases of fingerprint misattribution exists. Some latent print examiners and legal scholars have compiled misattribution cases on various web sites. (55) I have compiled below those cases known to me through my historical fingerprint research. Overwhelmingly, these are cases that were reported either in the media or in published court decisions. Since I have occasionally seen reference in the fingerprint literature to cases of misattribution that were not publicized, (56) I believe that the number of known cases of misidentification listed here is probably significantly smaller than the number known to the "collective mind" of the fingerprint profession.

A second problem concerns case selection. Case selection for any such exercise raises difficult methodological problems. Criteria for case selection that are too liberal may overstate the potential for latent print error, whereas criteria that are too conservative may understate it. Moreover, how do we determine that a latent print attribution was erroneous? Even in cases that are widely treated within the fingerprint community itself as clear errors, there is rarely definitive scientific proof that the attribution was erroneous. (57) Only in two of the cases listed below, Hatfield, infra Part II.A.3.o, and Cowans, infra Part II.A.3.q, is there definitive proof that the attribution was erroneous. In Hatfield, a forensic technician used fingerprint impressions to identify a corpse. (58) The individual identified as the corpse turned out to be alive. (59) Cowans was excluded as the source of DNA evidence which was taken from the same object as the latent print. (60) In most of the other cases, the "evidence" that the match was erroneous consists of the consensus of the fingerprint community itself. This creates difficulty because it demands using the very technique that is being questioned to establish the ground truth. (61)

In most cases, there is no way of proving that the attribution was erroneous without assuming the very infallibility of latent print examiners' consensus judgments that these cases undermine. For example, McKie, one of the best-known cases of "error" (infra Part II.A.3.l), is generally viewed within the latent print community as an erroneous attribution. (62) But, in fact, we have no way of knowing that Shirley McKie did not make the print in question, other than through the consensus judgment of latent print examiners. In McKie (unusually), there is not even a complete consensus. Some latent print examiners continue to stake their professional reputations on the claim that McKie was indeed the source of the disputed print. (63)

In Table 1 and Part II.A.3, I list and discuss twenty-two cases of latent print "misattributions." These are cases where the consensus of opinion in the latent print community itself holds that the attribution is erroneous. The conservative nature of my case selection has led me to exclude from my sample several cases of "disputed attributions." (64) These are cases in which reputable latent print examiners have either declined to corroborate an attribution (claimed the correct conclusion should have been "inconclusive") or disagreed about the attribution of a latent print (claimed the correct conclusion should have been "exclusion"), but there is no consensus judgment that the attribution was erroneous. (65)

2. Intentional Misattribution (Fraud)

Finally, I have also excluded here any discussion of cases of alleged fraud, forgery, or fabrications. Again, distinguishing between fraudulent intent and honest error poses problems. Typically, an examiner involved in a misattribution is well advised not to talk to the authorities. Even if the examiner were willing to talk, any effort to divine the examiner's state of mind during the error is inherently difficult and unreliable. Some of the cases discussed here (e.g., McKie/Asbury, Cowans) have been alleged to have been caused by fraud. (66) Ultimately, to assign one of these cases to fraud would require knowing the state of mind of the latent print examiner at the time of the misattribution, which, in most cases, will be an impossible task.

Certainly, there are numerous cases in which fraudulent intent has been fairly clearly documented. (67) I do not discuss those cases here. My interest here is primarily in unintentional misattributions, which constitute a more difficult problem than fraud. That fraud occurs in the fingerprint field is to be expected and not generally disputed. That unintentional misattributions can occur is a far more controversial matter. In addition, unintentional misattributions are probably more difficult to detect. The cases of fingerprint fraud, and forensic fraud in general, demonstrate that vigilante forensic scientists often leave ample paper trails that make their misdeeds easily traceable and documentable, once the analyst has been exposed as fraudulent. (68) Far more difficult to detect are cases in which the analyst honestly believes in an erroneous conclusion.

3. Known cases of fingerprint misattribution

a. Loomis

Robert Loomis was convicted in 1920 for the murder of Bertha Myers during a burglary in 1918 in Easton, Pennsylvania. (69) Two latent print experts testified for the government that a latent print found on a jewelry box could be identified to Loomis. (70) Loomis won a new trial on the basis of faulty jury instructions. (71) At Loomis's second trial, the government admitted that Loomis was not the source of the latent print and declined to offer it into evidence. (72) The record does not show what led the government to this conclusion. Loomis then sought to enter the print into evidence, claiming it belonged to the true perpetrator. (73)

b. Stevens

A latent print found on a calling card at the scene of the notorious Hall-Mills murders in New Brunswick, New Jersey was attributed in 1926 to William Stevens by three latent print examiners. (74) Interestingly, one of the examiners was Joseph Faurot, who had been one of the first examiners to offer testimony in court in the United States. (75) Two latent print examiners testified for the defense and claimed the attribution was erroneous, but they also contended, inconsistently, that the print might have been forged. Stevens was acquitted; the jury reportedly disregarded the latent print evidence. (76)

c. Stoppelli

John "The Bug" Stoppelli was convicted in 1948 for the sale of narcotics in Oakland. (77) After a drug raid, in which four other suspects were arrested, a latent print was recovered from an envelope containing heroin. (78) The print did not match any of the four arrested. (79) After an extensive database search, Internal Revenue Agent W. Harold "Bucky" Greene attributed the latent to Stoppelli, a parolee in New York City. (80) Greene found fourteen matching ridge characteristics. (81) No other evidence linked Stoppelli to the crime. (82)

Stoppelli was convicted. (83) Eventually, his attorney, Jake Ehrlich, convinced the arresting officer, Colonel White, to talk to Stoppelli. (84) White became convinced of Stoppelli's innocence and had the print reviewed by the FBI laboratory. (85) The FBI excluded Stoppelli as the source of the print, and President Truman commuted his sentence. (86) He had served two years. (87)

d. Caldwell

Roger Caldwell was convicted of the murder of Elisabeth Congdon in Minnesota in 1978. (88) Three latent print examiners attributed a latent print found on an envelope to Roger Caldwell. The envelope was addressed to Caldwell and contained a gold coin believed to have been stolen from the victim's home. (89) The examiners were: Steven Sedlacek, who testified for the government at trial; Claude Cook, who "verified" Sedlacek's identification; and Ronald Welbaum, who was retained by Caldwell and also corroborated the match. (90) All three were IAI-Certified Latent Print Examiners. (91) Sedlacek testified that "the latent print partial ... I found to be identical with the inked impression on the fingerprint card bearing the name Roger Caldwell." (92) This conclusion was based on eleven matching ridge characteristics and no unexplainable dissimilarities. (93)

The original negative of the latent print was reexamined for the trial of Caldwell's wife and supposed co-conspirator, Marjorie Caldwell. The forensic scientist Herbert MacDonell and the latent print examiners George Bonebrake and Walter Rhodes testified that Roger Caldwell could not have been the source of the latent print. Marjorie Caldwell was acquitted, and Roger won a new trial. That the fingerprint evidence was erroneous does not necessarily exonerate the Caldwells, and Roger Caldwell eventually pled guilty to time served rather than submitting to a new trial. On the other hand, a guilty plea to time served is a difficult offer for even an innocent person to refuse and is, therefore, not particularly convincing evidence of Caldwell's guilt. (94) Sedlacek, Cook, and Welbaum had their certifications revoked by the IAI.

e. "Midwestern"

Special Agent German reports a case of erroneous identification by an examiner from "a small American police department in the Midwest" in 1984. (95) The nature of the crime is not reported. The defendant was a parolee. (96) Testimony implicating the defendant based on latent print evidence was given at a preliminary hearing and parole revocation hearing. The latent print examiner was IAI-certified (97) and was decertified upon exposure of the error. The defendant was released upon exposure of the misidentification. (98) German reports that "[t]he Latent Print Examiner, being relatively new in the business, had not previously caused anyone's incarceration based upon fingerprint evidence and the Prosecutor decided that no future warrants would be issued based on just the local examiner's work." (99) After decertification, the examiner continued to work as a police officer, crime scene technician, and, apparently, latent print examiner, since German reports that the examiner "to my knowledge has since always submitted fingerprint identifications to outside agencies for verification." (100) German withholds the identifying details "because I am proud of his (and his department's) integrity and professionalism." (101)

f. Cooper

Michael Cooper was arrested for being the "Prime Time Rapist," a serial rapist, in Tucson, Arizona in 1988. (102) Two latent prints from two different crime scenes were attributed to Cooper by two law enforcement personnel: Timothy O'Sullivan and Gene P. Scott. (103) While O'Sullivan apparently had minimal latent print experience, Scott was a Supervisor. (104) The examiners claimed to have found "eleven or twelve" corresponding ridge characteristics between a crime scene print and an inked print taken from Cooper, (105) and Scott called the match a positive comparison. (106) On the basis of the fingerprint evidence, Cooper was subjected to an illegal interrogation, which the Ninth Circuit later decided violated his civil rights. (107) During the interrogation, one investigator, Weaver Barkman, began to harbor doubts about Cooper's guilt, which he expressed outside the interrogation room. (108) According to Barkman, his supervisor, Tom Taylor, "said something very close to fingerprints do not lie. Get your ass back in there, Weaver." (109) Identification technician Mary McCall also participated in the interrogation, telling Cooper that he had been positively identified by fingerprint evidence. (110) The record does not show whether or not McCall had yet examined the evidence herself. Upon double-checking her work, however, McCall began to doubt the match. (111) O'Sullivan and Scott initially "ignored her and declined to reexamine the exemplars." (112) Eventually, however, the examiners changed their conclusion to one of exclusion. At the time, they maintained that there were twelve corresponding ridge characteristics but also some unexplainable dissimilarities, which rendered the comparison an exclusion. (113) Scott and O'Sullivan were demoted, and McCall was suspended for two days without pay. (114)

g. Trogden Cases

Bruce Basden was arrested in 1985 for the murders of Remus and Blanche Adams in Fayetteville, North Carolina. (115) A latent print found in the Adams' home was attributed to Basden by latent print examiner John Trogden. (116) Upon reexamining and enlarging the evidence in response to a discovery request by the defense, Trogden withdrew his conclusion of identification. (117) The charges were dismissed. Basden had been jailed for thirteen months. (118)

The FBI and the North Carolina State Bureau of Investigation reviewed the work of Trogden and another latent print examiner named Sue George. (119) Their review found three erroneous identifications. (120) A latent print in a burglary case was attributed to Maurice Gaining, who had been convicted of burglary and sentenced to ten years. (121) The print apparently belonged to Gaining's co-defendant James Hammock. (122) Other latent print evidence, reportedly correctly attributed, remained against Gaining in other pending burglary cases. (123) Coincidentally, one of the other misattributed prints was attributed to Hammock in another burglary case for which he was sentenced to ten years. (124) Again, there was additional print evidence, apparently correctly attributed, against Hammock. (125) The third error was the attribution of a palm print to Darian Carter. (126) Carter had been convicted of larceny and sentenced to ten years. (127) Again, there were also two fingerprints, which had apparently been correctly attributed to Carter. (128) Identification Bureau officials noted that the errors occurred "early in the identification careers" of Trogden and George, that the examiners "did not have [the] luxury" of "learn[ing] from more experienced people," and that they "had identified a record 118 fingerprints in 1987." (129) Trogden and George remained on the job. Their supervisor commented, "I'm not going to throw them out because of a mistake. I think with additional experience and training, our print examiners will be the finest in the state." (130)

h. Lee

Neville Lee was arrested in 1991 in Nottinghamshire, England, for the rape of an eleven-year-old girl on the basis of a supposed fingerprint match. (131) It is not known how many corresponding ridge characteristics were identified, but at that time a minimum requirement of sixteen matching ridge characteristics was in force in the United Kingdom. (132) Lee's home was wrecked by vigilantes, and he was assaulted in jail. (133) Another individual subsequently confessed to the crime, and Lee was released. (134) The authorities admitted that the fingerprint match was erroneous. (135)

i. Blake

Martin Blake (136) was arrested and interrogated for three days in 1994 for the murder of seven people during a robbery in Palatine, Illinois. (137) A Chicago Police Department latent print examiner matched a print from the crime scene, a Brown's Chicken & Pasta, to Blake. (138) Upon review by the Illinois State Police and the FBI, the match was determined to be erroneous. (139)

j. Chiory

Andrew Chiory was charged in 1996 for the burglary of the home of Miriam Stoppard, a writer and broadcaster who also happened to be the ex-wife of the well-known playwright Tom Stoppard, in London, England. (140) Two separate latent prints from the crime scene were attributed to Chiory. (141) Both matches were "allegedly triple-checked," and both were conducted under the requirement for sixteen corresponding ridge characteristics in force in the United Kingdom at that time. (142) Chiory served two months in prison before the match was exposed as erroneous. (143) Despite an extensive external investigation of this miscarriage of justice, (144) no explanation for the misidentification has ever been made public.

k. McNamee

Danny McNamee was convicted in England in 1987 of conspiracy to cause explosions. (145) He was dubbed the "Hyde Park Bomber" for his alleged role in a 1982 Irish Republican Army bombing that killed four soldiers and seven horses. (146) McNamee was implicated in the crime by three latent prints: two from tape found with explosive-making equipment, and one from a battery recovered from debris after a controlled explosion in London. (147) The latent print from the battery was the most incriminating. At McNamee's trial, Metropolitan Police latent print examiners offered evidence that McNamee was the source of the latent print on the battery. (148)

As McNamee appealed his conviction, controversy emerged over the battery print. At least fourteen different examiners analyzed the evidence. (149) Two Glasgow examiners found eleven corresponding characteristics between the latent print and McNamee's inked prints, but they were not the same eleven characteristics. (150) At least two Dorset examiners also attributed the print to McNamee, but did not agree with some of the corresponding ridge characteristics identified by the original examiners. (151) Other experts, including Peter Swann and Martin Leadbetter, found the latent print insufficient for identification. (152) The appeals court rejected the fingerprint evidence, the case collapsed, and McNamee was released in 1998 after serving eleven years in prison. (153)

l. Scottish Criminal Records Office Cases

These were the best-known cases of fingerprint misidentification until the Mayfield case. The cases surrounded the murder of Marion Ross in Kilmarnock, Scotland in 1997. (154) David Asbury was identified as a suspect, in part, based on a latent print found on a biscuit tin in his home that contained a substantial amount of cash. The print was attributed to Marion Ross. (155) Asbury was convicted of murder and sentenced to life in prison. (156)

Shirley McKie, a detective with the Strathclyde Police Department, had been assigned to secure the crime scene. (157) A latent print found inside Ross's house was attributed to McKie. (158) (It is standard practice to "eliminate" latent prints by checking them against the known prints of nonsuspects, such as victims and investigating police officers.) McKie denied entering the house. (159) After resisting substantial pressure to admit having abandoned her post and entered the house, McKie was charged with perjury. (160) Both the Ross and McKie fingerprint matches were attested to by four (the same four in both cases) (161) latent print examiners from the Scottish Criminal Records Office (SCRO) and were described as meeting the British requirement of having at least sixteen corresponding ridge characteristics. (162) However, unbeknownst to either prosecution or defense, five SCRO examiners had declined to attribute the disputed print to McKie. (163) A clinical psychologist who examined McKie and formed the opinion that she was telling the truth was "told that any question of a mistake in the fingerprint evidence was 'unthinkable because of its implications.'" (164)

On the eve of McKie's trial, in 1999, she and her father Iain McKie, a former police officer, persuaded two American examiners, Pat Wertheim and David Grieve, to come to Scotland to reexamine the evidence. (165) Wertheim and Grieve testified that McKie could not be the source of the latent print. (166) McKie was acquitted and released. (167)

In 2002, the biscuit tin latent was reviewed by Wertheim and Allan Bayle, a former Scotland Yard examiner. (168) They concluded that Ross could not be the source of the print. (169) In other words, the SCRO had allegedly made two erroneous identifications in a single investigation. Asbury was released. (170) This does not necessarily mean that he was actually innocent.

McKie sued the police, (171) and a full investigation into the SCRO was launched. (172) Two extensive reports issued in response to the scandal said a great deal about the organizational culture and procedures of the Scottish Criminal Records Office, but virtually nothing about the technical details of the McKie and Asbury attributions themselves and why they may have occurred. (173) Reforms were instituted at the SCRO. (174)

Another SCRO case emerged after the reforms undertaken in response to the McKie case. Mark Sinclair was convicted of armed robbery in 2003, in part based on a latent print from one of the crime scenes. SCRO examiners testified that they had "no doubt" that Sinclair was the source of the latent print. (175) Allan Bayle concluded the "identification to be unsafe." (176) Two examiners from the Police Service of Northern Ireland agreed that the latent print was insufficient for identification. (177) Because no consensus has formed, the Sinclair case is not included as a misattribution in my data set.

m. Jackson

In 1998, Richard Jackson was convicted and sentenced to life in prison for the murder of Alvin Davis, his friend and occasional lover, in Upper Darby, Pennsylvania. The sole evidence against Jackson was a latent print found on a fan in Davis's home. Three latent print examiners attributed the crime scene print to Jackson: Anthony Paparo of the Upper Darby police, William Welsh of the county police, and Jon Creighton, an IAI-certified examiner from Vermont. (178) Jackson hired his own experts, Vernon McCloud and George Wynn, both former examiners for federal agencies, who concluded that he was not the source of the print. (179) With McCloud and Wynn questioning the prints, the government hired a consultant, Eugene Famiglietti. According to District Attorney Patrick Meehan, Famiglietti said, "You guys made a gutsy call. Stick to your guns." (180) Later, however, Famiglietti said the comparison was inconclusive. (181)

Although McCloud and Wynn testified at trial, the jury convicted Jackson, and he was sentenced to life in prison. After Jackson was convicted, McCloud and Wynn complained to the IAI and the FBI. (182) The FBI and the five members of the IAI Latent Print Certification Board reviewed the evidence and agreed with McCloud and Wynn's conclusion that Jackson was not the source of the print. (183) After some prosecutorial resistance and delays, Jackson was released, having served two years in prison. (184) The true perpetrator has never been caught. (185) Creighton was decertified by the IAI. (186)

n. "Manchester"

Journalists' investigation of two disputed identifications in Manchester, England (the Wallace case and McNamara case) (187) turned up an erroneous identification that occurred in 2000. (188) This attribution had been "triple-checked." The suspect had a convincing alibi and did not fit the witness's description. It was eventually discarded as an erroneous identification. (189) It is not known how many corresponding ridge characteristics were testified to in these two misidentifications, but the sixteen-point minimum standard was in place in the United Kingdom at that time.

o. Hatfield

Kathleen Hatfield was mistakenly identified as dead, based on an erroneous fingerprint identification in 2002. (190) In June 2002, an unidentified corpse was found in the desert near Las Vegas, Nevada. "After some skin restoration using tissue builder," the coroner was able to obtain a single thumbprint "of value." (191) This print was compared unsuccessfully with a number of inked prints from missing persons. Hatfield, a forty-six-year-old transient from Sonoma County, California, had been listed as a missing person in May by her mother. (192) Hatfield matched the physical description of the corpse. The Sonoma County Sheriff's Office faxed a copy of Hatfield's ten-print card to the Las Vegas Metropolitan Police Department. (193) The prints were examined by a Law Enforcement Support Technician Supervisor. This individual did not work in the ten-print section of the Police Department but had twenty-five years of ten-print experience and "had been helping the coroner's office make identifications for many years." (194) This individual identified the body as Hatfield based on the fingerprints. Las Vegas Police Detective David Mesinar said, "We only had one readable fingerprint, and it was so close a match that they went ahead and made an identification." (195) Hatfield's mother was informed, and funeral preparations were made. Hatfield had by this time been stopped and released by the Sonoma County police. The Sonoma County sheriffs began looking for Hatfield and eventually found her in August. Her mother was informed. Hatfield's grave had already been dug. (196)

Meanwhile, the Sonoma County Sheriff's Office mailed Hatfield's ten-print card to Las Vegas. The Technician re-examined the print and decided that she had made an error. (197) The Las Vegas Metropolitan Police Department Latent Print Unit confirmed that the prints did not match. No official analysis of the erroneous identification has been made public.

p. Valken-Leduc

In 2001, David Valken-Leduc was charged with the 1996 murder of a motel clerk in Woods Cross, Utah. (198) Latent print examiner Scott Spjut testified at a preliminary hearing that Valken-Leduc was the source of two bloody prints found at the crime scene. (199) Spjut was not merely an IAI-certified examiner; he was the Chair of the IAI Latent Print Certification Board, the body that oversees the certification examination (and had helped determine that the match in the Jackson case was erroneous, see supra Part II.A.3.m). (200) Spjut subsequently died, shot by a rifle he was examining in the laboratory. (201) Whether the shooting was accidental or suicide is still not clear. After Spjut died, the crime laboratory reviewed his findings and found that the victim was the actual source of the bloody crime-scene prints. (202) Whether the misattribution was fraud or an "honest error" is also not clear. Crime Laboratory Director Rich Townsend told the press, "We're mystified as to how he came up with this conclusion with his level of training and expertise." (203) But Valken-Leduc's attorney told the press, "[O]ur first line of attack was going to be that [Spjut] had manufactured evidence in other cases." (204) No such additional cases have yet been reported.

q. Cowans

The Cowans case is the first in which DNA evidence played a role in demonstrating that the fingerprint evidence was erroneous. Stephan Cowans was convicted of attempted murder in 1997 for allegedly nonfatally shooting a police officer while fleeing a robbery in Roxbury, Massachusetts. (205) He was implicated in the crime by the testimony of two eyewitnesses, including the victim, and a fingerprint found on a cup. (The perpetrator fled the scene, invaded a home, and held the family hostage for around ten minutes. During that period, the perpetrator drank from a cup.) Boston Police Department (BPD) latent print examiner Dennis LeBlanc testified that he found sixteen corresponding ridge characteristics between the latent print from the cup and Cowans's known print. (206) LeBlanc testified that the two prints were "identical" and that the latent print belonged to Stephan Cowans. (207) BPD latent print examiner Rosemary McLaughlin verified the attribution. Cowans was sentenced to thirty to forty-five years in prison. (208) According to Cowans's attorney, Cowans retained two former BPD fingerprint experts who agreed that he was the source of the latent print. (209)

Cowans served six years in prison, volunteering for "biohazard" duty in order to earn money for a post-conviction DNA test. (210) Three DNA samples recovered from the same mug that contained the latent print and from a hat and sweatshirt discarded by the fleeing perpetrator all excluded Cowans as the donor of the DNA. Based on the DNA evidence, the Boston and State Police reexamined the fingerprint evidence and concluded that it was erroneous. Cowans was freed in January 2004. (211) Subsequent investigation revealed the latent print actually belonged to one of the family members who was held hostage. (212) Unlike the other cases discussed here, criminal charges were brought against the latent print examiners involved. An external review reported that LeBlanc had "discovered his mistake" before trial "and concealed it all the way through trial." (213) However, a grand jury declined to indict LeBlanc and McLaughlin. (214) They were, however, reassigned and suspended with pay. In an extraordinary move, Police Commissioner Kathleen O'Toole shut down the entire BPD fingerprint unit and turned latent work over to the state police. (215) Allegations were made that Boston Police Identification Unit had long been a "dumping ground" and "punishment duty" for troubled cops. (216)

r. Mayfield

The most recent and best-known case in the U.S. is the Mayfield case (see supra Introduction). Mayfield, an attorney in Portland, Oregon, was a Muslim convert and a U.S. Army veteran. (217) He had once represented, in a child-custody case, one of the "Portland Seven," who had pled guilty to conspiracy to wage war against the United States. (218) Even when Mayfield was first arrested, it was known that the Spanish National Police were uncertain about the identification. (219) While FBI examiners identified fifteen corresponding points of comparison, the Spanish could only find eight. (220) Spain has a ten-point minimum standard. (221) The FBI adheres to no set standard for declaring a match. (222) FBI examiners reportedly traveled to Madrid to try to convince the Spanish that the identification was legitimate. On this occasion, the FBI reportedly declined to examine the original evidence and instead "relentlessly pressed their case anyway, explaining away stark proof of a flawed link--including what the Spanish described as tell-tale forensic signs--and seemingly refusing to accept the notion that they were mistaken." (223) Further investigation showed that the FBI had reprimanded Agent Massey for making false attributions in 1969 and 1974. (224)

4. Analysis of Known Cases of Misattribution

I compiled the above twenty-two reported cases of misattribution using conservative selection criteria. Although there is no information on how many times latent print identification has been used in crime investigation, the number is clearly large, and twenty-two cases pale in comparison. Some might even go so far as to suggest that this figure is so small that the characterization of the error rate of latent print identification as zero is warranted. However, before doing so, we need to understand the problem of exposure. That is, are these twenty-two cases the full complement of actual cases of latent print misattribution (or close to the full complement), or are they merely the tip of the iceberg? The following analyses will indicate why the latter is more likely the case.

a. Temporal trends

The first reason to believe that the known cases of misattribution do not account for all actual cases of misattribution is their distribution over time (Figure 1). Clearly, misattributions are clustered in recent years and appear to be occurring at an accelerating rate. One possible explanation for this is that the quality of latent print analysis is degrading. This might be because training is being eroded by budget cuts or by computerization. (225) Or, perhaps latent print examiners have become increasingly complacent, and hence sloppy.

[FIGURE 1 OMITTED]

Complacency, however, seems unlikely. Although fingerprint examiners are not legal scholars and may not have been immediately aware of the import of the Daubert ruling in 1993, the suggestion that the case might stimulate heightened scrutiny from the defense bar has appeared in the legal literature since at least 1997. (226) The challenge to the admissibility of fingerprint evidence in United States v. Mitchell in 1999 was very well publicized within the fingerprint profession. (227) If the perceived level of defense, judicial, and media scrutiny is a measure of examiner vigilance, then in the period after 1999 latent print examiners should have been at their most vigilant since the first two decades of the twentieth century. And yet, that period contains some of the most embarrassing cases of misattribution.

A more plausible explanation is that misattributions are being brought to the public's attention at a higher rate. There is little doubt that the growing controversy over the validity of forensic fingerprint identification after Mitchell has made fingerprint misattributions more newsworthy. A glance at the sources, supra Part II.A.3, reveals that the earlier cases appear in legal and scholarly literature, but not in the press, whereas the opposite is generally true of the more recent cases.

If the apparent increase in misattribution is actually an increase in exposure, the temporal trend is disturbing. Misattributions have been exposed at a rate of more than one per year, during a period in which latent print examiners are well aware that they are under greater scrutiny than at any other time since the introduction of the technique.

b. Offense characteristics

An analysis of the offenses implicated in the known cases of misattribution gives even stronger reason to doubt that actual cases of misattribution are limited to this data set. Figure 2 shows the distribution of offenses in the known cases data set. The overrepresentation of very serious crimes is striking. More than half of the misattributions occurred in homicide cases (murder, murder investigation [Hatfield, McKie], or terrorist attacks). Sixty-eight percent involved very serious crimes (homicide, attempted homicide, or rape). If the cases in which the offense is unknown are removed (Figure 3), the figures are comparable. Sixty percent of cases involve murder or attempted murder; seventy-five percent involve very serious crimes.

Since homicide accounts for only around one percent of the total number of felony charges, (228) it is clearly overrepresented among the known cases of disputed identification. Moreover, since I have combined cases for the United States and the United Kingdom, where the murder rate is one fifth that of the U.S., (229) this significantly understates the overrepresentation of errors in homicide cases.

It might be thought that this overrepresentation is explained by the greater likelihood of using fingerprint evidence in homicide cases, as opposed to other criminal investigations. We can test this hypothesis. Professor Peterson et al. collected detailed data on the use of forensic evidence in a representative sample of adult serious crime cases in four American cities (230) from 1976 to 1980. (231) Table 2 shows that fingerprint evidence is indeed more likely to be recovered in homicide cases than in other criminal investigations, including burglary. However, the difference is not great enough to explain the overrepresentation of misattributions in murder cases. For example, homicide accounts for 54% of the misattributions, burglary (a crime for which it is plausible to think the use of fingerprint evidence would be common) only 18%. And yet, although fingerprint evidence is recovered in around 40% of homicide cases, it is also recovered in around 24% of burglary cases.
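The mismatch can be put in numbers directly. The quick check below uses only the approximate percentages quoted in the text; the variable names are mine. If differential recovery of fingerprint evidence were the whole explanation, the ratio of the homicide and burglary misattribution shares should roughly track the ratio of their recovery rates, and it plainly does not.

```python
# Quick check using the approximate figures quoted above (Table 2 recovery
# rates; misattribution shares from the data set of twenty-two cases).
recovery = {"homicide": 0.40, "burglary": 0.24}
share    = {"homicide": 0.54, "burglary": 0.18}

print(recovery["homicide"] / recovery["burglary"])  # ~1.7x recovery rate
print(share["homicide"] / share["burglary"])        # 3.0x misattribution share
```

Even this comparison understates the disparity, since it ignores offense base rates: burglary is vastly more common than homicide, a point the extrapolation two paragraphs below takes into account.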

Another possible explanation is that misattributions are far more likely to occur in homicide cases than in less serious offenses like robbery, burglary, and drug offenses. It is possible that the pressure to close a homicide case leads latent print examiners to "push the envelope" further in these cases, elevating the potential for a misattribution.

A third possible explanation is that misattributions occur at the same rate in homicide cases and other cases but are more likely to be publicly exposed in cases involving very serious crimes because of the increased attention focused on those cases by media, defense counsel and experts, and other actors. If this were the sole explanation, it would suggest that--even accounting for the greater prevalence of fingerprint evidence in homicide cases--exposing misattributions in other felony cases at the same rate as in homicide cases would yield around 600 exposed cases of misattribution (this still excludes the "dark figure" of unexposed cases). (232)
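Footnote 232 presumably sets out the actual calculation; the sketch below is only a hedged reconstruction of how an estimate of this kind can be built. Every input is labeled, the recovery rate for non-homicide felonies is assumed rather than taken from the text, and on these inputs the result lands in the same range of several hundred.

```python
# Hedged back-of-envelope reconstruction of an estimate of this kind. All
# inputs are assumptions for illustration; the article's own calculation
# (footnote 232) may use different values.
homicide_share   = 0.01   # homicide as a share of felony charges (per text)
recovery_hom     = 0.40   # prints recovered in homicide cases (Table 2)
recovery_other   = 0.25   # assumed average recovery rate, other felonies
exposed_homicide = 12     # exposed homicide misattributions in the data set

# Fingerprint-bearing caseload, per unit of total felony caseload:
weight_hom   = homicide_share * recovery_hom          # 0.004
weight_other = (1 - homicide_share) * recovery_other  # ~0.2475

# If non-homicide errors were exposed at the homicide rate:
print(exposed_homicide * weight_other / weight_hom)   # ~742.5
```

Different plausible inputs move the figure around, but the order of magnitude is the point: hundreds of exposure-equivalent cases, against twenty-two actually exposed.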

Is the overrepresentation of homicide cases in exposed cases of fingerprint misattribution a consequence of examiner overzealousness or more efficient exposure mechanisms? As Professor Gross has commented in another, though related, context, "the truth is probably a combination of these two appalling possibilities." (233) In its report on the Mayfield case, however, the FBI has opted for the former explanation. The report concludes that "the inherent pressure of working an extremely high-profile case ... was thought to have influenced the examiner's initial judgment and subsequent examination." (234) Similarly, the report concludes that the verification process was tainted "because of the inherent pressure of such a high-profile case" and recommends that "[a] new quality assurance rule is needed regarding high-profile or high-pressure cases." (235)

c. The fortuity of exposed cases

Perhaps the strongest evidence that the known cases of misattribution only represent the tip of the iceberg is the fortuity of the exposure of cases of misattribution. Only in 27% of the cases of misattribution could the exposure be said to have occurred in the routine process of a criminal trial, usually through the efforts of defense experts. (236) In two cases (Chiory and Manchester) (237) there is not enough information to determine how the error was exposed. In 63% of the cases, extraordinary circumstances were required to expose the fact that misattributions had occurred. The Loomis print was disputed during his trial, but he was convicted; the identification was only retracted during a second trial that Loomis had won on unrelated grounds. (238) The Caldwell error was only exposed during the trial of a co-conspirator. (239) Had the co-conspirator died, plea-bargained, had charges dropped, or not mounted a vigorous defense, the error would never have been exposed. The Lee error was brought to light by the confession of the true perpetrator, always a fortuitous and highly unlikely event. (240) The McNamee error was exposed during the course of vigorous appeals and reinvestigations undertaken over the course of eleven years. (241)

The McKie case involved the prosecution of a police officer with an extremely supportive father who was also a police officer, as well as the extraordinary last-minute intervention of American fingerprint examiners in a Scottish case. That a former police officer would be driven to the brink of suicide and into depression by her efforts to contest fingerprint evidence (242) suggests something of the uphill battle faced by criminal defendants with fewer material and psychological resources who have fingerprint evidence adduced against them.

The Manchester Case was exposed only because the suspect had an alibi and did not match the physical description. The Hatfield error was exposed by the highly unusual circumstance of a supposedly identified corpse turning up alive. The Valken-Leduc error was exposed by a new review of the evidence, occasioned by a bizarre, fatal laboratory accident. (243)

In addition, many of the cases were exposed by "cascading"--the exposure of one disputed attribution generated scrutiny that would not otherwise have occurred. This scrutiny, in turn, revealed further cases of disputed attributions. A defense motion for discovery of the fingerprint evidence, which prompted the exposure of the Basden error, may be the normal course of business. (I have coded it as normal.) But, even if it is, the three additional Fayetteville cases would probably never have been exposed were it not for the exposure of the Basden error. The Asbury error was exposed only through the attention generated by the McKie error. And, Wallace and "Manchester" were only exposed after journalists began investigating the McNamara case. (244)

Fingerprint evidence is so powerful that erroneous fingerprint evidence is likely to convict, convict securely, and never be exposed. (245) In most cases, extraordinary circumstances are necessary to expose a fingerprint misattribution. Consider, for example, the Cowans case. (246) Imagine that the perpetrator were not so obliging as to have (1) drunk from a cup while fleeing the crime and (2) discarded two items of clothing containing his DNA at the scene. Had the perpetrator not done those two things, it is virtually certain that Cowans would have served his full sentence of thirty-five years without anyone ever knowing that the fingerprint evidence (and the eyewitness evidence) was erroneous. (247) Cowans's exoneration (and the exposure of the fingerprint misattribution) also required the retention and preservation of the evidence containing the DNA for six years and the willingness of a court to order post-conviction DNA testing. Stephan Cowans himself expressed this most poignantly after his exoneration when he remarked to a reporter "that the evidence against him was so overwhelming that if he had been on the jury, he would have voted to convict himself." (248)

Similarly, consider the Mayfield case. Only the stubborn resistance of the Spanish National Police to apparently intense pressure from the FBI exposed the error. Imagine the Mayfield latent being discovered on U.S. soil. As a terrorist case, the print probably would have gone directly to the FBI. No other agency would have looked at it. With the Spanish National Police out of the picture, the error might never have been exposed. Even Mayfield's own expert corroborated the erroneous match. Now imagine the Mayfield latent being discovered on U.S. soil and being initially examined by a local law enforcement agency, rather than by the Spanish National Police. Would a local U.S. law enforcement agency have withstood such pressure as well as the Spanish National Police did? Even in those circumstances, it seems highly unlikely that the Mayfield error would ever have been exposed. Finally, there is the role of the media in bringing the Mayfield identification to light. The Mayfield case was publicized prematurely because of press leaks in Europe. (249) From the earliest reports of Mayfield's arrest, it was reported that the Spanish police entertained doubts about the fingerprint evidence. (250) Had the leak not occurred, the Mayfield error might have been resolved behind closed doors and never made public. FBI latent print examiners might still be claiming, in sworn testimony, never to have made a misattribution. (251)

The high degree of fortuity associated with the known cases of disputed attribution further strengthens the likelihood that known cases represent only a small portion of actual cases of error and that the "dark figure" of unknown cases is likely to be significantly higher than the "light figure" of known cases.

It may, of course, be argued that each one of the known cases of misattribution demonstrates that "the system works," precisely because it has become known to us. (252) In a case, such as Jackson, where reputable defense experts offered clear and explicit testimony that the attribution was erroneous, this is a plausible argument (though, since the jury convicted anyway, Jackson certainly diminishes our faith that the criminal justice system "works"). But the majority of misattributions were not exposed through such routine reviews. Moreover, the "system works" argument puts those with fingerprint evidence adduced against them in a double bind: if errors are not exposed, latent print examiners claim that latent print identification is infallible; if errors are exposed, latent print examiners claim that their mechanisms for detecting errors "work."

d. Safeguards against misattribution

The misattributions data set demonstrates that none of the supposed safeguards against misattribution is immune from failure. For example, some courts have held that "verification" provides a safeguard against error. (253) Latent print examiners have argued that competence is a safeguard against error. (254) It has also been argued that a high "point standard"--requiring a certain (high) number of matching ridge characteristics in order to declare a match--protects against misattribution. (255) Most persuasively, it has been argued that defense experts provide a safeguard against false attributions. (256) Even within this relatively small data set, misattributions have been known to occur when each of the aforementioned safeguards is in place.

For example, the misattributions data set demonstrates that verification does not prevent misattributions. Erroneous identifications were verified by one examiner in Caldwell, at least one examiner in Cooper, two examiners in Chiory, several examiners in McNamee, two examiners in the Manchester Case, three examiners in both McKie and Asbury, two examiners in Jackson, one examiner in Cowans, and two examiners in Mayfield. Indeed, more than half (12/22) of the known misattributions were attested to by more than one examiner. This supports the argument, posited by Haber and Haber, that, if "verification" is not conducted blind, the "verifier" is more likely to ratify misattributions than to detect them. (257) These findings are particularly important because "quality assurance" and "quality control" (QA/QC) are increasingly invoked as the basis for confidence in the reliability of latent print identification. (258) These findings suggest that existing quality control measures are not particularly effective at detecting fingerprint misattributions.

Similarly, the data set refutes the notion that certified latent print examiners do not make errors. Caldwell was erroneously identified by three IAI-certified examiners. Midwestern involved an IAI-certified examiner, as did Jackson. Valken-Leduc was erroneously identified by the Chair of the IAI Latent Print Certification Board. In fact, nearly one-third (7/22) of the total number of American (259) examiners implicated in disputed identifications after IAI certification was instituted in 1977 (260) were IAI-certified. (261) Given that only a small (though unknown) percentage of practicing latent print examiners are IAI-certified, IAI-certified examiners carry a surprisingly high proportion of the responsibility for disputed identifications. This suggests that the misattribution rate for IAI-certified examiners may be equal to, or even greater than, that for non-certified examiners. It is possible that certified examiners are more prone to overconfidence when making marginal attributions.

The data also show that a high point standard is insufficient to protect against misattribution. Of the twelve cases in the data set for which the number of supposed matching ridge characteristics is known, in fully half of those cases the misattribution was made with at least sixteen points. Sixteen points has historically been considered a very exacting standard. (262) Three-quarters of the cases had at least fourteen points, and none of the cases involved fewer than eleven points.

Perhaps most surprisingly, the data show that even the provision of defense experts does not protect a criminal defendant against misidentification. In four cases (Caldwell, McKie, Cowans, and Mayfield), disputed identifications were corroborated by independent experts. As will be discussed further below, that independent experts would corroborate erroneous attributions suggests that the underlying cause of misattributions runs very deep indeed.

e. Post-conviction DNA exonerations

It might be argued that the low number of fingerprint misattributions in the set of post-conviction DNA exonerations, collected and analyzed by the Innocence Project (IP), is evidence of the high accuracy of latent print identification. Fingerprint misattribution has been implicated in only one of the 155 cases of post-conviction DNA exoneration (the Cowans case). (263) In other words, approximately 0.6% of wrongful convictions exposed by post-conviction DNA testing were caused, in part, by fingerprint misattribution. By comparison, twenty-one (30%) of the first seventy wrongful convictions exposed by post-conviction DNA testing were caused, in part, by microscopic hair comparison and forty (57%) at least in part by serological evidence. (264) If we could extrapolate these findings to the total Innocence Project data set, which now stands at 155 cases, we would expect around forty-two cases involving microscopic hair comparison and eighty cases involving serology.

From these data, one might be tempted to conclude that fingerprint evidence is around forty-two times more reliable than microscopic hair comparison and around eighty times more trustworthy than serology. However, before reaching any such conclusion, we would need to control for the relative likelihood with which someone falsely convicted on these three types of evidence would be exonerated by post-conviction DNA testing. It is possible that a fingerprint misattribution is less likely to be exposed by post-conviction DNA testing than a serology inclusion or microscopic hair comparison. Although this is a difficult estimate to make, Professor Peterson et al.'s data set (265) can again be used to make a provisional estimate. Peterson et al. enumerate the recovery of different types of forensic evidence in each case in their data set. (266)

As Table 3 shows, 86% of cases in which hair evidence was recovered also had biological evidence. In contrast, only 29% of cases in which latent print evidence was recovered also had biological evidence. In short, a defendant with false microscopic hair comparison evidence against him would be around three times more likely to have biological evidence available for post-conviction DNA testing than a defendant with false fingerprint evidence adduced against him. These figures are probably conservative because mitochondrial DNA can be extracted from hair, even very old hair. (267) In contrast, although it is now possible, in the laboratory, to extract DNA from a fingerprint, (268) this has not been done in the field, and it would certainly not be possible with a fingerprint that has aged in an evidence locker. Since serological evidence is by definition biological evidence, a defendant with false serological evidence adduced against him would be around 3.5 times more likely to have biological evidence available for post-conviction DNA testing than a defendant with false fingerprint evidence adduced against him.

These figures do not, of course, fully explain the greater presence of microscopic hair comparison and serology in the IP data set. But they do suggest that the reason there are fewer fingerprint cases than microscopic hair comparison or serology cases is not solely that fingerprint evidence is more accurate. Rather, these figures suggest that the error rate for microscopic hair comparison may be around fourteen times that of fingerprint evidence. That is scant reason for comfort because microscopic hair comparison is widely considered to be very bad evidence indeed. (269)

Although existing data on error rates for forensic techniques are extremely poor, estimates of the false positive rate for microscopic hair comparison range from 4% to 35%. (270) Similarly, we might conclude that the error rate for traditional serological evidence may be around twenty-three times that of fingerprint evidence. Again, serological evidence is notoriously unreliable. (271) These figures would suggest error rates for fingerprint identification ranging from 0.2% to 2.5%. (272) Given the acknowledged weaknesses in the studies that generated these false positive rates, (273) these should be regarded as lower bounds of the actual error rate. (274) It should be noted, as well, that while these percentages may sound small, they would amount to thousands of fingerprint misattributions. And, because fingerprint evidence is much more persuasive, far better trusted, and presented to the jury in much stronger terms than microscopic hair comparison or serology ever were, fingerprint errors are probably far more likely to result in wrongful convictions and to go undetected if they do. If these error rates are taken as lower bounds of the actual error rate of these techniques, the IP data set suggests that the error rate of latent print identification, while significantly lower than that of microscopic hair comparison or serology, is not insignificant.
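
The chain of arithmetic in the preceding paragraphs can be made explicit. The exact pairing of figures that produces the 0.2% to 2.5% range is my inference from the numbers given above:

   # Reproducing the relative-error-rate arithmetic from the text. The
   # construction of the 0.2%-2.5% range is an inference, not a quotation.
   expected_hair = 42        # extrapolated hair comparison cases in the IP set
   expected_serology = 80    # extrapolated serology cases
   fingerprint_cases = 1     # the Cowans case

   exposure_hair = 3.0       # hair cases ~3x more likely to have DNA available
   exposure_serology = 3.5   # serology cases ~3.5x more likely

   ratio_hair = expected_hair / exposure_hair / fingerprint_cases            # ~14x
   ratio_serology = expected_serology / exposure_serology / fingerprint_cases  # ~23x

   low_hair, high_hair = 0.04, 0.35   # estimated hair false positive range
   # Dividing the hair error-rate range by the two ratios brackets the
   # implied fingerprint error rate at roughly 0.2% to 2.5%.
   print(f"{low_hair / ratio_serology:.1%} to {high_hair / ratio_hair:.1%}")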

B. SIMULATIONS

As I have argued more fully elsewhere, the fingerprint community has thus far failed to conduct meaningful, well-designed simulations intended to capture its potential error rate. The principal reason for this, I have argued, is that courts have allowed latent print examiners to testify to extraordinarily powerful conclusions without adducing any results from such simulations. (275) Legally, latent print examiners can only lose by conducting simulations that indicate a non-zero error rate.

Nonetheless, some poorly designed simulations have been conducted, and it may be possible to use them to say something about the probable error rate of forensic fingerprint identification.

1. Proficiency tests

Proficiency tests of latent print examiners have been conducted since 1983. The purpose of proficiency tests is to measure the competence of individual laboratories or techniques; their intent is not to generate an estimate of the accuracy of latent print identification. Nonetheless, proficiency tests are simulations; the correct answer is known to the test-maker, and it is possible to measure the number of correct and incorrect responses. It may therefore be possible to infer something about the accuracy of forensic fingerprint identification from existing proficiency test data.

a. Externally conducted proficiency tests

By "externally conducted" proficiency tests, I mean those tests designed and administered by an institution with a modicum of independence from the crime laboratory itself. Although there may be reasons to question the independence of such institutions as the Law Enforcement Assistance Administration, the American Society of Crime Laboratory Directors Laboratory Accreditation Board (ASCLD-LAB), and Collaborative Testing Services (a private corporation one of whose clients is ASCLD-LAB), externally conducted tests are nonetheless distinguishable from "internally conducted" tests, in which an individual laboratory designs and administers a proficiency test to itself.

Beginning in 1983, a series of proficiency tests was conducted for latent print examiners. (276) The tests were administered by a private company, Collaborative Testing Services (CTS). Beginning in 1993, the tests were designed in consultation with the Proficiency Advisory Committee (PAC) of the American Society of Crime Laboratory Directors (ASCLD).

There are a number of difficulties in interpreting these tests. First, there are design flaws in the tests themselves. The tests were conducted by mail under unproctored, untimed conditions. (277) It is not known whether the tests were completed by individual examiners or "by committee." Second, no metric exists for measuring the degree of difficulty of a latent print comparison. Even the number of "points of identification" in a latent print cannot serve as such a metric because studies have found that there can be substantial disagreement between examiners as to how many "points" exist in a particular print. (278) In addition, it has been argued that the number of points in a latent print is not an accurate measure of the degree of difficulty of the analysis of that print. Therefore, there is no way of determining the level of difficulty of these proficiency tests relative to casework. Third, there is incomplete information about the level of experience and qualifications of the examiners who completed the tests. Fourth, the number of "elimination latents," or latent prints that should not be attributed to any of the known prints provided, is relatively small. This may reduce the difficulty of these tests.

Another set of criticisms of the proficiency studies has been leveled by the fingerprint community itself. Proponents of fingerprinting contend that, since there are no controls on who takes the test, many of the tests represent the work of novice examiners or foreign laboratories. (279) Therefore, they argue, the false positive rate may be higher on proficiency tests than in real casework. On the other hand, critics of fingerprinting argue that test-takers tend to overperform on non-blind proficiency tests, so proficiency test error rates may be lower than the rate in real casework. (280) In short, the proficiency test results may either underestimate or overestimate the true false positive rate. All we know for certain is that they should be interpreted with caution.

Table 4 shows the results of all known external proficiency tests to date. There are a number of different ways of reporting false positives. Often the false positive rate has been reported as the number of participants who committed at least one false positive divided by the total number of participants. This is what has led to the oft-quoted "one in five error rate" on the 1995 test. (281)

Another way of reporting false positives is to divide the number of false positives by the total number of comparisons undertaken. For example, for the 1995 test, the rate of false positives over the total number of comparisons is 4.4%. Though the latter figure would seem more comforting, it would indicate false fingerprint testimony against almost one out of every twenty criminal defendants.
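
The two reporting conventions are easy to state precisely. The counts below are hypothetical, chosen only to reproduce rates of the magnitude quoted above; the published CTS reports contain the actual figures:

   # The two conventions for reporting false positives. All counts here
   # are hypothetical illustrations, not figures from the CTS reports.
   participants = 156        # hypothetical: number of test-takers
   at_least_one_fp = 34      # hypothetical: made at least one false positive
   false_positives = 48      # hypothetical: total false positives committed
   comparisons = 1100        # hypothetical: total comparisons undertaken

   print(f"participant rate: {at_least_one_fp / participants:.1%}")  # ~21.8% ("one in five")
   print(f"comparison rate:  {false_positives / comparisons:.1%}")   # ~4.4%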

Since the notorious 1995 test, there has been a decline in false positives. Because there is no metric for measuring test difficulty, however, it cannot be determined whether the decline is due to the changing makeup of the test-taking population, the greater seriousness with which the tests are treated, better performance, or easier tests. Overall, the comparison false positive rate, aggregated over the entire test-taking period, is around 0.8%.

In addition, Doctors Haber and Haber have pointed out that all the external proficiency tests are mailed to laboratories and mailed back, so it is possible that many of them are completed by committee. In that case, each reported laboratory false positive may represent one or more false positives committed at the level of the individual examiner. On the conservative assumption that each laboratory false positive represents two individual false positives (some may represent three or more, some only one), the Habers give their "consensus error rate" as the square root of the "comparison error rate." (282)
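
One way to make the square-root relationship concrete--this is my reconstruction, not necessarily the Habers' own derivation--is to suppose that a laboratory reports a false positive only when two examiners independently reach the same wrong conclusion, so that the laboratory rate is the square of the individual rate:

   # A sketch of the square-root relationship, on the assumption (mine)
   # that a reported laboratory false positive requires two examiners to
   # independently agree on the same wrong answer: p_lab = p_individual ** 2.
   lab_rate = 0.008                   # the ~0.8% aggregate comparison rate
   individual_rate = lab_rate ** 0.5  # p_individual = sqrt(p_lab)
   print(f"{individual_rate:.1%}")    # ~8.9% implied individual rate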

Finally, it should be noted that signal detection theorists would point out an important problem with reporting only the false positive rate. Test-takers who are far more concerned about false positives than false negatives can, in effect, "game" the test by reporting with extreme conservatism. For example, imagine an examiner who reported "inconclusive" for every test item. This examiner would score perfectly in my schema that reports only false positives. Therefore, signal detection theory must be applied in order to measure not just false positives, but the test-takers' power of discrimination. However, the reported data on fingerprint proficiency tests are insufficient to apply signal detection theory. (283)
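
A minimal signal detection sketch illustrates the point. The sensitivity index d' separates a test-taker's power of discrimination from his or her response bias; the hit and false-alarm rates below are invented for illustration:

   # Minimal signal detection sketch: two examiners with the same false
   # alarm rate but very different discrimination. Figures are invented.
   from statistics import NormalDist

   z = NormalDist().inv_cdf  # probit (inverse standard normal CDF)

   def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
       # Sensitivity index from signal detection theory: discrimination
       # ability, separated from the test-taker's response bias.
       return z(hit_rate) - z(false_alarm_rate)

   print(d_prime(0.90, 0.01))  # strong discriminator: d' ~ 3.6
   print(d_prime(0.10, 0.01))  # ultra-conservative responder: d' ~ 1.0

An examiner who answers "inconclusive" on everything drives the false-alarm rate toward zero while revealing nothing about discrimination--which is exactly why a tally of false positives alone can be gamed.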

b. Internally conducted proficiency tests

In the 2002 case United States v. Llera Plaza, the court issued a decision restricting the testimony of FBI latent print examiners, in part because, although existing studies "fall far short of establishing a 'scientific' rate of error, they are (modestly) suggestive of a discernible level of practitioner error." (284) This marked the first time a judicial decision had so limited the testimony of latent print examiners. The court then granted the government's motion for reconsideration, and a hearing was held in which the government presented results of proficiency testing conducted since 1995. The evidence included the FBI's results on the CTS tests described in Part II.B.1.a, revealing that an FBI examiner was responsible for one of the false positives on the 1995 CTS test. The government also presented results of internal proficiency tests designed and administered by the FBI since 1995. The results of these internal proficiency tests had not been published or otherwise made public in any way until the adverse ruling in Llera Plaza.

Table 5 shows the results of the FBI's internal proficiency tests. Clearly, FBI examiners performed quite well on these tests, committing only three false negatives and no false positives. However, upon closer examination two concerns emerged. First, although, as mentioned above, it is not possible to measure the difficulty of latent print comparison except subjectively, a subjective examination suggested that the tests were far easier than typical latent casework. Retired Scotland Yard examiner Allan Bayle testified that the simulated latent prints in the test were "nothing like" typical crime-scene latent prints, (285) that Scotland Yard examiners would "fall about laughing" if given the FBI's tests, (286) and that the tests were "a joke." (287)

Bayle's conclusions about the difficulty of the tests went uncontested, and they were credited by the court, which remarked, "[o]n the record made before me, the FBI examiners got very high proficiency grades, but the tests they took did not." (288) If, as discussed infra Part III.A, the FBI really believes that its examiners' false positive rate is zero, it is difficult to understand why it would not administer the most difficult tests possible.

The second issue concerned collusion on the tests. A four-page memorandum written by an FBI examiner echoed Bayle's concerns about the test being easy, but also claimed "that examiners routinely cheat on the test by discussing their answers with one another." (289)

2. IAI Certification Examination

Another simulation is the certification examination administered by the International Association for Identification (IAI). Certification is a voluntary qualification available to latent print examiners; no U.S. court has ever ruled that certification is required to qualify a latent print examiner to testify, and the IAI explicitly disavows any such interpretation of its certification program. (290) Certification requires passage of an examination that includes a section requiring the candidate to make source attributions for fifteen latent prints. (291) Haber and Haber report that almost all failures of the certification examination result from failing this section. (292) To be certified, a candidate must correctly attribute twelve of the fifteen test items without incorrectly attributing any test item (that is, the candidate is allowed three false negatives, but no false positives). (293) One response to this test design would be to attribute only the twelve easiest comparisons and give no answer for the three most difficult. (294) This would mean that the test measures only examiners' ability to attribute the twelve easiest prints. The pass rate of the first examination, in 1993, was 48%, and the failure rate has stayed steady at around half. (295)
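
As a sketch, the scoring rule and the conservative strategy just described can be written out as follows (the function is my illustration, not an IAI artifact):

   # The certification scoring rule as described in the text: at least
   # twelve correct attributions out of fifteen, with no false positives.
   def passes(correct: int, false_positives: int, unanswered: int) -> bool:
       assert correct + false_positives + unanswered == 15
       return correct >= 12 and false_positives == 0

   # The conservative strategy described above: attribute only the twelve
   # easiest latents and decline to answer the three hardest.
   print(passes(correct=12, false_positives=0, unanswered=3))  # True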

Since the IAI does not publish the results of the examination, and the general figure of 50% failure does not parse out how many of these derive from false positives, as opposed to more than three false negatives, it is impossible to extrapolate a general error rate from the IAI certifying examination. Nonetheless, it is troubling to realize that, given that virtually all certification candidates are likely to be active latent print examiners, around half of this self-selected group of latent print examiners cannot pass the IAI certification examination. Moreover, the least competent examiners are not likely to even submit to the examination.

C. SUMMARY

The existing data are inadequate to calculate a meaningful error rate for forensic fingerprint identification. Nonetheless, it is clear that misidentifications do occur: in real-life criminal cases, on internally and externally administered proficiency tests, and on the IAI certification examination.

Indeed, in nearly every context in which misattributions are given a reasonable opportunity to occur--excluding the artificially easy self-administered internal FBI proficiency tests (296)--they do occur.

The existing data suggest that the error rate may not be trivial. While a 0.8% false positive rate may sound highly reliable to a layperson, it would lead to enormous numbers of false convictions. U.S. crime laboratories processed 238,135 requests for latent print analysis in 2002. (297) If these laboratories committed a false positive on 0.8% of these requests, they would have reported 1,905 false positives in 2002 alone. Given the enormous power and credibility of latent print evidence, it must be assumed that a very high percentage of these 1,905 reports would have resulted in convictions or guilty pleas. A very large proportion of these may well have been false. (298) And, again, it should be emphasized that there are reasons to believe that the 0.8% false positive figure may represent only a lower bound. (299)
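
The arithmetic behind the 1,905 figure can be checked directly:

   # Verifying the 1,905 figure from the text.
   requests_2002 = 238_135      # latent print requests processed by U.S. labs in 2002
   false_positive_rate = 0.008  # the ~0.8% aggregate comparison rate
   print(round(requests_2002 * false_positive_rate))  # 1905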

A 0.8% false positive rate would also defy most people's expectations for fingerprint identification, which is presumed to be very accurate evidence indeed. Because of the special power of fingerprint evidence and the presumption of infallibility, latent print examiners testifying falsely 0.8% of the time would probably be viewed as unacceptable by most criminal justice system actors.

III. THE RHETORIC OF ERROR

A. THE ZERO ERROR RATE

As discussed above, latent print examiners continue to claim that the error rate of latent print identification is "zero." How can the claim that the error rate of forensic fingerprint identification is zero be sustained? The claim is sustained by two types of parsing of errors, which I will call typological and temporal parsing.

1. Typological Parsing

Typological parsing is achieved by assigning errors to two distinct categories: "methodological" (sometimes called "scientific") and "practitioner" (sometimes called "human"). It may be illustrated most clearly by Agent Meagher's testimony at the Mitchell Daubert hearing:

Q: Now--Your Honor, if I could just have a moment here. Let's move on into error rate, if we can, please, sir?

I want to address error rate as we have--you've heard testimony about ACE-V, about the comparative process, all right?

Have you had an opportunity to discuss and read about error rate?

A: Yes.

Q: Are you familiar with that concept when you talk about methodologies?

A: Sure.

Q: And where does that familiarity come from, what kind of experience?

A: Well, when you're dealing with a scientific methodology such as we have for ever since I've been trained, there are distinctions--there's two parts of errors that can occur. One is the methodological error, and the other one is a practitioner error.

If the scientific method is followed, adhered to in your process, that the error in the analysis and comparative process will be zero. It only becomes the subjective opinion of the examiner involved at the evaluation phase. And that would become the error rate of the practitioner.

Q: And when you're talking about this, you're referring to friction ridge analysis, correct?

A: That is correct. It's my understanding of that regardless of friction ridge analysis.

The analysis comparative evaluation and verification process is pretty much the standard scientific methodology and a lot of other disciplines besides--

Q: And that may be so.

Are you an expert or familiar with other scientific areas of methodologies?

A: No, I'm not an expert, but I do know that some of those do adhere to the same methodology as we do.

Q: Are you an expert on their error rate?

A: No.

Q: Based on the uniqueness of fingerprints, friction ridge, etcetera, do you have an opinion as to what the error rate is for the work that you do, latent print examinations?

A: As applied to the scientific methodology, it's zero. (300)

Meagher's invocation of the "zero methodological error rate" generated an approving response within the fingerprint community. (301) In another case, Meagher testified as follows:
   With regards to discussing the error rates in terms of methodology
   which from my understanding is the real focus of attention for the
   hearing here. The methodology has an error rate of zero where
   practitioner error rate is whatever practitioner error rates for
   that individual or group of individuals. (302)


Since the Mitchell Daubert hearing, the claim that the error rate of fingerprint "methodology" is zero has become enshrined as dogma within the fingerprint community. Latent print examiners are coached to recite this position when cross-examined. For example, Wertheim fils advises latent print examiners to answer the question "What is the error rate of fingerprint identification?" as follows:
   In order to fully address this issue, you must decide which error
   rate you are going to address. Two types of error are involved:
   PRACTITIONER error and the error of the SCIENCE of fingerprints.
   The fact is, nobody knows exactly how many comparisons have been
   done and how many people have made mistakes, so you can't answer
   that issue. Of course the error rate for the SCIENCE itself is zero.

   The way to answer this question on the stand might sound something
   like: If by error you mean HUMAN error, then I would answer that
   there is no way for me to know, since I do not have detailed
   knowledge of casework results from departments throughout the
   country. However, if by error you mean the error of the science
   itself, then my answer is definitely zero.

   If follow up questions are asked, you can explain: There are only
   three conclusions a latent print examiner can come to when comparing
   two prints: Identification, Elimination, or Insufficient detail to
   determine. (Explain each of these) Insufficient doesn't apply,
   because you are asking about the error rate involving
   identification. The fact is, any two prints "of value" either A:
   were made by the same source, or B: they were not. There is no
   probability associated with that fact. Therefore, the science allows
   for only one correct answer, and unless the examiner makes a
   mistake, it WILL be the correct answer. That is what I mean when I
   say the error rate for the science of fingerprints IS zero. (the
   little emphasis on "is", as you nod your head once to the jury,
   doesn't show up in the transcript, but it sure helps get the jury to
   nod back in agreement!!) (303)


It should be noted that, in their sworn testimony, latent print examiners appear to follow Wertheim's second piece of advice, but not his first. That is, judging from court opinions (infra Part III.B), latent print examiners do testify that the "methodological error rate" is zero, but they do not testify that the "practitioner error rate" is unknown. Rather, they testify that the practitioner rate is "essentially zero" or "negligible"--statements that have no basis in any attempt to actually measure the "practitioner error rate" but are nevertheless taken by courts as gospel.

2. Temporal Parsing

An alternative stratagem rests upon a temporal parsing of error. In this formulation, all documented errors are consigned to a conceptually distant past that is no longer relevant to the present. The reasoning is that errors provoke changes in procedure that then render past procedures obsolete. Since new procedures are now in place, it is unfair to brand the state-of-the-art practice with past errors. Temporal parsing may be illustrated by the testimony of Dr. Budowle at the Mitchell Daubert hearing:

Q: Tell us how [error rate] applies to scientific methods, methodology.

A: Well, this transcends all kinds of forensic, it transcends all disciplines in that[, but] in the forensic area particularly, this has been an issue discussed repeatedly in lots of disciplines, whether it is DNA chemistry and latent fingerprints.

We have to understand that error rate is a difficult thing to calculate. I mean[,] people are trying to do this, it shouldn't be done, it can't be done. I'll give you an example as an analogy. When people spell words, they make mistakes. Some make consistent mistakes like separate, some people I'll say that I do this, I spell it S-E-P-E-R-A-T-E. That's a mistake. It is not a mistake of consequence, but it is a mistake. It should be A-R-A-T-E at the end.

That would be an error. But now with the computer and Spell Check, if I set up a protocol, there is always Spell Check, I can't make that error anymore. You can see, although I made an error one time in my life, if I have something in place that demonstrates the error has been corrected, it is no longer a valid thing to add [as] a cumulative event to calculate what a error rate is. An error rate is a wispy thing like smoke, it changes over time because the real issue is, did you make a mistake, did you make a mistake in this case? If you made a mistake in the past, certainly that's valid information that someone can cross-examine or define or describe whatever that was, but to say there's an error rate that's definable would be a misrepresentation.

So we have to be careful not to go down the wrong path without understanding what it is we are trying to quantify.

Now, error rate deals with people, you should have a method that is defined and stays within its limits, so it doesn't have error at all. So the method is one thing, people making mistakes is another issue. (304)

Whatever the merits, in principle, of Budowle's argument, if taken seriously it places an immovable obstacle in the path of any court seeking to estimate an error rate for anything. There are, of course, inherent problems in estimating any sort of error rate. But these are problems that practitioners in diverse areas of science and industry have managed to live with, and courts, according to the Supreme Court, are now duty-bound to struggle with them as well. Even if we accept Budowle's argument that it is difficult to calculate error rates prospectively, that does not mean that we should not try to estimate error rates; past performance remains probably the best guide to estimating future performance. In Budowle's schema, no error rate could ever be calculated, as all exposed errors recede immediately into the supposedly "irrelevant" past. The error rate does indeed become "a wispy thing like smoke."

3. What is "Methodological Error Rate"?

The concept of "methodological error rate" is not one that the government adapted for fingerprinting from some other area of scientific or technical endeavor. Typing the term "methodological error rate" into an Internet search engine (for example, Google) yields results pertaining almost only to forensic fingerprint evidence, not to any other area of scientific or technical endeavor. (305) In none of its briefs in Mitchell supporting this concept did the government cite any other area of scientific or technological endeavor where it is thought appropriate to split the concept of error rate in this fashion. Nor does the government cite any other cases in which the Daubert error rate criterion is interpreted in this fashion. Since the concept exists only in the field of latent print identification, a field that is not populated by credentialed scientists, it merits especially strict scrutiny.

The problem is that the practitioner is integral to the method of latent print identification. In other words, the "methodology" consists entirely and solely of having a practitioner analyze the prints. There is no methodology without a practitioner, any more than there is an automobile without a driver, and claiming to have an error rate without the practitioner is akin to calculating the crash rate of an automobile, provided it is not driven.

Even if one were to accept the distinction between "methodological" and "practitioner" error, these categories would be useful only for a scientific or policy-driven assessment of latent print identification. For legal purposes, the only relevant matter is the overall error rate--that is, the sum of the "methodological" and "practitioner" error rates. If one is boarding an airplane, one is interested in the total error rate--the sum of all error rates, if error is parsed. Although there may be some utility to parsing error in the case of airplane crashes into, say, pilot and mechanical errors--provided, of course, that attributions can be made consistently and coherently--no one would wish for them to substitute for, or obscure, the overall error rate. If one is deciding whether to board an airplane, the relevant information is the overall error rate. If one is deciding whether scarce resources should be allocated to pilot training or mechanical inspections, then the relevant information may come from parsing crashes into "human" and "mechanical" causes. A legal fact finder is in the position of the passenger boarding the plane, not the policymaker allocating resources. Therefore, judges, who are responsible for ensuring that relevant and reliable information is put before the fact finder, (306) should be concerned with the rate at which the process or technique in question provides accurate conclusions to the fact finder, which is given by the overall error rate. Even if one were to grant the legitimacy of parsing error into categories, the categorical error rates are irrelevant to the court's inquiry. The overall error rate is the only relevant piece of information to put before a court.

Moreover, unlike the broad categories posited in airplane crashes, the assignment of error in fingerprint identification is asymmetric. In aviation risk assessment, neither the pilot nor the mechanical error rate is zero. In fingerprint identification, one type of error is said to be zero. How can this be? The answer is that all known cases of error are automatically assigned to only one of the two categories: practitioner error. By attributing all documented errors to practitioners, the methodological error rate remains--eternally--zero. The "methodological error rate," by definition, could not be anything other than zero. This, of course, drains the force from any claim that the "methodological error rate" has been empirically found to be zero. Fingerprint evidence could be the shoddiest evidence ever promulgated in a court of law and, defined as it has been, the "methodological error rate" would still remain zero!

What this means, of course, is that even if in some areas a meaningful distinction can be drawn between "methodological" and "practitioner" error, in fingerprint practice the concept is vacuous.

The most generous interpretation of what latent print examiners mean when they claim the "methodological error rate" is zero is that they are saying that no latent print misidentifications are caused by nature. In other words, no misattributions are caused by completely identical areas of friction ridge detail existing on two different fingers. As one prominent latent print examiner, William Leo, testified: "And we profess as fingerprint examiners that the rate of error is zero. And the reason we make that bold statement is because we know based on 100 years of research that everybody's fingerprint are unique, and in nature it is never going to repeat itself again." (307)

As Wertheim pere puts it, "So when we testify that the error rate is 'zero,' what we mean is that no two people ever have had or ever will have the same fingerprint." (308) This argument fails to understand that the issue in the original Mitchell Daubert hearing--and the issue more generally--was never about errors caused by individuals possessing duplicate fingerprint patterns. (309)

An even more simplistic formulation of this generous version of the "methodological error rate" is the argument that because there is only one source for the latent print, the "methodological error rate" is zero. As Agent Meagher put it in his testimony in a pre-trial hearing in People v. Hood: "Because fingerprints are unique and they are permanent there can only be one source attributed to an impression that's left so there have--there can only be one conclusion. It's ident or non ident." (310)

One might just as well argue that since there is only one person that an eyewitness actually saw, one could claim that the "methodological error rate" of eyewitness identification is zero. Or, that, because each test subject is either pregnant or not pregnant, the "methodological error rate" of any pregnancy test--no matter how shoddy (311)--is zero.

It is apparent that, when pressed, latent print examiners can water down the claim of a "zero methodological error rate" to propositions that are, in and of themselves, so banal as to be unobjectionable. Who can doubt that only one individual is, in fact, the source of a particular latent print? Or that no two individuals are walking around with exactly duplicate ridge detail on their fingertips? The danger lies in not fully communicating these retreats to the jury. Latent print examiners can clarify what they mean by "methodological error rate" in their professional literature and in pretrial admissibility hearings and neglect to do so in their trial testimony. A juror who hears "the methodological error rate is zero, and the practitioner error rate is negligible" would be forgiven for assuming that "methodological error rate," in this context, refers to something significant, rather than a banality like "only one person could have left the latent print." This potential for using the aura of science to inflate the fact-finder's credence in expert testimony is precisely the sort of thing that an admissibility standard, like Daubert/Kumho, is designed to mitigate. The "methodological error rate" is so potentially misleading that courts must rein it in.

4. The "Roomful of Mathematicians"

The fallacy of the "methodological error rate" is well illustrated by an example that fingerprint examiners are fond of using: the roomful of mathematicians. Consider the following analogy drawn by Agent Meagher:
   The analogy that I like to use to help better understand the
   situation is the science of math. I think everyone agrees that
   probably the most exact science there is, is mathematics. (312) And
   let's take the methodology of addition. If you add 2 plus 2, it
   equals 4. So if you take a roomful of mathematics experts and you
   ask them to perform a rather complex mathematical problem, and just
   by chance one of those experts makes a [sic] addition error--adds 2
   plus 2 and gets 5--does that constitute that the science of math and
   the methodology of addition is invalid? No. It simply says is that
   that practitioner had an error for that particular day on that
   problem. (313)


Fingerprint examiners are particularly fond of using the mathematics analogy. Wertheim pere writes, "[j]ust as errors in mathematics result from mistakes made by mathematicians, errors in fingerprint identification result from the mistakes of fingerprint examiners. The science is valid even when the scientist errs." (314) Special Agent German argues, "[t]he latent print examination community continues to prove the reliability of the science in spite of the existence of practitioner error. Math is not bad science despite practitioner error. Moreover, air travel should not be banned despite occasional crashes due to pilot error." (315) In response to the Mayfield case, Wertheim pere commented, "Just because someone fails to balance his checkbook, that should not shake the foundations of mathematics." (316)

The analogy between the practice of forensic fingerprint analysis and the abstract truth of addition seems rather strained. But, even if we accept the analogy on its own terms, we can readily apprehend that the only relevant information for assessing the reliability of a forensic technique is precisely that which Agent Meagher deems irrelevant: the rate at which the roomful of mathematicians reaches correct results. In other words, it is the roomful of mathematicians that constitutes forensic practice, not the conceptual notion of the addition of abstract quantities. If defendants were implicated in crimes by mathematicians adding numbers, a court would want to know the accuracy of the practice of addition, not the abstract truth of the principles of addition.

B. THE COURTS' VIEW OF ERROR RATE

Courts have been generally credulous of the parsing of error into categories. In the first written ruling issued in response to an admissibility challenge to fingerprint evidence under Daubert, the court wrote:
   The government claims the error rate for the method is zero. The
   claim is breathtaking, but it is qualified by the reasonable
   concession that an individual examiner can of course make an error
   in a particular case ... Even allowing for the possibility of
   individual error, the error rate with latent print identification
   is vanishingly small when it is subject to fair adversarial testing
   and challenge. (317)


On appeal, the Seventh Circuit credited Agent Meagher's testimony "that the error rate for fingerprint comparison is essentially zero. Though conceding that a small margin of error exists because of differences in individual examiners, he opined that this risk is minimized because print identifications are typically confirmed through peer review." (318) In United States v. Crisp, the court similarly accepted at face value the testimony of a latent print examiner "to a negligible error rate in fingerprint identifications." (319)

In United States v. Sullivan, the court did share "the defendant's skepticism that" latent print identification "enjoys a 0% error rate." (320) However, the court concluded that there was no evidence that latent print identification "as performed by the FBI suffers from any significant error rate," noting "FBI examiners have demonstrated impressive accuracy on certification-related examinations." (321) These are, of course, the examinations characterized as laughable by Mr. Bayle and the Llera Plaza court. (322) The court allowed the government's unsupported claim of a "minimal error rate" to stand. (323)

In the first decision in United States v. Llera Plaza (hereinafter Llera Plaza I), the court allowed the claim of a zero "methodological error rate" to stand, although it dismissed it as largely irrelevant to the reliability determination before the court. (324)

In its second decision (hereinafter Llera Plaza II), however, the court credited the testimony of FBI examiners that they were not themselves aware of having committed any errors:
   But Mr. Meagher knew of no erroneous identifications attributable
   to FBI examiners. Defense counsel contended that such non-knowledge
   does not constitute proof that there have been no FBI examiner
   errors. That is true, but nothing in the record suggests that the
   obverse is true. It has been open to defense counsel to present
   examples of erroneous identifications attributable to FBI examiners,
   and no such examples have been forthcoming. I conclude, therefore,
   on the basis of the limited information in the record as expanded,
   that there is no evidence that the error rate of certified FBI
   fingerprint examiners is unacceptably high. (325)


The court appears to have understood full well the point made here (supra Part II.A.4.c) that because of the weakness of exposure mechanisms it would be foolhardy to assume that known errors are any more than a small subset of actual errors. Nonetheless, the court chose to use this argument to uphold fingerprint evidence on the "error rate" prong of Kumho Tire. As I have argued elsewhere, (326) this was poor enough reasoning at the time, but it is even more embarrassing now that two short years later we do have definitive proof that the FBI has committed at least one exposed false positive: the Mayfield case. The court's embarrassment should be even more acute since the Mayfield case has brought to light that one of the examiners implicated in the Mayfield misattribution, John Massey, had, in fact, made errors that were exposed within the organization itself before his identifications were presented in court; that Massey nonetheless continued to analyze fingerprints for the FBI and presumably to testify in court with the usual "infallible" aura; and that Massey was still hired to "verify" an identification in an extremely high-profile case. Further, Massey's history of false attributions was exposed during a trial in 1998, four years prior to the Llera Plaza hearing. (327)

The court would have been better educated by asking Agent Meagher about exposed errors within the laboratory than by focusing solely on the highly unlikely exposure of errors after they are presented in court as purportedly error-free. In my critique of Llera Plaza II, I argued that Meagher's testimony was better evidence of the weakness of the FBI's error-detection mechanisms than it was evidence that the FBI had not committed any errors. (328) Interestingly, in a presentation about the Mayfield case to the IAI, Meagher reportedly said the following:

Question: "Has the FBI made erroneous identifications before?"

Steve: "The FBI identification unit started in 1933 and we have had 6 or 7 in total about 1 every 11 years. Some of these were reported and some were not." (329)

Given Meagher's sworn testimony in Llera Plaza I, we must assume that he was referring here to errors that were caught within the laboratory before being testified to in court. Where and how some of these errors were "reported" is not clear.

The Third Circuit ruling on the appeal of Mitchell (the first challenge to fingerprint evidence under Daubert) rejected Mitchell's argument that there is no methodological error rate distinct from the practitioner's. (330) But the court's reasoning made clear that by "methodological error rate" it understood something like an "industry-wide" error rate, (331) as contrasted with an individual practitioner error rate, not a theoretical error rate that is set by fiat at zero. The court acknowledged the argument made in this article (infra Part III.C) that it is problematic to automatically assign all known errors to practitioners rather than to "the method." But, like other courts, the Mitchell court then went on to make the unsupported assertion that "even if every false positive identification signified a problem with the identification method itself (i.e., independent of the examiner), the overall error rate still appears to be microscopic." (332) From this, the court concluded that "the error rate has not been precisely quantified, but the various methods of estimating the error rate all suggest that it is very low." (333) In short, the court completely neglected the exposure problem indicated by the fortuity of the known false positives. Instead, the court noted "the absence of significant numbers of false positives in practice (despite the enormous incentive to discover them)." (334)

Of course, a technique with a "very low" measured error rate may be admissible, but its proponents ought not to be permitted to tell the fact-finder that its error rate is "zero." Interestingly, the court acknowledges this, noting, "the existence of any error rate at all seems strongly disputed by some latent fingerprint examiners." (335) The court looks dimly on this. In one of its "three important applications" of "[t]he principle that cross-examination and counter-experts play a central role in the Rule 702 regime," (336) the court notes that
   district courts will generally act within their discretion in
   excluding testimony of recalcitrant expert witnesses--those who will
   not discuss on cross-examination things like error rates or the
   relative subjectivity or objectivity of their methods. Testimony
   at the Daubert hearing indicated that some latent fingerprint
   examiners insist that there is no error rate associated with their
   activities.... This would be out-of-place under Rule 702. But we do
   not detect this sort of stonewalling on the record before
   us. (337)


Here, then, is a welcome and long overdue judicial repudiation of latent print examiners' claim of a "zero methodological error rate." The only baffling part is the court's patently false assertion that such claims were not made "on the record before us," when, as we have seen, (338) the claim originated and was most fully developed in the very record before the court. There is no record in which the claim of a zero error rate was made earlier, nor any record in which it was made more forcefully.

In sum, not only do courts gullibly accept the claim of the zero "methodological error rate," they also parrot totally unsupported assertions from latent print examiners that the so-called "practitioner error rate" is "vanishingly small," "essentially zero," "negligible," "minimal," or "microscopic." These assertions are based on no attempt to responsibly estimate the "practitioner error rate"; they are based solely on latent print examiners' confidence in their own practice. Confidence, as we know from the world of eyewitness identification, does not necessarily equate with accuracy. (339) A sign of hope, however, recently emerged from a concurring opinion in the Court of Appeals of Utah, which suggested that "we should instruct our juries that although there may be scientific basis to believe that fingerprints are unique, there is no similar basis to believe that examiners are infallible." (340)

"Methodological error rate" might be viewed not merely as a product of latent print examiners' and prosecutors' misunderstanding of the notion of error rate, but, worse, as a deliberate attempt to mislead finders of fact. The concern over the potential for a finder of fact to give inflated credence to evidence clad in the mantle of science is embedded in the very notion of having an admissibility barrier. (341) The potential to mislead a fact-finder by saying, "My methodological error rate is zero, and my practitioner error rate is negligible," is extremely high. The "methodological error rate" is a bankrupt notion that should have been immediately rejected when it was first proposed. Indeed, it probably would have been, had it not been advanced in defense of something with such high presumed accuracy as latent print identification. Since it was not rejected, courts should do as the Third Circuit said (if not as it did) and exclude any testimony claiming that the error rate of latent print identification (or, for that matter, anything) is zero because of the extreme danger that fact-finders will give it credence. If they do not, then all sorts of expert and non-expert witnesses will be able to invoke this notion as well. Why should the manufacturer of litmus paper not be able to claim that her litmus paper has a zero "methodological error rate" because substances are either acid, base, or neutral? Why not allow the eyewitness to claim a zero "methodological error rate" because only one person was seen? Why not allow a medium to claim a zero "methodological error rate" because the defendant is either guilty or innocent? Why not allow all pregnancy tests to claim a zero "methodological error rate" because all women either are pregnant or are not? The scientific and forensic scientific communities should also explicitly disavow the notion of "methodological error rate" as it is framed by latent print examiners.

C. ACCOUNTING FOR ERROR

In one sense, the claim of a zero "methodological error rate" is merely a rhetorical ploy to preserve fingerprinting's claim to infallibility. But, at the same time, it has a more insidious effect. The insistence upon "methodological" infallibility serves to deter inquiry into how the process of fingerprint analysis can produce errors. This, in turn, hampers efforts to improve the process of fingerprint analysis and, possibly, reduce the rate of error. Only by confronting and studying errors can we learn more about how to prevent them. (342)

The mechanism for assigning all errors to the category of "human error" is attributing them to "incompetence." Elsewhere I have explored the sociological dimensions of the fingerprint profession's mechanisms for sacrificing practitioners who have committed exposed false positives on the altar of incompetence, in order to preserve the credibility of the technique itself. (343) In fingerprint identification, incompetence is said to be the cause of all known cases of error--at least all of those that are not assigned to outright fraud or malfeasance. These attributions of incompetence, as we shall see, are made in a retrospective fashion and without evidence. In short, the only evidence adduced in favor of the claim that the examiner was incompetent is the same thing incompetence is supposed to explain: the exposed misattribution. Incompetence then supports a variant on the "zero methodological error rate" argument: the claim that "the technique" is infallible as long as "the methodology" is applied correctly. Again, attributions of incorrect application of the methodology are made in a retrospective fashion without evidence. It is the exposed error that tells us that correct procedures were not followed.

Fingerprint examiners steadfastly maintain that the process is error-free in competent hands. Ashbaugh states, "When an examiner is properly trained a false identification is virtually impossible." (344) Wertheim pere asserts, "Erroneous identifications among cautious, competent examiners, thankfully, are exceedingly rare; some might say 'impossible.'" (345) Wertheim fils flatly declares, "a competent examiner correctly following the ACE-V methodology won't make errors." (346) And, elsewhere: "When coupled with a competent examiner following the Analysis, Comparison, Evaluation process and having their work verified, fingerprint identification is a science, the error rate of the science is zero." (347) Beeton states, "As long as properly trained and competent friction ridge identification specialists apply the scientific methodology, the errors will be minimal, if any." (348) These arguments can be sustained, even in the face of exposed cases of misidentification committed by IAI-certified, or otherwise highly qualified, examiners, only by retrospectively deeming individuals who have committed exposed false positives incompetent.

Thus, Sedlacek, Cook, and Welbaum, the three examiners implicated in the Caldwell case, were deemed incompetent, despite being IAI-certified. (349) In the Cowans case, Massachusetts Attorney General Thomas Reilly, after failing to secure a criminal indictment for perjury against LeBlanc, stated, "Science is not an issue in this case. What we know is that there is a right way to do this and the right way was not followed." (350) LeBlanc himself said, curiously, "The system failed me. And the system failed Cowans." (351)

Regarding the Jackson case, Agent Meagher stated
   I think this was a--a case where you need to really look at the
   qualifications of the examiners. Having looked at the prints, I
   would certainly say that these individuals were lacking the
   necessary training and experience needed to take on that level of a
   comparison examination that they did. (352)


Again, one of the three examiners implicated in the disputed attribution (Creighton) was IAI-certified, and, therefore, should be difficult to deem incompetent. On paper, the IAI-certified expert Creighton, who was "wrong," was no less qualified than the IAI-certified experts Wynn and McCloud, who were "right." It is only because we now agree with Wynn and McCloud that we deem Creighton incompetent. Notice the circularity of Meagher's argument: "Having looked at the prints, I would certainly say that these individuals were lacking...." (353) It is by looking at the evidence that we are able to judge the expert's competence. Yet, in all routine fingerprint cases it is only by looking at the competence of the expert that we are able to judge the evidence!

This approach to error raises the problem of the unreliability of mechanisms to expose incompetence. Imagine, for instance, that Jackson had fewer resources to marshal in his defense and had either been unable to procure defense experts or had procured less able defense experts who had corroborated the misidentification. The examiners who made the misidentification would now be presumed competent. Indeed, according to the logic put forward by proponents of fingerprint identification, the jury would be justified in believing--or being told--that forensic fingerprint identification, when in the hands of "competent" experts such as these, is error-free. Alternatively, consider the case of the identifications made by these experts just before they took on the Jackson case. Should the experts be deemed competent in these judgments or incompetent?

Finally, it should be noted that all of these attributions of incompetence are simply postulated. No evidence was advanced to show that Sedlacek, Cook, Welbaum, or Creighton were incompetent. Instead, the presumed misattributions serve as the sole evidence of incompetence.

1. Incompetence as a Hypothesis

At root, incompetence as an explanation for error is a hypothesis. Proponents of forensic identification attribute all exposed errors to incompetence. This may or may not be correct, but the answer cannot be known simply by assuming the conclusion.

Consider once again the analogy with airplane crashes, an area where the adjudication of the attribution of an accident to a category of error (pilot or mechanical) is often hotly contested and highly consequential. In this case, there are actors with an interest in both attributions of error (the manufacturer and its insurer favor pilot error; the pilots' union--and perhaps the victims, seeing the manufacturer as having the deeper pockets--favor mechanical error). Clearly, reasons must be given to attribute the error to one cause or the other. Although the attribution may be contested, both sides must adduce evidence in favor of their hypothesized cause of error.

In the cases discussed above, the attribution of incompetence is circular. No evidence is offered that the examiner is incompetent other than the fact that he or she participated in an error. The fingerprint establishment "knows" that the examiner is incompetent only because it disagrees with that examiner's conclusion in a particular case. Thus, the fingerprint establishment's judgment of the examiner's competence is based, not on any objective measure of competence, but solely on whether it agrees with the examiner's conclusions in one case.

The effect of this is the creation of what might be called "a self-contained, self-validating system." Observe:

1. The proposition is urged that: Forensic fingerprint identification is 100% accurate (error-free) when performed by a competent examiner.

2. This proposition can only be falsified (refuted) by the demonstration of a case in which a "competent" examiner makes an error.

3. When cases of error are exposed, the examiners implicated are immediately, automatically, and retrospectively deemed "incompetent."

4. No exposed error--and no number of exposed errors--can refute the proposition.

5. The proposition cannot be refuted.

Note also another effect of this: all criminal defendants are forced into the position of assuming that examiners in their cases are competent. Since incompetence is only exposed in a retrospective fashion (i.e., by making a misidentification) and such examiners are almost always excommunicated from the profession, all criminal defendants are subject to the "infallible" competent examiner. (354)

The remarkable thing is that we can easily imagine a state of affairs in which the proposition urged in (1) above can be tested. All we need is some measure of competence that is not circular, that does not depend on exposed misidentifications. For instance, one might reasonably treat the IAI's certification examination as a measure of competence. In that case, we would reason as follows:

1. The proposition is urged that: Forensic fingerprint identification is 100% accurate (error-free) when performed by a competent examiner.

2. Passage of the IAI certification test is a measure of competence.

3. The proposition may now be falsified by the exposure of a case in which an IAI-certified examiner is implicated in a misidentification. (Of course, in true falsificationist fashion, even if no such case is exposed, we still do not know that the proposition is true.)

4. IAI-certified examiners have been implicated in misidentifications (supra Part II.A.4.d).

5. The proposition is false.

Note that this way of reasoning about error does not, contrary to what some might suggest, cause the sky to fall upon forensic fingerprint identification. All we have arrived at is the rather reasonable position that forensic fingerprint identification is not error-free. Fingerprint examiners admit this. But they attempt to have their cake and eat it too, by insisting on some mythical error-free zone that is unsullied by exposed cases of error.

The real danger of attributing error to incompetence is that it works just as well, regardless of the actual accuracy of the technique. In fact, the tragic irony of forensic fingerprint identification is that, even though it may be highly accurate, it adopts modes of reasoning and argumentation so obscurantist that they would work as well even if it were highly inaccurate. (355)

2. Alternate Theoretical Approaches

As a hypothesis, the assignment of all exposed errors to incompetence is unpersuasive. The range of circumstances, even in the very small data set of exposed cases, is extremely broad. Errors have been committed in obscure local law enforcement agencies by unheralded practitioners (Trogden) (356) and by the elite of the profession in the highest profile cases imaginable (Mayfield). (357) These examples suggest that error does not necessarily require an explanation; it is part of normal practice and is hardly surprising. All areas of scientific and technical practice are infused with error and have to confront and try to understand their own sources of error. Indeed, in some areas of science, like astronomy, as Professor Alder has recently eloquently described, the understanding of error is, in some ways, the core of the scientific work. (358)

Thus, one consequence of insisting upon incompetence as the explanation for all errors is that it prevents us from understanding anything about fingerprint errors. In place of the fingerprint community's unhelpful and unsupportable insistence upon assigning all errors to incompetence, I will suggest two sociological frameworks for thinking in a realistic way about forensic errors.

a. The Sociology of Error

One way of understanding the fingerprint community's insistence on the incompetence hypothesis draws from a sociology of science notion called "the sociology of error." (359) This refers to the tendency, in commenting on science, to invoke "external causes," such as sociological or psychological phenomena, asymmetrically, to explain only incorrect results, not correct ones. Correct results are attributed solely to "nature," whereas false results are attributed to bias, ambition, financial pressure, and other such causes. For example, it has become commonplace to attribute Martin Fleischmann and Stanley Pons's premature announcement of having achieved cold fusion to "psychological" and "sociological" explanations--greed, ego, ambition, and the excessive pressure to publish first that pervades contemporary science. (360) However, such explanations cannot explain incorrect results unless it is implausibly assumed that these psychological and sociological forces are not operative when science yields purportedly "correct" results. As Bloor puts it:
   This approach may be summed up by the claim that nothing makes
   people do things that are correct but something does make, or cause
   them to go wrong.

   The general structure of these explanations stands out clearly. They
   all divide behaviour or belief into two types, right or wrong, true
   or false, rational or irrational. They then invoke sociological or
   psychological causes to explain the negative side of the division.
   Such causes explain error, limitation and deviation. The positive
   side of the evaluative divide is quite different. Here logic,
   rationality and truth appear to be their own explanation. Here
   psycho-social causes do not need to be invoked.... The central
   point is that, once chosen, the rational aspects of science are
   held to be self-moving and self-explanatory. Empirical or
   sociological explanations are confined to the irrational....
   Causes can only be located for error. Thus the sociology of
   knowledge is confined to the sociology of error. (361)


We can see the operation of this logic in latent print examiners' self-analysis. Incompetence, prosecutorial pressure, over-haste, a "bad day," vigilantism, and so on are invoked to explain errors. But presumably, if these factors were in force when errors were produced, they were also in force when supposedly "correct" results were produced as well.

As an antidote to the sociology of error, Bloor proposed the principles of "impartiality" and "symmetry." Bloor proposed that sociological explanations of the production of scientific knowledge would have to be capable of explaining the production of both "false" and "correct" beliefs (impartiality). And, the same causes would have to explain both "false" and "correct" beliefs (symmetry). (362)

We might begin to apply an impartial, symmetric analysis to fingerprint misattributions. The fingerprint community's inquiries into its own errors tend to fall exactly into the sociology of error. Once it is determined that the conclusion was in error, retrospective explanations are sought as causes of the erroneous conclusions. But there is absolutely no evidence that fingerprint misattributions are caused by "the process" gone awry. (Indeed, because latent print examiners do not record bench notes--document what leads to their conclusions--there would be no way of demonstrating this even if it were true.) It is more likely that whatever process produces correct results also sometimes produces incorrect results.

If it were true that fingerprint errors had different superficial attributes from correct conclusions, detecting errors would not be difficult. We could simply devise ways of detecting incompetent examiners, bad days, high-pressure laboratories, and so on. But the insidious thing about fingerprint misattributions is that they look just like correct attributions, until we identify them as misattributions.

In short, retrospective explanations of fingerprint misattributions will not help us learn to identify them prospectively. This is the intended meaning of my epigraph--not, as the reader may have initially assumed, to liken latent print examiners to charlatans. The epigraph highlights, with absurd precision, the obvious point that the insurance scam only works because the mark cannot prospectively tell the difference between an honest insurance salesman and an imposter. The same is true of a fingerprint identification. The criminal justice system has no way of prospectively distinguishing between correct latent print attributions and misattributions. But, more importantly, it is true of the latent print examiner as well. A falsely matching known print (an imposter) presumably looks much the same as a truly matching one. What this leaves us with is an empirical question about latent print examiners' ability to detect imposters. (363) All the rest of it--good intentions, the fact that there is only one finger that left the print--is beside the point. Latent print examiners are not the phony insurance salesmen of my epigraph; they are the victims, the unwitting consumers.

For instance, in the wake of the Mayfield case, some latent print examiners have declared that they "do not agree with the identification." (364) But whether a latent print examiner agrees with an identification that is posted on the Internet as a misattribution is of little interest to us. As my epigraph suggests, we want to know whether latent print examiners can distinguish the fake insurance salesmen from the real ones before they know they're phony, not after.

b. Normal Accidents Theory

Another way of looking at this problem is drawn from "normal accidents theory" (NAT). (365) Professor Perrow suggests that many catastrophic failures of technical systems are caused, not by deviation from proper procedure, but by the normal functioning of highly complex and "tightly coupled" (366) systems (hence the term "normal accidents"). Fingerprint analysis is not highly complex, but it is tightly coupled. (367) Similarly, Professor Vaughan suggests that error and misjudgment can be part of normal behavior, not necessarily caused by deviance. (368) NAT would suggest that fingerprint errors are not pathological deviations from normal procedure, but simply consequences of normal activity.

Perrow's analysis of marine accidents is suggestive of the type of "normal accident" that a latent print misattribution might be. These are accidents that are to some extent caused by creating an erroneous image of the world and interpreting all new, potentially disconfirming, information in light of that "expected world." As Perrow puts it:
   [W]e construct an expected world because we can't handle the
   complexity of the present one, and then process the information that
   fits the expected world, and find reasons to exclude the information
   that might contradict it. Unexpected or unlikely interactions are
   ignored when we make our construction. (369)


Now consider Wertheim pere's description of latent print identification:
   [T]he examiner would proceed with experimentation (finding features
   in the latent print, then examining the inked print for the same
   features) until the instant that the thought first crystallizes that
   this is, in fact, an identification.... The examiner continues to
   search for new features until it is reliably proven that each time a
   new feature is found in the latent print, a corresponding feature
   will exist in the inked print. (370)


While Wertheim thinks he has described "science," he has in fact described a process of gradually biasing his analysis of new information based on previously analyzed information. Could this be what happens in a fingerprint misattribution? Could it be that an examiner, having formed a hypothesis that two prints come from a common source, interprets potentially disconfirming information in a manner consistent with this hypothesis? Could this explain why latent print examiners make misattributions that in retrospect seem clearly erroneous?

3. Alternate Causes of Error

I have argued that we need to confront and understand the nature of fingerprint error, rather than minimizing it, dismissing it, or retrospectively blaming it on incompetence. I will now suggest two possible causal mechanisms for fingerprint errors. While I cannot demonstrate a causal relationship between these factors and fingerprint errors, or arbitrate between these two mechanisms, I would suggest that they are at least as likely to be causal mechanisms as incompetence.

a. Natural Confounding

The first alternate hypothesis is that disputed attributions are caused by the existence of areas of friction ridge skin on different persons' fingertips that, while not identical, are in fact quite similar. This possibility has been totally dismissed by the fingerprint community in its insistence upon the absolute uniqueness of all areas of friction ridge skin, no matter how small. This claimed uniqueness is supposed to rest upon a "law" that nature will never repeat the same pattern exactly. Even accepting this flawed argument for the moment, it does not follow that nature cannot produce confounding patterns. In other words, nature might produce areas of friction ridge skin that, though not identical, are quite similar--similar enough to be confounded when using the current tools of analysis (i.e., "ACE-V").

In some sense this would be analogous to what, in forensic DNA typing, is called an "adventitious" or "coincidental" match. This refers to the fact that, given a certain DNA profile, a certain number of individuals may be expected to match the profile, even though they did not, in fact, leave the crime-scene sample. This expectation is phrased as the "random match probability." (371) There is an important difference between an adventitious match in DNA and the analogous phenomenon in fingerprinting. In a DNA adventitious match, the samples do in fact match. In other words, there is no way of knowing that the match is "adventitious" rather than "true," other than, perhaps, external factors that make the hypothesis that the identified individual is the true source of the match implausible (such as, that the individual was incarcerated at the time).
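
To make the analogy concrete, here is a minimal back-of-the-envelope sketch; the figures are purely illustrative assumptions, not numbers drawn from any actual DNA or fingerprint database. If a profile's random match probability is p and a database of N unrelated individuals is searched, the expected number of adventitious matches is approximately

   E[\text{adventitious matches}] \approx N \cdot p, \qquad \text{e.g., } p = 10^{-6},\; N = 10^{5} \;\Rightarrow\; E \approx 0.1.

A probability that is negligible for any single comparison, in other words, need not remain negligible once multiplied by the number of comparisons a database search performs.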

Because a fingerprint match is a subjective determination, there is a sense in which an adventitious fingerprint match does not "really" match. That is, once the match has been deemed erroneous one can find differences between the two prints. There are always differences between print pairs. In an identification, however, these differences are deemed explainable by the examiner.

b. Bias

A second alternative hypothesis is bias. Bias can come in many forms. The tendency of forensic scientists to suffer from "pro-prosecution bias," which may be more or less conscious, as a consequence of being law enforcement agents, of identifying closely with them, or simply of working closely with them, has been well noted. (372) This certainly might be problematic in fingerprint identification, where a high proportion of practitioners are law enforcement officers and virtually all practitioners acquired their expertise through work in law enforcement.

However, by "bias" I also mean to refer to a politically neutral form of psychological bias that has nothing to do with the analyst's conscious or unconscious feelings about the parties in criminal cases. In a groundbreaking article on "observer effects" in forensic science, Professors Risinger, Saks, Thompson, and Rosenthal draw on psychological literature to support the argument that forensic technicians engaged in tasks of determining whether two objects derive from a common source, like latent print identification, are subject to "expectation bias." (373) In other words, the very process of what, in the so-called "ACE-V methodology," is called "Comparison"--going from the unknown to the known print--to see if the ridge detail is "in agreement" may create an expectation bias. Features seen in the unknown print may be more likely to be "seen" in the known print or, even more insidiously, vice versa. These effects will tend to cause observers to expect to see similarities, instead of differences, and may contribute to erroneous source attributions. Risinger et al. note that observer effects are well known in areas of science that are highly dependent on human observation, like astronomy, and these disciplines have devised mechanisms for mitigating, correcting, and accounting for them. Forensic science, however, has remained stubbornly resistant to even recognizing that observer effects may be in force. (374)

Latent print identification--in which no measurements are taken, but a determination is simply made by an examiner as to whether she believes two objects derive from a common source--is a prime candidate for the operation of observer effects. Several factors support the plausibility of the observer effects hypothesis. First, many of the disputed identifications discussed above were confirmed by second, third, and even fourth examiners. (375) Since there is no policy in place for blinding verifiers from the conclusion reached by the original examiner, these examiners almost surely knew that their colleagues had reached conclusions of identification. This suggests that examiners are indeed subject to expectation and suggestibility and that these forces can cause them to corroborate misattributions. If expectation bias causes latent print examiners to corroborate misattributions, could it cause them to generate them as well?

Even more suggestive are the cases in which examiners employed by the defense corroborated disputed attributions. That defense examiners sometimes corroborate disputed attributions would suggest that expectation and suggestion are so powerful they can overcome the defense expert's presumed pro-defendant bias. If anything, we would expect defense examiners to be biased in favor of exclusion because they are working for clients with an interest in being excluded. (376) The work of a defense examiner likely consists mainly of confirming that the state's examiners did in fact reach the right conclusion. This may create a situation in which the defense examiner expects virtually all print pairs put before her to match. The fact that defense examiners have corroborated disputed identifications indicates that expectation bias may be even more powerful than the expert's bias toward the party retaining her.

4. The Mayfield Case

It will be useful to explore the possible roles of natural confounding and observer effects by returning to what is perhaps the richest and most theoretically interesting (as well as the most recent and sensational) misattribution case: the false arrest of Brandon Mayfield. In the wake of the uproar, the FBI promised a review by "an international panel of fingerprint experts." (377) That review is now complete, and the FBI has published a "synopsis" of the International Review Committee's findings. (378)

The report adopts the rhetorical distinction between "human" and "methodological" error, claiming, "The error was a human error and not a methodology or technology failure." (379) The claim that the Mayfield error somehow did not involve "the methodology" as properly practiced is particularly difficult to sustain given the impeccable credentials of the laboratory, the individual examiners, and the independent expert.

The most easily dismissed hypothesis was that the error was caused by the digital format in which the Madrid print was transmitted to the FBI. (380) A second hypothesis posed by an anonymous FBI official in the press was "that the real issue was the quality of the latent print that the Spaniards originally took from the blue bag." (381) But, this explanation can also be dismissed because the Spanish were apparently able to effect an identification to Daoud from the latent print, so the latent print was presumably of adequate quality.

As mentioned above, (382) the report singles out the high-profile nature of the case as an explanation for the error. This conclusion is interesting--and quite damaging to latent print identification's claims to objectivity. If latent print identification is less reliable in high-profile cases, then how objective can the analysis be? But, pending further evidence, the conclusion is unpersuasive. The report offers no evidence, such as statements by the examiners, as to how the high-profile nature of the case might have influenced them. Instead, because an error occurred in a high-profile case, the report simply assumes that a causal relationship exists.

There is no reason for us to accept this hypothesis as more persuasive than the NAT hypothesis: that the error was a product of normal operating procedure and that, if anything, the high-profile nature of the case is an explanation for the error's exposure, not its occurrence.

At bottom, blaming the error on the nature of the case is merely a continuation of the rhetorical strategy of seeking to dismiss all errors as exceptional cases. The latent print community's characterization of the error as "the perfect storm" (383) illustrates the effort to portray the case as so exceptional that it remains irrelevant to all other cases.

Ultimately, the report itself identifies the reason that we are unlikely ever to find a persuasive explanation for the error. Because latent print examiners do not keep bench notes (they do not document their findings), (384) it is nearly impossible to retrospectively reconstruct a misattribution.

Given the impossibility of reconstructing the examiner's subjective process, let us explore the possibilities of natural confounding and bias. Could the Mayfield error be due to natural confounding? The FBI press release refers to "the remarkable number of points of similarity between Mr. Mayfield's prints and the print details in the images submitted to the FBI." (385) The possibility that the Mayfield case represents the first exposed "adventitious cold hit" (386) in a latent print database is intriguing.

As I have noted elsewhere, the idea of searching latent prints in some sort of seamless global database has been an unfulfilled dream throughout the twentieth century. (387) Only today is computer technology beginning to make such a "global database" possible, although there are still formidable problems with making national and regional databases technically compatible. Latent print examiners' flawed argument that, in the course of filing and searching fingerprint records, they had never come across two identical prints, was always based on searches of local databases. Since a truly global search was impractical, fingerprint examiners extrapolated from the absence of duplicates in local databases the much broader principle that, were one to search all the world's databases, one would not find duplicates either. Today, functionally global searches are becoming practicable in high-profile cases (such as an alleged Al Qaeda terrorist attack on European soil). Given the nature of the case and the rapidly advancing technology, the Madrid print may have been one of the most extensively searched latent prints of all time. It may be that the Mayfield case demonstrates what happens when one actually does a global search: one finds very similar, though not completely identical, areas of friction ridge skin. (388) This is analogous to a phenomenon long observed by DNA analysts: as the size of the databases increases, the likelihood of an adventitious cold hit increases as well. (389)
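
The scaling claim can be sketched in the same hedged, back-of-the-envelope terms as before (again, the figures are illustrative assumptions only). If each of N database entries independently has some small probability p of being confusably similar to a given latent print, then

   P(\text{at least one confusable candidate}) = 1 - (1 - p)^{N},

which climbs toward certainty as N grows: with p = 10^{-7}, the probability is roughly 0.001 when N = 10^{4}, but roughly 0.63 when N = 10^{7}. On these assumptions, a "global" search is categorically more dangerous than a local one.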

Oddly enough, what makes the natural confounding hypothesis seem less plausible are precisely the misleading "suspicious facts" about Mayfield: his conversion to Islam, his Egyptian spouse, his military service, and his connection to the "Portland Seven." If the Mayfield error were purely an adventitious cold hit--a case of a computer searching for the closest possible match to a latent print created by Daoud and then gulling the examiner into making a misattribution--what is the likelihood that the victim of the adventitious cold hit would be an individual with such seemingly plausible connections to an Al Qaeda operation, as opposed to, say, an octogenarian evangelical Christian with a criminal record? This suggests that the "suspicious facts" about Mayfield may have been introduced into the latent print identification process at some point, at least "firming up," if not actually generating, the misattribution. (390) If facts about Mayfield did influence latent print examiners, then it was a highly improper introduction of "domain-irrelevant information" (391) into what should have been a technical analysis. While it would be proper for an investigator to use the "suspicious facts" about Mayfield to evaluate the plausibility of the latent print match, it is highly dangerous for a forensic technician to do so. (392) But an anonymous FBI source has strenuously denied that the latent print analysts knew anything about Mayfield before they made the attribution. (393)

Even without domain-irrelevant information, the possibility of unconscious bias ("observer effects") remains strong. The initial analyst, Green, may have been induced to seek the best possible match among those produced by the database search. The "verifiers," Wieners and Massey, may have been unconsciously influenced by the fact that Green had made an attribution. Then Moses, who did know the domain-irrelevant information, but whose bias ought to have pointed away from attribution, also corroborated the false attribution. (394)

Even in the face of the Mayfield case, the fingerprint community continues to seek to minimize the significance of error. Wertheim pere, for example, advised his colleagues to give the following testimony when asked about the Mayfield case:

A: (turning to the jury) The FBI fingerprint section was formed in 1925. Over the last 79 years, the FBI has made tens of thousands, hundreds of thousands, probably millions of correct identifications. So now they finally made a mistake that led to the arrest of an innocent man, and that is truly a tragic thing. But figure the "error rate" here. Are fingerprints reliable? Of course they are. Can mistakes be made? Yes, if proper procedures are not strictly followed. But I cannot think of any other field of human endeavor with a track record of only one mistake in 79 years of practice. (395)

To interpret Mayfield as showing that the FBI has made only one error in seventy-nine years, as opposed to only having had one error exposed in seventy-nine years, exhibits a complete denial of the exposure problem detailed above (supra Part II.A.4.c).
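
The arithmetic of the exposure problem is simple to state (the exposure probability below is an assumption adopted purely for illustration). If each error has only a probability q of ever coming to light, then exposed errors will number, on average, q times the actual errors, so that

   \text{estimated actual errors} = \frac{\text{exposed errors}}{q}, \qquad \text{e.g., } \frac{1}{0.01} = 100.

On that illustrative assumption, one exposed error over seventy-nine years is consistent with roughly one hundred actual errors; the exposed count supplies a floor, not an estimate.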

CONCLUSION

As I have argued elsewhere, the myth of the infallibility of fingerprint identification is in many ways a historical accident. (396) I suggest, with Iain McKie, that it is a burden that fingerprint examiners never ought to have been asked to shoulder and never ought to have assumed. (397) Unable to resist the offer of infallible expert witness status, fingerprint examiners have now painted themselves into a corner in which they must resort to rhetorical gymnastics in order to maintain the claim of infallibility in the face of mounting evidence of error. We can help them out of this corner and give finders of fact a more realistic way of assessing the trustworthiness of latent print attribution, but the examiners will have to leave the "zero methodological error rate" behind.

We need to acknowledge that latent print identification is susceptible to error, like any other method of source attribution, and begin to confront and seek to understand its sources of error. I have drawn some tentative conclusions in this paper based on what is probably a very inadequate data set of exposed errors in the public record. Some of these conclusions may not be sustained once a more complete data set is obtained. One way to begin the process of studying error would be for law enforcement agencies and the professional forensic science community to begin assembling a more complete data set of latent print errors. In addition, the IAI should put in place a regular mechanism for reviewing cases of disputed identifications, as was done in Jackson. This mechanism should be known and publicized, without hesitation over the fact that it will expose latent print attributions as sometimes erroneous. Then latent print error will no longer be "a wispy thing like smoke." (398)

(1) Application for Material Witness Order and Warrant Regarding Witness: Brandon Bieri Mayfield, In re Federal Grand Jury Proceedings 03-01, 337 F. Supp. 2d 1218 (D. Or. 2004) (No. 04-MC-9071).

(2) Id.

(3) Les Zaitz, Transcripts Detail Objections, Early Signs of Flaws, OREGONIAN, May 26, 2004, at A1; Noelle Crombie & Les Zaitz, FBI Apologizes to Mayfield, OREGONIAN, May 25, 2004, at 1; Andrew Kramer, Fingerprint Science Not Exact, Experts Say, ASSOCIATED PRESS, May 21, 2004, available at http://www.msnbc.msn.com/id/5032168; see also Steven T. Wax & Christopher J. Schatz, A Multitude of Errors: The Brandon Mayfield Case, 28 CHAMPION 6 (2004). There is some ambiguity as to whether Moses was retained by Mayfield or by the court. Moses's retention was apparently proposed by Mayfield, but Moses was then appointed by the court so that his report would go directly to the court. Electronic communication from Les Zaitz, Reporter, The Oregonian, to author (Sept. 7, 2004) (on file with the author). In any case, it is clear that Moses's role was to provide an independent assessment of the evidence.

(4) Press Release, Federal Bureau of Investigation, Statement on Brandon Mayfield Case, (May 24, 2004) [hereinafter FBI Press Release].

(5) See Jonathan Saltzman & Mac Daniel, Man Freed in 1997 Shooting of Officer: Judge Gives Ruling After Fingerprint Revelation, BOSTON GLOBE, Jan. 24, 2004, at A1.

(6) Id.

(7) Id.

(8) Id.

(9) The Innocence Project, at http://www.innocenceproject.org/ (last visited May 8, 2005).

(10) David Weber & Kevin Rothstein, Man Freed After 6 Years: Evidence Was Flawed, BOSTON HERALD, Jan. 24, 2004, at 4.

(11) See, e.g., FEDERAL BUREAU OF INVESTIGATION, THE SCIENCE OF FINGERPRINTS: CLASSIFICATION AND USES, at iv (1985) ("Of all the methods of identification, fingerprinting alone has proved to be both infallible and feasible.").

(12) Sharon Begley, Despite Its Reputation, Fingerprint Evidence Isn't Really Infallible, WALL STREET JOURNAL, June 4, 2004, at B1; Simon A. Cole, Fingerprints Not Infallible, 26 NAT'L L.J. 22 (Feb. 23, 2004); Kramer, supra note 3.

(13) See, e.g., Commonwealth v. Loomis, 113 A. 428, 430 (Pa. 1921).

(14) Id.; Commonwealth v. Loomis, 110 A. 257 (Pa. 1920); Albert S. Osborn, Proof of Finger-Prints, 26 AM. INST. CRIM. L. & CRIMINOLOGY 587, 587 (1935).

(15) See, e.g., Flynn McRoberts et al., Forensics Under the Microscope, CHI. TRIB., Oct. 17, 2004, at 1.

(16) Id.

(17) Steve Scarborough, They Keep Putting Fingerprints in Print, WEEKLY DETAIL, Dec. 13, 2004, available at http://www.clpex.com/Articles/TheDetail/100-199/TheDetai1174.htm.

(18) 509 U.S. 579, 593 (1993).

(19) Id. at 589.

(20) Id. at 594. The Court's phrasing of its "error rate" requirement was admittedly rather vague. Part of the confusion probably stems from its use by the Daubert Court to demarcate reliable from unreliable science. Id. at 589. In most scientific pursuits, the term "error" usually refers to measurement error, the expected discrepancy between measured values and true values. This is something quite different from an error rate. Since Daubert is commonly read as an effort to describe what is distinctive about science, see, e.g., David S. Caudill & Richard E. Redding, Junk Philosophy of Science?: The Paradox of Expertise and Interdisciplinarity in Federal Courts, 57 WASH. & LEE L. REV. 685, 735-41 (2000), it might have made more sense for the Court to have referred to measurement error than to "error rate."

An error rate would tend to be more commonly associated with a process or technique. A litmus test is an obvious example. Litmus paper turns red when exposed to an acid. One might imagine calculating an error rate for different kinds of litmus paper by measuring how often they fail to turn red when exposed to an acid and how often they turn red when exposed to a substance that is not an acid. A pregnancy test might also be imagined to have an error rate. And birth control devices often have "failure rates" associated with them, although these are obviously highly sensitive to conditions of use.

There is, therefore, some potential confusion in the Court's use of "error rate" as one of its criteria for defining legitimate scientific knowledge. Some knowledge claims produced by areas of inquiry that most people would certainly consider "science," such as physics, would be hard-pressed to provide an "error rate" for their findings, or even to understand what would be meant by such a request. They would, on the other hand, readily understand what was meant by a request for their "measurement error." On the other hand, there are technical processes, like the production of "reliable" litmus paper (as opposed to the chemical principle underlying litmus paper), that could readily comply with a request for an "error rate," but would appear to most observers to be industrial production processes, rather than "science."

As it happens, forensic identification much more closely resembles a technical process than it does an open-ended search for knowledge, like a physics experiment. Forensic identification is a routine, repetitive procedure that yields, not new knowledge, but one of a prescribed set of possible results. As mentioned infra, Kumho Tire applies the Daubert factors, including error rate, to such technical processes that generate expert evidence. The results of such processes are either correct or incorrect, though it may not ever be possible to determine this. Forensic identification techniques, therefore, seem readily amenable to the estimation of error rates, the rates at which they yield incorrect results.

(21) 526 U.S. 137 (1999).

(22) Id. at 141-42.

(23) Id. at 151.
   At the same time, and contrary to the Court of Appeals' view, some
   of Daubert's questions can help to evaluate the reliability even of
   experience-based testimony. In certain cases, it will be appropriate
   for the trial judge to ask, for example, how often an engineering
   expert's experience-based methodology has produced erroneous
   results....


Id.

Professors Denbeaux and Risinger have pointed out that discussions of "error rate" in debates over applying the Daubert/Kumho standard to forensic science tend to ignore the requirement in Kumho Tire that the discussion be calibrated to the task at hand. Mark Denbeaux & D. Michael Risinger, Kumho Tire and Expert Reliability: How the Question You Ask Gives the Answer You Get, 34 SETON HALL L. REV. 15 (2003). While forensic document examination (Denbeaux and Risinger's principal example) involves a greater range of tasks than latent print identification, the tasks involved in latent print identification do vary greatly. The principal axis of variation for latent print identification concerns the difficulty of the comparison, and the principal component of this is the quality and quantity of information available in the unknown print. Common sense indicates that the "error rate" for very high quality latent prints (or very "easy" comparisons) should be quite different from the "error rate" for marginal latent prints (or very "difficult" comparisons). A rational attempt to assess the error rate of latent print identification should therefore yield not a single "error rate," but many error rates, or, rather, an "error curve" showing the estimated rate of error for different levels of latent print quality and quantity (or comparison difficulty). One key hindrance to generating this sort of information is the lack of an accepted metric for measuring either latent print quality and/or quantity or the difficulty of a comparison. So far, the only possible metric is the number of ridge characteristics in a print, which has been, with some justification, rejected as a metric by the latent print community, as being inconsistent and not derived from empirical research. Christophe Champod, Edmond Locard--Numerical Standards and 'Probable' Identifications, 45 J. FORENSIC IDENTIFICATION 136 (1995).

(24) United States v. Mitchell, 365 F.3d 215 (3d Cir. 2004).

(25) I am grateful to an anonymous reviewer for making this point.

(26) United States v. Havvard, 117 F. Supp. 2d 848, 854 (S.D. Ind. Oct. 5, 2000). Professor Starrs has suggested that "preposterous" or "unsupportable" would have made better word choices here. Online posting (Nov. 4, 2000), at http://onin.com/bums/messages/3/21.html?SaturdayMarch2320020950am.

(27) Government's Combined Report to the Court and Motions in Limine Concerning Fingerprint Evidence at 22, United States v. Mitchell, 199 F. Supp. 2d 262 (E.D. Pa. 2002) (No. 96-407), available at http://www.clpex.com/Information/USvMitchell/1PreDaubertHearingMotions/US_v_Mitchell_Govt_Response.pdf ("By following the scientific methodology of analysis, comparison, evaluation and verification, the error rate remains zero.").

(28) 60 Minutes: Fingerprints (CBS television broadcast, Jan. 5, 2003). In another interview, Meagher stated flatly that "its [latent print identification's] error rate is zero." Steve Berry, Pointing a Finger at Prints, L.A. TIMES, Feb. 26, 2002, at A1.

(29) GEORGE ORWELL, 1984, at 214 (1949) ("Doublethink means the power of holding two contradictory beliefs in one's mind simultaneously, and accepting both of them.").

(30) Pat A. Wertheim, The Connection: Faulty Forensics (NPR radio broadcast, June 10, 2004), available at http://www.theconnection.org/shows/2004/06/20040610_b_main.asp.

(31) KEITH INMAN & NORAH RUDIN, PRINCIPLES AND PRACTICE OF CRIMINALISTICS: THE PROFESSION OF FORENSIC SCIENCE 123 (2001).

(32) Id. at 133; CHRISTOPHE CHAMPOD ET AL., FINGERPRINTS AND OTHER RIDGE SKIN IMPRESSIONS 24 (2004).

(33) INMAN & RUDIN, supra note 31, at 141.

(34) Int'l Ass'n for Identification, Resolution VII, 29 IDENTIFICATION NEWS 1 (Aug. 1979) ("[A]ny member, officer or certified latent print examiner who provides oral or written reports, or gives testimony of possible, probable, or likely friction ridge identification shall be deemed to be engaged in conduct unbecoming such member, officer, or certified latent print examiner."); Int'l Ass'n for Identification, Resolution V, 30 IDENTIFICATION NEWS 3 (Aug. 1980) (amending the resolution to allow for such testimony, with qualifications, under threat of court sanction).

(35) Scientific Working Group for Friction Ridge Analysis, Study and Technology [hereinafter SWGFAST], Friction Ridge Examination Methodology for Latent Print Examiners [section] 3.3 (Aug. 22, 2002), version 1.01, available at http://www.swgfast.org/Friction_Ridge_Examination_Methodology_for_Latent_Print_Examiners_1.01.pdf [hereinafter SWGFAST, Methodology].

(36) Id.

(37) Sarah Kershaw & Eric Lichtblau, Spain Had Doubts Before U.S. Held Lawyer in Madrid Blasts, N.Y. TIMES, May 26, 2004, at A1; David Feige, Printing Problems: The Inexact Science of Fingerprint Analysis, SLATE (May 27, 2004), available at http://slate.msn.com/id/2101379; see also Application for Material Witness Order and Warrant Regarding Witness: Brandon Bieri Mayfield at 3, In re Federal Grand Jury Proceedings 03-01, 337 F. Supp. 2d 1218 (D. Or. 2004) (No. 04-MC-9071).

(38) See supra note 32 and accompanying text. People v. Ballard, No. 225560, 2003 Mich. App. LEXIS 547 (Mich. Ct. App. 2003), is a case in point. The court found that the latent print examiner's "testimony that she was '99 percent' certain that defendant's fingerprint was found in the stolen car ... had no demonstrated basis in an established scientific discipline.... " Id. at *9. The irony is that the examiner's undoing probably lay in naming a figure smaller than 100%.

(39) SWGFAST, Methodology, supra note 35, [section] 3.3.1.

(40) DAVID R. ASHBAUGH, QUANTITATIVE-QUALITATIVE FRICTION RIDGE ANALYSIS: AN INTRODUCTION TO BASIC AND ADVANCED RIDGEOLOGY 22 (1999).

(41) William F. Leo, Distortion Versus Dissimilarity in Friction Skin Identification, 48 J. FORENSIC IDENTIFICATION 125-26 (1998).

(42) SWGFAST, Methodology, supra note 35, at [section] 3.3.3.

(43) SWGFAST, Methodology, supra note 35, at [section] 3.3.1.

(44) SWGFAST, Methodology, supra note 35.

(45) ASHBAUGH, supra note 40.

(46) CHAMPOD ET AL., supra note 32, at 200 (recommending that verification should be blind only for especially difficult latent prints).

(47) Robert B. Stacey, A Report on the Erroneous Fingerprint Individualization in the Madrid Train Bombing Case, 54 J. FORENSIC IDENTIFICATION 706, 715 (2004).

(48) Pat A. Wertheim, re: Certification (To Be or Not to Be), 42 J. FORENSIC IDENTIFICATION 279, 280 (1992) [hereinafter Wertheim, re: Certification].

(49) Int'l Ass'n for Identification, Latent Fingerprint Certification, at http://www.theiai.org/certifications/fingerprint/index.html (last visited May 9, 2005).

(50) Wertheim, re: Certification, supra note 48, at 280. ("The IAI has never taken the position that persons in a particular field should be required to be certified in order to testify. Nor, to my knowledge, have any courts ever required expert witnesses to be certified by the IAI.").

(51) See generally Alexander Volokh, n Guilty Men, 146 U. PA. L. REV. 173, 174 (1997).

(52) In other contexts, one might be more concerned about false negatives than false positives. For example, one might apply the same technology--fingerprint identification--in airports to detect known terrorists. In that setting, false negatives (failing to identify a terrorist who boards an airplane) may be of greater concern than false positives (temporarily detaining an innocent person on suspicion of being a terrorist).

(53) See infra note 222 and accompanying text.

(54) H. M. COLLINS, CHANGING ORDER: REPLICATION AND INDUCTION IN SCIENTIFIC PRACTICE (1985).

(55) Dusty Clark, Latent Prints: A Forensic Fingerprint Impression Evidence Discussion Site, at http://www.latent-prints.com (last visited May 8, 2005); Craig Cooley, Law-Forensic.com, at http://www.law-forensic.com (last visited May 8, 2005); Ed German, Problem Idents, at http://onin.com/fp/problemidents.html (last visited May 8, 2005); Michele Triplett, Erroneous Identification, known cases of, in MICHELE TRIPLETT'S FINGERPRINT DICTIONARY, at http://www.nwlean.net/fprints/e.htm (last visited May 8, 2005).

(56) For example, in 1984 Lambourne wrote, "Due to the frank and open policies of our American counterparts we do know that since early 1981 five members of the International Association for Identification have had their certification revoked because of erroneous identifications...." G. T. C. Lambourne, Fingerprint Standards, 24 MED. SCI. & L. 227, 229 (1984). Three of these probably derived from the Caldwell case, infra Part II.A.3.d. Depending on when Lambourne actually wrote that statement, one of the examiners referred to may have been the one implicated in Midwestern, infra Part II.A.3.e. The fifth was probably Margaret Matthers, formerly with the Florida Department of Criminal Law Enforcement of Sanford, Florida, whose certification was revoked in August 1980 "for having furnished testimony to an erroneous identification." Certification Revoked, 31 IDENTIFICATION NEWS 2 (Feb. 1981) [hereinafter Certification Revoked, Feb.]. No further information on this erroneous identification was available, and it is unlikely to be among the cases reported here.

Similarly, in 1995 Professor Moenssens referred to "a great number of criminal cases [in which] an expert or consultant on fingerprint for the defense has been instrumental in seriously undermining the state's case by demonstrating faulty procedures used by the state's witnesses or by simply showing human errors in the use of fingerprint evidence." ANDRE MOENSSENS ET AL., SCIENTIFIC EVIDENCE IN CIVIL AND CRIMINAL CASES 516 (4th ed. 1995). It seems unlikely that all of Moenssens's "great number" of cases are represented in my study. In addition, Dr. David Stoney reports having discovered three erroneous attributions in "around 500" fingerprint cases that he has reviewed. David A. Stoney, Challenges to Fingerprint Comparisons, Address at Fingerprints: Forensic Applications, DePaul University Center for Law and Science (April 15, 2002). It is unlikely that all of Stoney's cases are represented in my study.

(57) Professor Gary Edmond points out that our treatment of supposed miscarriages of justice is "asymmetric." That is, once we have decided that the defendant was innocent, we interpret all the evidence in that light, just as the evidence was originally interpreted in light of the theory that the defendant was guilty. Gary Edmond, Constructing Miscarriages of Justice: Misunderstanding Scientific Evidence in High Profile Criminal Appeals, 22 OXFORD J. LEGAL STUD. 53 (2002).

(58) Michael Coit, Santa Rosa Woman Identified as Vegas Slaying Victim Turns Up Alive, SANTA ROSA PRESS DEMOCRAT, Sept. 13, 2002, at A1.

(59) Id.

(60) Saltzman & Daniel, supra note 5.

(61) In addition, there is some ambiguity between cases in which the consensus of latent print examiners is that the proper conclusion was "exclusion"--that is, that a print was attributed to someone who was not, in fact, its source--and cases in which the consensus of latent print examiners is that the proper conclusion was "inconclusive"--that is, a print was attributed to someone who may well have made it, but not enough information was available to make that determination. Obviously, the situation in these two scenarios is quite different, both scientifically and legally, but in many cases it is impossible to determine from the sources available which type of error has occurred.

(62) David L. Grieve, Built By Many Hands, 49 J. FORENSIC IDENTIFICATION 565, 574-75 (1999); David L. Grieve, Forest and Trees, 50 J. FORENSIC IDENTIFICATION 538 (2000); David L. Grieve, Getting Things Right, 50 J. FORENSIC IDENTIFICATION 229, 238 (2000); David L. Grieve, No Free Lunch, 50 J. FORENSIC IDENTIFICATION 426, 432 (2000).

(63) Kasey Wertheim, 2002-2003 Report from the Science and Practice Committee, 53 J. FORENSIC IDENTIFICATION 603, 604 (2003); Malcolm Graham, Your Comments on Fingerprints on Trial, BBC NEWS, May 19, 2002, available at http://news.bbc.co.uk/1/hi/programmes/panorama/1997258.stm; Letter from David A. Russell, Solicitor, Towells Solicitors, to the Lord Advocate, Crown Office (Apr. 28, 2005) (available at http://shirleymckie.com/documents/LetterRussellversion.pdf).

(64) These cases include: United States v. Alteme, No. 99-8131-CR-FERGUSON (S.D. Fla. 2000) (Hilerdieu Alteme); Commonwealth v. Siehl, 657 A.2d 490 (Pa. 1995) (Kevin Siehl) (Mr. Siehl is currently serving a sentence of life without parole for murder, based in part on fingerprint attribution which two experts have now declared was erroneous); Associated Press, Defendant Is Linked to 2 Prints, MIAMI HERALD, May 1, 1985, at 2D (Michael Lanier); Associated Press, Teen Cleared in Flute Death, MIAMI HERALD, May 5, 1985, at 6D (Michael Lanier); Email communication with Ralph Haber, June 22, 2004 (on file with author) (Jose Arelleno); Ralph Haber & Lyn Haber, Two Latent Prints Matched to Defendant with Absolute Certainty, to the Exclusion of all Others; and an Acquittal in Federal Court (Oct. 8, 2003) (unpublished manuscript) (on file with author) (Thomas Cooley).

(65) These cases may be construed as errors of a sort even if the defendant was in fact the source of the disputed print. This is because of a peculiar attribute that distinguishes latent print evidence from virtually every other type of expert evidence: Latent print examiners are not supposed to disagree about attributions. Simon A. Cole, Witnessing Identification: Latent Fingerprint Evidence and Expert Knowledge, 28 SOC. STUD. OF SCI. 687, 700 (1998) [hereinafter Cole, Witnessing Identification]. They are only supposed to go forward with attributions that all other qualified peers would corroborate. David R. Ashbaugh, The Premise of Friction Ridge Identification, Clarity, and the Identification Process, 44 J. FORENSIC IDENTIFICATION 499 (1994) ("Others with equal knowledge and ability must be able to see what you see."); Robert D. Olsen, Sr. & Henry C. Lee, Identification of Latent Prints, in ADVANCES IN FINGERPRINT TECHNOLOGY 41 (Henry C. Lee & R.E. Gaensslen eds., 2001) ("Above all, the experienced examiner knows that the validity of the identification can be demonstrated to the satisfaction of other qualified examiners."). If there is any doubt about whether peers would corroborate an attribution, latent print examiners are supposed to classify the comparison as "inconclusive." This is admittedly a curious practice, one that, if strictly adhered to, would result in the ruthless discarding of potentially probative evidence, but it is, of course, a necessary practice for latent print examiners to sustain their myth of infallibility. Cole, Witnessing Identification, supra, at 702; Simon Cole, What Counts for Identity? The Historical Origins of the Methodology of Latent Fingerprint Identification, 12 SCIENCE IN CONTEXT 139 (1999) [hereinafter Cole, What Counts for Identity?]. In any case, it is a principle to which latent print examiners claim to adhere. This suggests that the cases in the latter category are "errors" in that the examiners ought not to have gone forward with them because other qualified examiners declined to corroborate them. Although the prints in question may, in fact, belong to the individual to whom they were attributed, the evidence was not strong enough to constitute an "identification." To draw an analogy with studies of miscarriages of justice, my "misattributions" might be likened to cases of "actual innocence," and my "disputed identifications" might be likened to reversals, in which the defendant may or may not be, in fact, guilty of the crime, but, in either case, ought not to have been convicted.

For this reason, even the "disputed identifications" may properly be considered "errors" of some kind in that it was presumably poor judgment, or perhaps even poor ethics, for the examiner to go forward with the identification if it was so marginal that it would invite dispute. This is true even if the ground truth is that the print does, in fact, belong to the individual to whom it was attributed. Were such cases included, the misattributions data set that I present below would, of course, be significantly larger. Nonetheless, when I discuss errors in this paper, I will limit myself to the cases I have listed as "misattributions."

(66) Pat A. Wertheim, Problem Identifications, Latent Print Examination (June 4, 2000), at http://onin.com/bums/messages/3/16.html?ThursdayAugust320000441pm (describing the McKie case: "the 'identification' is so obviously erroneous that I must believe the four experts knew of the mistake long before the case came to trial"). Wertheim's argument is questionable, though, given that other experts, external to the case, have agreed with the four experts' conclusion. See supra sources cited note 63 and accompanying text.

(67) NELSON E. ROTH, THE NEW YORK STATE POLICE EVIDENCE TAMPERING INVESTIGATION, REPORT TO THE HONORABLE GEORGE PATAKI, GOVERNOR OF THE STATE OF NEW YORK (1997); Boris Geller et al., A Chronological Review of Fingerprint Forgery, 44 J. FORENSIC SCI. 963 (1999); Boris Geller et al., Fingerprint Forgery--A Survey, 46 J. FORENSIC SCI. 731 (2001); Pat A. Wertheim, Detection of Forged and Fabricated Latent Prints: Historical Review and Ethical Implications of the Falsification of Latent Fingerprint Evidence, 44 J. FORENSIC IDENTIFICATION 652 (1994).

(68) See BARRY SCHECK ET AL., ACTUAL INNOCENCE: WHEN JUSTICE GOES WRONG AND HOW TO MAKE IT RIGHT 160-62 (2003).

(69) Commonwealth v. Loomis, 113 A. 428 (Pa. 1921); Commonwealth v. Loomis, 110 A. 257 (Pa. 1920). For a more complete discussion, see SIMON A. COLE, SUSPECT IDENTITIES: A HISTORY OF FINGERPRINTING AND CRIMINAL IDENTIFICATION 192 (2001) [hereinafter COLE, SUSPECT IDENTITIES].

(70) Loomis, 110 A. at 258.

(71) Id.

(72) Loomis, 113 A. at 431.

(73) Id.

(74) GERALD TOMLINSON, FATAL TRYST (1999); Triplett, supra note 55.

(75) COLE, SUSPECT IDENTITIES, supra note 69, at 181-85.

(76) Triplett, supra note 55.

(77) JOHN WESLEY NOBLE & BERNARD AVERBUCH, NEVER PLEAD GUILTY: THE STORY OF JAKE EHRLICH 295 (1955); R. M. Vollmer, Report of Science and Practice Committee, 6 IDENTIFICATION NEWS 1 (1956).

(78) NOBLE & AVERBUCH, supra note 77, at 295.

(79) Id. at 296.

(80) Id.

(81) Id.; Vollmer, supra note 77, at 1.

(82) NOBLE & AVERBUCH, supra note 77, at 296.

(83) Id.

(84) Id. at 297.

(85) Id.

(86) Id. at 298.

(87) Id.

(88) State v. Caldwell, 322 N.W.2d 574 (Minn. 1982); James E. Starrs, A Miscue in Fingerprint Identification: Causes and Concern, 12 J. POLICE SCI. & ADMIN. 287 (1984); Certification Revoked, Feb., supra note 56; Certification Revoked, 31 IDENTIFICATION NEWS 2 (Sept. 1981) [hereinafter Certification Revoked, Sept.].

(89) Starrs, supra note 88, at 288.

(90) Id. at 288, 292; Certification Revoked, Feb., supra note 56; Certification Revoked, Sept., supra note 88.

(91) Starrs, supra note 88, at 292; Certification Revoked, Feb., supra note 56; Certification Revoked, Sept., supra note 88.

(92) Starrs, supra note 88, at 288.

(93) Id.

(94) Id. at 295.

(95) Ed German, Latent Print Examination: Fingerprints, Palmprints and Footprints, at http://onin.com/fp/problemidents.html (last visited May 9, 2005).

(96) Id.

(97) According to German, id., the examiner had passed the IAI certification examination. He was not one of those who were "grandfathered" into the certification program.

(98) Id.

(99) Id. German's language is ambiguous. If he literally means that the examiner "had not previously caused anyone's incarceration based upon fingerprint evidence," this would be rather surprising for a certified examiner. If, however, he means that the examiner "had not previously erroneously caused anyone's incarceration based upon fingerprint evidence," one would hope not!

(100) Id.

(101) Id.

(102) Cooper v. Dupnik, 963 F.2d 1220 (9th Cir. 1992); James E. Starrs, More Saltimbancos on the Loose? Fingerprint Experts Caught in a World of Error, 12 SCI. SLEUTHING NEWSL. 1, 1 (1988).

(103) Cooper, 963 F.2d at 1228; Starrs, supra note 102, at 6.

(104) Starrs, supra note 102, at 6.

(105) Cooper, 963 F.2d at 1233.

(106) Id. at 1228.

(107) Id. at 1220.

(108) Id. at 1232.

(109) Id.

(110) Id. Although the Supreme Court has ruled that it is permissible for police interrogators to use such tactics as falsely telling a suspect that they have incriminating fingerprint evidence, the significant thing in this case was that McCall's statements were sincerely believed, not deliberate lies. Frazier v. Cupp, 394 U.S. 731 (1969).

(111) Cooper, 963 F.2d at 1232.

(112) Id.

(113) Id.

(114) Starrs, supra note 102, at 6.

(115) Id. at 1.

(116) Id. at 5.

(117) Id.

(118) Id.

(119) Id.

(120) Id.

(121) Id.; Barry Bowden, Judge Throws Out Theft Sentence, FAYETTEVILLE OBSERVER (N.C.), Feb. 5, 1988.

(122) Bowden, supra note 121.

(123) Id.

(124) Id.

(125) Id.

(126) Barry Bowden, Law Officials Find Error in Hand Print Matching, FAYETTEVILLE OBSERVER (N.C.), Mar. 31, 1988.

(127) Id.

(128) Id.

(129) Barry Bowden & Mike Barrett, Fingerprint Errors Raise Questions on Local Convictions, FAYETTEVILLE OBSERVER (N.C.), Jan. 15, 1988.

(130) Id.

(131) Stephen Grey, Yard in Fingerprint Blunder, LONDON SUNDAY TIMES, Apr. 6, 1997, at 4.

(132) Id.

(133) Id.

(134) Id.

(135) Id.

(136) The newspaper account of this case does not give the name of the victim of the erroneous identification. He is identified as Martin Blake by Craig Cooley in Forgettable Science or Forensic Science: Wrongful Convictions and Accusations Attributable to Forensic Science, at http://www.law-forensic.com/cfr_science_myth.htm (last visited May 8, 2005).

(137) Michael Higgins, Fingerprint Evidence Put on Trial, CHI. TRIB., Feb. 25, 2002, at 1.

(138) Id.

(139) Id.

(140) Grey, supra note 131.

(141) Id.

(142) Id.

(143) Id.

(144) Id.

(145) Bob Woffinden, Thumbs Down, GUARDIAN, Jan. 12, 1999, at 17.

(146) Bob Woffinden, The Case of the Missing Thumbprint, 12 NEW STATESMAN 28 (Jan. 8, 1999).

(147) Id.

(148) Id.

(149) Id.

(150) Id.

(151) Id.

(152) Id.

(153) Id.

(154) Shelley Jofre, Falsely Fingered, GUARDIAN, July 9, 2001; Michael Specter, Do Fingerprints Lie?, NEW YORKER, May 27, 2002, at 96.

(155) Jofre, supra note 154.

(156) Murder Appeal After Print Error, BBC NEWS, Aug. 17, 2000, available at http://news.bbc.co.uk/hi/english/uk/scotland/newsid_884000/884895.stm.

(157) Id.

(158) Id.

(159) Id.

(160) Id.

(161) Pat Wertheim, David Asbury Case, at http://onin.com/fp/problemidents.html (last visited May 8, 2005).

(162) Jofre, supra note 154.

(163) McKie v. Strathclyde Joint Police Board, Sess. Cas. (Dec. 24, 2003) (Scotland), available at http://www.scotcourts.gov.uk/opinions/A4960.html [hereinafter McKie].

(164) Inquiry Call Into Prints Case, BBC NEWS, June 23, 2003, available at http://news.bbc.co.uk/1/hi/scotland/3012294.stm (last visited Apr. 11, 2005).

(165) Jofre, supra note 154.

(166) Id.

(167) Id.

(168) Id.

(169) Id.

(170) Id.

(171) McKie, supra note 163.

(172) Inquiry into Fingerprint Evidence, BBC NEWS, Feb. 7, 2000, available at http://news.bbc.co.uk/hi/english/uk/scotland/newsid_634000/634282.stm.

(173) ASS'N OF CHIEF POLICE OFFICERS IN SCOTLAND, PRESIDENTIAL REVIEW OF S.C.R.O. INTERIM REPORT (2000), available at http://www.scottish.police.uk/main/campaigns/interim/join.pdf; ASS'N OF CHIEF POLICE OFFICERS IN SCOTLAND, REPORT OF THE SCRUTINY OF THE SCRO FINGERPRINT BUREAU AND STRUCTURE OF THE SCOTTISH FINGERPRINT SERVICE (2000), available at http://www.scottish.police.uk/main/campaigns/interim/report.pdf.

(174) Inquiry Call Into Prints Case, supra note 164.

(175) Neil Mackay, New Concerns Over Fingerprinting, SUNDAY HERALD, Oct. 5, 2003, available at http://www.sundayherald.com/print37266.

(176) Id.

(177) Id.

(178) Rachel Scheier, New Trial Sought in U. Darby Slaying, PHILA. INQUIRER, Aug. 16, 1999, available at www.prisonactivist.org/news/1999/08/0089.html.

(179) Id.

(180) Mary Anne Janco, Release of Convicted Killer Is Sought, PHILA. INQUIRER, Nov. 24, 1999, at B1.

(181) Id.

(182) Scheier, supra note 178.

(183) Id.

(184) Anne Barnard, Convicted in Slaying, Man Wins Freedom: An FBI Investigation Found That Fingerprints at a Murder Scene Were Not Those of Richard Jackson, PHILA. INQUIRER, Dec. 24, 1999, at B1.

(185) Mary Anne Janco, Case Withdrawn Against Pa. Man Convicted, Jailed in 1997 Murder, PHILA. INQUIRER, Mar. 8, 2000, at B1.

(186) Scheier, supra note 178.

(187) Neither the Wallace case nor the McNamara case meets my conservative criteria for inclusion in the misattributions data set, although both are dubious identifications. Stephen Wallace was tried for burglary in Manchester in 2000. The sole evidence against him was a latent print found at the crime scene. Three latent print examiners attributed the latent print to Wallace. An independent review by retired latent print examiner Mike Armer found that Wallace was not the source of the latent print. Wallace was acquitted. A spokesman for the Greater Manchester Police said, "Fingerprint evidence is a matter of opinion and is subject to clarification at any time." Joanne Hampson, Fingerprint Blunder Has Left My Life in Ruins, MANCHESTER EVENING NEWS (Eng.), July 12, 2001, at 7. The Wallace case only became publicly known after it was publicized by journalists investigating the McNamara case. See Panorama: Pointing the Finger at Greater Manchester Police, BBC NEWS, available at http://news.bbc.co.uk/1/hi/programmes/panorama/1993373.stm (last visited May 8, 2005) (Wallace case).

The McNamara case is unusual in that the donor of the latent print is not disputed; rather, the surface from which it originated (the "substrate") is disputed. Alan McNamara was convicted of burglary in Manchester, England, based on a latent print found on a wooden jewelry box. McNamara's experts, Pat Wertheim and Allan Bayle, agreed with the attribution of the print to McNamara but contended that the substrate from which the latent print was recovered could not have been the wooden jewelry box because the latent print lacked wood grain. The police contended that the wood grain was not reproduced because of the lifting technique. Wertheim and Bayle contended the latent print "came from a smooth, curved surface, such as a vase which was sold at Mr. McNamara's shop." McNamara was convicted of burglary and sentenced to two and a half years in prison. He is currently in prison appealing his conviction. See Shelley Jofre, Panorama: Finger of Suspicion, BBC NEWS, July 8, 2001, available at http://news.bbc.co.uk/1/hi/programmes/panorama/1416777.stm; R. v. McNamara, 2004 EWCA Crim 2818.

(188) Panorama: Pointing the Finger at Greater Manchester Police, supra note 187.

(189) Jofre, supra note 187.

(190) Coit, supra note 58.

(191) Dusty Clark, A Body of a Woman Was Found Out in the Desert near Las Vegas (2003), available at http://www.latent-prints.com/a_bodyofawoman_was_foundout.htm.

(192) Coit, supra note 58.

(193) Clark, supra note 191.

(194) Id.

(195) Coit, supra note 58.

(196) Id.

(197) Clark, supra note 191.

(198) Michael Vigh, Evidence Bungled in Slaying, SALT LAKE TRIB., Feb. 19, 2003, at D1.

(199) Id.

(200) Matt Canham, Expert Killed in Gun-Lab Accident, SALT LAKE TRIB., Jan. 3, 2003, at 1-A.

(201) Id.

(202) Vigh, supra note 198.

(203) Id.

(204) Id.

(205) Commonwealth v. Cowans, 756 N.E.2d 622 (Mass. App. Ct. 2001).

(206) Trial Transcript at 3-224, Cowans (No. 2000-P-52).

(207) Id. at 3-225.

(208) Jack Thomas, Two Police Officers are Put on Leave: Faulty Fingerprint Evidence is Probed, BOSTON GLOBE, Apr. 24, 2004, at B1.

(209) Weber & Rothstein, supra note 10.

(210) Flynn McRoberts & Steve Mills, U.S. Seeks Review of Fingerprint Techniques: High Profile Errors Prompt Questions, CHI. TRIB., Feb. 21, 2005, at 1.

(211) Saltzman & Daniel, supra note 5.

(212) David S. Bernstein, The Jig Is Up, BOSTON PHOENIX, May 14, 2004, available at http://www.bostonphoenix.com/boston/news-feature/other_stories/multi_4/documents/03827954.asp. It was also reported that one of the "elimination" cards had been mislabeled. According to a Suffolk County District Attorney's Office disclosure document obtained by the Phoenix:
   The name and signature on one of the fingerprint cards ... were
   not the name and signature of the individual from whom that
   particular set of elimination fingerprints had in fact been taken.
   The set of fingerprints were in fact those of another individual
   from whom elimination fingerprints had been taken (emphasis in
   original).


Id. It is not clear what relationship, if any, this mislabeling may have had with the misattribution of the latent print.

(213) McRoberts & Mills, supra note 210.

(214) Maggie Mulvihill, No Charges vs. Hub Cops in Frame Case, BOSTON HERALD, June 24, 2004, at 2.

(215) Suzanne Smalley, Police Shutter Print Unit, BOSTON GLOBE, Oct. 14, 2004, at B1.

(216) Maggie Mulvihill & Franci Richardson, Unfit Cops Put in Key Evidence Unit, BOSTON HERALD, May 6, 2004, at 2.

(217) Karin J. Immergut, Application for Material Witness Order and Warrant Regarding Witness: Brandon Bieri Mayfield, In re Federal Grand Jury Proceedings 03-01, 337 F. Supp. 2d 1218 (D. Or. 2004) (No. 04-MC-9071).

(218) Id.

(219) Richard B. Schmitt et al., Oregon Attorney Arrested Over Possible Ties to Spain Bombings, L.A. TIMES, May 7, 2004, at A1.

(220) Spanish Investigators Question Fingerprint Analysis, ASSOCIATED PRESS, May 8, 2004.

(221) European Fingerprint Standards, 28 FINGERPRINT WHORLD 19 (2002) (reporting fingerprint point standards ranging from 8 [Bulgaria] to 16 [Italy, Cyprus, Gibraltar] points, as well as some countries with no set standard).

(222) United States v. Llera Plaza, 188 F. Supp. 2d 549, 566-71 (E.D. Pa. 2002) [hereinafter Llera Plaza II].

(223) Sarah Kershaw, Spain and U.S. at Odds on Mistaken Terror Arrest, N.Y. TIMES, June 5, 2004, at A1.

(224) David Heath, FBI's Handling of Fingerprint Case Criticized, SEATTLE TIMES, June 1, 2004, at A1.

(225) See David L. Grieve, The Identification Process: Traditions in Training, 40 J. FORENSIC IDENTIFICATION 195, 210-11 (1990).

(226) See, e.g., David A. Stoney, Fingerprint Identification: Scientific Status, in MODERN SCIENTIFIC EVIDENCE: THE LAW AND SCIENCE OF EXPERT TESTIMONY § 27-2 (David L. Faigman et al. eds., 1997).

(227) See, e.g., David L. Grieve, Rocking the Cradle, 49 J. FORENSIC IDENTIFICATION 719 (1999).

(228) See BUREAU OF JUSTICE STATISTICS, U.S. DEP'T OF JUSTICE, CRIMINAL CASE PROCESSING STATISTICS, available at http://www.ojp.usdoj.gov/bjs/cases.htm (last updated Sept. 28, 2004).

(229) PATRICK LANGAN & DAVID P. FARRINGTON, U.S. DEP'T OF JUSTICE, CRIME AND JUSTICE IN THE UNITED STATES AND IN ENGLAND AND WALES, 1981-96, available at http://www.ojp.usdoj.gov/bjs/pub/html/cjusew96/cpp.htm.

(230) The cities were Peoria, Chicago, Kansas City, and Oakland. Joseph L. Peterson et al., Forensic Evidence and the Police, 1976-1980, NAT'L ARCHIVE OF CRIM. JUST. DATA, Inter-University Consortium for Political and Social Research, Study No. 8186 (1985).

(231) Id.

(232) If homicide is 1% of felony cases, 12 homicide misattributions times 99 equals 1188. This figure is then divided by two to account for the greater prevalence of fingerprint evidence in homicide cases.
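In symbols, and purely as a restatement of the arithmetic just given (the figures are those stated in this footnote, not new data):

   \[
     12 \times 99 = 1188, \qquad \frac{1188}{2} = 594 .
   \]

The division by two is the discount, described above, for the greater prevalence of fingerprint evidence in homicide cases.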

(233) Samuel R. Gross et al., Exonerations in the United States 1989 through 2003, 95 J. CRIM. L. & CRIMINOLOGY 523, 533 (2004).

(234) Stacey, supra note 47, at 713.

(235) Id. at 713, 716.

(236) See infra Table 1.

(237) See Grey, supra note 131; Fingerprint Blunder 'Ruined My Life', MANCHESTER EVENING NEWS (Eng.), July 12, 2001.

(238) See supra Part II(A)(2)(i).

(239) See supra Part II(A)(2)(ii).

(240) See supra Part II(A)(2)(vii).

(241) See supra Part II(A)(2)(x).

(242) Damien Henderson, Expert Highlights McKie Case 'Errors', HERALD (Glasgow, Scot.), Sept. 21, 2004, at 8; Specter, supra note 154.

(243) See supra Part II.A.3.p.

(244) Supra note 187 and accompanying text.

(245) Tamara F. Lawson, Can Fingerprints Lie? Re-Weighing Fingerprint Evidence in Criminal Jury Trials, 31 AM. J. CRIM. L. 1, 3 (2003) ("From my practical experience and scholarly research of the topic, the reliability of fingerprint identification evidence routinely goes unquestioned at all levels of the criminal process and by both sides of the litigation, prosecution, and defense.").

(246) See supra Part II.A.3.q.

(247) See Elizabeth F. Loftus & Simon A. Cole, Contaminated Evidence, 304 SCI. 959 (2004).

(248) Thomas, supra note 208.

(249) Crombie & Zaitz, supra note 3.

(250) Richard B. Schmitt et al., Oregon Attorney Arrested Over Possible Ties to Spain Bombings, L.A. TIMES, May 7, 2004, at A1.

(251) See, e.g., infra note 325 and accompanying text.

(252) This is similar to an argument made in the debate over wrongful convictions: that the exposure of wrongful convictions, even hours before the planned execution of an innocent person, represents "the system working." See Lawrence C. Marshall, Do Exonerations Prove That 'The System Works'?, 86 JUDICATURE 83 (2002).

(253) United States v. Havvard, 260 F.3d 597, 599 (7th Cir. 2001) ("Meagher also testified that the error rate for fingerprint comparison is essentially zero. Though conceding that a small margin of error exists because of differences in individual examiners, he opined that this risk is minimized because print identifications are typically confirmed through peer review."); United States v. Rogers, 26 Fed. Appx. 171, 173 (4th Cir. 2001) (unpublished decision) ("[T]he possibility of error was mitigated in this case by having two experts independently review the evidence.").

(254) Pat A. Wertheim, Scientific Comparison and Identification of Fingerprint Evidence, 16 PRINT 1, 6 (2000), available at http://www.scafo.org/The_Print/_THE_PRINT_VOL_16_ISSUE_05.pdf [hereinafter Wertheim, Scientific Comparison] ("Erroneous identifications among cautious, competent examiners, thankfully, are exceedingly rare; some might say 'impossible.'").

(255) Lambourne, supra note 56, at 228.

(256) Ed German, Regarding Recent News Articles on Fingerprint Evidence Credibility in Court (2002), available at http://onin.com/fp/stmt_ref_articles.html ("In a worst-case scenario involving an incompetent expert, Defense can easily locate their own expert. And, for less money than it costs to tune up a car, an identification can be independently reviewed.").

(257) Lyn Haber & Ralph Norman Haber, Error Rates for Human Fingerprint Examiners, in AUTOMATIC FINGERPRINT RECOGNITION SYSTEMS 339, 349 (Nalini K. Ratha & Ruud M. Bolle eds., 2003).

(258) See, e.g., United States v. Llera Plaza, 188 F. Supp. 2d 549 (E.D. Pa. 2002); Christophe Champod, Fingerprints: Standards of Proof in ENCYCLOPEDIA OF FORENSIC SCIENCES 884, 889 (Jay A. Siegel et al. eds., 2000).

(259) IAI certification is virtually unknown outside the United States. Only three of the approximately 800 IAI-certified examiners are located outside the United States. See Int'l Ass'n for Identification, Certified Latent Print Examiners, at http://onin.com/clpe/clpe_by_state_27nov2004.pdf (last updated Nov. 27, 2004).

(260) Certification for Latent Fingerprint Examiners, 27 IDENTIFICATION NEWS 3 (1977).

(261) It should be noted that three of the examiners counted as non-certified in calculating this figure were "FBI certified." If we include both IAI and FBI certification, then 45% of American examiners implicated in misattributions after 1977 were certified.

(262) Cole, What Counts for Identity?, supra note 65, at 157; European Fingerprint Standards, supra note 221 (reporting fingerprint point standards ranging from eight points [Bulgaria] to sixteen points [Italy, Cyprus, Gibraltar], as well as some countries with no set standard).

(263) The Innocence Project, Causes & Remedies, at http://www.innocenceproject.org/causes/index.php (last visited Mar. 10, 2005). This data set is itself fortuitously generated. Inclusion in it requires a sequence of unlikely events, including: a crime that produces biological evidence, failure to test the biological evidence upon initial investigation, preservation of biological evidence after conviction, and willingness of the court and/or state to allow retesting of evidence.

(264) Id.

(265) See supra note 230.

(266) Since post-conviction DNA testing generally consists of doing DNA analysis of biological evidence that was not DNA tested during the original investigation, it is appropriate to view cases from Peterson et al.'s data collection period (1976-1980) in which biological evidence was collected as a reasonable proxy for cases that would have been eligible for post-conviction exoneration through DNA testing. In my analysis, I am counting as "biological evidence" the following codes from Peterson et al.'s data: "perspiration," "saliva," "urine," "vaginal," "feces," "biological, other," "semen," and "misc. organic."

(267) See, e.g., Anne C. Stone et al., Mitochondrial DNA Analysis of the Presumptive Remains of Jesse James, 46 J. FORENSIC SCI. 173 (2001).

(268) See Ashira Zamir et al., Fingerprints and DNA: STR Typing of DNA Extracted from Adhesive Tape after Processing for Fingerprints, 45 J. FORENSIC SCI. 687 (2000).

(269) Clive A. Stafford Smith & Patrick D. Goodman, Forensic Hair Comparison Analysis: Nineteenth Century Science or Twentieth Century Snake Oil?, 27 COLUM. HUM. RTS. L. REV. 227 (1996).

(270) Houck and Budowle found a false positive rate of 11% or 35%, depending on how one calculates the false positive rate. Richard D. Friedman, Squeezing Daubert Out of the Picture, 33 SETON HALL L. REV. 1047, 1058 (2003); Max M. Houck & Bruce Budowle, Correlation of Microscopic and Mitochondrial DNA Hair Comparisons, 47 J. FORENSIC SCI. 964, 966 (2002); D. Michael Risinger & Michael J. Saks, A House with No Foundation, 20 ISSUES IN SCI. & TECH. 35, 38-39 (2003). But Houck disputes that characterization of their findings, interestingly, by refusing to afford epistemic privilege to a mitochondrial DNA profile. In other words, he refuses to interpret an exclusion under mitochondrial DNA as definitive proof that a microscopic hair comparison inclusion was, in fact, erroneous. Max M. Houck, Forensic Science, No Consensus, 20 ISSUES IN SCI. & TECH. 6, 7 (2004) ("Microscopical and mitochondrial DNA analyses of human hairs yield very different but complementary results, and one method should not be seen as 'screening for' or 'confirming' the other."). Professors Peterson and Markham found that microscopic hair comparison had a false positive rate of approximately 4%. Joseph L. Peterson & Penelope N. Markham, Crime Laboratory Proficiency Testing Results, 1978-1991, II: Resolving Questions of Common Origin, 40 J. FORENSIC SCI. 1009, 1022-23 (1995).

(271) SCHECK ET AL., supra note 68, at 45-52; Randolph Jonakait, Forensic Science: The Need for Regulation, 4 HARV. J. LAW & TECH. 109, 121 (1991) ("[C]rime labs must be making thousands upon thousands of mistaken physiological fluid analyses each year."). Peterson and Markham found serology false positive rates ranging from 5-7%. Peterson & Markham, supra note 270, at 1014.

(272) 0.04/14 = 0.003; 0.35/14 = 0.025; 0.05/23 = 0.002; 0.07/23 = 0.003.

(273) Jonakait, supra note 271, at 121 n.44.

(274) It should also be noted that using exposed wrongful convictions to estimate the false positive error rate of a forensic technique may risk underestimating the false positive rate because it would fail to detect false positive errors in which the falsely identified individual was in fact guilty of the crime. In a sense, this method fails to account for what might be called the "baserate" of guilt--the rate at which a forensic examiner would be correct if she simply attributed every crime scene sample to the prime suspect.

For example, imagine that 80% of prime suspects are guilty (the baserate of guilt). A forensic examiner could be "correct" 80% of the time, without doing any analysis at all, simply by always attributing crime scene samples to the prime suspect.

Now imagine an examiner who does do analysis. We can try to use exposed cases of actual innocence to estimate her false positive rate, but we may underestimate it, because 80% of cases are ineligible to become cases of actual innocence even though the examiner may have committed false positives in those cases too.

In short, one reason that the number of exposed cases of latent print misattribution is relatively low may be that the baserate of guilt is relatively high. Latent print identification may not be all that discriminating, but it may appear to perform fairly well simply by attributing latent prints to the prime suspect. On baserates, see Michael J. Saks & D. Michael Risinger, Baserates, the Presumption of Guilt, Admissibility Rulings, and Erroneous Convictions, 2003 MICH. ST. L. REV. 1051. I am grateful to Stephen Fienberg for emphasizing this point.
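The underestimation argument can be stated compactly. The notation below is my own illustration, not drawn from Saks and Risinger or any other cited source: let g be the baserate of guilt among prime suspects and p the examiner's true false positive rate, and suppose erroneous attributions fall on guilty and innocent suspects alike. Since only errors implicating the innocent can ever surface as actual-innocence cases,

   \[
     \hat{p}_{\text{exposed}} \;\le\; p\,(1 - g) \;<\; p .
   \]

With the baserate of 80% imagined above and a hypothetical true rate of 5%, at most 0.05 x 0.2 = 0.01 of cases could ever be exposed, understating the true rate fivefold.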

(275) Simon A. Cole, Grandfathering Evidence: Fingerprint Admissibility Ruling from Jennings to Llera Plaza and Back Again, 41 AM. CRIM. L. REV. 1189, 1189 (2004) [hereinafter Cole, Grandfathering Evidence]; Simon A. Cole, Is Fingerprint Identification Valid? Rhetorics of Reliability in Fingerprint Proponents' Discourse, L. & POL'Y (forthcoming) (manuscript on file with author).

(276) Peterson & Markham, supra note 270, at 1009.

(277) Id.; COLLABORATIVE TESTING SERVICES, INC., LATENT PRINTS EXAMINATION REPORT NOS. 9508, 9608, 9708, 9808, 99-516, 01-516, 02-516, 02-517, 03-516 (1995-2003), reports from 2001-2003 available at http://www.collaborativetesting.com/forensics/forensics_reports.html (summaries or complete reports on file with the author).

(278) I. W. Evett & R. L. Williams, A Review of the Sixteen Points Fingerprint Standard in England and Wales, 46 J. FORENSIC IDENTIFICATION 49, 51 (1996). But see Glenn M. Langenburg, Pilot Study: A Statistical Analysis of the ACE-V Methodology--Analysis Stage, 54 J. FORENSIC IDENTIFICATION 64, 76 (2004).

(279) Testimony of Stephen Meagher & Kenneth Smith, United States v. Llera Plaza, 188 F. Supp. 2d 549, 566 (E.D. Pa. 2002) [hereinafter Llera Plaza II]; Glenn Langenburg, Defending Against the Critic's Curse (Sept. 2002), available at http://www.clpex.com/Articles/CriticsCurse.htm.

(280) Haber & Haber, supra note 257, at 339.

(281) David L. Grieve, Possession of Truth, 46 J. FORENSIC IDENTIFICATION 521 (1996) (describing "shock" and "disbelief" "within the forensic science community" at the results of the 1995 test); James E. Starrs, Forensic Science on the Ropes: Procellous Times in the Citadels of Infallibility, 20 SCI. SLEUTHING REV. 1 (Winter 1996).

(282) Haber & Haber, supra note 257, at 346.

(283) Victoria L. Phillips et al., The Application of Signal Detection Theory to Decision Making in Forensic Science, 46 J. FORENSIC SCI. 294, 295 (2001). I am grateful to John R. Vokey for clarification of this point.

(284) 2002 WL 27305, at *16 (E.D. Pa. Jan. 7, 2002), vacated, 188 F. Supp. 2d 549 (E.D. Pa. 2002).

(285) Trial Transcript at 38, Llera Plaza II (Nos. 98-CR00362-10, 98-CR00362-11, 98-CR00362-12) (testimony of Allan Bayle, retired Scotland Yard examiner).

(286) Id. at 55.

(287) Id. at 74.

(288) United States v. Llera Plaza, 188 F. Supp. 2d 549, 565 (E.D. Pa. 2002).

(289) David Heath, Bungled Fingerprints Expose Problems at FBI, SEATTLE TIMES, June 7, 2004, available at http://seattletimes.nwsource.com/html/localnews/2001949987_fingerprint07m.html. This article also reports efforts to pressure the examiner to rewrite the memorandum without the charges of collusion because of its legal discoverability. Id.

(290) Chicago Fingerprint Forum Recommendations, 52 J. FORENSIC IDENTIFICATION 643, 644 (2002); James R. McConnell, Certification (To Be or Not to Be), 42 J. FORENSIC IDENTIFICATION 205, 206 (1992); Wertheim, re: Certification, supra note 48, at 279-80.

(291) Haber & Haber, supra note 257, at 339.

(292) Id.

(293) Id.

(294) It should be noted that a decision to "pass" on a latent print may reflect one of two things: (1) a "poor quality" latent, or (2) a latent of acceptable quality that nonetheless may not easily be attributed to or excluded from the given comparison set. I am grateful to John R. Vokey for clarifying this point.

(295) Haber & Haber, supra note 257, at 339; Andy Newman, Fingerprinting's Reliability Draws Growing Court Challenges, N.Y. TIMES, Apr. 7, 2001, at A8.

(296) See supra Part II.B.1.b.

(297) Joseph L. Peterson & Matthew J. Hickman, Census of Publicly Funded Forensic Crime Laboratories, BUREAU OF JUST. STAT. BULL., Feb. 2005, at 6.

(298) Again, one must assume that in some cases a false positive error would implicate someone who was, in fact, guilty of the crime. See supra note 274.

(299) See supra note 280 and accompanying text.

(300) Trial Transcript at 154-56, United States v. Mitchell, Cr. No. 96-407 (E.D. Pa. July 8, 1999).

(301) David L. Grieve, Simon Says, 51 J. FORENSIC IDENTIFICATION 85, 95-96 (2001) ("Mr. Meagher correctly stated that a distinction between methodological error and practitioner error must be noted, and that if the methodology of ACE-V (analysis, comparison, evaluation and verification) is properly applied during a latent print examination, the error rate will be zero.").

(302) Trial Transcript at 202, People v. McGhee [Robert J. Hood], No. 01CR2120 (D. El Paso Co., Colo., Jan. 18, 2002).

(303) Kasey Wertheim, 1 WEEKLY DETAIL, Aug. 1, 2001, available at http://www.clpex.com/Articles/TheDetail/1-99/TheDetail01.htm.

(304) Trial Transcript at 122-23, United States v. Mitchell, Cr. No. 96-407 (E.D. Pa. July 9, 1999). A similar example of the temporal parsing of error may be found in the debate over wrongful convictions. In a debate over the death penalty, Joshua Marquis dismisses cases of wrongful conviction that occurred "15 or 20 years ago" as irrelevant to current practice. Joshua Marquis, Truth and Consequences: The Penalty of Death, in DEBATING THE DEATH PENALTY 117, 127 (Hugo Adam Bedau & Paul G. Cassell eds., 2004) ("When we debated in June 2001 in New York City, Steven Bright repeatedly hurled examples from his own state's past as typical of capital cases. He cited cases involving trials that took place fifteen to twenty-five years ago to stand for the proposition that the death penalty as it is constituted today is fundamentally unfair."). This, of course, misses Bright's (and my) point: the trials that took place fifteen to twenty-five years ago seemed fair at the time (at least to those whose opinions mattered, like judges). The lesson is not that trials were unfair once and are fair today, but rather that our methods for detecting fairness prospectively are rather poor. Our methods for detecting forensic error prospectively are similarly poor.

(305) As of Oct. 28, 2003, the term "methodological error rate" entered into Google yielded exactly one hit, the SWGFAST Guidelines for Proficiency Testing. SWGFAST Guidelines, at http://www.swgfast.org/Guidelines_for_Proficiency_Testing_1_0.pdf ("[p]roficiency testing is not a measure of methodological error rate") (search on file with the author). The term "methodological error rate" comes close to being a "Googlewhack," a combination of two real words that, when combined in a Google search, yield one, and only one, hit. "Methodological error rate" was not a true Googlewhack because it is three words and uses quotation marks. See Googlewhack.com, at http://www.googlewhack.com (last visited Mar. 10, 2005).

Performed more recently (July 13, 2004), the exercise yielded five hits, two of which reference the present author's own published critique of the notion. Simon A. Cole, The Fingerprint Controversy, 20 ISSUES IN SCI. & TECH. 10 (2004), available at http://www.issues.org/issues/20.2/forum.html. There are also two hits to actual scientific publications unrelated to fingerprint identification. My point, however, still stands: the rarity of the term on the Internet is an indication that it is far from a widespread scientific concept.

(306) Daubert v. Merrell Dow Pharm., 509 U.S. 579, 589 (1993).

(307) Trial Transcript at 270, People v. Gomez, No. 99CF0391 (Cal. Sup. Ct. Orange Cty. 2002).

(308) Pat Wertheim, Don't Panic--BUT ..., 30 WEEKLY DETAIL, Mar. 4, 2002, available at http://www.clpex.com/.

(309) Memorandum of Law in Support of Mr. Mitchell's Motion to Exclude the Government's Fingerprint Evidence at 62, United States v. Mitchell, Cr. No. 96-407 (E.D. Pa. 1999):
   The government submits that, in contrast to handwriting evidence,
   "it is well established that fingerprints are unique to an
   individual and permanent." Again, however, the government simply
   misses the point. The question is not the uniqueness and permanence
   of entire fingerprint patterns, but the scientific reliability of a
   fingerprint identification that is being made from a small distorted
   latent fingerprint fragment.


Id. (internal citations omitted).

(310) Trial Transcript at 203, People v. McGhee [Robert J. Hood], No. 01CR2120 (D. El Paso Co., Colo., Jan. 18, 2002).

(311) For a particularly poor pregnancy test, see http://web.archive.org/web/20010614020755/geocities.com/mypregnancytest/, which is, unfortunately, no longer live on the Internet.

(312) Whether mathematics is "a science" is actually a subject of extensive debate. See generally PHILIP KITCHER, THE NATURE OF MATHEMATICAL KNOWLEDGE 3 (1983) ("Virtually every philosopher who has discussed mathematics has claimed that our knowledge of mathematical truths is different in kind from our knowledge of the propositions of the natural sciences."). Even to a non-philosopher, however, it should be clear that fingerprint identification, which deals with actual patterns on a biological object (skin) and with the measured abilities of analysts to make judgments, is quite different from the manipulation of abstract quantities.

(313) Debate on Fingerprint Evidence (WHYY radio broadcast, Apr. 21, 2001), available at http://www.whyy.org/rameta/RT/RT20010427_20.ram. Meagher offered essentially the same argument in sworn testimony in Hood. Trial Transcript at 202-04, McGhee [Hood] (No. 01CR2120).

(314) Wertheim, Scientific Comparison, supra note 254, at 5.

(315) German, supra note 256. Of course, this does not mean that aviation safety should not be investigated and improved.

(316) Heath, supra note 289.

(317) United States v. Havvard, 117 F. Supp. 2d 848, 854 (S.D. Ind. 2000).

(318) United States v. Havvard, 260 F.3d 597, 599 (7th Cir. 2001).

(319) 324 F.3d 261, 269 (4th Cir. 2003).

(320) 246 F. Supp. 2d 700, 703 (E.D. Ky. 2003).

(321) Id.

(322) See supra note 286 and accompanying text.

(323) Sullivan, 246 F. Supp. 2d at 703 ("While the defendant is correct that the party submitting the evidence has the burden of establishing its reliability under Daubert, the defendant has failed to submit any evidence to dispute the plaintiff's evidence of a minimal error rate.").

(324) United States v. Llera Plaza, 2002 WL 27305, at *14 (E.D. Pa. Jan. 7, 2002), vacated, 188 F. Supp. 2d 549 (E.D. Pa. 2002) [hereinafter Llera Plaza I] ("Assuming, for the purposes of the motions now at issue before this court, that fingerprint 'methodology error' is 'zero,' it is this court's view that the error rate of principal legal consequence is that which relates to 'practitioner error.'").

(325) United States v. Llera Plaza, 188 F. Supp. 2d 549, 566 (E.D. Pa. 2002).

(326) Cole, Grandfathering Evidence, supra note 275, at 1189.

(327) See supra note 224 and accompanying text.

(328) Cole, Grandfathering Evidence, supra note 275, at 1258-59.

(329) Michele Triplett, Steve Meagher's Additions to "Anatomy of Error," Sept. 17, 2004, at http://www.clpex.com/board/threads/2004-Sep-17/2230/2230.htm.

(330) United States v. Mitchell, 365 F.3d 215, 240 n.20 (3d Cir. 2004).

(331) Jonathan J. Koehler, On Conveying the Probative Value of DNA Evidence: Frequencies, Likelihood Ratios, and Error Rates, 67 U. COLO. L. REV. 859, 873-74 (1996).

(332) Mitchell, 365 F.3d at 241 n.20.

(333) Id.

(334) Id. at 241. The Third Circuit's view of error rate is actually even more complex and even less sustainable than indicated by my summary of the highlights. In addition to the small number of exposed cases of error, the court cites two other pieces of evidence from the Mitchell record in support of its assertion that the false positive rate is "very low." First, the court cites the results of a survey of fifty-one crime laboratories conducted by the government. Id. at 240. Among the survey items were the two latent prints at issue in Mitchell itself, which FBI examiners had attributed to Mitchell. One component of the survey asked laboratories to search Mitchell's latent prints in their fingerprint computer databases and to have a "court qualified" latent print examiner perform manual comparisons between the two latents and Mitchell's ten-print card. Some agencies attributed one or both of the latents to Mitchell. Some agencies, however, reported "no match" to one or both.

The FBI then sent an additional packet to the agencies that declined to attribute one or both latents to Mitchell. This second packet contained enlarged photographs of the latent prints and the areas of the ten-print they purportedly "matched." These photographs were enclosed in plastic sleeves with red dots marking the supposed corresponding ridge characteristics. A cover letter asked recipients to "[p]lease test your conclusions against these enlarged photographs with the marked characteristics." All the agencies that previously failed to corroborate the FBI's attribution now did so.

The Third Circuit treats the above exercise as measuring the error rate of latent print identification. It notes, correctly, that while a significant number of false negatives occurred, no false positives occurred in any phase of the exercise. Id. at 239-41.

Treating such an exercise as any sort of measurement of error rate is obviously highly problematic. The flaws in the proficiency tests described above (supra Part II.B.1) pale in comparison to those in this exercise. Only one known exemplar (ten-print card) was provided to compare to the latent prints. This, in itself, cues participants to what the expected answer is. Moreover, the test-giver then further cued the participants with the plastic sleeves with red dots. Finally, the test was unproctored. The administrator of the survey, Agent Meagher, himself denied that the survey should be construed as any sort of scientific experiment:

Q: And the surveys themselves, did you author them and design them to be a scientific experiment?

A: No, this was just a survey.

Trial Transcript at 129, Mitchell (No. 96-407) (July 8, 1999).

But it gets worse. The FBI survey can only be construed as evidence that the error rate of latent print identification is "very low" by assuming the conclusion--that the FBI was correct that Mitchell was the source of both latent prints. In other words, in ruling on an evidentiary issue relevant to Mitchell's conviction, the Third Circuit simply assumes Mitchell's guilt for the very conviction he is appealing!

It should be further noted that if one assumes that Mitchell is the source of the latent prints, it is hardly surprising that no false positives were committed during the exercise; since Mitchell's ten-print card was the only one provided, there would be no way to commit a false positive (except perhaps by matching one of the latents to one of Mitchell's other fingers).

The second piece of evidence the Third Circuit used to support its contention that the error rate of latent print identification is "very low" was a computer exercise the government put on record at the Mitchell Daubert hearing. 365 F.3d at 225. This exercise, widely known as the "50K x 50K Study," consisted of computer searching a database of 50,000 print images against itself. Id. The still unpublished "study" upon which the court relies has now been severely criticized in the academic literature by at least five different authors. Christophe Champod & Ian W. Evett, A Probabilistic Approach to Fingerprint Evidence, 51 J. FORENSIC IDENTIFICATION 101, 112 (2001) ("[W]e are amazed it was admitted into evidence. It is entirely insupportable."); David H. Kaye, Questioning a Courtroom Proof of the Uniqueness of Fingerprints, 71 INT'L STAT. REV. 521, 524 (2003) ("If the government presented this study ... without qualification, its behavior is disturbing."); Sharath Pankanti et al., On the Individuality of Fingerprints, 24 IEEE TRANSACTIONS ON PAMI 1010, 1015 (2002) ("This model grossly underestimates the probability of a false correspondence...."); David A. Stoney, Measurement of Fingerprint Individuality, in ADVANCES IN FINGERPRINT TECHNOLOGY 327, 383 (Henry C. Lee & R.E. Gaensslen eds., 2001) ("extraordinarily flawed and highly misleading"); James L. Wayman, When Bad Science Leads to Good Law: The Disturbing Irony of the Daubert Hearing in the Case of U.S. v. Byron C. Mitchell, BIOMETRICS IN THE HUM. SERVICES USER GROUP NEWSL., Feb. 2, 2000, at http://www.engr.sjsu.edu/biometrics/publications_daubert.html ("[T]he government is comfortable with predicting the fingerprints of the entire history and future of mankind from a sample of 50,000 images, which could have come from as few as 5,000 people. They have disguised this absurd guess by claiming reliance on 'statistical estimation'."). The Third Circuit cites none of this literature, all of which emerged after the government entered the study into evidence in the Mitchell Daubert hearing. If nothing else, this serves as an illustration of the usefulness of Daubert's emphasis on "peer review and publication." Daubert v. Merrell Dow Pharm., 509 U.S. 579, 593 (1993).

Even setting these criticisms aside, how the court can interpret the "50K Study" as measuring the false positive error rate is not clear. Even one of the study's authors, Agent Meagher, denounces as "ill-informed" and "inappropriate" any effort to construe the "50K Study" as measuring error rate: "First, let me state what the study is not about and that may assist in clarifying some of the criticism. This is not a study on error rate or an effort to demonstrate what constitutes an identification." Letter from Stephen Meagher, FBI Agent, to James Randerson (Jan. 29, 2004) (on file with the author) (in response to James Randerson and Andy Coghlan); see also James Randerson & Andy Coghlan, Forensic Evidence Stands Accused, 181 NEW SCIENTIST 6 (2004).

Since the study simply measured the similarity scores generated by a computer and the prints were never submitted to a human latent print examiner (who always has the final word on a latent print attribution), it is difficult to see how the study could be construed as measuring false positives. Even if one were interested in the computer's tendency to commit a "false positive" (i.e., reporting a higher similarity score for prints of different origin than for prints from the same finger), the study was very poorly designed to measure this because it compared each print image to itself (rather than comparing it to a different impression from the same source finger). (This, among other things, is the reason for the academics' criticisms cited above.) In the several instances where two different prints from the same finger were (accidentally) in the database, the computer did commit "false positives." That is, similarity scores for prints originating from different fingers were within the range of similarity scores for prints originating from the same finger. Robert Epstein, Fingerprints Meet Daubert: The Myth of Fingerprint "Science" Is Revealed, 75 S. CAL. L. REV. 605, 631 (2002); Stoney, supra, at 380-83.
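The self-comparison flaw is easy to see in miniature. The following toy simulation is entirely my own construction (the feature sets, the overlap score, and all numbers are invented and bear no relation to the FBI's actual matcher or data), but it illustrates why scoring each image against itself yields a perfect score by construction, and therefore reveals nothing about how same-finger ("mated") scores overlap with different-finger ("non-mated") scores:

   # Toy simulation (my construction, not the FBI's software or data):
   # why a database scored against ITSELF cannot probe false positives.
   import random

   random.seed(0)

   def similarity(a, b):
       # Set-overlap (Jaccard) score stands in for a matcher's similarity score.
       return len(a & b) / len(a | b)

   def make_finger():
       # A "finger" is an invented set of 50 feature labels.
       return set(random.sample(range(1000), 50))

   def make_impression(finger):
       # An impression recovers only some features and adds spurious ones.
       kept = set(random.sample(sorted(finger), 40))
       noise = set(random.sample(range(1000, 1100), 10))
       return kept | noise

   fingers = [make_finger() for _ in range(200)]
   images = [make_impression(f) for f in fingers]

   # The 50K-style comparison: every image against itself. The score is a
   # perfect 1.0 by construction, whatever the matcher's discriminating power.
   assert all(similarity(img, img) == 1.0 for img in images)

   # The informative comparison the study did not make: a second impression
   # of the same finger (mated) versus impressions of different fingers.
   mated = [similarity(make_impression(f), img) for f, img in zip(fingers, images)]
   nonmated = [similarity(images[i], images[i + 1]) for i in range(len(images) - 1)]

   print("mean mated score:     %.3f" % (sum(mated) / len(mated)))
   print("mean non-mated score: %.3f" % (sum(nonmated) / len(nonmated)))

On this toy model the mated and non-mated score distributions are well separated, but only the mated versus non-mated comparison could show that; the self-comparison line is uninformative no matter how error-prone the matcher is.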

(335) Mitchell, 365 F.3d at 239.

(336) Id. at 245.

(337) Id. at 245-46.

(338) See supra notes 299, 303 and accompanying text.

(339) See generally ELIZABETH F. LOFTUS, EYEWITNESS TESTIMONY (2d ed. 1996).

(340) State v. Quintana, 103 P.3d 168, 171 (Utah Ct. App. 2004) (Thorne, J., concurring).

(341) See Denbeaux & Risinger, supra note 23.

(342) This argument is, in some sense, isomorphic with an argument about "errors of justice" in general, and the U.S. legal system's notorious, and oft-remarked, reluctance to examine and investigate cases of error, such as miscarriages of justice. See SCHECK ET AL., supra note 68.

(343) See Cole, Witnessing Identification, supra note 65.

(344) David R. Ashbaugh, The Premise of Friction Ridge Identification, Clarity, and the Identification Process, 44 J. FORENSIC IDENTIFICATION 499, 514 (1994).

(345) Wertheim, Scientific Comparison, supra note 254, at 6. Wertheim goes on to say, "Clerical errors, however, are not uncommon."

(346) Kasey Wertheim, 54 WEEKLY DETAIL, Aug. 19, 2002, available at http://www.clpex.com/Articles/TheDetail/1-99/TheDetail54.htm.

(347) Kasey Wertheim, 2 WEEKLY DETAIL, Aug. 13, 2001, available at http://www.clpex.com/Articles/TheDetail/TheDetail2.htm; see also Jofre, supra note 187 ("The system of fingerprint identification is infallible. The expert individually is not.").

(348) Mary Beeton, The Fingerprint Controversy, 20 ISSUES IN SCI. & TECH. 9, 10 (2004) (Editorial).

(349) A word should perhaps be added here about decertification. Other than criminal charges, which as far as can be determined have never been successful for misattributions that were not clearly intentional, decertification is the only official sanction available as a response to a misattribution. In recent years, it has appeared that decertification is automatic for any misattribution. Cole, Witnessing Identification, supra note 65, at 701. (This policy may be severely tested in the Mayfield case because Mr. Moses is such a prominent figure in the field.) However, decertification is a sanction available only against IAI-certified examiners. For example, in the Jackson case Creighton was decertified, but it is not clear what, if any, sanctions were leveled against Paparo and White. Scheier, supra note 178. Oddly, then, only the more highly qualified examiners are vulnerable to sanction.

As I have noted elsewhere, decertification for a single error is an extremely harsh sanction, one that is unusual among professional groups. Cole, Witnessing Identification, supra note 65, at 701. My argument might be interpreted as being critical of this policy, but, in fact, it is not. Given the current state of affairs, in which latent print identification is essentially unregulated and untested and offers highly inflated confidence levels in sworn testimony, the threat of decertification is essentially the only quality control measure in place. My argument, however, is that the selective threat of decertification is inferior to, say, validation studies or measurement of the error rate as a method of properly presenting the accuracy of the technique to the finder of fact.

The threat of decertification is supposed to force certified latent print examiners to treat every identification as potentially career-ending if it turns out to be erroneous. Id. at 702. This may well be an effective mechanism for raising the accuracy of fingerprint identification. The data presented here, however, demonstrate that it is certainly not entirely effective. And, the mere existence of the sanction certainly does not give us warrant to neglect measuring its effectiveness.

(350) Franci Richardson, O'Toole Eyes Penalty vs. Print Technician, BOSTON HERALD, June 25, 2004, at 10.

(351) McRoberts & Mills, supra note 210.

(352) 60 Minutes: Fingerprints, supra note 28.

(353) Id.

(354) Among the cases discussed, perhaps only in the case of the anonymous, still-practicing examiner implicated in the Midwestern Case could a defendant in a new case, faced with this expert's testimony, expose the expert's history of error. But since the examiner remains anonymous, this would require the defendant's attorney to embark on a "fishing expedition."

(355) Simon A. Cole, Fingerprinting: The First Junk Science?, 28 OKLA. CITY U. L. REV. 73 (2003). For example, consider the following:

1. The proposition is urged that The Ordeal (sealing accused witches in gunny sacks weighted with rocks and hurling them into a body of water, with sinking indicating guilt as a witch) is 100% accurate and error-free when performed by a competent "ordealist."

2. A case of error is exposed. (The purported victim of witchcraft turns up alive and well.)

3. The implicated ordealists are deemed incompetent.

4. The proposition has not been falsified.

5. The proposition can never be falsified.

See also Jane Campbell Moriarty, Wonders of the Invisible World: Prosecutorial Syndrome and Profile Evidence in the Salem Witchcraft Trials, 26 VT. L. REV. 43 (2001).

(356) Starrs, supra note 102.

(357) Stacey, supra note 47.

(358) KEN ALDER, THE MEASURE OF ALL THINGS: THE SEVEN-YEAR ODYSSEY AND HIDDEN ERROR THAT TRANSFORMED THE WORLD 307 (2002).

(359) DAVID BLOOR, KNOWLEDGE AND SOCIAL IMAGERY 12 (2d ed. 1991).

(360) HARRY COLLINS & TREVOR PINCH, THE GOLEM: WHAT EVERYONE SHOULD KNOW ABOUT SCIENCE 57 (1993).

(361) BLOOR, supra note 359, at 8-12 (citations omitted).

(362) Id. at 7.

(363) My use of the term "imposter," and its connection with my epigraph, is deliberate. "Imposter" is a technical term used by psychologists who study matching tasks (of which latent print identification is one) for an item that should not be matched.

(364) Wertheim, supra note 30.

(365) CHARLES PERROW, NORMAL ACCIDENTS: LIVING WITH HIGH-RISK TECHNOLOGIES (1984).

(366) Systems designed to act as checks on one another are, in fact, highly dependent on one another.

(367) For example, "verification," the process by which an examiner checks a colleague's work, is often performed under conditions in which the examiner knows--and therefore may be influenced by--the original examiner's conclusion. Hence the high rate at which disputed attributions were confirmed by the verifier.

(368) DIANE VAUGHAN, THE CHALLENGER LAUNCH DECISION (1996).

(369) PERROW, supra note 365, at 214.

(370) Wertheim, Scientific Comparison, supra note 254, at 7.

(371) David H. Kaye & George F. Sensabaugh, Jr., DNA Typing: Scientific Status, in SCIENCE IN THE LAW: FORENSIC SCIENCE ISSUES 697, 726 (Faigman et al. eds., 2002).

(372) See generally Paul Giannelli, The Abuse of Scientific Evidence in Criminal Cases: The Need for Independent Crime Laboratories, 4 VA. J. SOC. POL'Y & L. 439 (1997).

(373) D. Michael Risinger et al., The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90 CAL. L. REV. 1, 12 (2002).

(374) Id.

(375) See supra Part II.A.1.

(376) It should be noted that defense experts are typically paid the same amount no matter what their findings. On the other hand, a defense expert who disagrees with the government expert's conclusion is likely to bill more hours because the dispute will probably engender more protracted litigation over the fingerprint evidence. In the main, however, the defense expert's bias is probably less pecuniary than it is the natural tendency of all experts to become polarized by the adversarial process and become more sympathetic toward the party retaining them. Maggie Bruck, The Trials and Tribulations of a Novice Expert Witness, in EXPERT WITNESSES IN CHILD ABUSE CASES 85 (Ceci & Hembrooke eds., 1998).

(377) FBI Press Release, supra note 4.

(378) Stacey, supra note 47, at 708. The International Review Committee's original report has not been released or published. What we have instead is the FBI's "synopsis" of an external committee's report about an FBI error. Suffice it to say that this is an unusual way of conducting an external review.

(379) Id. at 712.

(380) Id. at 714 ("All of the committee members agree that the quality of the images that were used to make the erroneous identification was not a factor."). It is certainly true, however, that digital images may exacerbate the possibility of misattribution. Michael Cherry et al., Does the Use of Digital Techniques by Law Enforcement Authorities Create a Risk of Miscarriage of Justice?, CHAMPION, Nov. 2004, at 24.

(381) Kershaw, supra note 223, at A13.

(382) See supra note 47 and text accompanying note 234.

(383) Posting of Mike, mike98070@yahoo.com, to CLPEX Message Board (Sept. 11, 2004), at http://www.clpex.com/board/threads/2004-Sep-11/2200/2200.htm.

(384) See Stacey, supra note 47, at 717.

(385) FBI Press Release, supra note 4.

(386) In DNA parlance, an "adventitious cold hit" is an adventitious match generated by a database search. Again, the analogy is not exact. See infra Part III.C.3.a.

(387) COLE, SUSPECT IDENTITIES, supra note 69, at 219.

(388) Wertheim père acknowledges this concern. Kramer, supra note 3.

(389) David J. Balding, Errors and Misunderstandings in the Second NRC Report, 37 JURIMETRICS 469, 470-71 (1997).

(390) Indeed, even Wertheim père has endorsed this hypothesis. See Kramer, supra note 3.

(391) Risinger et al., supra note 373, at 31.

(392) Id. at 28.

(393) Kershaw & Lichtblau, supra note 37.

(394) A final point to be made is that these two possible causes of error may interact: computer databases may, in fact, facilitate observer effects. In the era before the introduction of computer databases with rapid database-searching capabilities, latent print analysis could be roughly divided into two types:

* Those rare cases in which no suspect was identified, and the case was serious enough that the agency could justify assigning an examiner to undertake a "cold manual search" of the entire fingerprint database (or a portion thereof).

* Those cases in which the examiner would be presented with a limited list of possible suspects who had been identified as suspects by other means. Such a situation presents a potential for biasing, of course; the analyst may unconsciously be tempted to think that one of the suspects did the crime and become convinced that the closest available match is indeed the source of the print. But, in many cases, this biasing may have led to a conclusion that was, in fact, correct. In a large number of cases, one of the suspects may well have done the crime, and, therefore, the number of misattributions may have been relatively limited.

Today, the situation is quite different. Computer-assisted database searching may be undertaken in the most routine of cases. In a computer-assisted search, the human examiner is in some sense presented with the most potentially confounding prints the computer can find. This may be highly dangerous if the examiner tends to pick the best available match, raising the possibility of a misattribution. If this is true, and latent print examiners are working blindly, we should expect some of these false attributions to generate implausible suspects. This may or may not be occurring; we have no way of knowing. Imagine, for example, that the FBI examiners, instead of identifying Mayfield, had identified an implausible suspect. Would we (the public) ever have learned about the misattribution? Presumably, the latent print examiners could have been quietly informed that they had identified an implausible suspect and the entire false attribution swept under the rug, with no consequences for anyone. How often this occurs is anyone's guess, but information about it would be highly relevant to estimating the error rate of latent print identification.
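
The concern can be made concrete with a toy calculation (the model and all of its parameters are purely hypothetical illustrations, not estimates drawn from any case discussed in this article). If each innocent candidate print has some independent chance of resembling the latent print closely enough to clear an examiner's decision threshold, the chance that the best innocent candidate clears it grows rapidly with the size of the candidate pool:

    # Toy model (hypothetical, illustrative only): treat each innocent
    # print's similarity to the latent as an independent Uniform(0,1)
    # score, and suppose an examiner declares a match above threshold t.
    # The chance that the single best innocent candidate in a pool of n
    # clears the threshold is then 1 - t**n.
    t = 0.999999  # hypothetical examiner threshold

    for n in (5, 10_000, 1_000_000, 50_000_000):
        # n = 5 approximates the old "limited suspect list"; the larger
        # values approximate a search of an entire fingerprint database.
        print(f"pool of {n:>10,}: P(best innocent clears t) = {1 - t**n:.4f}")

Under these artificial assumptions, a handful of suspects almost never produces a confounding candidate, while a database-wide search almost always does. That is the sense in which the computer hands the examiner the most potentially confounding prints it can find.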

(395) Posting of Pat A. Wertheim, May 26, 2004, at http://www.clpex.com/board/threads/2004-May-26/1358/1358.htm.

(396) See Cole, Witnessing Identification, supra note 65.

(397) Iain A. J. McKie, Fingerprints in Print--An Opportunity Missed?, 175 WEEKLY DETAIL, Dec. 19, 2004, at http://www.clpex.com/Articles/TheDetail/100-199/TheDetail175.htm ("Infallibility has turned out to be a curse for fingerprint examiners.").

(398) Supra note 304 and accompanying text.

SIMON A. COLE, Assistant Professor of Criminology, Law & Society, University of California, Irvine; Ph.D. (science & technology studies), Cornell University; A.B., Princeton University. This project was funded in part by the National Science Foundation (Award #SES-0347305). I am indebted to Lyndsay Boggess for research assistance and to Max Welling and Rachel Dioso for assistance with the graphics. For information on misattribution cases, I am indebted to Rob Feldman and the New England Innocence Project, Peter Neufeld and Barry Scheck, Robert Epstein, Ed German, Dusty Clark, Michele Triplett, Craig Cooley, and, especially, James E. Starrs and Lyn and Ralph Haber. I am grateful to Joseph L. Peterson for facilitating and commenting on the use of his data. This paper benefited greatly from discussions with William C. Thompson. For critical comments, I am grateful to Laura S. Kelly, Jane C. Moriarty, John R. Vokey, Sandy L. Zabell, and two anonymous reviewers. A preliminary version of this paper was presented at the annual meeting of the American Association of Law Schools and at the Sixth International Conference on Forensic Statistics. I am grateful to the audiences at those conferences for their comments. I am also grateful to the editors of the Journal of Criminal Law & Criminology for their meticulous editing. None of these acknowledgments should be interpreted as an endorsement of the opinions in this article by any of those whose contributions are acknowledged. Responsibility for all material, opinions, and, yes, errors rests with the author.

Figure 2
Known Misattributions by Offense

Homicide investigation    9%
Terrorist bombing         9%
Homicide                 36%
Attempted murder          5%
Rape                      9%
Narcotics                 5%
Burglary                 18%
Unknown                   9%

Note: Table made from pie chart.

Figure 3
Known Misattributions by Offense with Unknown Cases Removed

Homicide investigation   10%
Terrorist bombing        10%
Homicide                 40%
Attempted murder          5%
Rape                     10%
Narcotics                 5%
Burglary                 20%

Note: Table made from pie chart.

Table 1
Fingerprint Misattributions

No.   Name of victim of   Year of    Jurisdiction    Crime
      misidentification   exposure

 1.   Robert Loomis       1920       Pennsylvania    Murder

 2.   William Stevens     1926       New Jersey      Murder

 3.   John Stoppelli      1948       California      Narcotics

 4.   Roger Caldwell      1982       Minnesota       Murder

 5.   Anonymous           1984       Midwest         ?

 6.   Michael Cooper      1986       Arizona         Rape

 7.   Bruce Basden        1987       North           Murder
                                     Carolina

 8.   Maurice Gaining     1988       North           Burglary
                                     Carolina

 9.   Joseph Hammock      1988       North           Larceny
                                     Carolina

10.   Darian Carter       1988       North           Larceny
                                     Carolina

11.   Neville Lee         1991       England         Rape

12.   Martin Blake        1994       Illinois        Murder

13.   Andrew Chiory       1997       England         Burglary

14.   Danny McNamee       1998       England         Terrorist
                                                     bombing;
                                                     murder

15.   Shirley McKie       1999       Scotland        Perjury
                                                     (murder
                                                     investigation)

16.   Richard Jackson     1999       Pennsylvania    Murder

17.   Anonymous           2000       England         ?
      ("Manchester")

18.   David Asbury        2002       Scotland        Murder

19.   Kathleen Hatfield   2002       Nevada          Murder
                                                     investigation

20.   David Valken-       2003       Utah            Murder
      Leduc

21.   Stephan Cowans      2004       Massachu-       Attempted
                                     setts           murder

22.   Brandon Mayfield    2004       United States   Terrorist
                                     (FBI)           bombing

No.   # of examiners      # of claimed         Consequence of
      implicated in    corresponding ridge   misidentification
      misattribution     characteristics

 1.         2                   ?            Convicted

 2.         3                   ?            Acquitted

 3.         1                  14            Convicted; served two
                                             years

 4.         3                  11            Convicted; served
                                             approximately three
                                             years

 5.         1                  14            Suspect held

 6.       ≥ 2                  12            Illegally interrogated;
                                             identified publicly as
                                             suspect

 7.       ≥ 1                   ?            Jailed for thirteen
                                             months

 8.       ≥ 1                   ?            Convicted

 9.       ≥ 1                   ?            Convicted; sentenced
                                             to ten years

10.       ≥ 1                   ?            Convicted; sentenced
                                             to ten years

11.       ≥ 1                  16            Jailed; assaulted in
                                             jail; home wrecked by
                                             vigilantes

12.         1                   ?            Questioned for three
                                             days

13.         3                  16            Charged

14.       ≥ 2                  11            Convicted; served
                                             eleven years

15.         4                  16            Fired from police;
                                             acquitted

16.         3                   ?            Convicted; served two
                                             years of life sentence

17.         3                  16?           None

18.         4                  16            Convicted; served
                                             five years

19.         1                   ?            Daughter notified that
                                             mother is deceased;
                                             error exposed very
                                             near to date of funeral

20.         1                   ?            Charged

21.         4                  16            Exonerated after
                                             serving six years

22.         4                  15            Held for two weeks as
                                             material witness

No.   Method of exposure       Exposed during
                              normal course of
                              criminal justice?

 1.   New trial               No

 2.   Review by defense       Yes
      experts

 3.   Special appeal by       No
      prosecutor to
      reexamine evidence

 4.   Trial of co-            No
      conspirator

 5.   Independent review      Yes

 6.   Reexamination           Yes

 7.   Defense motion for      Yes
      discovery of
      fingerprint
      evidence

 8.   FBI reappraisal of      No
      Fayetteville
      laboratory's work
      product

 9.   FBI reappraisal of      No
      Fayetteville
      laboratory's work
      product

10.   FBI reappraisal of      No
      Fayetteville
      laboratory's work
      product

11.   Confession by           No
      someone else

12.   Review by other         Yes
      law enforcement

13.   ?                       ?

14.   Appeal of               No
      conviction

15.   Review by defense       No
      experts
      Pat Wertheim,
      David Grieve

16.   Testimony of            Yes
      defense experts
      Vernon McCloud,
      George Wynn

17.   Suspect had alibi;      ?
      suspect did not
      match description

18.   McKie case; review
      by defense experts
      Pat Wertheim,
      Allan Bayle

19.   Reexamination of        No
      evidence

20.   Review in               No
      preparation for trial

21.   DNA exclusion           No

22.   Identification of       No
      print to another
      individual
      (Ouhnane Daoud)
      by Spanish
      National Police

Table 2
Frequency of fingerprinting for death-related offenses versus other
offenses

                           n of total   n fingerprint   % total cases
Offense                    cases        evidence        with finger-
                           (N = 2857)   (N = 504)       print evidence

Homicide and other death
investigations             248          98              39.5

All other offenses
(excluding homicide)       2634         409             15.5

Burglary                   699          168             24.0

Rape                       196          146             23.5

Source: Joseph L. Peterson et al., Forensic Evidence and the Police,
1976-1980, National Archive of Criminal Justice Data, Inter-University
Consortium for Political and Social Research, Study No. 8186 (1985).

Table 3
Percent of cases with hair or fingerprint evidence that also include
biological evidence (N = 1713)

Evidence type   n     Cases that include      % cases that include
                      biological evidence *   biological evidence

Hair             155   133                     85.8%
Fingerprint      504   144                     28.5%

* Biological evidence includes blood, perspiration, saliva, urine,
vaginal secretions, and feces.

Source: Joseph L. Peterson et al., Forensic Evidence and the Police,
1976-1980, National Archive of Criminal Justice Data, Inter-University
Consortium for Political and Social Research, Study No. 8186 (1985).

Table 4
False Positive Results on All Reported External Proficiency Tests

Test #       # of test    # of test       Total       False
              takers       items       comparisons   positives

83-4         24           21           504           13 (a)
84-5         28           21           588           22 (a)
85-7         37           21           777           5 (a)
86-7         43           25           1075          12 (a)
87-7         52           13           676           13 (a)
88-7         62           12           744           2 (a)
89-7         56           12           738 (b)       6 (a)
90-7         74           12           1622 (b)      18 (a)
91-8         88           12           1723 (b)      76 (a)
93H          103          28           2884          6
9408         130          4            520           0
9508         156          7            1092          48
9608         184          11           2024          20
9708         204          11           2244          26 (c)
9808         219          11           2409          21
99-516       231          12           2772          16 (c)
00-516       278          10           2780          13
01-516       296          11           3256          10
01-517       120          11           1320          2
02-516       303          11           3333          15
02-518 (d)   31           12           372           0
02-517       146          10           1460          7
03-516       336          10           3360          5
03-518 (d)   28           12           336           2
03-517       188          9            1692          1
04-516       306          12           3672          14
Totals       3723         341          43973         373

Test #       Comparison       # of test takers             Examiner
             false positive   making ≥ 1                   false
             rate             false positive               positive rate

83-4         2.6%             NR                           NR
84-5         3.7%             NR                           NR
85-7         0.6%             NR                           NR
86-7         1.1%             NR                           NR
87-7         1.9%             NR                           NR
88-7         0.3%             NR                           NR
89-7         0.8%             NR                           NR
90-7         1.1%             NR                           NR
91-8         4.4%             NR                           NR
93H          0.2%             6                            5.8%
9408         0.0%             0                            0.0%
9508         4.4%             34                           21.8%
9608         1.0%             14 (c)                       7.6%
9708         1.2%             21 (c)                       10.3%
9808         0.9%             14                           6.4%
99-516       0.6%             14 (c)                       6.1%
00-516       0.5%             11                           4.0%
01-516       0.3%             8                            2.7%
01-517       0.2%             2                            1.7%
02-516       0.5%             13                           4.3%
02-518 (d)   0.0%             0                            0.0%
02-517       0.5%             5                            3.4%
03-516       0.1%             4                            1.2%
03-518 (d)   0.6%             2                            7.1%
03-517       0.1%             1                            0.5%
04-516       0.4%             12                           3.9%
Totals       0.8%             161                          5.5%

Test #         Habers'
             "consensus"
             false positive
               rate

83-4         16.1%
84-5         19.3%
85-7         8.0%
86-7         10.6%
87-7         13.9%
88-7         5.2%
89-7         9.0%
90-7         10.5%
91-8         21.0%
93H          4.6%
9408         0.0%
9508         21.0%
9608         9.9%
9708         10.8%
9808         9.3%
99-516       7.6%
00-516       6.8%
01-516       5.5%
01-517       3.9%
02-516       6.7%
02-518 (d)   0.0%
02-517       6.9%
03-516       3.9%
03-518 (d)   7.7%
03-517       2.4%
04-516       6.2%
Totals       9.2%

Sources: Joseph L. Peterson & Penelope N. Markham, Crime Laboratory
Proficiency Testing Results, 1978-1991, II: Resolving Questions of
Common Origin, 40 J. FORENSIC SCI. 1009 (1995); Collaborative Testing
Services, Inc., Latent Prints Examination Report Nos. 9508, 9608, 9708,
9808, 99-516, 01-516, 02-516, 02-517, 03-516 (1995-2003), summaries or
complete reports on file with the author, reports from 2001-2003
available at
http://www.collaborativetesting.com/forensics/forensics_reports.html
(last visited June 2, 2004); Kenneth O. Smith, Latent Prints
Proficiency Test Comparison Study (Feb. 8, 2002), submitted into
evidence in United States v. Llera Plaza as Government Exhibit R-I,
on file with the author; Catherine Brown, Forensic Program Manager,
Collaborative Testing Services, Inc., electronic communication, Aug.
27, 2004, on file with the author.

(a) It is not entirely clear how to derive a false positive rate from
Peterson & Markham's presentation of the data for 1983-1991. Peterson &
Markham are often reported (e.g., Haber & Haber, supra note 257) as
having found an overall false positive rate of 2%. This number derives
from Peterson & Markham's Table 2, column 8, which indicates the number
of false attributions of prints for which a true matching print was not
provided (target-absent false positive). This figure does not appear to
include cases in which a true matching print was provided, but the
examiner still made an incorrect attribution (target-present false
positive) to some other print. This figure appears to be given in
Peterson & Markham's column 10. My "false positive" count represents
the sum of these two types of error and is therefore greater than the
false positive count generally reported from these tests. Peterson &
Markham included another column (column 9) that represents cases in
which target-present false positives were made by attributing a print
to the wrong card (as opposed to column 10, which indicates
attributions to the right card but the wrong finger). In the interest
of conservatism, I have not included these cases because it was
impossible to determine whether or not these cases were also included
in the cases in column 10. If not, then I have undercounted false
positives. Ambiguities like this emphasize that the error rates
presented here should be treated only as estimates.
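
The counting rule described above reduces to simple arithmetic, sketched below with figures taken from Table 4 (the column-8/column-10 split for Test No. 83-4 is not broken out here, so only the combined count is used):

    # Sketch of the counting rule in note (a): the "false positives"
    # column of Table 4 sums target-absent errors (Peterson & Markham's
    # column 8) and target-present errors attributed to the wrong print
    # (their column 10); dividing by the number of comparisons gives the
    # comparison false positive rate.
    def comparison_fp_rate(false_positives, takers, items):
        comparisons = takers * items  # see note (b) for this convention
        return false_positives / comparisons

    # Test No. 83-4 from Table 4: 24 test takers, 21 test items, 13
    # false positives.
    print(f"{comparison_fp_rate(13, 24, 21):.1%}")  # 2.6%, as reported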

(b) In all other cases, I have calculated the number of comparisons as
the product of the number of test items and the number of test-takers.
Peterson & Markham's report of the number of comparisons (which should
be the sum of the denominators in columns 5 and 6) corresponds fairly
closely with the product of the number of test items and the number of
test-takers. (Slight discrepancies presumably derive from laboratories
advertently or inadvertently skipping test items.) From 1989 through
1991, however, the correspondence breaks down significantly, with
discrepancies of up to almost a factor of two. I have been unable to
explain the discrepancy.
In the interest of being conservative, I have used the higher figure
for the number of comparisons for these three years. However, if the
lower figure is correct, then I have overcounted comparisons and
underestimated the false positive rate for these years and overall.
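
The product convention, and the 1989-1991 exception, can be checked directly against Table 4:

    # Note (b)'s convention: the number of comparisons is normally the
    # product of test items and test takers.
    print(28 * 103)  # Test 93H: 2884 comparisons, as Table 4 reports
    print(12 * 74)   # Test 90-7: 888 by the product rule; Table 4
                     # instead uses the higher Peterson & Markham
                     # figure of 1622 (see note (b))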

(c) For some tests, there is a discrepancy between the number of
"erroneous identifications" reported by Smith and those reported
in the "PAC [Proficiency Advisory Committee] Comments" reported at the
beginning of the Summary Report of each CTS test. In all cases, Smith
reported more false positives than the PAC. The one test containing
discrepancies for which I had access to the complete test (rather than
just the Summary Report) was Test No. 99-516. I did a manual count of
the number of false positives, which confirmed Smith's report that 16
erroneous identifications were made by 14 examiners. But the PAC
Comments state, "Eleven erroneous identifications were reported by nine
participants." That the PAC has underreported false positives is
disturbing because for some tests (9608, 9708, 9808) I have been able
to obtain only the summary sheets, rather than the complete test
results. (Although the most recent tests are published on the internet,
older tests may be obtained only through subpoena.) Thus, I have been
forced to rely upon the PAC Comments to give the number of false
positives. Because Smith's numbers were accurate on Test No. 99-516, I
have used them whenever they differ from the PAC numbers.

(d) Chronologically, tests ending in -518 are administered before
tests ending in -517.

Table 5
Results of Internal FBI Proficiency Testing, 1995-2001

        # of           # of         # of comparison   # of comparisons
Year    participants   test items   items             per examiner

1995    61             11           42                404
1996    63             7            20                140
1997    60             8            20                160
1998    69             13           33                270
1999    68             10           32                222
2000    57             10           33                204
2001    53             10           33                204

Total   431            69           213               1604

        Max total # of
        comparisons for   False       False       Total
Year    all examiners     positives   negatives   errors

1995    24,644            0           2           2
1996    8,820             0           0           0
1997    9,600             0           0           0
1998    18,630            0           0           0
1999    15,096            0           0           0
2000    11,628            0           1           1
2001    10,812            0           0           0

Total   99,230            0           3           3

Source: FBI Laboratory Latent Print Unit, Assessment of Proficiency
Tests by the FBI Latent Print Units, 1995-2001, submitted into evidence
in United States v. Llera Plaza, Cr. No. 98-362 (E.D. Pa. 2002), on file
with the author.