Printer Friendly



In 1998, Charles Johnson, Troshawn McCoy, LaShawn Ezell, and Larod Styles were convicted of the murder of the two owners of a car lot in Chicago. (1) Twenty-three fingerprints and four palm marks were recovered from cars that the perpetrators stole or had encountered on the day of the murder. (2) The marks were compared to prints of the victims and suspects and searched in the local Automated Fingerprint Identification System ("AFIS"). (3) Neither the suspects nor the victims were identified as the sources of any of these marks. (4) No source was identified for twenty-six of the marks. (5) Latent print examiners reported a source for only one mark: a man who had traded in a car at the lot on the day of the crime. (6) This man was eliminated as a suspect. (7) Some of the marks were reported to be of no value for comparison. (8) Others were reported to be of value but not suitable for AFIS searching, and the AFIS lacked the capability to search palm marks at that time. (9) At Johnson and Styles's trial, the prosecutor argued that Johnson might be the source of a mark that had been deemed insufficient for comparison. (10)

Eleven years later, in 2009, on Johnson's motion, the court ordered the marks analyzed and searched in the Illinois State AFIS. (11) As a result of this search, latent print examiners identified five different men, one of them one of the victims, as the sources of ten of the marks. (12) Perhaps the most incriminating mark was found on the adhesive side of a window sticker that had been removed from a car stolen from the lot at the time of the murder. (13) Latent print examiners reported that a man named Davion Allen was the source of this mark. (14) The fact that the mark was on the adhesive side of the sticker made it less plausible that the mark had been left during an innocent visit to the lot rather than during the theft of the car, which was believed to have followed the murder. (15) Latent print examiners also identified Allen as the source of marks recovered from the examined cars at the lot. (16) Moreover, Allen's home was a short walk from the site where the stolen car had been abandoned. (17) Finding "[t]he fact that the new fingerprint and palm print evidence conclusively matched other people, and now excludes defendant as a match, is significant[,]" (18) the Illinois Appellate Court ruled that Johnson had "made a substantial showing of an actual innocence claim... that... would probably change the result on retrial." (19) Johnson and Styles's convictions were reversed in 2013. (20) Johnson, McCoy, Ezell, and Styles, the "Marquette Park Four," were finally exonerated in 2017. (21)

The Marquette Park Four's exonerations are an example of a case in which fingerprint evidence played a role in both the wrongful conviction and the exoneration. It did not, however, play the role that is perhaps most often discussed with regard to wrongful conviction and fingerprint evidence--in which an erroneous fingerprint individualization contributes to a wrongful conviction. (22) The classic such case is that of Stephan Cowans, the only case in which a conviction obtained in part through a fingerprint inference of common source was overturned through post-conviction DNA testing. (23)

The emphasis on erroneous individualizations in the wrongful convictions literature was perhaps a result of the fact that, at the time research into fingerprint evidence and innocence began in the early 1990s, the possibility of erroneous identifications was still controversial. (24) The fingerprint profession was at that time claiming--and the courts were accepting--that erroneous identification could not occur and that the error rate of latent print identification was "zero." (25) In this context, it is not surprising that a great deal of energy went into debunking these claims of infallibility. Today, however, even the fingerprint profession itself disavows infallibility claims, and at least one judge has issued a public mea culpa on behalf of the bench for taking them seriously. (26)

The Marquette Park Four case illustrates that an erroneous individualization is only one of many ways in which latent print error can be implicated in wrongful convictions. In this article, we seek to move beyond erroneous individualizations to discuss the broader range of fact patterns through which fingerprint evidence can play a role in wrongful convictions. In Part II, we discuss the methods and materials we used for conducting this study. In Part III, we summarize the nature of latent print analysis and the types of "reports" (or conclusions) this analysis can yield. In Part IV, we introduce a framework for categorizing different types of fingerprint error. In Part V, we discuss each error type. In Part VI, we show that the occurrence of these error types is unsurprising and is, to some extent, an expected product of the way that latent print analysis is structured. In Part VII, we conclude by discussing what we see as the most important criminal justice system reform for reducing the miscarriages of justice to which these "other" fingerprint error types contribute: access to evidence. Most extensively, we discuss the need for AFIS access for defendants both pre- and post-conviction. It should be clear from our capsule description of the Marquette Park Four case that a key turning point in the case was the fact that when Johnson moved that the latent marks be searched in the state AFIS database, the court ordered that this be done. While this may seem an eminently sensible and low-cost decision, in fact, the Marquette Park Four were extremely fortunate to live in one of only five states in the United States with a statutory provision allowing for post-conviction AFIS searching. Without that provision, as we will discuss below, the Marquette Park Four might still be in prison today. We conclude by asking why this should be the case.


For this study, we conducted an unsystematic survey of known innocence cases in which fingerprint evidence was involved. We obtained our case examples from five sources: (1) the data set of post-conviction DNA exonerations maintained by the Innocence Project; (27) (2) the National Registry of Exonerations, the most comprehensive source of information on U.S. exonerations; (28) (3) a compilation of latent print error cases prepared and maintained by Dr. Erin Morris, a Behavioral Sciences Research Analyst at the Los Angeles County Public Defender; (29) (4) Michele Triplett's Fingerprint Terms, which contains a listing of a number of error instances; (30) and (5) cases of which we had become aware through research and communication with other scholars, lawyers, journalists, and activists, one of us as a post-conviction attorney and innocence advocate and one of us as a researcher on the validity of fingerprint identification. We make no claim that these cases amount to any sort of representative or comprehensive sample of cases. We are also confident that we have missed many cases due to our opportunistic method of data collection. Our discussion of cases is illustrative only.


Friction ridge (or "latent print" or "fingerprint") analysis concerns the comparison of impressions of friction ridge skin--the corrugated skin on the fingers, palms, and soles--in order to make inferences about whether such impressions may have derived from a common source area of friction ridge skin. (31) For purposes of criminal investigations, such comparisons typically involve the comparison of an accidentally deposited impression, which we shall call a "mark," with a deliberate impression, which we shall call a "print." (32)

Inferences of source of the mark are effected by comparing the "friction ridge details" of the mark with those of the print. (33) Features that are "in agreement"--that is, in the same relative spatial location within some subjectively defined degree of "tolerance"--support an inference of common source. (34) Features that are not in agreement support an inference of different sources. (35)

To what extent friction ridge features support inferences of common source--and what inference can be made from various findings of agreement of various configurations of features--has long been a vexing question. While some research on this question has recently been published, (36) the current practice in the discipline is to have the latent print analyst intuit the value--that is, the relative rarity within the relevant population of friction ridge skin--of the friction ridge features observed. (37) While we, and many others, continue to find this practice problematic, (38) that is not our topic here.

A. Fingerprint Reports

At least four categories of "reports" (39) of friction ridge analyses are possible: (1) individualization; (2) exclusion; (3) inconclusive; and (4) no value. (40) However, actual reporting practices have varied across both agency and time. (41) For instance, many agencies historically reported only "identified" or "not identified," thus compressing the latter three categories into one. (42) Most attention has been focused on the individualization report. (43) The purpose of this Article, however, is to draw attention to the probative value of the other report types. Therefore, we will briefly discuss each.

1. Determinations of Value

Recent efforts have greatly enhanced the latent print discipline's description of the process of latent print analysis. (44) Of particular importance for our purposes is the fact that latent print analysis is supposed to begin with a determination of value. (45) In other words, the analyst is supposed to decide whether the mark contains enough information to make an inference. If not, there is (perhaps) no reason to continue further with the analysis. However, even this determination has been muddied by recent discussions, which make clear that latent print units are not consistent in how they use the term "of value." Recently, the discipline has noted that there are some marks that may have "value for exclusion only." (46) There may be so few features that, even if they were all consistent with a print, it would not support a report of "individualization," but if that arrangement of features did not appear anywhere in a complete set of prints it might support a report of "exclusion." (47) Conversely, there may be some marks that are of value for individualization, but not for exclusion. (48) For some units, a mark should be deemed "of value" only if the examiner believes that a conclusion of individualization from that mark would be possible. (49) The Scientific Working Group on Friction Ridge Analysis, Study and Technology ("SWGFAST"), a former standard-setting body for latent print analysis in the U.S., (50) called this "of value for individualization." (51) Other units contend that a mark that the examiner believes could never support a conclusion of individualization but could support a conclusion of exclusion should also be considered "of value." (52) SWGFAST called this "of value for comparison." (53) This produces a confusing situation for litigants in which different agencies mean different things by "of value." And, even with this nuance, the "of value" determination is overly crude. Ulery et al. sensibly observe that "the value of latent prints is a continuum that is not well described by binary (value vs. no value) determinations." (54)

As we will discuss in Parts 0 and 0, an erroneous "no value" report can be potentially damaging to an innocent suspect.

2. The "Inconclusive" Report

"Inconclusive" refers to a conclusion that the mark is "of value" but that the examiner is unable to reach a conclusion of either individualization or exclusion. (55) The inconclusive report gives rise to a number of difficult issues. One is whether "inconclusive" responses can be considered "error[s]." (56) On the one hand, since the ground truth is always either that the mark and print came from the same source or from different sources, an inconclusive report is always wrong. On the other hand, given that inconclusive is essentially a decision not to make an inference, an inconclusive report never can be wrong. Those charged with maintaining quality control in fingerprint practice, not surprisingly, find neither of these positions helpful--they would like to believe that there are prints for which any competent examiner should or should not conclude "inconclusive." (57) One way of dealing with this is to speak of "nonconsensus" reports, as SWGFAST does. (58)
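The practical stakes of this definitional dispute can be sketched with a hypothetical tally: the same set of examiner decisions yields very different measured error rates depending on whether inconclusives count as errors or are dropped from the calculation entirely. The counts below are invented purely for illustration.

```python
# Hypothetical tally illustrating how the treatment of "inconclusive"
# reports changes a measured error rate. The counts are invented.

counts = {"correct": 85, "incorrect": 3, "inconclusive": 12}
total = sum(counts.values())  # 100 comparisons

# Convention A: since ground truth is always same-source or
# different-source, an inconclusive is always "wrong."
rate_if_always_wrong = (counts["incorrect"] + counts["inconclusive"]) / total

# Convention B: an inconclusive is a decision not to infer, so it is
# removed from the calculation entirely.
rate_if_never_wrong = counts["incorrect"] / (total - counts["inconclusive"])

print(f"inconclusives count as errors: {rate_if_always_wrong:.3f}")  # 0.150
print(f"inconclusives excluded:        {rate_if_never_wrong:.3f}")   # 0.034
```

The fivefold gap between the two figures is entirely an artifact of the reporting convention, which is why quality-control practitioners find neither position satisfying.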

A second issue is that the term "inconclusive" is used inconsistently across different forensic disciplines. (59) Third, even within the latent print discipline, the term is used inconsistently. One latent print examiner commented "that many agencies and latent print analysts struggle with the documentation and reporting of inconclusive results." (60) The "inconclusive" report covers a number of factual scenarios that may be quite different in terms of their probative value. (61) A SWGFAST document in (apparently permanent) draft form notes that "inconclusive" covers at least three distinct factual situations:

1. Lack of Comparable Areas ("LCA")

This inconclusive conclusion results from a lack "of complete and legible known prints." (62) This means comparisons were made to the extent possible; however, additional clear and completely recorded exemplars, including the required anatomical areas, are needed for reexamination. (63)

2. Lack of Sufficiency for Individualization ("LSI")

"[C]orresponding features are observed but not sufficient to individualize." No substantive dissimilar features are observed. (64)

3. Lack of Sufficiency for Exclusion ("LSE")

"[D]issimilar features [are] observed but not sufficient to exclude." No substantive correspondence is observed. (65)

Moreover, SWGFAST notes that this list is not exhaustive and that "[t]here may be other instances where agencies have adopted procedures to report inconclusive conclusions." (66)

Because the three categories described above were only recently articulated by SWGFAST, post-conviction litigants with "inconclusive" reports in their case files may not know which of the above scenarios led to that "inconclusive" report (Figure 1). And yet, which of the above scenarios obtained might matter a great deal to the post-conviction litigant. (67) However, "[h]istorically, the inconclusive decision has been used[,]" in both scenarios: when the mark is of value but there is insufficient consistent or inconsistent detail to render decisions of individualization or exclusion, and "when the exemplars are incomplete or of low quality." (68) Recent debates on latent print examiner discussion boards show that the issue of how to report inconclusive results is far from resolved within the latent print discipline. (69)

Since conclusions of "inconclusive" in some sense cannot be wrong, examiners who report this conclusion can neutralize potentially exculpatory evidence without risking the possibility of making an erroneous individualization or exclusion. The easiest path for an examiner who is, consciously or unconsciously, primed to believe that a conclusion of individualization or exclusion would harm the investigators' case would be to change that conclusion to "inconclusive." Even if that conclusion were later found to be contradicted by consensus or ground truth, as in some of our case examples cited in the Appendix, the inconsistency would not be termed an "error" but merely a "non-consensus" result or a failure of discrimination. (70)

Many post-conviction litigants may have "inconclusive" reports in their case files that are ambiguous in the way described above. Depending on how much time has passed and the state of documentation in the laboratory that issued the report, the true meaning of the inconclusive report may be unrecoverable. Moreover, the above categories describe only the current state of affairs. As Ray and Dechant point out, this state of affairs is more recent than one might assume. (71) As recently as 1997, the Technical Working Group on Friction Ridge Analysis, Study and Technology ("TWGFAST"), the predecessor organization of SWGFAST, promulgated conclusions only of: "Identification," "Non-identification," and "Incomplete or unclear known impressions." (72) This third category would cover what SWGFAST now calls LCA, but the TWGFAST guidelines contained no category that would accommodate what SWGFAST now calls LSI and LSE. (73) In addition, Ray and Dechant note that "many examiners still operated under" a subtly different scheme, "the identification-no identification paradigm," even after the TWGFAST guidelines were promulgated. (74) For example, until 2008 the Arizona Department of Public Safety allowed only two reports: (1) "identification" and (2) "no identifications were effected." (75) The latter report encompassed no fewer than three (or even five) distinct reports under current SWGFAST nomenclature: "exclusion," "no value," and "inconclusive" (with its three subtypes). (76)

3. Exclusion

It may have come as some surprise to learn in the preceding section that the exclusion report does not exist, or has not existed, in many agencies. A 2009 survey found that twenty-three percent of U.S. latent print analysts surveyed reported that their agencies did not even use the term "exclusion," and those that did use it meant different things by it. (77) Agencies which do not use the term may use a term like "not identified," which might obscure the difference between what SWGFAST calls an inconclusive and an exclusion. (78) This practice may well deprive defendants of evidence with probative value. To take the most obvious example, a report that a defendant was excluded as the source of an incriminating mark is far more probative of innocence than a report that a comparison was inconclusive. The latter would allow the prosecutor to argue that the defendant could be the source of the incriminating mark. (79)

In contrast to individualizations, exclusions rely on the quality and completeness of the provided known prints. (80) In order to exclude an individual as being the source of a mark, the examiner must be able to eliminate all relevant areas of friction ridge skin possessed by that person as sources of the mark. (81) As a simple example, consider a case in which, for some reason, prints of only nine fingers are provided. The examiner cannot exclude this individual as the source of the mark because the missing finger could be the source.

This determination depends heavily on whether the examiner is able to determine correctly, a priori, the anatomical source of the mark. (82) If the examiner is able to determine a priori that a mark derives from a fingertip and the examiner has access to clear and complete prints of all the suspect's fingertips, then the examiner can exclude the suspect. (83) However, if, as is sometimes the case, the examiner cannot determine a priori the anatomical source of the mark--so it might be from a fingertip, a palm, or a finger joint (phalange)--then the examiner will require clear and complete prints of all of these areas in order to exclude. (84)
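The coverage requirement just described reduces to a simple set check: an exclusion is supportable only when clear, complete exemplar prints cover every anatomical area that could plausibly have left the mark. A minimal sketch, in which the function and area names are our own and purely illustrative:

```python
# Illustrative sketch of the coverage requirement for exclusions: every
# candidate source area of the mark must be covered by a clear, complete
# exemplar print. Function and area names are hypothetical.

def exclusion_supportable(possible_source_areas, clear_exemplar_areas):
    """True only if every candidate source area has a clear exemplar."""
    return possible_source_areas <= clear_exemplar_areas  # subset test

fingertips = {f"finger_{i}" for i in range(1, 11)}

# Mark known to come from a fingertip; all ten fingertips clearly recorded.
print(exclusion_supportable(fingertips, fingertips))  # True

# Only nine fingers recorded: the missing finger could be the source.
print(exclusion_supportable(fingertips, fingertips - {"finger_10"}))  # False

# Anatomical source uncertain (could also be a palm): ten clear
# fingertips alone are no longer enough to support an exclusion.
print(exclusion_supportable(fingertips | {"left_palm"}, fingertips))  # False
```

The third case captures the text's point: uncertainty about the anatomical source expands the set of areas that the exemplars must cover.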

4. The Probative Value of Fingerprint Reports

Because fingerprint individualizations purport to have such enormous probative value, lawyers may be accustomed to overlooking the probative value of other fingerprint reports. However, depending on the fact pattern of the case, all fingerprint reports potentially have probative value. A report that excludes a suspect as the source of a mark that, because of its location or medium (such as blood), is highly incriminating has very high probative value in favor of innocence. (85) As FBI forensic scientists have commented: "[E]xclusions are a very useful investigative tool and are currently underutilized." (86) Yet exclusions have historically been neglected as a topic in the fingerprint discipline: "[t]he comparison of friction ridge skin has always been focused on the identification decision." (87) Historically, latent print analysis has placed far greater emphasis on individualization than on exclusion. (88) Even reports of "exclusion" for marks that are not necessarily highly incriminating because of location or medium can still be what Liebman et al. call "non-exclusionary non-matches": "small" evidence of innocence that may aggregate with other such "small" evidence to become powerful evidence of innocence. (89) Conclusions of inconclusive or no value, if, in fact, other examiners might have excluded the suspect, deprive suspects of that probative value. (90) Likewise, reports of individualization to third parties have obvious probative value.

In a 2011 study, Neumann et al. re-analyzed marks that were not recovered during the original laboratory processing, that were deemed of "no value," and that were deemed "inconclusive." (91) They showed that some of these marks did, indeed, have probative value. (92) The overall proportion of marks analyzed that had probative value, however, was small, suggesting that such analyses might not be cost-effective. Neumann et al. suggest, reasonably, that laboratories might assess the potential probative value of such marks before deciding whether to order the analysis. (93)


A common framework in science for characterizing correct and incorrect results by any sort of detection system, such as fingerprint identification, is Signal Detection Theory ("SDT"). (94) Table 1 illustrates this approach, which is based on a matrix of combinations of examiner response and ground truth (or the state of the world). (95)

While SDT provides a useful starting point, it turns out that the universe of fingerprint error is too complex to be adequately captured by this simple scheme. Actual fingerprint practice does not limit examiners to the kind of binary decision framework suggested by Table 1. (96) Because standard latent print analysis can yield at least four--or, if "inconclusive" is differentiated, at least six--different conclusions, the SDT matrix cannot fully account for the relationship between latent print reports and ground truth. (97) A more nuanced matrix is needed.
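The mismatch can be made concrete by enumerating the cells each scheme generates. In the sketch below (the labels are ours, not a reproduction of Tables 1-3), the binary SDT scheme yields four report/ground-truth cells, while differentiating the latent print response set yields twelve:

```python
# A minimal sketch contrasting the classic binary SDT matrix with an
# expanded response set for latent print work. Labels are illustrative,
# not a reproduction of the article's tables.

GROUND_TRUTHS = ("same_source", "different_source")

# Classic SDT: two responses crossed with two states of the world.
SDT_RESPONSES = ("individualization", "exclusion")

# Latent print practice adds non-binary responses, so the 2x2 matrix
# cannot capture the full relationship between report and ground truth.
LATENT_RESPONSES = (
    "individualization",
    "exclusion",
    "inconclusive_LCA",  # lack of comparable areas
    "inconclusive_LSI",  # lack of sufficiency for individualization
    "inconclusive_LSE",  # lack of sufficiency for exclusion
    "no_value",
)

def matrix_cells(responses, truths=GROUND_TRUTHS):
    """Enumerate every (response, ground truth) cell of the matrix."""
    return [(r, t) for r in responses for t in truths]

print(len(matrix_cells(SDT_RESPONSES)))     # 4 cells in the binary scheme
print(len(matrix_cells(LATENT_RESPONSES)))  # 12 cells in the expanded scheme
```

Each additional response type triples nothing and hides nothing; it simply adds a row whose cells must each be assigned a meaning, which is the work the matrices in this Part perform.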

In Table 3, we propose a comprehensive conceptual framework of fingerprint error for use in this paper. We adapted this matrix from two primary sources. First, the National Institute of Standards and Technology ("NIST") and National Institute of Justice ("NIJ") Expert Working Group on Human Factors in Latent Print Analysis published a matrix more nuanced than the standard SDT matrix (Table 2). (98) The NIST/NIJ matrix accommodates the additional "inconclusive" and "insufficient" reports. (99) Second, we drew on the terminology proposed in SWGFAST Document #15, "Standard for the Definition and Measurement of Rates of Errors and Non-Consensus Decisions in Friction Ridge Examination." (100) SWGFAST describes errors at two different levels, one general and one more specific. (101) At the general level, SWGFAST divides errors into two categories: "missed individualizations" and "missed exclusions." (102) "Missed exclusions" is a general category encompassing various types of error in which the ground truth was that some print was from a different source than the mark. (103) "Missed individualizations" is a general category encompassing various types of error in which the ground truth was that some print was from the same source as the mark. (104) In keeping with conventional scientific usage, we have used the label "type I" error for missed exclusions and "type II" error for missed individualizations; however, we have added letters to subdivide these two general types.

At the more specific level, SWGFAST offers terms for most, though not all, of the specific types of error contained in the matrix. (105) Some of these are well known, such as "erroneous individualization." (106) Others are less well known, such as: "non-consensus individualization." (107)

In Figure 1, we overlay these error types onto a process map of the ACE-V (Analyze, Compare, Evaluate--Verify, currently the preferred way of describing friction ridge analysis) process published by the NIST/NIJ Expert Working Group on Human Factors in Latent Print Analysis. (108) This allows the reader to visualize where in the process each of these error types might occur.

1. Not Reported, Not Recovered, and Not Analyzed

Both the NIST/NIJ and SWGFAST matrices were conceived in the context of proficiency testing scenarios--that is, a latent print unit seeking to measure the extent to which its examiners reached what its "best experts" viewed as the "correct answers." (109) The matrices are aimed at assessing the performance of latent print examiners on materials provided to them by police investigators. (110) These matrices miss problems that originate "upstream" in the provision of materials to the laboratory. (111) Since we are interested in actual cases, we need to account for these extra-laboratory results. Therefore, we added three additional "reports" (rows F-H) to describe situations in which latent prints are not recovered from the crime scene, are recovered but not analyzed, or are analyzed but the results not reported. We have coined our own terms for these error types, seeking to remain consistent with SWGFAST's terminology.

2. Of Value, but No Suitable Candidate

Some readers may be puzzled by the response category titled "Of Value, but no Suitable Candidate." We drew this category from SWGFAST Document #15. According to SWGFAST, this category "means no subjects were compared, or all AFIS candidate images that were compared were excluded." (112) Although this sounds like an odd scenario, it turns out to be quite a useful category for describing the sequence of events in actual cases, as will be discussed below. We labeled four cells, all pertaining to this category, "N/A" because we were not able to conceive of sensible scenarios indicated by these combinations of examiner report and ground truth.

3. Case Examples

Table 3 also shows, in the lower right corner of each cell, the number of actual cases of each error type we were able to document. These numbers are illustrative only. We do not think they necessarily represent the relative frequency of error types. They may, for example, merely represent the relative frequency of exposure of different error types. For error types for which actual cases are known, we provide one example of each in an Appendix. (***) Most, but not all, of these cases are wrongful conviction cases. In many cases, we believe that actual cases of these types probably do exist, but that, for various reasons, the error may not have been exposed. Table 4 names all the known cases of each error type.

4. Empirical Accuracy Data

Finally, we superimpose data from what is widely considered the best empirical study of the accuracy of latent print analysis onto our error type matrix. (113) This study by Ulery et al. had 169 self-selected professional American latent print examiners perform 100 comparisons each. (114) One-third of the comparisons were what the researchers called "nonmated pairs," images that in fact derived from different sources. (115) These appear in column I of Table 3. Two-thirds of the comparisons were what the researchers called "mated" pairs, different images which do in fact derive from the same source. (116) These appear in column II. The rates of various responses in the Ulery et al. study are shown in the lower left corners of the appropriate cells of Table 3. This gives an idea of how often latent print examiners gave each response in a simulated study. We see that for both "mated" (from the same source) and "non-mated" (from different sources) pairs, examiners reached the full range of conclusions and that conclusions that deviated from ground truth were extremely common.
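The kind of tabulation underlying Table 3 can be sketched as follows. The decisions below are invented for illustration; they are not the Ulery et al. data.

```python
# Sketch of tabulating examiner responses against ground truth, in the
# style of an accuracy study. The decisions are hypothetical, not the
# Ulery et al. data.
from collections import Counter

decisions = [  # (ground_truth, examiner_response)
    ("mated", "individualization"), ("mated", "individualization"),
    ("mated", "inconclusive"),      ("mated", "exclusion"),
    ("nonmated", "exclusion"),      ("nonmated", "exclusion"),
    ("nonmated", "inconclusive"),   ("nonmated", "no_value"),
]

tally = Counter(decisions)
column_totals = Counter(truth for truth, _ in decisions)

# Rate of each response within its ground-truth column, as in Table 3.
for (truth, response), n in sorted(tally.items()):
    print(f"{truth:8s} {response:17s} {n / column_totals[truth]:.2f}")
```

Dividing each cell count by its column total, rather than by the grand total, is what makes the "mated" and "nonmated" columns separately interpretable as conditional response rates.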


In this Part, we proceed through Table 3, discussing each "type I" and "type II" error type and two "type III" error types.

A. Missed Exclusions

This general category contains cases in which the ground truth was that some relevant individual was not the source of a mark and latent print evidence somehow did less than it might have to help the justice system reach that conclusion.

1. Type I-B. Missed Exclusion: Erroneous Individualization

In some cases, examiners have concluded that an individual was the source of a mark, and subsequent events have provided strong evidence that was not the case. As we remarked above, many errors of this type have been discussed in the literature, including the well-known Mayfield and McKie cases. (117)

Discussions of accuracy studies have focused on these types of errors because, for a period of time, the latent print discipline denied that they occurred. (118) They are also, of course, the most devastating type of error for an innocent defendant. However, the accuracy studies found that they are also the rarest type of error, though perhaps less rare than many believed. (119)

2. Type I-C. Missed Exclusion: Non-Consensus Inconclusive

An examiner may make an "inconclusive" report for a comparison for which a "consensus" of latent print examiners might reach a conclusion of "exclusion." Non-consensus "inconclusive" reports might affect defendants by depriving them of the probative value of an exclusion. As Ulery et al. note, "[o]ccasionally, the distinction between an inconclusive and an exclusion may be important for exculpatory evidence, if the latent is of high probative value (e.g., on the handle of a knife), or if the latent indicates that another person was present at a crime scene." (120) The Ulery et al. accuracy study found a high rate of this type of error (18 percent of non-mated comparisons). (121)

3. Type I-E. Missed Exclusion: Non-Consensus Determination of No Value

If latent print analysis reports that a mark is of "no value," then a jury might reasonably conclude that a defendant could be the source of that mark. If the ground truth is that the defendant is not the source of that mark, then the defendant has been deprived of a report of "exclusion." If the mark is highly probative, because of its location or medium, for example, then a finding of exclusion might have been highly probative of innocence. If it turns out the "consensus" of latent examiners is for a report of "exclusion," but the jury has been presented with a report of "no value," then the latent print analysis has deprived the defendant of what we might call the "marginal probative value" (122) of that report.

Ulery et al. found a substantial rate of these types of error (10 percent of non-mated comparisons). (123) In addition, a recent study found a high rate of error in value determinations: 34 percent of marks discarded by laboratory practitioners as having "no value" were found to have value. (124) As the authors note, "discarding a good quality fingermark will lead to the potential loss of identifying evidence which, at the extreme, could lead to a guilty person not being detained or convicted." (125) This could affect innocent individuals who are suspected for the crime for which the guilty person is "not... detained or convicted." (126) Another study found that "crowdsourced" expert value determinations were "more robust" than individual expert determinations. (127)

In addition, Fraser-Mackenzie et al. found that suitability (value) determination can be affected by cognitive bias. (128) Examiners were less likely to find a mark of value when presented with a comparison print from a different source. (129) Thus, a mark of value might be called no value simply because the examiner is presented with the wrong candidate print. However, the mark might be potentially identifiable if the appropriate comparison print were produced. It is easy to see the pernicious effects this might have in a post-conviction context: if a new third-party suspect is developed, potentially incriminating latent print evidence may be lost if an old "no value" determination is not revisited.

4. Type I-F. Missed Exclusion: Failure to Report

In these cases, the failure to exclude derives not from any error on the part of the latent print examiner but from the failure of the examiners' findings to reach the fact-finder, whether through suppression by the police or prosecutor or through negligence by the defense attorney. In these cases, the defendant would be deprived of the probative value of an exclusion report.

5. Type I-G. Missed Exclusion: Failure to Conduct an Analysis

Depending on the fact pattern of a case, a failure to even conduct a latent print analysis can have counter-probative value for defendants. If a mark is very likely to derive from the perpetrator, the failure to exclude a suspect as the source of that mark can deprive the suspect of significant probative value.

6. Type I-H. Missed Exclusion: Failure to Recover Probative Evidence

Depending on the fact pattern of a case, a failure to recover latent print evidence can have counter-probative value for defendants. If a mark is very likely to derive from the perpetrator, the failure to recover that mark can deprive the suspect of the exculpatory evidence that might have derived from excluding him as the source of that mark.

B. Missed Individualization

This general category contains cases in which the ground truth was that some relevant individual was the source of a mark and latent print evidence somehow did less than it might have to help the justice system reach that conclusion.

In the Ulery et al. study, examiners not infrequently reported "no value," "inconclusive," or "exclusion" for "mated" (meaning the ground truth was that the impressions were from the same source) pairs. (130) Indeed, in the case of two mated pairs, no examiner reported that they were from the same source. (131) All of these erroneous--or, if you prefer, non-consensus--conclusions potentially deprive a suspect of probative evidence depending on the fact pattern of the case.

1. Type II-A. Missed Individualization: Erroneous Exclusion

Because until recently "exclusions" were often not explicitly reported, Ulery et al. have gone so far as to call the erroneous exclusion "a new type of error." (132) An erroneous exclusion of the true perpetrator as the source of a mark can diminish or eliminate suspicion of that true perpetrator. This, in turn, can allow suspicion, and eventually conviction, to fall on an innocent person.

Ulery et al. found these types of errors occurred not infrequently (5 percent of mated comparisons). (133) Moreover, they found, to their surprise, that these errors were not concentrated in particularly difficult mated print pairs; they were distributed widely across mated print pairs. Indeed, "[e]rroneous exclusions were made by at least one examiner on 46%...[,]" of mated print pairs. (134) And, "85% of examiners made at least one erroneous exclusion--although 65% of participants said that they were unaware of ever having made an erroneous exclusion after training." (135) Another study found an erroneous exclusion rate of 7.9 percent for experts and 25.5 percent for novices. (136) A follow-up study by the same authors found erroneous exclusion rates of 27.8 percent for experts, 30.6 percent for both intermediate trainees and novices, and 50.9 percent for new trainees. (137) The results of two other studies are also consistent with this pattern: erroneous exclusions occur at a much higher rate than erroneous individualizations. (138) A qualitative analysis of erroneous exclusions found a number of contributors: cases with a high number of comparisons, examiners rushed or under stress, distortion or color reversal (ridges are white, furrows are black), (139) low contrast or background interference, left/right reversal, the true source being a little finger, incomplete or low quality known prints, and misjudged orientation. (140)

As two latent print examiners comment, "[t]he high rate of erroneous exclusions in recent studies demonstrates that additional training in exclusions is necessary." (141) They also comment that "these high rates of erroneous exclusions could be underestimating the problem." (142) In addition, it has been noted that the "verification" phase of the ACE-V process appears to be better at detecting erroneous individualizations than erroneous exclusions. (143) Some latent print examiners have been disciplined or reassigned, not because of making erroneous individualizations, but because of "often miss[ing] useful prints and identifications that could have led to convictions." (144)

2. Type II-C. Missed Individualization: Non-Consensus Inconclusive

As we discussed above in Part 0, non-consensus inconclusive reports can damage innocent defendants when the ground truth is exclusion, depriving them of the benefit of an exclusion. However, they can damage such defendants even more when the ground truth is that a third party is the source of the mark because that person might make a plausible alternate suspect.

Ulery et al. found these types of errors occurred at an astonishingly high rate: about one-third of all mated comparisons. (145)

A recent technical paper has suggested that a phenomenon it calls "fingermark ridge drift" may cause an examiner to "erroneously report an inconclusive result where a positive identification may be justified." (146) Fingermark ridge drift is the appearance of movement of a single ridge impression in a mark as the mark ages. (147) The authors hypothesize that "drift" may be caused by actual "microscopic movement of the ridge over the non-porous surface by a diffusion (sliding) effect." (148) Or, "drift" may merely reflect the appearance of movement through "selective degradation" of a ridge impression as the mark ages. (149)

3. Type II-D. Missed Individualization: Failure to Provide Suitable Candidate

We use SWGFAST's "failure to provide suitable candidate" category to describe a rather large category of cases in which a suitable candidate donor of the mark existed but their prints were not provided to the latent print analyst. In such cases, a mark from the true perpetrator might be underexploited simply because the true perpetrator was not known to investigators and their prints, therefore, were not made available to latent print examiners. As noted above, latent print examiners can only generate conclusions about the sources of marks when provided with a comparison set of prints. Comparison sets of prints can be generated in two ways: (1) police identification of suspects; or (2) database searches. (150) Prior to the development of computerized search aids, "manual" database searches were costly in terms of human resources, and therefore rare. (151) Thus, a mark may be unexploited because the true perpetrator was not identified as a suspect and AFIS searching was not in regular use at the time of the investigation. Or, the true perpetrator may not be in the database at the time of the search. Or, the database search may fail for any number of technical reasons. The Marquette Park Four case, discussed in the Introduction, is an example of this type of case. (152)

4. Type II-E. Missed Individualization: Non-Consensus Determination of No Value

As with an erroneous inconclusive result, an erroneous determination of no value could deprive an innocent defendant of probative evidence, as, for example, if the mark actually was "of value" and a consensus of expert opinion held that a plausible third-party suspect was the source of the mark.

Ulery et al. found these types of errors occurred at a very high rate: 29 percent of all mated comparisons. (153)

5. Type II-F. Missed Individualization: Failure to Report

In some cases, a failure to fully exploit an individualization may result not from any misconduct on the part of the latent print examiner, but from prosecutorial suppression of the latent print examiners' findings. (154) An undisclosed consensus attribution of an incriminating mark to a plausible third party could easily contribute to a wrongful conviction.

6. Type II-G. Missed Individualization: Failure to Conduct an Analysis

In some cases, police have neglected to even request an analysis of a mark later reported to have been left by the true perpetrator. (155) Even if AFIS searching was not available at the time of the initial investigation, the failure to analyze the mark should constitute negligence. In addition to missing an opportunity, however slim, to provide evidence against the true perpetrator, an innocent defendant would be deprived of the probative value of being excluded as the source of incriminating marks.

7. Type II-H. Missed Individualization: Failure to Recover Probative Evidence

Even further "upstream" than failing to analyze an incriminating mark would be failing to even recover such a mark in the first place.

A recent study endeavored to measure how much evidence of probative value is lost through the non-recovery of marks, non-consensus determinations of no value, and non-consensus determinations of inconclusive (error types I-C, I-E, and I-H) in actual casework. (156) Using additional human resources not typically available in a casework context, the study found that it was able to recover additional marks in 85 percent of 178 cases. (157) In seventeen (10 percent) cases, the researchers were able to associate newly recovered marks with an individual. (158) However, in fourteen of those seventeen cases, some other mark from that case had already been associated with an individual. (159)

Of seventy marks that the crime laboratory initially reported as inconclusive or no value, the researchers were subsequently able to make associations with twenty of them, a surprisingly high rate. (160) Put another way, in fourteen cases (8 percent), the researchers associated marks previously reported as no value or inconclusive with an individual. (161) However, again, in twelve of those fourteen cases, some other mark from that case had already been associated with an individual. (162)
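As a quick arithmetic check, the rates reported above follow directly from the study's counts. The script below is only an illustrative re-computation of the percentages from the counts given in the text; it adds no new data:

```python
# Re-computing the percentages reported above from the study's counts:
# 178 cases total; 17 cases in which newly recovered marks were associated
# with an individual; 20 of 70 re-examined "no value"/"inconclusive" marks
# associated; 14 cases with such marks associated at the case level.
total_cases = 178
new_mark_cases = 17
reexamined_marks, reexamined_associated = 70, 20
case_level_reassociations = 14

print(f"newly recovered marks associated: {new_mark_cases / total_cases:.0%}")          # 10%
print(f"re-examined marks associated: {reexamined_associated / reexamined_marks:.0%}")  # 29%
print(f"case-level re-associations: {case_level_reassociations / total_cases:.0%}")     # 8%
```

The 29 percent association rate for marks previously discarded as "no value" or "inconclusive" is what the text characterizes as surprisingly high.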

This study supports the suspicion that evidence with probative value is lost through "Non-consensus inconclusive[s]," "Non-consensus determinations of no value," "[F]ailure to conduct an analysis," and "Failure to recover probative evidence" (rows C, E, G, and H of Table 3). (163) As the authors of the Operational Benefits and Challenges of the Use of Fingerprint Statistical Models study note, "[g]enerally, these results confirm that some of the pieces of evidence, which are currently not recovered, or not fully exploited, may potentially be highly informative." (164) However, the authors reasonably note that the costs of recovering this evidence may be very high. (165) They suggest, however, that focusing on better exploiting marks reported as inconclusive or no value would provide far more bang for the buck than focusing on unrecovered marks. (166) The authors further recommend that consideration of the potential probative value of the mark--as was done for some time by the Forensic Science Service in the United Kingdom--might further enhance the cost-effectiveness of efforts at better exploitation. (167)

C. Missed Inconclusives

Although we will not go into detail on what we call error types III through V, we will discuss some examples of what we call a type III error. This general category contains cases in which there is so much disagreement among latent print experts that it is reasonable to say the "ground truth" is that the comparison should be deemed inconclusive. (168) These error types show that even non-consensus reports about latent print evidence that cannot ultimately be definitively resolved as an individualization or exclusion can contribute to wrongful convictions.

1. Type III-A. Missed Inconclusive: Non-consensus Exclusion

In some cases, an examiner might report an exclusion that consensus opinion in the discipline cannot support. Therefore, the correct report should have been inconclusive. An erroneous exclusion of this type can damage an innocent defendant by falsely excluding a plausible third-party suspect.

2. Type III-B. Missed Inconclusive: Non-consensus Individualization

In other cases, an examiner might report an individualization that consensus opinion in the discipline cannot support. Given such disagreement, the proper report should have been "inconclusive." The damage of such a report to an innocent defendant is obvious.

D. Summary

The above cases suggest that there is a wide array of ways in which fingerprint evidence can contribute to inaccurate criminal justice outcomes. As illustrated by cases like Cowans, (169) Mayfield, (170) McKie, (171) and Dandridge, (172) latent print examiners can report that suspects are sources of marks when in fact they are not or, as in Williamson/Fritz, that victims are sources of marks when in fact they are not. (173) But there is a much wider range of potential errors. All of the possibilities below occurred, or may have occurred, in at least one of the cases we list in Table 4:

1. Investigators can fail to recover marks (Butler).

2. Investigators can fail to request analyses of recovered marks (Carter, Houser).

3. Investigators can fail to request comparisons of marks to the prints of plausible suspects (Grimes).

4. Law enforcement agencies can fail to conduct database searches of marks whose sources have not been reported (Grimes).

5. The true perpetrator may not be included in the AFIS database (Marquette Park Four).

6. The AFIS may not be usable on a particular print, e.g., a poor quality mark or a palm, proximal or middle phalanx, or sole mark (Marquette Park Four).

7. AFIS can "miss"--that is, fail to list the true source of a mark as a candidate, even when that source is in the database (Newsome).

8. Latent print examiners can "miss"--that is, commit "false negatives" by reporting that the true source of a mark is not the source (Armstrong).

9. Latent print examiners can report that marks are of "no value" when, in fact, they are "of value," depriving defendants of potentially probative evidence (Allen).

10. Latent print examiners can "fail to exclude"--that is, report that a comparison was "inconclusive" when, in fact, it excluded the suspect (Showers).

11. Latent print examiners can suppress dissenting expert conclusions, perhaps in an effort not to weaken the case against the prime suspect (Bibbins).

12. Investigators can fail to report the results of analyses to prosecutors (Waters).

13. Prosecutors can fail to disclose the results of analyses to defendants (Heins).

14. Defense attorneys can fail to present latent print evidence with probative value at trial (Heins).


As the preceding section demonstrates, accuracy studies show other types of fingerprint error occurring at a far higher rate than erroneous individualizations. Consideration of the structure of latent print analysis suggests that we should not be surprised by these findings. SDT teaches that examiners can reduce erroneous individualizations by calibrating themselves with a conservative bias at the cost of committing a greater proportion of other types of error. (174) As an extreme example, if an examiner reports exclusion for all comparisons, she will commit zero erroneous individualizations at the price of committing a large number of erroneous exclusions. Latent print examiners are supposedly trained to behave conservatively--that is, to seek to minimize erroneous individualizations at the expense of increased erroneous exclusions or non-consensus inconclusive determinations. (175) In other words, to the extent that latent print examiners behave as they claim to, we should expect missed individualization rates to be much higher than missed exclusion rates. Indeed, it has been suggested that the effect of conservative bias is not so much to shift examiners toward the exclusion decision--which could be false--as toward inconclusives--which, as discussed above in Part 0, at least in some sense, can never be false. (176)
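The SDT tradeoff can be made concrete with a toy simulation. Nothing below reflects real examiner data; the score distributions and decision criteria are invented solely to illustrate how a more conservative criterion trades erroneous individualizations for erroneous exclusions:

```python
import random

random.seed(0)

# Toy model: each comparison yields a "similarity score"; mated (same-source)
# pairs tend to score higher than non-mated (different-source) pairs.
mated = [random.gauss(2.0, 1.0) for _ in range(10_000)]
non_mated = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def error_rates(criterion):
    """Report 'individualization' when score >= criterion, else 'exclusion'."""
    erroneous_individualization = sum(s >= criterion for s in non_mated) / len(non_mated)
    erroneous_exclusion = sum(s < criterion for s in mated) / len(mated)
    return erroneous_individualization, erroneous_exclusion

# Raising the criterion (behaving more conservatively) drives erroneous
# individualizations toward zero while erroneous exclusions climb.
for criterion in (1.0, 2.5, 4.0):
    ei, ee = error_rates(criterion)
    print(f"criterion {criterion}: erroneous individualization {ei:.1%}, "
          f"erroneous exclusion {ee:.1%}")
```

At the extreme criterion, the simulated examiner makes essentially no erroneous individualizations but calls nearly all mated pairs exclusions, which is the tradeoff the text describes.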

Thus, latent print examiners are consciously biased toward the sort of other errors we discuss above. However, unconscious bias may also push them in that same direction. It has been suggested that forensic experts who are made aware of investigators' theory of the case may be vulnerable to conscious or unconscious bias to make their findings consistent with that theory. (177) Psychological research, principally the series of studies by Dror et al., has shown that latent print examiners who are primed to expect a certain outcome will sometimes shift their conclusions in that direction. (178) Interestingly, while the Dror et al. studies have primarily been used to debunk claims about the infallibility of fingerprint analysis with regard to erroneous individualizations, (179) the seminal Dror et al. study found erroneous exclusions and non-consensus inconclusives, but no erroneous individualizations. (180)

Because bias studies have found more erroneous exclusions than erroneous individualizations, some have argued that, if bias exists, it is a conservative bias. (181) A study by Thompson et al. offers support for this claim. Thompson et al. found that fingerprint experts were not only much more discriminating (that is, more skilled) than novices, but also much more conservative. (182) In other words, part of what distinguishes a fingerprint expert from a novice is not merely skill but also the decision to bias oneself against erroneous individualizations at the expense of committing more erroneous exclusions. (183) It is this "conservative response bias" that accounted for the surprising finding that intermediate trainees committed erroneous exclusions as often as novices, and experts committed them almost as often. (184) The professional latent print examiners' presumably greater skill was counterbalanced by their conservative response bias. (185)
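The distinction Thompson et al. draw between discrimination (skill) and response bias is the standard SDT decomposition into sensitivity (d′) and criterion (c). A minimal sketch of that decomposition, using invented hit and false-alarm rates rather than the study's actual data:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (z-score) transform

def sdt_summary(hit_rate, false_alarm_rate):
    """Equal-variance SDT: d' = z(H) - z(F) measures discrimination;
    c = -(z(H) + z(F)) / 2 measures response bias (positive c = conservative,
    i.e., a reluctance to report 'same source')."""
    zh, zf = z(hit_rate), z(false_alarm_rate)
    return zh - zf, -(zh + zf) / 2

# Invented rates: a skilled but conservative "expert" vs. a less skilled,
# roughly unbiased "novice."
expert_d, expert_c = sdt_summary(hit_rate=0.70, false_alarm_rate=0.01)
novice_d, novice_c = sdt_summary(hit_rate=0.75, false_alarm_rate=0.30)

print(f"expert: d'={expert_d:.2f}, c={expert_c:.2f}")  # higher d', positive c
print(f"novice: d'={novice_d:.2f}, c={novice_c:.2f}")
```

On these hypothetical numbers, the expert discriminates far better (higher d′) yet misses more mated pairs than her skill alone would predict, because her positive criterion c pushes borderline comparisons away from "same source": skill and conservatism pull the erroneous exclusion rate in opposite directions.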

However, even seemingly conservative bias may be detrimental to innocent defendants depending on the fact pattern of the case. For example, if an incriminating mark does not appear to derive from the prime suspect, examiners may be unconsciously biased against identifying a third party as a source--a potential erroneous exclusion.


How can increased awareness of other types of fingerprint errors be harnessed to reduce miscarriages of justice? In this section, we will argue that the most important potential remedy is increased access to fingerprint evidence.

Post-conviction litigants may encounter a number of significant obstacles to full exploitation of print evidence. Most importantly, they may not have access to the original print evidence. In a post-conviction posture, the litigant typically has no legal basis to obtain access to original print evidence. (186) The litigant can simply ask the agency with custody of the evidence, typically a law enforcement agency, for access. Although such agencies sometimes cooperate, they often do not. (187) Agencies have little incentive to expend scarce resources on voluntary cooperation with requests from convicts in closed cases. Post-conviction attorneys have experienced cases in which law enforcement agencies simply declined to allow them and their experts access to original latent print evidence. (188) Even when the state is inclined to cooperate, sometimes marks have been destroyed, as in the case of Mr. S., a client of the Northern California Innocence Project ("NCIP"), with a plausible claim of innocence. (189) As the NCIP notes, his innocence claim has been rendered essentially unprovable by the destruction of the fingerprint evidence.

Litigants might seek access to the original evidence based on a Freedom of Information/Privacy Act ("FOIPA") or open records requests. (190) Such requests are cumbersome and time consuming, especially for post-conviction litigants without counsel. (191) However, some innocence projects have had success obtaining fingerprint images through FOIA requests. (192)

Because the resource burdens of allowing access to evidence are relatively small, we see little reason that such access should not be facilitated. Either a public policy or judicial solution to the access problem would enhance the utilization of latent print evidence.

A. AFIS Access

AFIS searches can be a powerful tool for finding candidate sources of unidentified marks. In many cases, this tool may have been underexploited because, for example: (1) AFIS was not in widespread use at the time of the initial investigation; (2) AFIS was not searched because investigators had already identified known individuals as suspects; (3) AFIS was not searched because one or more marks was incorrectly deemed not of AFIS quality; (4) one or more marks not of AFIS quality at the time of the initial investigation later became searchable because of technological improvements in AFIS; (5) the AFIS database has grown to include more potential sources; or (6) the AFIS now includes palm prints as well as fingerprints. In addition, Next Generation Identification ("NGI"), the FBI's new AFIS platform, which has replaced the old Integrated Automated Fingerprint Identification System ("IAFIS") platform, has new capabilities that the old system lacked and that may yield information of probative value that would not have been generated by IAFIS. (193) For example, while IAFIS searched marks against "rolled" prints, (194) NGI searches them against both rolled and "flat" prints. (195) In some cases, the flat prints may contain detail or clarity that the rolled prints lack. (196) In addition, NGI searches both manually labeled minutiae (197) and auto-extracted minutiae. (198) These minutiae may differ and thus yield different results in AFIS searches.

Crime-scene marks whose source is not identified are typically stored in an unidentified latent file ("ULF") in AFIS. (199) When a new set of prints (say, from an arrestee) is entered into the system, it is automatically searched against the ULF. (200) This can produce "hits" suggesting that the new entrant may have been the source of a previously unidentified crime-scene mark. What is less well known, however, is that these routine ULF searches require a relatively high match score to trigger a review by a human latent print analyst, (201) presumably to avoid wasting human resources on false alarms. What this means, however, is that many potential links between unidentified marks and new entrants may be missed. Different results may be expected if the agency chooses to relaunch the search of the unidentified mark. In a relaunched search, without the high match score threshold, the analyst may find candidate sources that would otherwise have gone overlooked. Some, but by no means all, agencies annually relaunch searches of unidentified marks. (202) A litigant for whom an unidentified mark has probative value, therefore, should not necessarily be placated by the assurance that the mark is in the ULF and that all new entrants to the system are therefore searched against it. Such a litigant may be well served by requesting a relaunched search of the unidentified mark.
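The effect of the review threshold can be sketched in a few lines. The scores, threshold value, and candidate names below are all hypothetical, since real AFIS scoring algorithms and thresholds are proprietary; the point is only the structural difference between a thresholded routine search and a relaunched top-k search:

```python
# Hypothetical candidate scores returned by a ULF search for one new entrant.
candidates = {"print_A": 3120, "print_B": 2240, "print_C": 1980, "print_D": 870}

REVIEW_THRESHOLD = 3000  # routine searches flag only very high scores for review

# Routine ULF search: only candidates above the threshold reach a human analyst.
routine_queue = [p for p, score in candidates.items() if score >= REVIEW_THRESHOLD]

# Relaunched search: no threshold; the top-k candidates go to the analyst,
# who may recognize a source the routine search never surfaced.
def relaunched_queue(cands, k=3):
    return sorted(cands, key=cands.get, reverse=True)[:k]

print("routine review queue:   ", routine_queue)
print("relaunched review queue:", relaunched_queue(candidates))
```

In this sketch the routine search surfaces only one candidate for review, while the relaunched search also puts the second- and third-ranked candidates in front of an examiner, which is why a relaunched search can find links that the automated pipeline missed.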

Many attorneys have experienced being told that a particular mark was "not of AFIS quality." (203) While that may have been a legitimate designation at one time, it is worth noting that Ulery et al. now state that the results of a recent NIST study of AFIS performance "do not support the need for a distinct 'of value for AFIS' determination that is a subset of VID [of value for individualization], for current state-of-the-art latent matchers." (204)

AFIS searches can potentially provide great probative value in criminal cases. For example, a crime-scene mark that went unidentified at trial can become a mark attributed to a known individual. If there are other, independent reasons to think that known individual might have been the perpetrator of the crime, the AFIS search can provide strong evidence of third-party guilt. The post-conviction litigant can shift from arguing that some unknown third party committed the crime to arguing that a specified third party committed the crime. Perhaps the archetypal example of such a case is the Marquette Park Four case discussed above. Note also, however, that in the Seri case, Seri's uncle, an FBI agent, unsuccessfully sought to have the mark reported as "inconclusive" searched in AFIS. (205) Only the emergence of modus operandi evidence led to the identification of the putative source of the mark. (206)

1. Legal Basis for AFIS Searching

Despite the potential probative value of AFIS searching, the idea of defense (or convict) access to fingerprint database searching is controversial. (207) AFIS databases were built by law enforcement agencies, and access to them remains controlled by those agencies. (208) For example, the National Commission on Forensic Science's recent recommendation on AFIS interoperability made no mention of access for defendants or post-conviction litigants. (209) Nor did the NRC report. (210) For practical purposes, this means that police investigators may request AFIS searches, but criminal defendants and convicts have no legal right to request such searches. (211) Defendants and convicts literally do not have physical access to AFIS databases, which can be accessed only by law enforcement agencies. (212) Independent, non-government forensic laboratories specializing in latent prints cannot initiate AFIS searches. (213) One rare exception to this rule may be found in the Grimes case, in which the North Carolina Innocence Inquiry Commission used its unique subpoena power to have marks searched in AFIS. (214) In cases without suspects, of course, police investigators have strong incentives to request AFIS searches of incriminating marks. In cases with suspects, however, the incentive may be less strong. (215) We are aware of no case in which a criminal defendant has successfully persuaded a judge to order AFIS searching over the objection of a prosecutor prior to or during a trial. This is similar to the situation that obtains for post-conviction DNA database searches. Roth and Ungvarsky note that they are aware of "[n]o law enforcement agency... [that] has voluntarily agreed to such a database search at the request of the defense, even where the prosecution has agreed to the search." (216) To be sure, a pre-conviction defendant does have one key advantage: the trial has not yet taken place. Defendants therefore have the implicit threat of cross-examining prosecution experts about the failure to search one or more marks in AFIS if the case goes to trial.

In a post-conviction context, law enforcement agencies have even less incentive to conduct AFIS searches. A conviction has been obtained, and an open case has been closed. AFIS searching can do little except upset that conviction.

Post-conviction litigants, therefore, have met with little success by simply requesting that law enforcement agencies conduct AFIS searches on unidentified marks. (217) Agencies may claim resource constraints or argue that such requests are fishing expeditions. Most importantly, however, most law enforcement agencies are under no legal obligation to offer any reason at all for denying a request for an AFIS search from a post-conviction litigant. (218)

2. The DNA Analogy

We would argue that defendants should have a right to AFIS searching analogous to the right to DNA database searching that some have advocated. (219) Every state, the District of Columbia, and the federal government now have DNA access statutes with provisions that allow convicts to request the searching of unidentified DNA profiles in local and national DNA databases. (220) Many of these statutes are, in the view of many, overly restrictive in allowing convicts access to post-conviction DNA testing. (221) Provisions such as those setting unreasonably high burdens of persuasion as a condition of allowing DNA testing verge on turning these statutes into empty promises, or catch-22s, rather than access to justice. (222) But, nonetheless, the fact remains that every convict in the United States has a right, at least in principle, to post-conviction DNA database searching. (223) In addition, some courts have ordered the FBI to perform CODIS searches at defendants' behest. (224)

By contrast, only five states included fingerprint database searching in their DNA access statutes. (225) In 1997, Illinois, the second state to pass a DNA access statute (after New York), included fingerprint database searching. (226) In 2007, Illinois amended its DNA access statute to include searching of the Integrated Ballistic Identification System ("IBIS"), (227) due in part to the efforts of Patrick Pursley, who was trying to prove his innocence after being convicted of murder based in part on expert testimony that his gun was the murder weapon "to the exclusion of all other firearms." (228) Before the amendment of the statute, the Appellate Court of Illinois had denied Pursley access to IBIS testing on the argument that the Illinois legislature wished to reserve for itself the right to decide which forensic techniques were reliable enough to warrant such testing. (229) "This limit," the court noted without irony, "ensures that a motion for forensic testing will not be filed alongside every scientific advancement in the years to come." (230) After Pursley gained access to testing, IBIS failed to report that his gun was the source of the bullets from the murder for which he was convicted. (231)

The other four states to include fingerprint database searching in their DNA access statutes are Arkansas, Idaho, Minnesota, and Virginia. (232) It is no surprise, then, that the Marquette Park Four case, perhaps the paradigmatic case demonstrating the power of post-conviction fingerprint testing, occurred in Illinois. (233) Johnson invoked the Illinois statute, and post-conviction AFIS searching generated evidence of third-party guilt that the court found convincing. (234) A second case from Illinois invoked the statute as well. (235) Courts in the other four states have upheld post-conviction requests for AFIS searching as well. (236) But how many other wrongful convictions remain unexposed because of excessive legal obstacles to post-conviction AFIS searching in the majority of U.S. jurisdictions, which do not recognize a post-conviction right to fingerprint database searching?

The paucity of jurisdictions that included fingerprint databases in their DNA access statutes is unfortunate. Fingerprint database searching serves the purposes of truth, justice, and science just as well as DNA database searching does. (237) The resource demands of such searches are relatively minimal, considering that they would ultimately fulfill the purposes for which these databases were built, at taxpayer expense. An AFIS search is certainly less costly than a DNA analysis. (238) While opponents predicted frivolous claims and spurious results when DNA access statutes were being passed around the country, they do not appear to have emerged as a serious problem, as evidenced by the fact that DNA access statutes remain on the books nationwide. (239) We would suggest that there is no greater cause for such concerns for fingerprints.

The arguments advanced against post-conviction DNA database searches are no more persuasive for AFIS searches than they are for DNA searches. The first argument against DNA database searches was that samples developed at unaccredited laboratories could not be entered in the CODIS DNA database. (240) There does not appear to be any comparable requirement for IAFIS or its successor, NGI. The second argument was that an association between a DNA sample and a third party might not be relevant. (241) Many DNA access statutes, however, require a showing of relevance to authorize a post-conviction database search. (242) The same standards--or, given the lower costs, even lower ones--could be applied to post-conviction fingerprint searches.

Nor is there a distinction to be drawn because of some supposed greater "reliability" of DNA profiling. As one commentator notes, "Postconviction statutes nationwide often distinguish between DNA, fingerprint, and ballistics testing[, but] there is no valid reason to do so since all three testing methods are reliable and used as investigative tools to help solve cases." (243) We would not put it quite that way. Reliability (or accuracy) is not a dichotomous category. Forensic techniques are not simply reliable or unreliable. They have accuracy rates associated with them. (244) We would put it this way: Both DNA profiling and fingerprinting are forensic techniques that generate evidence with probative value. How much probative value to assign to reports generated by those techniques--whether by measuring their accuracy or by estimating the rarity of the observed features--is a difficult, and as yet not fully solved, problem. And yet our justice system assigns probative value to these conclusions every day. We are not overly concerned that courts will increase injustice by similarly assigning probative value to the results of post-conviction AFIS searches.

We are well aware that in making this argument we, like other innocence advocates, might be accused of engaging in "the collection paradox." (245) The utility of criminal identification databases for proving innocence increases with their size because the probability of the true perpetrator being included in the database increases with their size. (246) Advocating for defense access to criminal identification database searching, then, would seem to imply arguing for database expansion. And yet we are both on record against database expansion. (247)

The position we are taking here is not that paradoxical. We are relatively comfortable with including persons with felony convictions in criminal identification databases, and we recognize that even narrower criteria (such as limiting to violent felonies) are probably no longer politically realistic. We are, however, opposed to more inclusive criteria, including that of arrest. We recognize that limiting databases to convicts will somewhat diminish their utility to defendants and convicts seeking to prove their innocence. We, unlike some, (248) do not endorse the privacy costs of greater database expansion even in the service of proving innocence.

What is to be done about this situation? As a policy matter, we would argue that the failure to include provisions for post-conviction AFIS searching in DNA access statutes was an oversight. Ideally, these statutes would be amended to include such provisions. Our argument is consistent with the call for "digital innocence," which holds that the government is under an obligation to exploit the exonerating potential of its already-collected surveillance data. (249) The idea of "similar legislative change... in reaction to digital evidence" has been discussed. (250)

3. An Implicit Post-Conviction Right to Fingerprint Database Searching

Short of statutory amendment, however, "some statutes may provide for [non-DNA forensic] testing under their broad language." (251) Such an argument has been proposed for allowing IBIS searching under DNA access statutes. (252) For AFIS searching, a similar argument was mounted, unsuccessfully, by the Innocence Project in In re Morton. (253) Morton argued for access to, and searching of, both DNA and fingerprint evidence. (254) Morton had been convicted of the murder of his wife in Texas in 1987. (255) Post-conviction, he requested that unidentified marks from the crime scene be compared to unidentified marks from the scene of the murder of Mildred McKinney, which had occurred in the same neighborhood six years earlier, and be searched in AFIS. (256) Texas argued that fingerprint evidence was not covered by the DNA statute because it was not "biological material." (257) Paradoxically, Morton's earlier request for access to the fingerprint evidence under the Public Records Act had been denied on the basis that it was a "biometric identifier" and therefore exempt from the Public Records Act. (258) Thus, Morton's fingerprint was supposedly biometric but not biological. (259)

Morton argued that if a fingerprint was "biometric," then it was "biological" and thus fell under the Texas statute authorizing post-conviction testing of "biological material." (260) Morton contended:
To the extent there is any ambiguity in the statutory text as to whether fingerprints include "biological material" covered by Chapter 64 [the Texas DNA access statute], the undeniable exculpatory potential of this evidence should tip the balance in Appellant's favor--given the Legislature's clear intent to grant access to previously-unavailable forensic technologies where, as here, there is no dispute that it can prove innocence. (261)

Although we think this argument should have been persuasive, it failed to persuade the Texas Court of Appeals. (262) The court agreed with the State that the DNA access statute covered only biological material, and it found that Morton was not seeking to test the biological attributes of the fingerprint evidence (such as skin cells) but only the identifying characteristics. (263) Of course, post-conviction DNA testing is similarly focused on identifying characteristics rather than biological attributes. (264) The same court subsequently followed the same reasoning in another case in which a convict sought to use the DNA access statute to obtain an AFIS search. (265)

While this may be a plausible literal reading of the statute, it defies common sense. The distinction drawn between biological and non-biological material is irrelevant to post-conviction database searching. Despite its ruling, the court seemed to agree. The Morton court went on to remark that the lack of a legal basis for an AFIS search was "distinct from the practical question of whether such an analysis should be done." (266) The court added that "one immediately wonders what such an analysis might yield." (267) This dictum--though hardly creating a legal precedent--illustrates the commonsensical appeal to judges of post-conviction AFIS searches and the lack of any rational basis to oppose them, other than legislatures not having explicitly created a right to them.

The Morton case itself provides the strongest reason why legislators and jurists should wish to remedy this oversight. Consider: the arbitrary result of the state of the law in Texas was that Morton's request for post-conviction DNA database searching succeeded while his request for AFIS searching failed. (268) Fortunately for Morton, the DNA search yielded a hit to a convicted felon who had lived in the vicinity at the time of the murder. (269) This felon was also implicated in another murder (though not the McKinney murder). (270) The case clearly illustrates the potential value of AFIS searching: Texas law allowed Morton the opportunity to try to use a post-conviction DNA database search to identify the true perpetrator of the crime for which he had been convicted. (271) Consider now the hypothetical scenario that post-conviction DNA database searching had not--for any of a host of reasons, such as degradation or discarding of the original sample or absence of the true perpetrator from the database--revealed the identity of the true perpetrator. The law would have arbitrarily denied Morton--who is now legally viewed as factually innocent--the opportunity to try to use fingerprint evidence to do precisely the same thing.

Simon A. Cole (*) Barry C. Scheck (**)

(*) Professor of Criminology, Law & Society, University of California, Irvine; Ph.D. (science & technology studies), Cornell University; A.B., Princeton University. This work was partially funded by the Center for Statistics and Applications in Forensic Evidence ("CSAFE") through Cooperative Agreement #70NANB15H176 between the National Institute of Standards and Technology ("NIST") and Iowa State University, which includes activities carried out at Carnegie Mellon University, University of California Irvine, and University of Virginia. An earlier version of this paper was presented at the NIST International Symposium on Forensic Science Error Management in Arlington, Virginia, July 23, 2015. For assistance in researching cases and other technical matters, we are grateful to: Matt Barno, Locke Bowman, Glinda Cooper, Steven Drizin, Itiel Dror, Keith Findley, Jennifer Friedman, Ed German, Rosa Greenbaum, Christopher Greenwood, Carey Hall, Samuel Leonard, Tori Marlan, Vanessa Meterko, David Moran, Michelle Moore, Erin Morris, Christine Mumma, Teresa Newman, Margaret O'Donnell, Maurice Possley, Eric Ray, Tom Riley, Michele Triplett, and Rob Warden. For helpful comments on drafts of the manuscript, we are grateful to: Samuel Gross and to Hal Stern, William Thompson, Padhraic Smyth, Charless Fowlkes, Becky Grady, and other colleagues who are part of the University of California, Irvine branch of CSAFE. We are grateful to the student editors of the law review for their hard work editing this Article.

(**) Professor of Law, Benjamin N. Cardozo School of Law, Yeshiva University; J.D., M.C.P., University of California, Berkeley; B.S., Yale University.

(1) See Maurice Possley, Charles Johnson, NAT'L REGISTRY OF EXONERATIONS (Mar. 1, 2017),

(2) See id.

(3) See People v. Johnson, 2013 IL App (1st) 120201-U, ¶¶ 11-12 (Ill. App. Ct. 2013).

(4) See Possley, supra note 1.

(5) See Johnson, 2013 IL App (1st) 120201-U, ¶ 13.

(6) See id.

(7) See id.

(8) See id. ¶¶ 11-12.

(9) See id.

(10) See id. ¶ 13.

(11) See id. ¶¶ 27, 30; Possley, supra note 1.

(12) See Possley, supra note 1.

(13) See Johnson, 2013 IL App (1st) 120201-U, ¶¶ 31-32; Possley, supra note 1.

(14) See Johnson, 2013 IL App (1st) 120201-U, ¶¶ 31-32; Possley, supra note 1.

(15) See Johnson, 2013 IL App (1st) 120201-U, ¶¶ 31-32.

(16) See id. ¶¶ 31-32.

(17) See Possley, supra note 1.

(18) Johnson, 2013 IL App (1st) 120201-U, ¶ 68.

(19) Id. ¶ 1.

(20) Id. ¶¶ 73-74 (citations omitted).

(21) See Gaynor Hill, 'Marquette Park 4' Exonerated of 1995 Double Murder by New Evidence, WGN9 (Feb. 15, 2017).

(22) See Simon A. Cole, The Prevalence and Potential Causes of Wrongful Conviction by Fingerprint Evidence, 37 GOLDEN GATE U.L. REV. 39, 39-40 (2006).

(23) See id. at 41; Simon A. Cole, More than Zero: Accounting for Error in Latent Fingerprint Identification, 95 J. CRIM. L. & CRIMINOLOGY 985, 986-87 (2005).

(24) See Michael J. Saks, Implications of the Daubert Test for Forensic Identification Science, in 1 SHEPARD'S EXPERT & SCI. EVIDENCE Q. 427, 430-31, 432 (Bert Black et al. eds., 1994); David A. Stoney, Fingerprint Identification: Scientific Status, § 21-2.1.2(3)(b), in 2 MODERN SCIENTIFIC EVIDENCE: THE LAW AND SCIENCE OF EXPERT TESTIMONY 55, 64-65 (David L. Faigman et al. eds., 1997); Margaret A. Berger, Procedural Paradigms for Applying the Daubert Test, 78 MINN. L. REV. 1345, 1354 (1994); James E. Starrs, Judicial Control over Scientific Supermen: Fingerprint Experts and Others Who Exceed the Bounds, 35 CRIM. L. BULL. 234, 244 (1999).

(25) Cole, supra note 23, at 990; see, e.g., United States v. Havvard, 260 F.3d 597, 601 (7th Cir. 2001) (citing Kumho Tire Co. v. Carmichael, 526 U.S. 137, 150 (1999)).

(26) See Press Release, Robert J. Garrett, President, Int'l Ass'n for Identification (Feb. 19, 2009), (on file at; see also Harry T. Edwards, The National Academy of Sciences Report on Forensic Sciences: What it Means for the Bench and Bar, 51 JURIMETRICS J. 1, 7-8, 15 (2010) (contending that more improvement is still needed in the assessment of forensic science before the bench).

(27) See Post-Conviction DNA Exonerations, INNOCENCE PROJECT, (last visited Mar. 27, 2018).

(28) See Browse Cases, NAT'L REGISTRY OF EXONERATIONS, (last visited Feb. 16, 2018).

(29) See Laura Spinney, Science in Court: The Fine Print, 464 NATURE 344, 344 (2010).

(30) Michele Triplett, Michele Triplett's Fingerprint Terms, NW. LEAN NETWORKS (last updated Apr. 22, 2017),

(31) See CHRISTOPHE CHAMPOD ET AL., FINGERPRINTS AND OTHER RIDGE SKIN IMPRESSIONS 1 (2004); SWGFAST Standard Terminology of Friction Ridge Examination, Ver. 3.0, in THE FINGERPRINT SOURCEBOOK D-1, D-3 (2014) [hereinafter SWGFAST Standard Terminology].

(32) See CHAMPOD ET AL., supra note 31, at 183. In the United States, the "mark" is often called a "latent fingerprint" because, very often, its medium consists of oils from the skin and physical or chemical development is necessary to render it more visible. See id. at 106, 183. The print is often called the "'known' print" or a "ten-print" because, very often, it consists of a set of impressions of (usually) ten fingers deliberately taken by state agents from a person in custody or otherwise cooperating with those agents. See id. at 183; Christophe Champod & Paul Chamberlain, Fingerprints, in HANDBOOK OF FORENSIC SCIENCE 57, 63-64 (Jim Fraser & Robin Williams eds., 2009).

(33) See John R. Vanderkolk, Examination Process, in THE FINGERPRINT SOURCEBOOK, supra note 31, at 9-3, 9-13.

(34) Id.

(35) Id. at 9-19.

(36) See, e.g., C. Neumann et al., Quantifying the Weight of Evidence from a Forensic Fingerprint Comparison: A New Paradigm, 175 J. ROYAL STAT. SOC'Y 371, 394 (2012).

(37) See Christophe Champod, Fingerprint Identification: Advances Since the 2009 National Research Council Report, 370 PHILOSOPHICAL TRANSACTIONS B. ROYAL SOC'Y 1, 6 (2015).


(38) Very recently, the latent print community has begun using the term "decision" where it would once have used terms like "conclusion" or "determination." While we agree that "decision" is a better description of the outcome of latent print analysis than the earlier terms, we still think the "decision" language raises a number of issues that have yet to be resolved by the discipline. See Simon A. Cole, Individualization is Dead, Long Live Individualization! Reforms of Reporting Practices for Fingerprint Analysis in the United States, 13 L. PROBABILITY & RISK 117, 140 (2014). In this article, where possible, we use the less value-laden and more descriptive term "report" to describe the output of latent print analyses.

(40) See Bradford T. Ulery et al., Accuracy and Reliability of Forensic Latent Fingerprint Decisions, 108 PROC. NAT'L ACAD. SCI. 7733, 7733 (2011).

(41) See, e.g., Pooja Chaudhuri, Note, A Right to Rational Juries? How Jury Instructions Create the "Bionic Juror" in Criminal Proceedings Involving DNA Match Evidence, 105 CALIF. L. REV. 1807, 1835-36, 1836 n.166 (2017).


(43) See Cole, supra note 39, at 117.


(45) See Bradford T. Ulery et al., Measuring what Latent Fingerprint Examiners Consider Sufficient Information for Individualization Determinations, 9 PLOS ONE 1, 2 (2014).

(46) See, e.g., id.

(47) See id. at 4 ("Exclusions may be based on pattern class information when there is insufficient information for individualization, or they may result from an examiner's determination that a single feature was discrepant despite otherwise having sufficient information for individualization.").

(48) But see Ulery et al., supra note 40, at 7735 ("The amount of information necessary for an exclusion decision is typically less than for an individualization decision.").

(49) See Peter E. Peterson et al., Latent Prints: A Perspective on the State of the Science, 11 FORENSIC SCI. COMM., no. 4, Oct. 2009,

(50) Now replaced by the Friction Ridge Subcommittee of the Organization of Scientific Area Committees ("OSACs"), administered by the National Institute of Standards and Technology. See Melissa R. Gische, Message from the SWGFAST Chair, SWGFAST, (last visited Mar. 28, 2018).


(52) See id.


(54) Bradford T. Ulery et al., Understanding the Sufficiency of Information for Latent Fingerprint Value Determinations, 230 FORENSIC SCI. INT'L 99, 105 (2013).



(57) See id.

(58) See id.


(60) Alice Maceo, Documenting and Reporting Inconclusive Results, 61 J. FORENSIC IDENTIFICATION 226, 226 (2011).


(62) Id.

(63) See id.

(64) Id.


(66) Id.

(67) For example, consider a case in which a mark was in a location and/or medium which made it highly likely that the source of the mark was the perpetrator of the crime. If the case file contained a report of "inconclusive" for a comparison of the mark to the print of the convict, it would make a great deal of difference whether that report meant: (1) there were some friction ridge features in agreement between the mark and the print of the convict, but not enough to support a report of "individualization" (LSI); or (2) the convict's fingertips were excluded as the source of the mark, but his palms could not be excluded because impressions of his palms had not been obtained and submitted for comparison by the police or defense counsel (LCA). See SWGFAST STANDARDS FOR EXAMINING FRICTION RIDGE IMPRESSIONS, supra note 44. The first scenario implies some probative value in support of the inference that the convict is the source of the mark. The second scenario implies no probative value in support of the inference that the convict is the source of the mark. Moreover, the second scenario indicates that fuller utilization of latent print analysis might have definitely excluded the convict as the source of the mark, a report that would have had very significant probative value.

Now consider the same scenario but assume that the convict has been excluded as the source of the mark and a comparison was effected between the mark and the print of an alternate suspect that yielded a report of "inconclusive." It would make a great deal of difference to the convict whether that report meant that: (1) there were some friction ridge features in agreement between the mark and the print of the alternate suspect, but not enough to support a report of "individualization" (LSI); or (2) the alternate suspect's fingertips were excluded as the source of the mark, but his palms could not be excluded because impressions of his palms were not available (LCA). See id. For example, the alternate suspect could be deceased, and his palm prints unobtainable. The first scenario constitutes evidence far more probative of third party guilt than the second.

(68) See Eric Ray & Penny J. Dechant, Sufficiency and Standards for Exclusion Decisions, 63 J. FORENSIC IDENTIFICATION 675, 690 (2013).

(69) See, e.g., Complete Latent Print Examination: Inconclusive Decisions, CLPEX (Aug. 19, 2014),


(71) See Ray & Dechant, supra note 68, at 676.

(72) See id. at 678.

(73) Compare id. (discussing identification and non-identification) with SWGFAST STANDARDS FOR EXAMINING FRICTION RIDGE IMPRESSIONS, supra note 44 (discussing LCA).

(74) Ray & Dechant, supra note 68, at 678; see Bradford T. Ulery et al., Factors Associated with Latent Fingerprint Exclusion Determinations, 275 FORENSIC SCI. INT'L 65, 65 (2017).

(75) See Ray & Dechant, supra note 68, at 675-76.

(76) See id. at 676, 691; Ulery et al., supra note 40, at 7733.

(77) See Ulery et al., supra note 74, at 66.

(78) See id. at 65, 66.

(79) See Ray & Dechant, supra note 68, at 690.

(80) See id. at 682.

(81) See id. at 683.

(82) See id. at 682 (discussing the standard for discrepancies and exclusion).

(83) See id. at 683.

(84) See id. at 688, 684, 686.

(85) See id. at 681.

(86) Bruce Budowle et al., Review of the Scientific Basis for Friction Ridge Comparisons as a Means of Identification: Committee Findings and Recommendations, 8 FORENSIC SCI. COMM., no. 1, Jan. 2006, https://archives.fbi.gov/archives/about-us/lab/forensic-science-communications/fsc/jan2006/research.

(87) Ray & Dechant, supra note 68, at 676.

(88) Ulery et al., supra note 74, at 65.

(89) See James S. Liebman et al., The Evidence of Things Not Seen: Non-Matches as Evidence of Innocence, 98 IOWA L. REV. 577, 587, 588 (2013).

(90) See Ulery et al., supra note 54, at 103, 105.

(91) See Cedric Neumann et al., Operational Benefits and Challenges of the Use of Fingerprint Statistical Models: A Field Study, 212 FORENSIC SCI. INT'L 32, 39 (2011).

(92) See id.

(93) See id. at 45. Such a procedure would, of course, raise bias issues, but these could presumably be dealt with by use of a case manager. See, e.g., William C. Thompson, What Role Should Investigative Facts Play in the Evaluation of Scientific Evidence?, 43 AUSTL. J. FORENSIC SCI. 123, 125-26, 132-33 (2011).

(94) See Victoria L. Phillips et al., The Application of Signal Detection Theory to Decision-Making in Forensic Science, 46 J. FORENSIC SCI. 294, 294 (2001).

(95) See id. app. at 306 tbl. 1.

(96) See KAYE ET AL., supra note 42, at 25-26, 28-29.

(97) See id. at 29 tbl. 2.5.

(98) See id. at 31 tbl. 2.7.

(99) Id.


(101) See id.

(102) See id.

(103) See id. ("The failure to make an exclusion when in fact the friction ridge impressions are non-mated (includes false positive, non-consensus inconclusive, and non-consensus no value).").

(104) See id. ("The failure to make an individualization when in fact both friction ridge impressions are mated (includes false negative, non-consensus inconclusive, and nonconsensus no value)."). Notice, however, that SWGFAST has not proposed general terms for the other columns. We could, if we chose to, adopt analogous terms for the other columns (e.g., "missed inconclusives"), but we did not do so because, as it turns out, in this article we are primarily concerned with the first two columns.

(105) See id.

(106) See id. ("The incorrect determination that two areas of friction ridge impressions are mated.").

(107) See id. ("When an examiner reaches a decision of individualization that conflicts with the consensus, exclusive of false positive errors."). In theory, we should further complicate our matrix by subdividing "inconclusive" into the three subtypes described by SWGFAST. See SWGFAST STANDARDS FOR EXAMINING FRICTION RIDGE IMPRESSIONS, supra note 44. But we decided not to further complicate an already complex framework.

(108) KAYE ET AL., supra note 42, at 3.

(109) See id. at 30.

(110) See id. at vii, 30.

(111) See Jennifer E. Laurin, Remapping the Path Forward: Toward a Systemic View of Forensic Science Reform and Oversight, 91 TEX. L. REV. 1051, 1055 (2012).


(***) Published separately at

(113) See PRESIDENT'S COUNCIL OF ADVISORS ON SCI. AND TECH., FORENSIC SCIENCE IN CRIMINAL COURTS: ENSURING SCIENTIFIC VALIDITY OF FEATURE-COMPARISON METHODS 9 (2016), [hereinafter PCAST]; Ulery et al., supra note 40, at 7733; see also IGOR PACHECO ET AL., MIAMI-DADE RESEARCH STUDY FOR THE RELIABILITY OF THE ACE-V PROCESS: ACCURACY & PRECISION IN LATENT FINGERPRINT EXAMINATIONS 5 (2014), (discussing two different methodologies). The following studies contain data that might be treated as accuracy data though the studies were not designed to measure accuracy. See Philip J. Kellman et al., Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty, 9 PLOS ONE 1, 5 (2014) (noting that accuracy was a primary focus, but not the only focus); Sarah V. Stevenage & Christy Pitfield, Fact or Friction: Examination of the Transparency, Reliability and Sufficiency of the ACE-V Method of Fingerprint Analysis, 267 FORENSIC SCI. INT'L 145 (2016); Jason M. Tangen et al., Identifying Fingerprint Expertise, 22 PSYCHOL. SCI. 995, 997 (2011); Matthew Thompson et al., Human Matching Performance of Genuine Crime Scene Latent Fingerprints, L. & HUM. BEHAV. 84, 86 (2013).

Despite being widely considered "best," the Ulery et al. study is not without its critics. See, e.g., generally Ralph Norman Haber & Lyn Haber, Experimental Results of Fingerprint Comparison Validity and Reliability: A Review and Critical Analysis, 54 SCI. & JUST. 375 (2014) (critiquing the study). But see generally R. Austin Hicklin et al., In Response to Haber and Haber, 54 SCI. & JUST. 390 (2014) (debating Haber & Haber). But see generally Ralph Norman Haber & Lyn Haber, Can Fingerprint Casework Accuracy Be Evaluated by Experiments?, 54 SCI. & JUST. 395 (2014) (considering further the problems of Ulery et al.'s study). Of particular importance is the question of whether the fact that the study was not "blind" (i.e., the respondents were aware they were participating in a study designed to estimate the accuracy of latent print analysis) might have encouraged examiners to: (1) adjust their decision thresholds so as to avoid erroneous individualizations at the expense of committing a larger number of erroneous exclusions; and/or (2) take refuge in the "safe" response of "inconclusive" which can, in some sense, never be false. See supra Part III.A.2.; see also Stevenage & Pitfield, supra, at 146 ("[Some experts may have] the tendency... to exercise caution under scrutiny."). Some support for the latter possibility may be found in Murrie and Kelley's study of the disposition of latent print evidence in the Houston Forensic Science Center. Only twelve percent of the marks deemed "of value" in Houston were reported as "inconclusive" compared to thirty-seven percent of the "of value" marks in the Ulery et al. study. See Daniel Murrie & Sharon Kelley, Case Processing and Human Factors at Crime Laboratories, CTR. FOR STAT. & APPLICATIONS IN FORENSIC EVIDENCE (May 4, 2017), It is important to note that, while Ulery et al.
had some control over the frequency of the "identification" and "exclusion" response (because they determined the mix of "mated" and "non-mated" prints used in the study), they had no control over the number of "inconclusive" responses. This supports the notion that examiners may make greater use of the "inconclusive" response in non-blind studies than they do in casework.

(114) See Ulery et al., supra note 40, at 7734.

(115) See id. at 7736.

(116) See id.

(117) See 1 ANTHONY CAMPBELL, THE FINGERPRINT INQUIRY REPORT 31 (2011); OFFICE OF THE INSPECTOR GEN.: U.S. DEP'T OF JUSTICE, A REVIEW OF THE FBI'S HANDLING OF THE BRANDON MAYFIELD CASE 1 (2006), [hereinafter A REVIEW OF THE FBI'S HANDLING OF THE BRANDON MAYFIELD CASE]; Cole, supra note 23, at 986, 999; Cole, supra note 22, at 90, 94; Robert B. Stacey, A Report on the Erroneous Fingerprint Individualization in the Madrid Train Bombing Case, 54 J. FORENSIC IDENTIFICATION 706, 712 (2004); see also Lyn Haber & Ralph Norman Haber, Error Rates for Human Fingerprint Examiners, in AUTOMATIC FINGERPRINT RECOGNITION SYSTEMS 339, 340, 342 (Nalini Ratha & Ruud Bolle eds., 2004) (generally discussing error rates). Although the topic of the article is errors other than erroneous individualization, we are often asked how many erroneous individualizations we know about. In 2005, we documented twenty-two known cases in the United States and the United Kingdom from 1920-2005. See Cole, supra note 23, at 999. Since that time, at least eight more erroneous individualizations have been exposed: (1) Beniah Dandridge, discussed in this article and the Appendix. (2) Lana Canen. See Kathleen Bright-Birnbaum, The Erroneous Fingerprint Identification of Lana Canen, 37 CHAMPION 44, 45 (2013). (3) Raymond Page. See Memorandum from Douglas County Attorney's Office on Omaha Police Dep't's Crime Lab and Douglas Cnty. Sheriff's Dep't Crime Lab to Defense Counsel (2015) (on file with author); see also Todd Cooper, Omaha Prosecutor's Memo Delves into Crime Lab Controversies over Suspended Director, Fingerprint Evidence, OMAHA WORLD-HERALD (Mar. 30, 2015), [hereinafter Cooper, Omaha Prosecutor's Memo] (identifying Douglas County Attorney Don Kleine as the compiler of the memo).
It is perhaps worth noting that five years earlier this same prosecutor called "laughable" one of the authors' testimony that the uniqueness of human friction ridge skin did not preclude the possibility that Omaha police latent print analysts could make erroneous individualizations. See Todd Cooper, Palm Print Evidence Challenged as Trial Resumes, OMAHA WORLD-HERALD, Oct. 5, 2010, at 02B. (4) Arturo Avina. See Criminal Case Summary, SUPER. CT. OF CAL.: COUNTY OF L.A., (last visited Apr. 2, 2018) (type "PA060145" into the "Case Number" tab, then select "San Fernando LAS" in the "Courthouse" dropdown menu, then click "Submit"). (5) John Buzia. See Rene Stutzman, Orlando-Area Ax Murderer Asks Judge to Toss His Conviction, Death Sentence, L.A. TIMES (June 24, 2008), (6) Maria Maldonado. See Richard Winton, Errors Trigger Reviews of LAPD Fingerprint Files, L.A. TIMES (Jan. 15, 2009), (7) Latonya McIntyre. See Joel Rubin, LAPD Will Brief Panel Weekly on Fingerprint Unit, L.A. TIMES, (Nov. 18, 2008), mefingerprints18. (8) Siyabonga Shabalala. See Michele Triplett, Michele Triplett's Fingerprint Terms: E, MICHELE TRIPLETT'S FINGERPRINT DICTIONARY, (last visited Apr. 2, 2018). In addition, seven additional erroneous individualizations that occurred before 2005 have been discovered: (1) Dwight Gomas. See John Marzulli, Botched Fingerprints Put Innocent Man in Jail for 17-month Rikers 'Nightmare', N.Y. DAILY NEWS (Sept. 3, 2009), (2) Khumalo. See S. v. Khumalo 1969 (1) PH 35 (TPD) at 37 (S. Afr.). (3) Timothy Paul Volkmar. See Tom Howlett, Fingerprint Error Stirs Concern, DALL. MORNING NEWS, Jan. 18, 1985 at 21A. (4) Manuel Quinta Guerra. See Moises Mendoza & James Pinkerton, Fingerprint Error Led to Four Months Behind Bars, HOUS. CHRON. (June 19, 2010), (5) Anthony Clapham. See Triplett, supra. (6) Carl Harp. See Bellevue Sniper Case Reopened, DAILY RECORD, Mar. 31, 1977, at 24. (7) Ron Williamson and Dennis Fritz.
JOHN GRISHAM, THE INNOCENT MAN: MURDER AND INJUSTICE IN A SMALL TOWN 130 (2006). This brings the total number of documented erroneous individualizations to thirty-seven. If, in order to be consistent with scope of this Article, we narrow our scope to U.S. cases for which we have names of defendants, the total drops to twenty-three.

This, however, is surely a gross underestimate because it is restricted to documented cases that have become public. Various latent print examiners have claimed to know of many more than twenty-three or even thirty-seven erroneous individualizations, but they have not been at liberty to provide details on the cases. For example, in 2006 U.S. Postal Inspection Service Examiner Ken Smith reported that during his fourteen-year tenure on the International Association for Identification Latent Print Certification Board he "encountered 25 to 30 erroneous identifications." A REVIEW OF THE FBI'S HANDLING OF THE BRANDON MAYFIELD CASE, supra, at 137. We have estimated that at most seven of these can be included in the twenty-two that we originally documented. See Cole, supra note 23, at 999. Ron Smith, an independent latent print examiner, recently stated that he encountered numerous erroneous individualizations, including thirty-eight in a single year. This is greater than the number of all known, documented erroneous individualizations. He also noted that he believes most erroneous individualizations do not become publicly known. See Ron Smith, Address at the Innocence Network Annual Conference: Don't Forget to Look Under the Small Rocks! (Mar. 25, 2017). In the press coverage on the Page case, Michele Triplett noted that she had documented more than fifty erroneous individualizations. See Cooper, Omaha Prosecutor's Memo, supra.

(118) See, e.g., Simon A. Cole, Can Courts Accommodate Accelerating Forensic Scientific and Technological Change?, 57 JURIMETRICS J. 443, 456-57 (2017).

(119) See PCAST, supra note 113, at 95, 96.

(120) Ulery et al., supra note 74, at 66.

(121) See Ulery, et al., supra note 40, at 7736; see also Bradford T. Ulery et al., Accuracy and Reliability of Forensic Latent Fingerprint Decisions Appendix: Supporting Information, 108 PROC. NAT'L ACAD. SCI. 7733 (2011), app. at 5, [hereinafter Ulery et al., Accuracy and Reliability Appendix] (Noting that unanimous decisions were reached on less than half of both mated and nonmated pairs).

(122) That is: the difference between the probative value to the defendant of the report of exclusion and the probative value to the defendant of a report of no value. See Ulery et al., supra note 74, at 66.

(123) See Ulery et al., supra note 40, at 7736; Ulery et al., Accuracy and Reliability Appendix, supra note 121, at 13 tbl. S5.

(124) See Helen Earwaker et al., Fingermark Submission Decision-Making Within a UK Fingerprint Laboratory: Do Experts Get the Marks that They Need?, 55 SCI. & JUST. 239, 243 (2015).

(125) Id. at 239.

(126) See id.

(127) Tarang Chugh et al., Latent Fingerprint Value Prediction: Crowd-based Learning, MSU TECH. REPORT, June 2016, at 12-13,

(128) See Peter A.F. Fraser-Mackenzie et al., Cognitive and Contextual Influences in Determination of Latent Fingerprint Suitability for Identification Judgments, 53 Sci. & JUST. 144, 145, 152 (2013).

(129) See id. at 152.

(130) See Ulery et al., supra note 54, at 99-100.

(131) See Ray & Dechant, supra note 68, at 684; Ulery et al., supra note 40, at 7735 fig. 3.

(132) See Ulery et al., supra note 74, at 66.

(133) See id. at 67 tbl. 1.

(134) Id. at 68-69.

(135) Id. at 74.

(136) See Tangen et al., supra note 113, at 996-97.

(137) See Thompson et al., supra note 113, at 88.

(138) See PACHECO ET AL., supra note 113, at 10; Kellman et al., supra note 113, at 7.

(139) See JAMES F. COWGER, FRICTION RIDGE SKIN: COMPARISON AND IDENTIFICATION OF FINGERPRINTS 182, 188 (1983); Ray & Dechant, supra note 68, at 694, 695 tbl. 1.

(140) Ray & Dechant, supra note 68, at 694, 695 tbl. 1. The analysis offers no further elaboration of these contributors.

(141) Id. at 680.

(142) Id.

(143) See Chris Lennard, Fingerprint Identification: How Far Have We Come?, 45 AUSTL. J. FORENSIC SCI. 356, 360 (2013).

(144) See Caitlin Doornbos, Personnel File Shows Extent of Orange County Fingerprint Examiner's Alleged Mistakes, ORLANDO SENTINEL, Mar. 19, 2017,

(145) See Ulery et al., supra note 74, at 71.

(146) Josep De Alcaraz-Fossoul et al., Fingermark Ridge Drift, 258 FORENSIC SCI. INT'L 26, 31 (2016).

(147) See id. at 28, 30.

(148) Id. at 30.

(149) Id. at 30 fig. 1.

(150) See Ulery et al., supra note 40, at 7733.

(151) See Kenneth R. Moses et al., Automated Fingerprint Identification System (AFIS), in THE FINGERPRINT SOURCEBOOK, supra note 31, at 6-1, 6-4.

(152) See supra Part I; People v. Johnson, 2013 IL App (1st) 120201-U, [paragraph][paragraph] 28-29 (Ill. App. Ct. Mar. 29, 2013).

(153) See Ulery et al., supra note 54, at 102 fig.3.

(154) See Meghan White, Latent Fingerprints: Fighting Unreliable Scientific Evidence, 26 FOR THE DEFENSE 1, 4 (2016),

(155) See, e.g., Jackson v. United States, 768 A.2d 580, 585-86 (D.C. 2001) (citation omitted) (demonstrating a case where the police failed to attempt to retrieve fingerprints from a piece of evidence and the defense argued the police officers had a duty to preserve the evidence so prints could be taken by prosecutors at a later date).

(156) See Cedric Neumann et al., Operational Benefits and Challenges of the Use of Fingerprint Statistical Models: A Field Study, 212 FORENSIC SCI. INT'L 32, 33, 39 tbl. 3 (2011).

(157) Id. at 39 tbl. 3.

(158) Id.

(159) Id.

(160) Id. at 40.

(161) Id. at 39 tbl. 3.

(162) Id.

(163) See Neumann et al., supra note 156, at 36 fig. 2; see id. app. tbl. 3.

(164) See Neumann et al., supra note 156, at 43.

(165) See id. at 44.

(166) See id. at 45-46.

(167) See Neumann et al., supra note 156, at 45; Christopher J. Lawless & Robin Williams, Helping with Inquiries or Helping with Profits? The Trials and Tribulations of a Technology of Forensic Reasoning, 40 SOC. STUD. OF SCI. 731, 732 (2010).

(168) See, e.g., Cole, supra note 23, at 999 (discussing the Kevin Siehl case).

(169) See id. at 987.

(170) See id. at 985-86; A REVIEW OF THE FBI'S HANDLING OF THE BRANDON MAYFIELD CASE, supra note 117, at 6-7.

(171) See Cole, supra note 23, at 999.

(172) See Maurice Possley, Beniah Alton Dandridge, NAT'L REGISTRY OF EXONERATIONS (Oct. 12, 2015); Beniah Dandridge Exonerated and Released After 20 Years Wrongly Imprisoned in Alabama, EQUAL JUST. INITIATIVE (last visited Apr. 3, 2018),

(173) See Dennis Fritz, INNOCENCE PROJECT (last visited Apr. 3, 2018), https://www.innocence; Ron Williamson, INNOCENCE PROJECT (last visited Apr. 3, 2018),

(174) See Phillips et al., supra note 94, at 294, 295, 296.

(175) See David Charlton et al., Emotional Experience and Motivating Factors Associated with Fingerprint Analysis, J. FORENSIC SCI. 1, 7 (2010); Glenn Langenburg et al., Testing for Potential Contextual Bias Effects During the Verification Stage of the ACE-V Methodology when Conducting Fingerprint Comparisons, 54 J. FORENSIC SCI. 571, 577 (2009).

(176) See Ray & Dechant, supra note 68, at 690.

(177) See, e.g., D. Michael Risinger et al., The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion, 90 CALIF. L. REV. 1, 15-16 (2002).

(178) See, e.g., Itiel E. Dror et al., Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications, 156 FORENSIC SCI. INT'L 74, 76-77 (2006).

(179) See Itiel E. Dror & David Charlton, Why Experts Make Errors, 56 J. FORENSIC IDENTIFICATION 600, 612-13 (2006); Gretchen Gavett, Can Unconscious Bias Undermine Fingerprint Analysis?, PBS (Apr. 16, 2012),

(180) See Dror et al., supra note 178, at 76; Dror & Charlton, supra note 179, at 613.

(181) See Langenburg et al., supra note 175, at 577; see also Ray & Dechant, supra note 68, at 690 ("This inherent conservatism [defaulting to an inconclusive decision] has been demonstrated in recent research into the biasability of latent print examiners.").

(182) See, e.g., Thompson et al., supra note 113, at 6-8.

(183) See id. at 8.

(184) See id. at 9.

(185) See id.

(186) Cf. Jason Kreag, Letting Innocence Suffer: The Need for Defense Access to the Law Enforcement DNA Database, 36 CARDOZO L. REV. 805, 808-09 (2015) (discussing post-conviction DNA evidence; the same reasoning applies to latent print evidence).

(187) See id.

(188) See id.; see, e.g., Access to Post-Conviction DNA Testing, INNOCENCE PROJECT (last visited Apr. 4, 2018),

(189) See The Confounding Case of Mr. S, N. CA. INNOCENCE PROJECT UPDATE (N. Ca. Innocence Project at Santa Clara Law School, Santa Clara, CA), Winter 2009, at 6.

(190) See Freedom of Information/Privacy Act, FBI, https://www.fbi.gov/services/records-management/foipa (last visited Apr. 3, 2018).

(191) See Tiffany R. Murphy, Futility of Exhaustion: Why Brady Claims Should Trump Federal Exhaustion Requirements, 47 U. MICH. J.L. REFORM 697, 722 n.158 (2014).

(192) See, e.g., Letter from Bethany Goodwin, Assistant FOIA Coordinator, Mich. State Police, to Imran Syed, Mich. Innocence Clinic (Nov. 1, 2012) (on file with State of Mich. Dep't of State Police).

(193) See Next Generation Identification (NGI), FBI, (last visited Apr. 3, 2018) [hereinafter NGI].

(194) See Anil K. Jain & Jianjiang Feng, Latent Fingerprint Matching, 33 IEEE TRANSACTIONS ON PATTERN ANALYSIS & MACHINE INTELLIGENCE 88, 88 (2011).

(195) See Ernest J. Babcock, Privacy Impact Assessment for the Next Generation Identification (NGI) Palm Print and Latent Fingerprint Files, FBI (Jan. 20, 2015),

(196) See Jain & Feng, supra note 194, at 88.

(197) See PAUL C. GIANNELLI & EDWARD J. IMWINKELRIED, 2-16 SCIENTIFIC EVIDENCE [section] 16.07 (2017) (discussing how a human analyst locates friction ridge characteristics and labels them using a computer interface, like a mouse).

(198) See id. (discussing how a computer uses a pattern recognition algorithm to label friction ridge characteristics).

(199) See Fingerprints: An Overview, NAT'L INST. OF JUST., (last visited Apr. 3, 2018) [hereinafter Fingerprints].

(200) See id.

(201) See Jianjiang Feng et al., Latent Fingerprint Matching: Fusion of Rolled and Plain Fingerprints 2, 6 (2009) (conference paper),


(203) See, e.g., Petillo v. Worldand, No. CV 11-5005-CJC (JPR), 2012 U.S. Dist. LEXIS 62260, at *28 n.13 (C.D. Cal. 2012) (citations omitted).

(204) Ulery et al., supra note 54, at 100.

(205) See Maurice Possley, Michael Caesar Seri, NAT'L REGISTRY OF EXONERATIONS (Sept. 14, 2016),

(206) See id.

(207) See Brandon L. Garrett, Big Data and Due Process, 99 CORNELL L. REV. 207, 210 (2014).


(209) See Press Release, Nat'l Comm'n on Forensic Sci., Directive Recommendation: Automated Fingerprint Information Systems (AFIS) Interoperability (2015) (on file at


(211) See PRIVACY IMPACT ASSESSMENT, supra note 208.

(212) See id.

(213) See id.; Email from Ron Smith, President of Ron Smith and Associates, Inc., to Simon A. Cole, Professor of Criminology, Law and Society at UCI School of Social Ecology (Apr. 4, 2017) (on file with author).

(214) See State v. Grimes, 87 CRS 13540-13542; 13544, slip op. at 1, 2 (N.C. Cty. Catawba Super. Ct. Apr. 5, 2012); Robert P. Mosteller, N.C. Innocence Inquiry Commission's First Decade: Impressive Success and Lessons Learned, 94 N.C. L. REV. 1725, 1736, 1796 (2016).

(215) Compare DAVID A. HARRIS, FAILED EVIDENCE: WHY LAW ENFORCEMENT RESISTS SCIENCE 104, 105 (2012), with Keith A. Findley & Michael S. Scott, The Multiple Dimensions of Tunnel Vision in Criminal Cases, 2006 WIS. L. REV. 291, 292-93. Consider, for example, a case with strong circumstantial evidence against a suspect. The suspect (along with the victim and others with legitimate access to the crime scene who provided "elimination prints") is excluded as the source of an incriminating mark. Police investigators may have very little incentive to conduct an AFIS search because, at best, such a search would place a third party at the scene. This might merely undermine the case against the individual against whom they have already built a strong circumstantial case.

(216) Andrea L. Roth & Edward J. Ungvarsky, Data Sharing in Forensic Science: Consequences for the Legal System, 2009 AM. STAT. ASS'N PROC. JOINT STAT. MEETING 469, 473 (2009).

(217) See PRIVACY IMPACT ASSESSMENT, supra note 208; Roth & Ungvarsky, supra note 216, at 473.

(218) See Roth & Ungvarsky, supra note 216, at 473.

(219) See Justin Brooks & Alexander Simpson, Blood Sugar Sex Magik: A Review of Postconviction DNA Testing Statutes and Legislative Recommendations, 59 DRAKE L. REV. 799, 804-05 (2011); Brandon L. Garrett, Claiming Innocence, 92 MINN. L. REV. 1629, 1717 (2008); Kreag, supra note 186, at 858-59.

(220) See Access to Post-Conviction DNA Testing, supra note 188.

(221) See id.; Kreag, supra note 186, at 810.

(222) See Access to Post-Conviction DNA Testing, supra note 188; see, e.g., BRANDON L. GARRETT, CONVICTING THE INNOCENT: WHERE CRIMINAL PROSECUTIONS GO WRONG 215 (2011).

(223) See Access to Post-Conviction DNA Testing, supra note 188.

(224) See Roth & Ungvarsky, supra note 216, at 473.

(225) See Kathryn E. Carso, Amending the Illinois Postconviction Statute to Include Ballistics Testing, 56 DEPAUL L. REV. 695, 712 n.153 (2007) (counting four states in the article). However, Carso only lists five states in the citation, and one of those states, Maryland, does not appear to include, or even mention, fingerprint database searching. See MD. CRIM. PROC. [section] 8-201 (LexisNexis 2018).

(226) See Gregory W. O'Reilly, A Second Chance for Justice: Illinois' Post-Trial Forensic Testing Law, 81 JUDICATURE 114, 114 (1997). The current Illinois statute states: "A defendant may make a motion before the trial court that entered the judgment of conviction in his or her case for the performance of fingerprint, Integrated Ballistic Identification System, or forensic DNA testing." 725 ILL. COMP. STAT. ANN. 5/116-3(a) (LexisNexis 2018); see also id. 5/116-3(d) ("If evidence previously tested pursuant to this Section reveals an unknown fingerprint from the crime scene that does not match the defendant or the victim, the order of the Court shall direct the prosecuting authority to request the Illinois State Police Bureau of Forensic Science to submit the unknown fingerprint evidence into the FBI's Integrated Automated Fingerprint Identification System (AFIS) for identification.").

(227) Compare 2007 ILL. LEGIS. SERV. P.A. 95-688 (West), with 2003 ILL. LEGIS. SERV. P.A. 93-605 (West).

(228) See Ivan Moreno, Patrick Pursley Case Could Pave Path for Other Convicts, RRSTAR.COM (Aug. 9, 2017),

(229) People v. Pursley, 792 N.E.2d 378, 383-84 (Ill. App. Ct. 2003) (citation omitted).

(230) Id.

(231) See Moreno, supra note 228.

(232) See ARK. CODE ANN. [section] 16-112-202 (2018); IDAHO CODE [section] 19-4902(b) (2012); MINN. STAT. [section] 590.01(1a)(a) (2005); VA. CODE ANN. [section] 19.2-327.1(A) (2013); Carso, supra note 225, at 712 n.153.

(233) See People v. Johnson, 2013 IL App (1st) 120201-U, [paragraph] 1 (Ill. App. Ct. 2013); Sarah Schulte, 'Marquette Park 4' Sue CPD After Wrongful Convictions, ABC 7 (Feb. 12, 2018),

(234) E-mail from Steven Drizin, Clinical Professor of Law, Northwestern Pritzker School of Law, to Simon A. Cole, Professor of Criminology, Law and Society at UCI School of Social Ecology (Mar. 21, 2013) (on file with author).

(235) Maurice Possley, Norman McIntosh: Other Cook County Cases with Official Misconduct, NAT'L REGISTRY OF EXONERATIONS (Oct. 2016); Email from Jennifer Blagg, Attorney, Law Office of Jennifer Blagg, to Simon A. Cole, Professor of Criminology, Law & Society, Univ. of Cal. (Feb. 25, 2017) (on file with author).

(236) See, e.g., Rucker v. State, No. CR 02-145, 2004 WL 1283985, at *1, *2 (Ark. June 10, 2004) (citations omitted) (reversing trial court's denial of hearing to determine whether AFIS search for source of unidentified marks might yield evidence of innocence).

(237) See, e.g., People v. Urioste, 736 N.E.2d 706, 712 (Ill. App. Ct. 2000) ("The clear purpose of section 116-3 was to provide convicted defendants with a means by which to establish actual innocence through advances in forensic technology."); O'Reilly, supra note 226, at 116.


(239) O'Reilly, supra note 226, at 117.

(240) Roth & Ungvarsky, supra note 216, at 473.

(241) Id.

(242) See Carso, supra note 225, at 698.

(243) Id. at 713.

(244) See PCAST, supra note 113, at 47-48.

(245) Jane Bambauer, Collection Anxiety, 99 CORNELL L. REV. ONLINE 195, 198-99 (2014).

(246) See Thomas Busey et al., The Relation Between Sensitivity, Similar Non-Matches and Database Size in Fingerprint Database Searches, 13 L., PROBABILITY & RISK 151, 151 (2014).

(247) See, e.g., Simon A. Cole, How Much Justice Can Technology Afford? The Impact of DNA Technology on Equal Criminal Justice, 34 SCI. & PUB. POL'Y 95, 96 (2007); Simon A. Cole, Double Helix Jeopardy, IEEE SPECTRUM (Aug. 1, 2007); see also Barry Scheck, DNA Data Banking: A Cautionary Tale, 54 AM. J. HUM. GENET. 931, 931 (1994).

(248) See, e.g., Bambauer, supra note 245, at 202-03.

(249) See Joshua A.T. Fairfield & Erik Luna, Digital Innocence, 99 CORNELL L. REV. 981, 986 (2014).

(250) Garrett, supra note 207, at 214; see Fairfield & Luna, supra note 249, at 1043-44.

(251) Carso, supra note 225, at 712.

(252) See, e.g., id. at 713.

(253) In re Morton, 326 S.W.3d 634, 638 (Tex. Crim. App. 2010).

(254) Id.

(255) See id. at 637.

(256) See id. at 645.

(257) Id. at 647 (citing Skinner v. State, 122 S.W.3d 808, 813 n.4 (Tex. Crim. App. 2003)).

(258) Memorandum from Katie Lentz, Open Records, Williamson County Sheriff's Office on Public Information Act Request #317775 to Office of the Attorney General of Texas Open Records Division (June 6, 2008) (on file with author).

(259) See In re Morton, 326 S.W.3d at 647 (citing Skinner, 122 S.W.3d at 813 n.4); Memorandum from Katie Lentz, supra note 258.

(260) Brief of Appellant at 47-48, Morton v. State, No. 03-08-00216-CR (Tex. Ct. App. Nov. 21, 2008).

(261) See id. at 48.

(262) See id.; Morton, 326 S.W.3d at 647-48 (denying DNA testing of the fingerprint evidence).

(263) In re Morton, 326 S.W.3d at 647 (citations omitted). This reasoning is particularly interesting in light of the recent ruling by the United States Supreme Court in which the Court downplayed the biological nature of DNA profiles and held that a DNA profile is just another biometric identifier. Maryland v. King, 569 U.S. 435, 451 (2013) ("DNA is another metric of identification used to connect the arrestee with his or her public persona, as reflected in records of his or her actions that are available to the police.").

(264) See, e.g., Ryan v. State, No. 08-14-00167-CR, 2015 Tex. App. LEXIS 8448, at *1, *3 (Tex. App. Aug. 12, 2015) (citations omitted); Goodrich v. State, No. 09-14-00027-CR, 2015 Tex. App. LEXIS 2538, at *4, *8 (Tex. App. Mar. 18, 2015) (citations omitted).

(265) See Fain v. State, No. 02-10-00412-CR, 2012 Tex. App. LEXIS 1903, at *1, *42, *45 (Tex. App. Mar. 8, 2012) (citations omitted).

(266) See TEX. CODE CRIM. PROC. ANN. art. 64.01(a-1) (West 2018); Morton, 326 S.W.3d at 647.

(267) See TEX. CODE CRIM. PROC. ANN. art. 64.01(a-1); Morton, 326 S.W.3d at 647.

(268) In re Morton, 326 S.W.3d at 647 (citation omitted).

(269) See Ex parte Morton, No. AP-76,663, 2011 Tex. Crim. App. Unpub. LEXIS 778, at *2 (Tex. Crim. App. Oct. 12, 2011).

(270) See Michael Morton: Time Served 24 Years, INNOCENCE PROJECT, https://www.innocenceproject.org/cases/michael-morton/ (last visited Apr. 3, 2018); Michael Morton: Other Texas Murder Cases with DNA, NAT'L REGISTRY OF EXONERATIONS, (last visited Apr. 3, 2018).

(271) See Michael Morton: Time Served 24 Years, supra note 270; Michael Morton: Other Texas Murder Cases with DNA, supra note 270.
COPYRIGHT 2018 Albany Law School
Author: Cole, Simon A.; Scheck, Barry C.
Publication: Albany Law Review
Date: Mar 22, 2018