Realizing reliability in forensic science from the ground up.




    A. "Science" Short of the Nth Degree
    B. Splitting Hairs: Anatomy of a Cheap Fix
    C. Reading the Fine Print
    D. Crime Lab Contagion: A Culture of Cutting Corners


    A. Treating Symptoms Instead of the Cause: The Early Years of
        Forensic Reform
    B. The Cash Cow: Funding Linked to DNA Testing
    C. Forensic Reform 3.0: A Graveyard of Good Ideas

    A. Federal Power to Mandate Standards
    B. Federal Enforcement Creates National Standards
    C. Tie Federal Funds to Adoption of Regulations
    D. The Buy-In: Resistance to Reform


    A. Appreciating the Big Picture: Nonnegotiables
    B. Drilling Down on the Details: Longer-Term Goals


   A. Incubating Forensic Science Reform: Ideas at Home and Abroad
   B. Acknowledging that Probably Nothing is Perfect
   C. Setting a Stage for Reform



Forensic science is a fractured and burdened discipline. Five years ago, in 2009, the National Academy of Sciences (NAS) published a revealing report announcing that forensic science is broken. (1) Depending on the audience, reactions to the NAS Report ran the gamut, from calling it predictable to groundbreaking to misleading. (2) In many respects, although it could hardly be characterized as new information, the NAS Report laid forensic science's shortcomings bare and brought to the surface the weaknesses that have plagued forensic science for decades. (3) Moreover, the NAS Report underscored a harsh truth: faulty forensic science has contributed to convicting innocent people--and will continue to do so if the status quo persists. (4)

American courts have improperly legitimized various forensic disciplines without subjecting them to the kind of scrutiny that would be required of novel scientific or technical evidence today. Courts accept the untested view that "science," such as fingerprinting and hair analysis, is (1) generally accepted, (2) science, and (3) reliable. Such unsupported conclusions have lacked adequate scrutiny, whether from a scientific or a legal perspective. Take forensic fingerprint analysis. The common--yet unrealistically romantic--starting point is that there are no two fingerprints exactly alike in the world. That assumption produces the further assumption that fingerprint analysis must be correspondingly reliable. This logic is erroneous.

For example, forensic science and its resulting expert testimony sealed the fate of Bennie Starks during his trial for a brutal rape in 1986. (5) At trial, the State's forensic serologist testified that, based on her analysis of a semen sample taken from the victim's underpants and a sample obtained from Starks, she could not exclude Starks as the source. (6) The prosecution also hired dentists Dr. Carl Hagstrom and Dr. Russell Schneider (who self-identified as experts in forensic odontology) to testify that bite marks on the victim's shoulder had been made by Starks. (7) The dentists testified that after comparing the evidence, photos, X-rays, and a model of Starks's teeth, the bite marks shared sixty-two similar characteristics with Starks's teeth. (8) After hearing these forensic "experts" testify that scientific evidence tied the defendant to the crime, the jury convicted Starks of two counts of aggravated criminal sexual assault, attempted aggravated sexual assault, and aggravated battery. (9) Starks was sentenced to sixty years in prison. (10)

In 2006, after spending nearly twenty years behind bars, a DNA test categorically excluded Starks as the source of the semen. (11) Additionally, two other odontologists' independent examinations of the bite mark evidence completely discredited the conclusions and testimonies presented at trial. (12) Their reports pointed out that the examination method used by the State's odontologists had since been rejected by its own creators and concluded that the dentists "misapplied the methodology and used flawed preservation and photography techniques." (13) The appeals court ordered Starks released on bond pending a new trial. (14) His convictions were vacated and the last charges dismissed in January 2013. (15) Although Starks is free today, the lack of lab oversight and forensic standards leaves forensic science distrusted and vulnerable to manipulation. During the twenty years Starks spent behind bars, advancements in forensic science technology progressed exponentially, yet the system continues to suffer from fatal flaws and a low threshold of reliability.

Indeed, five years after the NAS Report, the so-called "Path Forward" seems murky, and various political logjams have barricaded the road to reliability. I posit that reliability--the bedrock of forensic science--remains a fleeting notion, because efforts at reform have lacked coordination and implementation. The only way to adequately address the flaws brought to light through the NAS Report is to align the various stakeholders and make a concerted effort from all facets of forensic science, rather than waiting for guidance through a frustrated and exhausted legislative and judicial process.

Although impossible to quantify, the number of wrongfully convicted individuals is at least in the hundreds. (16) Unreliable science presents itself in a virtual smorgasbord of ways, from the routine (contamination) to the egregious (forensic misconduct) and everything in between (misrepresented or exaggerated results, misinterpretation of results, lack of research for basic assumptions, unqualified analysts, inconsistent lab practices). Regardless of the root causes of the forensic flaws, the NAS Report clearly issued a "call to arms" to reform forensic science from the top down by proposing the creation of a centralized National Institute of Forensic Science (NIFS). (17) Little has been done, however, to achieve reform. Indeed, legislation has repeatedly stalled in Congress, and certain constituencies have mounted stiff resistance to reforms. (18)

With the exception of DNA, no single forensic technique yet has the ability to definitively link an evidence sample to its source. (19) Ability is very different from invariable actuality, however; even DNA evidence has its limitations and stress points. (20) Deficiencies in forensic science have harrowing implications, and the number of exonerations in recent years has underscored the very real threat that innocent people can be convicted. The reality of wrongful convictions has risen to the forefront of public awareness through the work of the Innocence Project and other organizations. (21) Of course, there are numerous factors that relate to wrongful convictions outside of faulty forensic evidence--witness misidentification, false confessions, jailhouse snitches (22)--but in some ways, the public conception of erroneous convictions, and that DNA will cure them all, represents a somewhat myopic view.

The Innocence Project predominantly accepts cases where biological evidence is available for DNA testing. (23) That only applies to a small subset of cases with potential claims of actual innocence. For each case where DNA is able to definitively exonerate an individual, there are many more equally innocent people in cases where DNA evidence is lacking. (24) Relying on the postconviction process to correct the problem simply puts a Band-Aid on a gaping wound. We can do better. DNA may provide the "get out of jail free" card in certain cases, but its absence in others nearly ensures that both the convictions and any bad forensic practices involved will persist.

To prevent wrongful convictions (as opposed to just responding to them), the NAS Report concluded that problems with forensic evidence could "only be addressed by a national commitment to overhaul the current structure that supports the forensic science community in this country." (25) To be clear, the NAS Report was not the first conscious conclusion that forensic science needs work. (26) Moreover, the Report was not the first suggestion that the mechanisms for change should occur at the federal level. (27) It probably will not be the last.

The spate of legislation the NAS Report has spawned over the past few years represents a laudable but failed effort to repair a broken system. (28) The top-down mentality of restructuring forensics essentially sweeps everything behind a gigantic curtain in an attempt to control all of the loose pieces in a one-size-fits-all manner. But a careful evaluation of the bottom of that curtain reveals the wizard's feet peeking out: reforms are plagued by underfunded entities, unrealistic budgets, and permissive language that strips real reform of any enforcement power. Simply put, if we continue to suggest a national entity to overhaul forensic science in a grandiose and unrealistic fashion, then we will continue to tabulate wrongful convictions based on bad science.

Although I formerly argued that we need a federal agency devoted to the reliable development and distribution of sound forensic science, (29) history coupled with reality tells me that legislative gridlock and territorial pissing contests may make this impossible. Thus, while I still maintain that centralization is the key, I now advocate for a grassroots effort in creating a reliable forensic framework from the ground up, rather than the top down. Cooperation and collaboration across all levels of the criminal justice commerce stream is, in my view, the only currently accessible method. In addition, bringing universities--the bastions of scientific research--into the framework will increase the speed and accuracy, while reducing the costs, of developing standards. Law enforcement, forensic analysts, research scientists, and lawyers need to recognize that forensic science does not exist in a vacuum, and if errors continue to multiply, then we are left with a system that only slides deeper into disrepair.

This Article proceeds in five parts. Part I focuses on the science behind forensics and highlights some of the misconceptions regarding the validity of some disciplines. Part II discusses previous attempts at forensic reform in the United States. Part III discusses the obstacles to implementing a federal forensic science entity and national standards, including potential constitutional challenges and the ever-present issue of locating funding for such an endeavor. Part IV proposes that, rather than creating an entirely new framework, we should leverage existing frameworks already in place to improve the quality and cost of the U.S. forensic science program. Finally, Part V outlines some works-in-progress, notably the U.K.'s major overhaul, and suggests that we capitalize on lessons already learned from those who have transformed forensics into a science.


   "[Forensic science] is justice's best friend, but it has to not
   only be used right but done right." (30)

Despite the authority with which television and movie crime dramas depict forensic science results, the practice sometimes falls short of that "used and done right" standard. Popular culture, news outlets, and public perception guide the belief that forensic evidence is reliable and absolute proof of an individual's guilt. In fact, forensic evidence has the essential hallmarks of certainty that juries need and society craves. Most people agree that it would be a miscarriage of justice to imprison an innocent person. (31) Consequently, we want to be sure that we are convicting the right person. In many cases, forensic evidence closes the confidence gap left open by these concerns and seals the defendant's fate. (32) It has the power to move the jury from maybe to guilty, and everyone can sleep better at night because "science" solidified the conviction. (33) The forensic analysts, then, are the criminal justice system's rock stars, bringing their objective scientific skill and authority to an otherwise emotionally charged process. (34) Yet, "public crime laboratories are not the sanctuaries of science we believed them to be." (35) Even the Supreme Court has recognized that "[s]erious deficiencies have been found in the forensic evidence used in criminal trials." (36) It is undeniable, and the "legal community now concedes, with varying degrees of urgency, that our system produces erroneous convictions based on discredited forensics." (37)


In tracking the 311 cases of postconviction exoneration brought about by DNA testing, the Innocence Project estimates that the average sentence served in those cases is about thirteen years, with eighteen people sentenced to death before DNA was able to prove their innocence. (38) Moreover, of those 311 cases, 141 of the original convictions involved "unvalidated or improper forensic science." (39) Given the now-universal nature of DNA testing, "it is possible to forget that, for decades, law enforcement had to rely on much less accurate forensic methods." (40)

Although today's criminal cases often revolve around whether there is DNA--even for low-level property crimes--forensic science traditionally encompasses many different disciplines. Those disciplines include "general toxicology, firearms/toolmarks, questioned documents, trace evidence, controlled substances, biological/serological screening, fire debris/arson analysis, impression evidence (e.g., fingerprints, shoe/tire prints), blood pattern analysis, crime scene investigation, medicolegal death investigation, and digital evidence." (41) In many forensic disciplines, "the human examiner is the main instrument of analysis." (42) The forensic analyst examines "visual patterns and determines if they are 'sufficiently similar' to conclude that they originate from the same source." (43) The forensic disciplines can thus be divided into two main categories: lab disciplines and disciplines based on expert interpretation of observed patterns. (44) Examples of the former include DNA analysis, toxicology, and drug analysis. (45) Disciplines based on expert interpretations aim to determine a common source for patterns observed in, but not limited to, fingerprints, writing samples, and toolmarks. (46)

In what may be an oversimplification of the distinction, the lab disciplines also bring quantitative results that seem to reflect objectivity. For example, DNA results culminate in the all-important statistical representation of the likelihood of a random match based on population genetics (47)--i.e., the pervasive "1 in n billion" number. The lab-based forensic disciplines are deemed to be more analytical and thus more reliable than the more subjective "pattern identification" disciplines, which produce qualitative results. (48) Although consideration of whether the lab disciplines are deserving of such deference is better saved for another article, the element of subjectivity inherent in the analysis of lab disciplines merits comment. DNA analysis is subject to human error based on the interpretation (read: subjective analysis) of results that include, among other things, mixture samples, Low Copy Number DNA, (49) and degraded evidence. (50)

Distinctions aside, forensic science disciplines lack significant peer-reviewed research of the scientific bases and validity studies that should support their methods. (51) Fingerprint-matching techniques, for instance, lack "sufficient data on the diagnosticity and reliability" of even the most basic assumptions. (52) For pattern-identification methods generally, research establishing the limits and measures of their performance is "sorely needed." (53) Although research in many disciplines would allow for more consistent, quantitative results, research culture has not found a foothold in forensic science. (54) Without the requisite level of empiricism that grounds scientific endeavors, forensic science devolves into forensic art.

Despite the public desire for certainty and the legal requirement to prove guilt beyond a reasonable doubt, "[f]ew forensic science methods have developed adequate measures of the accuracy of inferences made by forensic scientists." (55) It seems to be common sense that every forensic technique should include the applicable level of "uncertainty in the measurements that are made." (56) Taken in isolation, it seems quixotic that a field lacking scientifically acceptable standards across such a wide segment of its practices continually calls itself a "science."

The disconnect between forensic research and forensic practice occurred long ago and is the product of a criminal justice system that misplaces value in that gap. Many of the disciplines evolved solely for the purpose of solving crimes, (57) and I hazard a guess that the inability to challenge forensic techniques' reliability due to the lack of solid research produces more convictions than acquittals. In the absence of validation studies, forensic techniques were initially applied to cases; once their application was established, the ongoing prosecutorial use of forensic techniques (and a good bit of judicial notice) continued unquestioned, and courts cemented their longevity. (58)

With a pile of cases to solve, research, repeatability, and reliability assessments were--quite understandably--not crime labs' priority. Furthermore, implementing research and standards presents costs (in both workload and real dollars) that crime lab budgets simply cannot absorb. This steady progression to deem results acceptable, however, permitted forensic evidence development to continue unimpeded and elevated it to "sure bet" status in criminal trials. Of course, some forensic evidence is more reliable than others, (59) but that does not excuse a continued culture of "because I said so" testimony that uses loaded terminology such as "match," "positively," or "to the exclusion of all others" without the proper considerations of validity and rarity found in other research sciences.

This lack of a research-oriented culture in forensic evidence leads to errors in the way the evidence is used in prosecutions and presented in courts. In a recent study of the "predictors" of wrongful convictions, Jon Gould et al. concluded that forensic errors most often accumulate in evidence interpretation and the resulting testimony, rather than the "actual scientific testing." (60) In some ways, these predictors presuppose that "scientific testing" takes place, as opposed to analysts merely "eyeballing" (61) the evidence. Nonetheless, Gould and colleagues do acknowledge that there is a fundamental lack of foundational research underlying forensic science disciplines. This contributes to the eventual errors in forensic testimony, such as exaggerating the "inculpatory nature of the evidence by providing inaccurate or non-existent statistics; and misstating the certainty of the results when the forensic technique, such as bite mark, scent, or fiber analysis, does not allow for it." (62) Indeed, there are no instruments that measure or quantify a reasonable degree of scientific certainty when the "scientific certainty" really boils down to the experience of the witness and not much else.


In a 2012 series of investigative reports, the Washington Post exposed a Department of Justice (DOJ) review of hundreds of cases believed to contain flawed forensics. The DOJ task force spanned nine years and (regrettably) focused on the work of one particular examiner performing hair and fiber analyses. DOJ officials began reexamining cases in the 1990s after receiving reports that careless work by analysts at the FBI lab produced unreliable forensic results that were later used in trials. The results of that DOJ review--withheld from many affected defendants for more than a decade--demonstrated that flawed hair and fiber evidence was used to garner convictions in numerous cases. (63)

Hair and fiber evidence has long been the subject of scrutiny. (64) It should not come as a surprise that some of the defendants against whom this evidence was used turned out to be innocent. What is surprising is that DOJ deliberately withheld the findings from the defendants whose convictions rested--at least in part--on that evidence. Instead, DOJ made the findings available only to the prosecutors in the affected cases. The Washington Post's investigation revealed that possibly fewer than half of the defendants whose hair evidence was called into question ever learned of the task force's review. Based on this investigation alone, it is clear that numerous individuals may "remain in prison or on parole for crimes that might merit exoneration, a retrial or a retesting of evidence using DNA because FBI hair and fiber experts may have misidentified them as suspects." (65)

In one such case, Donald E. Gates served twenty-eight years for the rape and murder of a Georgetown University student based on FBI Special Agent Michael P. Malone's testimony that Gates's hair was found on the victim's body. (66) DNA testing exonerated Gates in 2009. (67) Even before the DOJ task force reviewed Malone's work, DOJ's Office of the Inspector General (OIG) issued an unsparing report on its investigation into "allegations of wrongdoing and improper practices within certain sections of the [FBI] Laboratory." (68) That particular report--released in 1997--specifically targeted Malone. Malone's work was the linchpin of Gates's conviction, but Gates never learned about the OIG's report regarding Malone or his faulty work. (69) Although eventually exonerated and released, Gates spent decades in prison for a crime he did not commit. (70)

Benjamin Herbert Boyle was also convicted based on Malone's testimony. (71) Boyle's case was part of the task force's review, but--like Gates--he never learned of the investigation into Malone's work. In fact, Boyle would never have the opportunity to learn about it. The State of Texas executed him in 1997. (72) A prosecutor's memo indicated that Boyle never would have been eligible for the death penalty had the problems in the FBI lab work been disclosed. (73) The task force would later determine that Malone's conclusions in Boyle's case were flawed. (74)

For years, scholars, attorneys, and scientists have questioned the validity of microscopic hair comparison. The discipline is beset with weaknesses; yet, DOJ only reviewed the work of one FBI analyst--Malone--despite the questions surrounding the integrity of the FBI lab as a whole. (75) Of course, choosing to focus on one bad apple rather than a holistic repair of the tree is the easier, lower cost option. Moreover, it allowed the task force to blame the misconduct or ineptitude of one and ignore the systemic failures of an entire discipline.

The shortsightedness of such limited review, however, is palpable when viewed through the lens of cases that slipped through the cracks. Santae A. Tribble was convicted of killing a taxi driver named John McCormick in 1978. (76) During the investigation of McCormick's murder in Seat Pleasant, Maryland, a police dog uncovered a stocking mask one block away from the crime scene; the stocking contained thirteen hairs in total. (77) Of the thirteen, the FBI concluded through hair analysis that one belonged to Tribble. (78) Over the course of his three-day trial, Tribble took the stand in his own defense, urging the jury to accept the fact that he had no connection to McCormick's death. (79) Nevertheless, the jurors gave weight to the one "matching" hair and found Tribble guilty of murder; the judge sentenced him to twenty years to life in prison. (80)

Both in prison and while on parole, Tribble maintained his innocence, and in January 2012, Tribble's lawyer succeeded in having the evidence retested. (81) A private lab concluded through DNA testing that the hairs could not have belonged to Tribble. (82) A more thorough analysis at the time of the crime--even absent DNA testing--would have revealed the same result: one hair had Caucasian characteristics and Tribble is African-American. (83) But a shoddy examination left an innocent man in prison for twenty-five years, plus another three years on top of that for failing to meet the conditions of his parole. (84) And Tribble is, perhaps, "lucky." His case had testable DNA, and he found freedom in 2012, eight years after the task force completed its work. (85)

In another case that escaped the task force's review, Kirk L. Odom was convicted of sexual assault in 1981. (86) The star prosecution witness--an FBI special agent--testified that a hair discovered on the victim's nightgown was microscopically similar to Odom's hair, "meaning the samples were indistinguishable." (87) To illustrate the credibility of the evidence, the agent also testified that he had concluded hairs to be indistinguishable only "eight or 10 times in the past 10 years, while performing thousands of analyses." (88) Although Odom presented alibi evidence, the jury convicted him after just a few hours of deliberation. Odom was paroled in March 2003 and was required to register as a sex offender. (89)

That would have been the end of Odom's story had it not been for his lawyer's crusade to right the wrongs attributable to the erroneous hair comparisons. (90) In February 2011, Sandra Levick (who had also represented Gates and Tribble) filed a motion for DNA testing under the D.C. Innocence Protection Act. (91) In response, the government located stained bedsheets, a robe, and the microscopically examined hair from the crime scene. (92) "DNA-STR testing on semen from a pillowcase and robe, as well as mitochondrial testing of the hair, all excluded Odom" and instead implicated a convicted sex offender. (93) Odom was exonerated on July 13, 2012. (94)

In response to the Gates-Tribble-Odom trifecta, DOJ and the FBI announced a joint effort to review convictions involving FBI (and only FBI) analyses of hair evidence. (95) For its part, the FBI appears to be in denial. In a July 2012 statement, the FBI explained:

   The FBI Laboratory still conducts microscopic hair comparisons.
   There is no reason to believe the FBI Laboratory employed "flawed"
   forensic techniques.

   The validity of the science of microscopic hair comparison is not
   at issue; however, based on recent cases, the FBI and Department of
   Justice are committed to undertaking a review of historical cases
   that occurred prior to the regular use of mitochondrial DNA testing
   to ensure that FBI testimony at trial properly reflects the bounds
   of the underlying science. (96)

The U.S. Attorney for the District of Columbia, Ronald C. Machen, Jr., has stated that his office would conduct "a sweeping review" of past cases "where hair analysis was used in part to secure convictions." (97) In addition to being too little, too late for some, this effort again seems to deliberately ignore the fact that flawed hair analysis is a widespread problem. (98) To believe such errors occur in isolation--confined to just one lab or just one forensic discipline such as hair analysis--is nonsensical when the entire forensic discipline produces wrongful convictions because of analytical and structural defects. In many cases, we continue to allow the criminal justice system to be held hostage by bad science, and those caught in the cross hairs have little recourse from a system designed to reinforce finality over truth. (99)


Questionable results may come from weak methodology, misapplication of methods to a specific case, second-rate analysts, or outright fraud. While it may be easy to conceive of how forensic errors can exist in disciplines such as hair analysis, we have more difficulty understanding errors in established forensic techniques, such as latent print identification, commonly known as fingerprints. The bedrock of fingerprint analysis is the familiar refrain that no two fingerprints are alike. Indeed, fingerprints have general ridge patterns that make it possible to systematically classify and compare them, and the average fingerprint contains between 50 and 150 points of comparison (termed "friction ridge analysis"). (100)

But fingerprint analysis does not involve a comparison of 150 or even 50 points of identification. Rather, most jurisdictions in the United States do not require a minimum number of points between samples to sufficiently call the comparison a "match." (101) Even among fingerprint analysts, the number of points of similarity required for identification varies, ranging from as few as eight points to as many as twelve or more. (102) So, while it may be that on the whole no two fingerprints are alike, there is little to support that six, ten, or even twelve points are a sufficiently discriminating means of identifying a suspect. Moreover, such evidence is never presented with an indication of how accurate it might be (i.e., a quantifiable number that presents the analyst's confidence in the conclusion). It seems logical that the likelihood that a given print belongs to a suspect increases when there are more points of commonality. Yet, the fingerprint community has never embraced this component because the requisite data (i.e., probability studies) does not exist. (103)

Such a theoretical disconnect became a blatant reality in the case of Brandon Mayfield. On March 11, 2004, a terrorist attack on commuter trains in Madrid, Spain, killed approximately 200 people and injured over 1,400 more. (104) Needing assistance, the Spanish National Police enlisted the help of the world-renowned FBI crime lab and its fingerprint specialists. Just eight days later, on March 19, the FBI identified Mayfield as the source of one of the fingerprints on a bag containing detonators connected with the attacks. (105) A second examiner verified the "match," and a unit chief reviewed the conclusion and concurred in the results. (106) The FBI then learned on April 13 that the Spanish National Police performed an independent examination of the print comparison but could not positively identify Mayfield as the source. (107) After meeting with FBI representatives, the Spanish National Police agreed it would reexamine Mayfield's fingerprints. (108)

The FBI ultimately arrested Mayfield on May 6. (109) Mayfield was still in detention on May 17 when the court appointed an independent fingerprint examiner to review the FBI's identification. (110) On May 19, the independent examiner agreed with the FBI's identification and became at least the fourth examiner to positively link Mayfield to the suspect print. (111) Yet, on the same day, the Spanish National Police notified the FBI that it had positively matched the fingerprint with Ouhnane Daoud, an Algerian national. (112) The court released Mayfield the next day to be detained at home; the FBI withdrew its identification on May 24, and the case against Mayfield was dismissed. (113)

OIG ultimately found multiple sources for the FBI lab's error. (114) One source of error concerned facts specific to the case--such as the unusual similarity between the latent print and Mayfield's known print, as well as Mayfield's religious background. (115) Another source concerned general problems with the fingerprint identification process--including its reliance on extremely tiny details, inadequate explanations for differences, failure to assess the poor quality of the similarities, and failure to reexamine the fingerprints after the Spanish National Police investigation returned a negative result. (116) While the Mayfield case may seem like an outlier, it remains true that serious errors in supposedly reliable and accurate methodology nearly perpetrated a miscarriage of justice. Brandon Mayfield's case is a high-profile example of a systemic problem that likely increases in frequency when the case is merely average, neither implicating national security nor requiring multiple reviews of the evidence. Perhaps what makes Mayfield's case the exception is not that forensic science got it wrong but that investigators figured out the errors before the man was convicted. Still, these errors resulted in an innocent man being investigated and detained. Further, the resources of the FBI and other investigatory organizations were wasted on pursuing a meritless lead.

Even beyond the Mayfield blemish, additional work is beginning to demonstrate that fingerprint analysis has been undermined by its own methodology. (117) The NAS Report cites Lyn and Ralph Haber's paper in which they conclude: "We have reviewed the available scientific evidence of the validity of the ACE-V method [of latent fingerprint identification] and found none." (118) The development of the ACE-V method (119) itself has a curious chronology. It was conveniently adopted after the Supreme Court's decision in Kumho Tire Co. v. Carmichael, which refused to distinguish technical testimony (including fingerprint identification) from scientific evidence, making technical testimony subject to the rigors of Daubert. (120) The decision effectively removed the cloak of invisibility for some forensic disciplines that rested on "technical experience," rather than scientific methods, as the foundation for the expert opinion. (121)

Suddenly, latent print examiners needed some sort of method in addition to an abundance of experience and a good set of eyes. Consequently (and conveniently), the ACE-V method was born. But it does not belong to the family of scientific analysis that the term "method" might otherwise suggest. Despite widespread propaganda promoting ACE-V as a scientific method, fingerprint analysis lacks validated standards and testing with respect to the process and the level of reliability needed to draw conclusions about the relative similarity between two prints. (122) A recent study has shown that when identical fingerprint evidence is presented to the same set of examiners for analysis, they reach different conclusions approximately 10% of the time. (123)

Moreover, the "V" in ACE-V (which stands for "verification") was meant to address the need for peer review, but the slipshod fix ignores the cognitive biases that pervade fingerprint analysis. The Mayfield case highlighted this particular weakness, but it was not an isolated incident, nor is the problem limited to fingerprint analysis. Context influences many aspects of the forensic process. Forensic examiners may be aware of the nature and details of the particular crime or the suspect, pressured by an investigator to find a match between samples, or apprised of prior conclusions drawn by colleagues working on the same piece of evidence (the peer review). All of these factors can contribute to contextual bias. (124)

The contextual stimuli that permeate forensic science may be subtle or flagrant, but they are omnipresent. Mayfield's erroneous identification exemplified the gravity of forensic bias: "the latent fingerprint was examined against a pre-existing 'target,' without first being properly analyzed in isolation; the examiners were pre-armed with contextual information, leading them to be suspicious of their target; and the case was high in profile and time-urgent, increasing the need for closure." (125) Couple the bias component with the possibility for false positives, and the threat of a wrongful conviction based on flawed fingerprint evidence is very real.


In recent years, a number of shocking crime lab scandals have gained media attention and grabbed headlines. The cases appear to encompass errors ranging from mere negligence to outright malfeasance and occur in labs all over the country. Accusations involve evidence tampering, (126) perjury, (127) and withholding evidence. (128) Such charges are often linked to a particular person or even section within the crime lab. The problem of one, however, becomes the pestilence for many, because a crime lab is the sum of its collective parts. When one part is infected, it can bring down the entire organism.

As with the individual forensic disciplines, crime labs also lack any cohesive set of mandatory standards, which creates quality control issues that vary from lab to lab. (129) The crime lab accreditation process--which implies reviews, testing, and audits--is, at best, voluntary and, at worst, a charitable endowment. Many states do not require their crime labs to be accredited. (130) Those labs that do seek accreditation do so through the American Society of Crime Laboratory Directors/Laboratory Accreditation Board (ASCLD/LAB), the primary certifying body for crime labs. In 1996, Peter Neufeld--cofounder of the Innocence Project--observed that "[t]here's absolutely no reason that crime laboratories, which routinely make decisions that have life and death consequences for an accused person, should be less regulated than a clinical laboratory utilizing similar tests." (131)

The NAS Report noted the lack of standards for lab management and administration. (132) Specifically, it observed:

  There is no uniformity in the certification of forensic
  practitioners, or in the accreditation of crime laboratories.
  Indeed, most jurisdictions do not require forensic practitioners to
  be certified, and most forensic science disciplines have no
  mandatory certification programs. Moreover, accreditation of crime
  laboratories is not required in most jurisdictions. Often there are
  no standard protocols governing forensic practice in a given
  discipline. And, even when protocols are in place ... they often
  are vague and not enforced in any meaningful way. (133)

History demonstrates that if a lab produces errors (on any scale), it is unlikely to affect its accreditation from ASCLD/LAB. A member of the New York Forensic Science Commission criticized ASCLD/LAB for its "culture of tolerance for errors stemming from a highly forgiving corrections system, some times of major and/or lesser magnitudes, but many of which either violate ASCLD/LAB's ethics guidelines and/or standards." (134) Indeed, by its own terms, ASCLD/LAB does not conduct random inspections of crime labs. (135) Labs always get notice of a visit, and the lab itself selects the case files for review. (136)

Reminiscent of the mortgage industry's countercyclical diversification strategy (which produced the housing bubble), ASCLD offers a wealth of services to its member labs, "such as protection from outside inquiry, shielding of internal activities and where necessary, especially in the event of public condemnation, a spokesperson to buffer the laboratory from media inquiry." (137) In other words, even when times are bad for a crime lab, ASCLD still reaps benefits from member labs. Crime lab accreditation is a for-profit business that sorely needs an overhaul, but it likely is not the root cause of crime lab scandals.

What makes forensic error into a full-blown crime lab scandal? As with any scandal that brings down an organization, it usually includes repetitive misconduct, a failure to respond, and a culture of tolerance of such activity. (138) The situations that push an incident from the "problem" column to the "scandal" column are varied and diverse. Examiners may lie about test results, (139) produce misleading data regarding the reliability of their methods, (140) or conceal exculpatory evidence. (141) Other cases may involve "dry-labbing," where analysts record data for tests that they never conducted. (142) Protocols may be ignored, forensic scientists may exaggerate their credentials or expertise, or tests may be tampered with.

Whatever the particular problem, it cannot be denied that between 2005 and 2011, authorities identified fifty significant failures at American crime labs. (143) These types of problems have led to scandals across the nation, resulting in full or partial closures, reorganizations, investigations, or firings at city or county crime labs. (144)

To highlight some recent examples of flawed testing: Detroit in 2008 shut down its crime lab when an audit revealed errors in 10% of cases. (145) In 2010, an audit revealed that technicians in a North Carolina lab provided false or misleading results in 190 murder or similarly serious cases. (146) In 2011, New York shut down a state crime lab after an investigation revealed that the lab had engaged in flawed testing for MDMA (more commonly known as ecstasy), triggering review of 9,000 cases. (147) Authorities were aware of issues with the crime lab as far back as 2008. (148)

In some cases, analysts have stolen evidence for personal use. (149) San Francisco crime lab technician Deborah Madden admitted to taking cocaine from evidence. (150) Police arrested Massachusetts chemist Sonja Farak on similar charges related to both cocaine and heroin earlier in 2013. (151) The need for standard protocol and oversight in state-run crime labs has never been more apparent.

Other analysts tamper with evidence, effectively committing fraud, to attain professional recognition. (152) Chemist Annie Dookhan (also in Massachusetts) was responsible for her lab's quality control. (153) Authorities discovered that she manipulated evidence to obtain false positives. (154) Dookhan was renowned for her "preternatural speed." (155) She analyzed an astonishing 500 samples per month, while the average forensic chemist makes it through 50 to 150 samples in the same amount of time. (156) Her supersonic speed, however, was anything but the result of superior skill. Dookhan admitted that she cut corners and rarely respected lab protocol. (157) One of Dookhan's supervisors noted that she "did not seem to use a microscope, which is necessary to confirm that a substance is cocaine." (158) Dookhan further admitted to sprinkling samples submitted for testing with a known illegal substance to ensure a positive result, as well as testing a small percentage of samples and then listing all the remaining samples as positive. (159) Her misconduct implicated over 30,000 defendants (160) and as many as 200 cases that federal officials now must review. (161)

Ohio toxicologist James Ferguson lied about his credentials on the witness stand hundreds of times. (162) Ferguson claimed to have received his college degree sixteen years prior to his actual graduation date. (163) Ferguson discounted the magnitude of the deception in light of his twenty-plus-years' experience. (164) One cannot help but wonder what else Ferguson has lied about, given his willingness to perjure himself over something he characterized as minor. If he lied about evidence, Ferguson would not be alone in committing perjury to bolster prosecutors' cases. Michael Hansen served six years for the murder of his daughter before a judge found that the medical examiner, Dr. Michael McGee, testified falsely in Hansen's trial. (165) The prosecution ultimately dropped the charges. (166)

In addition to problems spawning from overt misconduct in crime labs, their close connection to law enforcement can result in policies favoring the prosecution. For example, North Carolina's crime lab recently came under fire for a policy of withholding certain results from defense attorneys. (167) In situations where an initial sample tested positive as blood, the lab would withhold any subsequent negative tests--even where the later tests were more specific. (168) According to an FBI report, the "North Carolina crime lab workers omitted, overstated or falsely reported blood evidence over a 16 year period." (169)

The harms caused by errant crime labs are often compounded by their lack of transparency, and some are outright attributable to hiding evidence. (170) Labs often can be more concerned with reputation than with rectifying wrongs (which requires informing defendants of the error(s)). These troubling issues exact enormous costs. When scandals do come to light, the criminal justice system must reexamine huge numbers of past convictions. (171) Annie Dookhan, for example, was directly involved with at least one hundred cases in one federal district court alone. (172) As many as 500 or more cases in which she was involved may eventually have to be reviewed. (173) Ultimately, once state court cases and cases invoking the mandatory minimum sentencing requirements based on state convictions are considered, the toll for review is estimated to reach approximately 34,000 cases. (174) In the cases that had been reviewed as of January 2013, courts overturned 1,141 convictions where Dookhan handled evidence. (175) The scandal was expected to cost the state more than $40 million. (176) Of that, the Massachusetts judiciary reportedly requested about $13.6 million to deal with the scandal. (177) These figures likely exclude the expenses for the public defenders needed in many of these cases. (178)

At a time when the federal and state governments bemoan declining revenues, it seems far more efficient to ensure labs are adequately resourced in the first instance than to divert money to cleaning up messes after the fact. But no matter their gravity, the problems that plague crime labs also exact substantial nonmonetary costs. Not only are internal investigations still required to ferret out tainted samples, (179) but more importantly, the integrity of the criminal justice system is eroded. These scandals undermine society's faith in a fair and just system. And, of course, the human cost of forensic errors is greatest of all. There is no way to quantify the pain suffered by innocent people incarcerated for crimes they did not commit. It is also well worth remembering that in crimes where there is a victim, every innocent person wrongfully convicted means a guilty person is allowed to go free.

While these are but a few in a laundry list of crime lab errors, collectively, they underscore the need for greater oversight and increased accountability. The continued failure to address these problems exacts too high a toll.


"Insanity is doing the same thing over and over again, but expecting different results." (180)

In the aftermath of the NAS Report and the rise in reporting of crime lab errors (whether errors have truly increased or are merely reported more often is subject to debate), U.S. forensic reform appears to be in its infancy. While the NAS Report proposed a federal reshaping of forensic science services, it was not the first foray into reform. Legislation has tiptoed around forensic issues for decades, with little to no success. Most legislation targeted labs rather than forensic science as an industry. The year 2012, however, saw a shift toward legislation proposing research, standards, and oversight, as opposed to dumping more money into labs.


The abysmal state of crime labs first gained national attention in 1967 when President Lyndon B. Johnson's Commission on Law Enforcement and the Administration of Justice found that many police labs lacked both equipment and expertise. (181) During the Nixon Administration, a 1973 commission echoed many of these same concerns. (182) A few years later, the National Institute of Law Enforcement and Criminal Justice garnered nationwide media attention with its finding that scores of crime labs were underperforming. (183) Identifying weaknesses, however, does little to actually effectuate change in the absence of funds to accomplish those improvements. This lack of funding is a continuous theme in the chronology of forensic reform legislation.

In the 1970s and 1980s, the answer to performance issues seemed to be treating symptoms rather than causes: providing "grants" to fund "assessments." (184) Such an ad hoc approach essentially threw cash at various problems in hopes of incentivizing and compelling improvements. Of course, that rarely works, and the early attempts at reform were just that--attempts.


Despite the evidence of widespread performance lapses among crime labs, Congress largely remained silent on the issue until the use of DNA in criminal investigations gained prominence. (185) Competing views over DNA evidence admissibility led to a 1992 report by the National Academy of Sciences. (186) A 1996 follow-up report revealed that DNA tests were both scientifically valid and reliable. (187) The follow-up report, in concert with the standards for admissibility established by Daubert v. Merrell Dow Pharmaceuticals, Inc., (188) resulted in a rise in the use of DNA in criminal trials--and a corresponding uptick in regulating legislation. (189)

After the follow-up report, the National Institute of Justice (NIJ) joined forces with the Office of Law Enforcement Standards to fund the "Forensic Summit: Roadmap to the Year 2000." (190) The summit resulted in a report outlining persistent deficiencies in most public crime labs. (191) The report called for greater standardization, increased research, and quality controls in labs. (192)

The report notwithstanding, DNA continued its ascent as the so-called gold standard in law enforcement, and this new reverence--bordering on obsession--meant the vast majority of federal funding allocated to crime labs was tied to DNA research. (193) For example, Congress in 2000 enacted the DNA Analysis Backlog Elimination Act of 2000 (194) and the Paul Coverdell National Forensic Sciences Improvement Act of 2000, (195) both meant to improve the quality of forensic science services. (196) The funding mechanisms for DNA testing far outstripped any other allotments, despite the fact that DNA testing represents a mere fraction of crime lab work. (197) Moreover, this preference for DNA-related spending did nothing to address the persistent issues within crime labs.

The sad state of forensic labs again gained national attention a few years later when President George W. Bush spearheaded the formation of a forensic science commission. (198) Two mechanisms created in 2004 were supposed to carry out the President's mandate. (199) The Consolidated Appropriations Act obligated NIJ to provide Congress with a report on the forensic science and medical examiner communities' needs beyond DNA initiatives. (200) That same year, the DNA Sexual Assault Justice Act of 2004 (part of the Justice For All Act) tasked the Attorney General with creating a national forensic science commission, which would identify resource needs beyond DNA, in addition to making recommendations, disseminating best practices, and researching privacy issues around using DNA samples. (201) Although the bill passed, the commission was never funded. (202)

The situation again appeared hopeful with the passage of the Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006, which authorized NAS to create a forensic science committee and issue a report with findings and recommendations to improve the state of forensic science. (203) Among the findings previously mentioned, the NAS Report noted "great disparities among existing forensic science operations in federal, state, and local law enforcement jurisdictions and agencies." (204) The differences pertained to funding, access to analytical instrumentation, the availability of skilled and well-trained personnel, certification, accreditation, and oversight. (205) In the chronology of forensic reform, the NAS Report did much to draw national attention to an issue first acknowledged--but little improved--since the Johnson Administration. (206)


If the NAS Report's release can be viewed as a watershed moment, then the legislation it spawned might be viewed as the third iteration of proposed forensic reform. A few days prior to the release of the NAS Report, Representative Peter Roskam introduced the State and Local Criminal Forensic Laboratory Enhancement Act of 2009. (207) Despite the national attention garnered by the NAS findings, the bill never made it out of committee. President Barack Obama responded by chartering a subcommittee on forensic science. (208) That subcommittee's role was to make recommendations to achieve the goals the NAS Report outlined. (209) But DNA testing remained the focus of most legislation and received the lion's share of funding through the 111th Congress. (210)

Two years later, Senator Patrick Leahy introduced the Criminal Justice and Forensic Science Reform Act of 2011. (211) The bill, which also died in committee, (212) would have established an Office of Forensic Science within DOJ. (213) In 2012 and again in 2013, Representative Eddie Bernice Johnson introduced legislation to "establish scientific standards and protocols across forensic disciplines." (214) The Forensic Science and Standards Act of 2013 (Standards Act)--and its 2012 predecessor (215)--intends to create "a national forensic science research program to improve, expand, and coordinate Federal research in the forensic sciences." (216) In addition, the Standards Act would establish both a national forensic science coordinating office at the National Institute of Standards and Technology (NIST) and a forensic science advisory committee. (217) Unlike Senator Leahy's bill, which would have placed the forensic science office within DOJ, the Standards Act provides that both the NIST director and the Attorney General would create the advisory committee, which, in turn, would advise DOJ and NIST. (218)

Notwithstanding the failure of the 2012 Standards Act, the resurrected Standards Act is notable for tackling forensic reform in a manner that prior legislation had not. The Act aims to fix forensic science by encouraging research, adopting standards, and creating accreditation requirements. Despite its ambitious objectives, however, the legislation suffers from its corpulent proportions. Aside from the historical failure rate of forensic reforms, the legislation is problematic because it would effectively birth a Lernaean Hydra of agencies, committees, and other entities that border on redundancy and grandiosity. It would create a chaotic assemblage of organizations by establishing new entities under the auspices of the existing National Science Foundation (NSF) and NIST.

The NAS Report observed that a lack of quality, peer-reviewed forensic science research stymies advancements in the field. To address this deficit, the Standards Act would create a research program, which would direct research efforts in the forensic sciences from a variety of federal groups. (219) In addition to the research program, NIST would house a coordinating office, the purpose of which would be to produce a "unified Federal research strategy" that identifies and prioritizes research goals consistent with the NAS Report and to develop a roadmap to achieve them. (220) Specifically, the roadmap is intended to establish the criteria that the coordinating office would use to assess research progress. The coordinating office also would have oversight responsibility for the research program and would submit reports to Congress to identify and make recommendations regarding areas of forensic science that would benefit from further research.

The Standards Act also would provide NSF with a research grant program at an operating budget of $34 million for fiscal year 2014, increasing by $3 million each year until 2018. (221) On top of the tremendous budget allocation, the most ambitious aspect of the Standards Act would be the creation of one or more new forensic science research centers under the auspices of the NSF. (222) The Standards Act would establish the research center for four specific purposes: (1) to develop a plan to unify forensic research across federal agencies; (2) to "build relationships between forensic science practitioners and members of the research community"; (3) to promote education of individuals with the aim of creating leaders in the forensic sciences; and (4) to disseminate their work. (223)

Collecting a few more federal entities to add to the convention-like atmosphere, the Standards Act provides for additional forensic roles within the confines of NIST. Responding to the NAS Report's concerns about disparate forensic science results, the Standards Act requires NIST to develop "forensic science standards to enhance the validity and reliability of forensic science activities." (224) Such activities encompass uniform measurements and criteria both for the methods and tools forensic scientists use. (225) Further, the Standards Act would saddle NIST with standardizing the terminology forensic scientists use in their reports, providing for interoperability of forensic science databases, testing and validating existing standards, and independently validating "forensic science measurements and methods." (226)

To add to the confusion, the Standards Act would establish an advisory committee under the supervision of NIST, the NSF, and the Attorney General to counsel federal departments, agencies, and offices. The committee would consist of an interdisciplinary array of scientists and lawyers. To achieve these ends, the NIST director would be given free rein to establish working groups to "identify gaps, areas of need, and opportunities for standards development." (227) The Standards Act would allocate NIST a budget of $5 million for 2014, $12 million for 2015, $20 million for 2016, $27 million for 2017, and $35 million for 2018. (228)

The final piece of this forensic puzzle concerns the Attorney General's role. The Standards Act would provide the Attorney General with lackluster enforcement powers. While the Act requires the Attorney General to enforce forensic standards developed under the Act at the federal level, for nonfederal labs the Attorney General is relegated to "encouraging" and "promoting" powers--which, in plainer terms, merely suggest that nonfederal labs adopt the standards and promote certification and accreditation criteria. (229) Since the Standards Act effectively holds the cash hostage at the federal level, all other labs would have little incentive to implement any new standards or accreditation measures. Simply put, the Act lacks any "buy in" for the little (i.e., nonfederal) guys.

The Standards Act's broad agenda would accomplish several things: it identifies the need for research, showcases the utility of research centers, and underscores the basic requirement of standards. Unfortunately, similar earlier versions of the bill died in committee, so this iteration may become another obituary in the history of forensic reform, likely doomed by a lack of political capital and a steep price tag. Consequently, the Act may very well be a classic example of an unrealistic wish list that no one can afford.

In a postscript to the demise of the Forensic Science Standards Act of 2012, Senator Leahy reaffirmed his commitment to forensic reform in an early 2013 speech. (230) That commitment, at the very least, dovetailed with a subsequent development: pursuant to the Federal Advisory Committee Act, DOJ announced that it would partner with NIST to create a National Commission on Forensic Science. (231) The role that commission will play in the ongoing debate over forensic reform remains unclear.


Against the backdrop of failed forensic legislation, a patchwork of inconsistent standards persists across the multitude of forensic science disciplines. (232) The NAS Report concluded that these problems could "only be addressed by a national commitment to overhaul the current structure that supports the forensic science community in this country." (233) After the Report's release, other scholars and forensic science experts called for a national entity or entities to provide national forensic science standards, (234) but consensus on how best to accomplish this has remained an uncatchable shadow. Indeed, many forensic science practitioners disagreed with a federal entity running the show. (235) Consequently, the Standards Act highlights the problem of too many ideas floating about to translate into one workable system.

Even assuming a slight consensus that a federal entity should (or could) promulgate national forensic standards, (236) two questions remain: (1) whether the federal government has the power to effectively create and enforce such standards; and (2) if so, how such a program should operate. This Part attempts to answer both. First, the federal government likely has the power to regulate at least parts of the forensic science community, but it would need support from state and federal courts to enforce the standards it promulgates. Second, I submit that even with judicial support and the express authority to cram federal legislation down the state pipeline, resistance would be stiff, and the requisite buy-in from crime labs and forensic organizations is lacking.


Congress could attempt to mandate federal standards on its own. Under Gonzales v. Raich, Congress has the power to regulate even noneconomic goods if it does so as part of a commercial regulatory scheme. (237) This could give Congress some latitude to regulate parts of the forensic science community. For example, Congress might choose to regulate instruments used in forensic science analysis, because it would have a rational basis for regulating their creation and use. But the power to regulate commerce would have its limits, (238) especially related to research. One of the most critical needs in the forensic science community is for research into standards and protocols. (239) Congress could perhaps fund its own research into these areas (as the Forensic Science Standards Act contemplates), but mandating the direction of university-level research likely would be beyond the scope of Congress's power, even if it were politically feasible.

Congress is also limited either politically or constitutionally in what it can do to mandate what state and local courts admit as evidence. (240) Even if Congress could significantly affect the landscape of the forensic science community through mandates, much of its effect would diminish if state and local courts did not adopt the same standards.


Enforcing national standards in federal courts is a direct method of encouraging their adoption in the states. To make this happen, a federal forensic science agency first could consider the current Federal Rules of Evidence (FRE) and advise Congress on changes needed for properly implementing national standards in federal courts. Next, mandating these modifications as forensic evidence standards in federal courts would provide for significant, positive changes. Terminology, reporting, operational principles, and other processes could be standardized in federal court, providing for more efficiency, less juror confusion, more accurate outcomes, and less time spent litigating. Further, many of the federal-level changes would positively impact standards at state and local levels, because some state and local agencies rely on the same labs as federal law enforcement agencies. (241) Thus, changing standards in the shared labs would benefit agencies at all levels. Finally, federally mandating crime lab technician certifications as part of this process would result in an increased demand for colleges and universities to offer courses for students to pursue those certifications. The resulting increase in educational opportunities would allow more state and local forensic scientists to receive the same education as their federal counterparts.

But enforcing evidentiary standards in federal courts would only be the first positive step in achieving national forensic standards. Perhaps some states would adopt the FRE changes, but not all states base their rules of evidence on the FRE; (242) thus the changes might not receive universal, or even significant, adoption. States' lack of resources would further slow adoption. As it is, local and state forensic science services are underfunded and backlogged. (243) Many labs have neither the time nor the funds to transition to a uniform, FRE-guided system. Finally, implementing national evidence standards would also create political resistance in many states, especially under current economic conditions.

Without an ability to truly mandate the same changes at the state and local level, imposing new forensic evidence standards would only get part of the way toward a truly national system of forensic science. Moreover, adopting and applying standards and practices rooted in federal origins takes time. (244) This FRE approach would have to be combined with another approach, such as tying federal funding for forensic science initiatives to the adoption of national standards.
COPYRIGHT 2014 Northwestern University, School of Law
No portion of this article can be reproduced without the express written permission from the copyright holder.

Title Annotation:Introduction through III. Too Big to Fail: Obstacles to Federal Forensic Oversight B. Federal Enforcement Creates National Standards, p. 283-318
Author:Gabel, Jessica D.
Publication:Journal of Criminal Law and Criminology
Date:Mar 22, 2014