Neuroscience, mental privacy, and the law.
INTRODUCTION
I. GOVERNMENT USE OF NEUROSCIENCE IN LAW
   A. The Use of Neuroscientific Evidence in Law
   B. The Overlooked History of Neurolaw: The Case of EEG and Epilepsy
II. THE MENTAL PRIVACY PANIC
   A. Seeds of a Mental Privacy Panic
   B. Mind, Brain, and Behavior
   C. Mind Reading Typology
III. MIND READING WITH NEUROIMAGING: WHAT WE CAN (AND CANNOT) DO
   A. Lie Detection with fMRI
   B. Memory Detection with fMRI
   C. Memory Recognition with EEG
   D. Decoding of Visual Stimuli with fMRI
IV. PROTECTING MENTAL PRIVACY: THE FOURTH AND FIFTH AMENDMENTS
   A. "Scholars Scorecard" on Mental Privacy
   B. Fourth Amendment
   C. Fifth Amendment
V. ADDITIONAL THREATS TO MENTAL PRIVACY
   A. Competency and Parole
   B. Police Investigation, Employee Screening, and National Defense
   C. Future Developments
CONCLUSION
"If there is a quintessential zone of human privacy it is the mind."
--Justice Allen E. Broussard, Supreme Court of California (1)
"If George Orwell had written Nineteen Eighty-Four during our times, would he have put an MRI scanner in the Ministry of Truth?"
--Neuroscientist Jamie Ward, in The Student's Guide to Cognitive Neuroscience (2)
"fMRI is not and will never be a mind reader...."
--Neuroscientist Nikos Logothetis (3)
The first and second quotations in the epigraph capture a fear that many share about rapidly improving neuroscientific techniques: Will brain science be used by the government to access the most private of spaces--our minds--against our wills? (4) Such scientific tools would have tremendous privacy implications if the government suddenly used brain science to more effectively read minds during police interrogations, criminal trials, and even routine traffic stops. Pundits and scholars alike have thus explored the constitutional protections that citizens, defendants, and witnesses would require to be safe from such mind searching. (5)
Mind reading has also caught the public's imagination. Machine-aided neuroimaging (6) lie detection has shown up in popular television shows and films, including the show "Numb3rs," (7) and the action thriller Salt. (8) Most notably, the plot of the 2002 Tom Cruise movie Minority Report, (9) based on a short story of the same name by Philip K. Dick, (10) involved the government reading citizens' thoughts. In the movie, when criminal thoughts were detected, the government would react before the criminal act occurred. This "precrime" monitoring and enforcement, enabled by the precognitive "PreCogs," was made possible in the movie by the fictional assumption that technology would develop to a point where the government could reliably determine a person's criminal intentions. (11)
Future-oriented thinking about where brain science may lead us can make for great entertainment and can also be useful for forward-thinking policy development. But only to a point. Too much talk of 1984, Minority Report, Inception, (12) and the like can generate a legal and policy debate that becomes too untethered from scientific reality. Consider this opening line from a law review note published in 2012: "In George Orwell's novel Nineteen Eighty-Four, the Thought Police monitor the thoughts of citizens, trolling for any hint of forbidden viewpoints. In 2012, functional Magnetic Resonance Imaging ('fMRI') of the brain may accomplish similar ends." (13) As the third quotation in the epigraph suggests, such claims about the current mind reading powers of fMRI are mistaken, and similar claims about the future of fMRI (and related) techniques should be carefully scrutinized. (14)
Such claims are the seeds of a mental privacy panic. (15) The panic script typically unfolds in the following way. First, it is observed that we are now on the verge of powerful mind reading technologies. Second, it is suggested that the state will use these technologies in devious ways. Third, it is argued that citizens (especially those suspected of criminal acts) will be powerless to stop these practices because necessary legal protections are not in place. Thus, so the script concludes, quick and drastic action is required to prevent the government from reading our minds.
In this Article, I reconsider these concerns about the use of brain science to infer mental functioning. The primary message of this Article is straightforward: "Don't panic!" (16) Current constitutional protections are sufficiently nimble to allow for protection against involuntary government machine-aided neuroimaging mind reading. The chief challenge emerging from advances in brain science is not the insidious collection of brain data, but how brain data is (mis)used and (mis)interpreted in legal and policy settings by the government and private actors alike.
Reconsideration of neuroscience and mental privacy should start by acknowledging a basic fact about the social nature of the human race: We are all natural mind readers. (17) As recognized in the introduction to a comprehensive volume on brain imaging and mental privacy, our brains "are well equipped by natural selection to read other people's minds." (18) One of the influential theories in psychology to describe this natural mind reading capacity is called "Theory of Mind" (ToM). ToM can be defined as "the ability to observe behavior and then infer the unobservable mental state that is causing it." (19) To "infer the unobservable mental state" is mind reading, and unless we are living in isolation, we do this type of mind reading every day. In law, we even sometimes codify such mind reading, such as when jurors are assigned the task of determining the mens rea of a criminal defendant. (20)
If mind reading is a skill that every normally developing human acquires, and if humans do it all the time in the course of life, why would we ever see--as we did in 2009--a headline in Newsweek announcing with great fanfare that "Mind Reading Is Now Possible"? (21) The reason that Newsweek ran this headline is that mind reading techniques using fMRI are new and thought to be inherently better than everything else that has come before. In particular, such media coverage suggests that machine-aided neuroimaging mind reading will unearth the contents of our minds without our permission (and perhaps even without our knowledge). We ought to be cautious in making this presumption. Just as legal scholar Stephen Morse has called for "neuromodesty" in the context of brain science and criminal responsibility, (22) so too should we be modest in making claims about the power of brain science tools to read minds against our will.
The question, "can the government (or anyone else) read your mind against your will?" is not a useful way to phrase the problem, because the answer to this question is an obvious "yes." As just discussed, humans--including those working for the government--are natural mind readers. Many of us remember moments when our parents had only to look at us (and the chocolate on our hands) to know that we were lying about grabbing an extra chocolate chip cookie. These traditional methods of mind reading have been with us since our earliest history. (23)
It is useful at this point to clarify terminology. I use the term mind reading throughout the Article to capture the myriad of strategies humans use to attribute mental states to others. (24) I distinguish between non-machine-aided, machine-aided (but not neuroimaging), and machine-aided neuroimaging methods of mind reading. Examples of non-machine-aided mind reading are using facial expressions and body language to gauge intent. Government officials routinely use such techniques to make inferences about the mental states of individual citizens. An example of machine-aided (but not neuroimaging) mind reading is the use of a computer to administer a psychological evaluation to determine cognitive ability. Neuroimaging methods include the use of a neuroscience technology such as fMRI, EEG, or PET, among others.
Having clarified terminology, the question to ask is: "How, if at all, should the law differentially treat, in particular contexts, certain types of government-compelled and government-coerced machine-aided neuroimaging evidence?" Admittedly, this formulation of the question does not roll off the tongue. But that is to be expected, because the legal and policy questions related to involuntary machine-aided neuroimaging mind reading are not readily packaged into a catchy headline. The questions are multiple, murky, and often misunderstood. In this Article, I articulate a framework by which we might better navigate this complexity. The framework emphasizes the importance of placing new neuroscience techniques into proper historical and legal perspectives, and of recognizing the difficulties in making inferences about the mind from brain data.
The Article proceeds in five parts. Part I reviews the use of neuroscientific information in legal settings generally, discussing both the recent rise of neurolaw as well as an often overlooked history of brain science and law that stretches back decades. Part II evaluates concerns about mental privacy and argues for a two-by-three typology that distinguishes between the inferences to be drawn from the data and the methods by which the data is collected. Part III assesses current neuroscience techniques for lie detection and mind reading. Part IV then evaluates the relevant legal protections available in the criminal justice system. I argue that the weight of scholarly opinion is correct: The Fourth Amendment and Fifth Amendment likely both provide protections against involuntary use of machine-aided neuroimaging mind reading evidence. Part V explores other possible machine-aided neuroimaging mind reading contexts where these protections might not apply in the same way. The Article then briefly concludes.
I. GOVERNMENT USE OF NEUROSCIENCE IN LAW
Before turning to the specific question of machine-aided neuroimaging mind reading, it is useful to consider how the government is already using neuroscience in law. (25) After a brief review of the emerging field of neurolaw, this Part discusses two illustrative instances in which the government might require a citizen to undergo a neuroimaging test or might carry out such a test as part of its legal machinery: (1) the use of electroencephalography (EEG) in diagnosing epilepsy for purposes of social security disability benefits; and (2) the use of neuroimaging methods in competency exams initiated by the judge or prosecution. I argue that both cases can help us understand the likely path forward for neuroimaging mind reading evidence. Such mind reading evidence will not be dispositive, but may be relevant as an additional piece of information from which to arrive at a legal conclusion.
A. The Use of Neuroscientific Evidence in Law
Neuroscience is being integrated into U.S. law and policy in a variety of ways. Neuroscientific evidence is increasingly (if still rarely) seen in courtrooms; (26) legislatures are using neuroscience to craft public policy; (27) scholarship at the intersection of law and neuroscience is increasing; (28) more law students are being exposed to neurolaw; (29) the first "Law and Neuroscience" coursebook is being published; (30) thousands of judges and lawyers have been exposed to neuroscience through conferences and continuing legal education programs; (31) and multiple websites make neurolaw news available to the interested public. (32)
Moreover, this area of research has seen investments from foundations and government agencies. The John D. and Catherine T. MacArthur Foundation invested $10 million in 2007 to start a Law and Neuroscience Project, and in 2011 the Foundation renewed its commitment with a $4.85 million grant to sustain the Research Network on Law and Neuroscience. (33) These institutional commitments not only foster dialogue and research, but also send a strong signal that this is a field of great possibility.
Though some have predicted that neuroscience will fundamentally change the law, (34) there has been pushback to this claim. (35) The field has debated criminal responsibility; (36) free will; (37) neuroethics; (38) and many areas beyond criminal law. (39)
Structural brain imaging is a standard part of a psychiatric or neuropsychiatric assessment of an individual known to have experienced a traumatic brain injury (TBI). (40) Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) technology have been used in a variety of criminal and civil cases. (41)
In a paper assessing the state of neurolaw, legal scholar Adam Kolber (noting the routine use of structural brain scans in brain injury cases) asks: "Has a revolution already occurred?" (42) The answer is both yes and no. On one hand, much of the law remains untouched by neuroscience, and certainly no body of legal doctrine has been upended by neuroscience research. But on the other hand, the "technological neurolaw revolution" Kolber writes of (43) has already touched law in a number of ways. Consider the following ways in which neuroscience and law now intersect:
* Brain data routinely is used to show personality change after head trauma. (44)
* The electrical brain measurements recorded with EEG appeared in court cases as early as the 1950s and are used regularly in a variety of civil proceedings. (45)
* Structural brain scans such as computed tomography (CT) scans were first used in the 1970s and are now used in many types of litigation. (46)
* Brain scans have been used in the determination of competency to stand trial. (47)
* Brain scans have been introduced to mitigate sentencing where there is evidence of brain trauma or mental trauma. (48)
* Brain scans have been used in the criminal defense of cases involving sexual offense. (49)
* In social security disability law, the proffered medical documentation to support a finding of an organic mental disorder (a "[p]sychological or behavioral abnormalit[y] associated with a dysfunction of the brain" (50)) can include neuroscientific evidence such as EEG and MRI. (51)
* The results of MRI and EEG tests are sometimes included in a claimant's efforts to receive benefits for epilepsy. (52)
* Brain data has been introduced in support of a contractual incapacity argument. (53)
* Brain evidence has been proffered to support insanity defense claims. (54)
Although many of these uses arise in criminal defense, there are instances in which the prosecution uses brain evidence as well. (55) In addition, neuroscience may well play an increased role in assessing pain, suffering, and damages in civil litigation. (56) This influx of brain data has had, at least in some instances, a material effect on case outcomes. (57)
B. The Overlooked History of Neurolaw: The Case of EEG and Epilepsy
Although there are many ways in which "law and neuroscience" is indeed a new legal phenomenon, there is a longer history to neurolaw than most contemporary commentators typically recognize. This history can be instructive. (58) Here I review one part of this history: the government's requirement (now abandoned) that in order to receive federal social security disability benefits, a claimant submit at least one abnormal EEG test. This case is particularly helpful for illustrating the limitations of brain data as evidence for mental phenomena.
EEG was first reported in 1929. (59) It is a method in which electrodes are placed on the subject's scalp and electrical activity is recorded. (60) In the 1930s, researchers were beginning to use EEG in their diagnosis of epilepsy, (61) "a brain disorder in which a person has repeated seizures (convulsions) over time," where these "[s]eizures are episodes of disturbed brain activity that cause changes in attention or behavior." (62) By mid-century, EEG was appearing regularly in court proceedings involving epilepsy. (63) Not surprisingly, commentators at the time were already expressing concerns about overreliance on the test. (64) Moreover, some courts were already (and incorrectly) using EEG to supposedly determine "criminal tendencies." (65)
Some medical professionals exhorted lawyers to become familiar with EEG. "The lawyer interested in [epilepsy] must know some principles of [EEG]--both in understanding and evaluating epilepsy and because of its frequent use as a tool in court cases." (66) At the same time, these professionals also recommended caution because "[t]he EEG has been vastly misused, and is likely to be more misused in the future. It is not a magical tool, and does not give magical answers (medicine does not yet have an IBM machine to answer its problems)." (67) This tension, voiced in the 1950s, should sound familiar. It is the same basic tension reemerging today when we ask of mind reading (and other) neuroimaging technology: What can it reliably tell us? Do we learn anything from these new methods that we cannot already discover without them?
As EEG diagnoses of epilepsy developed, they were eventually subsumed into statute by the Social Security Disability Amendments of 1980, (68) one of the purposes of which was to allow the Secretary of Health and Human Services broader authority in creating regulations, especially in the area of performance standards for disability. (69) Regulations released the same year required EEG evidence for a claim of disability as a result of epilepsy. (70) The epilepsy requirement read, in relevant part: "Epilepsy--major motor seizures, (grand mal or psychomotor), documented by EEG and by detailed description of a typical seizure pattern, including all associated phenomena; occurring more frequently than once a month, in spite of at least 3 months of prescribed treatment." (71) In the early 1980s, the first court decisions discussing EEGs, epilepsy, and social security disability appeared. (72) Over time, questions began to emerge about the relationship between the EEG brain measure and the inferences made about the existence of epilepsy. What result if the EEG is normal, but other types of evidence suggest an abnormality? One administrative law judge was reprimanded for putting too much emphasis on EEG as a diagnostic measure, while "disregarding the overwhelming weight of evidence of an actual disabling condition." (73) The court found that under the epilepsy regulation, "[a] claimant can be deemed disabled either by meeting the standard or by proving a disability which is equivalent to one described in the standard." (74) Thus, direct observations of behavior trumped the inferential chain set in motion by the EEG measures.
By 2000, concerns about the reliability of EEG for diagnosing epilepsy led to a change in the law. (75) The final rules were published in 2002 after notice and comment. (76) The relevant portion states:
In the neurological body system listings for adults and children, 11.00 and 111.00, we made a number of changes to reflect current medical terminology (convulsive and nonconvulsive epilepsy), and to modify the documentation requirement for an electroencephalogram (EEG). With the exception of nonconvulsive epilepsy in children, we will no longer require that an EEG be part of the documentation needed to support the presence of epilepsy. An EEG is a definitive diagnostic tool in cases of nonconvulsive epilepsy in children, but it is rare for an EEG to confirm epilepsy in its other forms for either adults or children. (77)
Case law now reflects this new rule. Just because the EEG evidence is negative, it does not follow that an administrative law judge can dismiss a disability claim for epilepsy. (78)
The history of EEG and epilepsy is an example of the government using regulations to require a citizen to provide neuroimaging evidence for the purpose of allowing the government to make an inference about that citizen's mind. This history teaches us that neuroscience may at times prove to be a useful addition to the court's collection of evidence. But if over time neuroscience proves not to be useful, law may adjust by declining to require such evidence. The bottom line for law is the added value (or lack thereof) of brain data to the legal enterprise.
II. THE MENTAL PRIVACY PANIC
"So far, the government is not able to enter and rummage through a person's mind for 'guilty knowledge'--although that possibility may be on the horizon."
--Judge W. William Leaphart, Supreme Court of Montana (79)
Part I established that brain science now appears in a variety of legal contexts, with some version of brain evidence in courts for over a half century. With this foundation in place, Part II now examines the emergence of neuroimaging mind reading. This Part critically examines the "mental privacy panic," and proposes a two-by-three typology that distinguishes between the inferences to be made from brain data and the methods by which that data is collected.
A. Seeds of a Mental Privacy Panic
"[O]ne might humbly venture a preliminary diagnosis of the pop brain hacks' chronic intellectual error. It is that they misleadingly assume we always know how to interpret ... 'hidden' information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind."
--Steven Poole (80)
Steven Poole's quotation correctly suggests that "neuro" is a label being placed on just about everything, from neuromarketing to neurolaw, and often without sufficient critical thought. (81) In published pieces in law reviews, authors have suggested that "[t]he government can read our minds"; (82) that "[t]he awesome power an irresponsible government might wield with an unhindered ability to use brain-imaging technology must be addressed, whether the technology is ready or not"; (83) and that "Orwell may have missed the mark by a few decades, but the technology that he feared would lead to unbreakable totalitarian society is now visible on the horizon." (84)
Activists are voicing concern not just in law review articles, but in the public sphere. Jay Stanley, a Senior Policy Analyst at the American Civil Liberties Union, warns:
Nonconsensual mind reading is not something we should ever engage in.... We view techniques for peering inside the human mind as a violation of the 4th and 5th Amendments, as well as a fundamental affront to human dignity.... [W]e must not let our civilization's privacy principles degrade so far that attempting to peer inside a person's own head against their will ever becomes regarded as acceptable. (85)
There are even groups, such as Christians Against Mental Slavery, organized to protest mind reading. (86)
What are the origins of such concern? Although there are likely many factors, a contributor certainly must be the depiction of this technology in the media. For instance, the following headline ran in July 2012: "The Mind-Reading Machine: Veritas Scientific is developing an EEG helmet that may invade the privacy of the mind." (87) In the article, the CEO of the company is quoted as saying that "[t]he last realm of privacy is your mind. This [tool] will invade that." (88) He goes on to observe that "it's a potential tool for evil.... If only the government has this device, it would be extremely dangerous." (89) Similarly, a professor of biomedical engineering says in the article that "[o]nce you test brain signals, you've moved a little closer to Big Brother in your head." (90) Citizens who read such articles are likely to be concerned about government mind reading.
Citizens also see these types of stories on prime-time television. In a 60 Minutes segment on fMRI-based mind reading that aired in 2009, the crew went to several neuroscience labs, including those of Marcel Just and John-Dylan Haynes. (91) In the segment, a 60 Minutes associate producer completed Just's fMRI tasks, in which she looked at ten different images while in the scanner. (92) Using the producer's brain data, and comparing it to brain data previously collected from other subjects, the computer algorithm was 100% successful in determining the category of image at which the producer was looking. (93) The segment also showed that Haynes has a program that can accurately predict, based on brain-activation patterns, whether a subject had decided, in his or her head, to add or subtract numbers shown to them in the scanner. (94) Viewers of the program also learn that the bioethicist Dr. Paul Root Wolpe tells his students that "there is no science fiction anymore. All the science fiction I read in high school, we're doing." (95) Toward the end of the segment, two very telling exchanges occur. The first is between CBS correspondent Lesley Stahl and the ethicist Dr. Wolpe:
[Stahl:] Can you[,] through our legal system[,] be forced to take one of these tests? ...
[Wolpe:] It's a great question. And the legal system hasn't decided on this yet....
[Stahl:] But we do have a Fifth Amendment. We don't have to incriminate ourselves....
[Wolpe:] Well here's where it gets very interesting, because the Fifth Amendment only prevents the courts from forcing us to testify against ourselves. But you can force me to give DNA or a hair sample or blood.... So here's the million dollar question: if you can brain image me and get information directly from my brain, is that testimony? Or is that like DNA, blood, semen, and other things you could take from me? ... There will be a Supreme Court case about this.... (96)
This is followed later in the program by an exchange between Stahl and Dr. Just:
[Stahl:] Do you think one day, who knows how far in the future, there'll be a machine that'll be able to read very complex thought like 'I hate so-and-so' or ... 'I love the ballet because ...'....
[Just:] Definitely. Definitely.... And not in 20 years. I think in three, five years. (97)
With predictions such as this--that a Supreme Court case is inevitable, and that in five years we will be able to reveal thoughts about who one hates--it is no wonder the public is getting concerned. (98) The remainder of this Part suggests that these concerns should be tempered.
B. Mind, Brain, and Behavior
To speak constructively about mind reading via neuroimaging, a working definition of the mind is required, as is a working assumption about the mind's relationship to the brain. To start: Is brain reading the same as mind reading?
For some, the mind reduces to the brain. (99) And, as legal scholars and philosophers Michael Pardo and Dennis Patterson have pointed out, "Once this reduction takes place, there is nothing about the mind left to explain or understand." (100) If the mind equals the brain, then brain reading is mind reading. But the answer is not so simple. (101)
The relationship between mind and brain, which is known in philosophical circles as the "mind-body" problem, (102) can be understood in many ways. Two common positions are "dualism" and "materialism." Dualism, which finds its roots in the writing of Rene Descartes, holds that the mind is non-material (while the brain is material). (103) Materialism, in contrast, holds that there is nothing beyond the physical material of our brains. (104) Our minds are our brains, and nothing more.
This is, of course, a vast oversimplification of the mind-brain debate, and there are a multitude of middle and tangential positions that one can reasonably take. (105) However, this dualism-materialism dichotomy is sufficient to illustrate that how one defines the mind vis-a-vis the brain has implications for assessing neuroimaging mind reading.
Here, I adopt as my working definition of "mind" the computational theory of mind (CToM). CToM is not universally accepted, but it does have widespread support. The theory, as described by psychologist Steven Pinker, is that "the mind is not the brain but what the brain does, and not even everything it does, such as metabolizing fat and giving off heat." (106) CToM "says that beliefs and desires are information, incarnated as configurations of symbols. The symbols are the physical states of bits of matter, like chips in a computer or neurons in the brain." (107) Neuroscientist Read Montague similarly describes CToM this way: "Your mind is not equal to your brain and the interaction of its parts, but your mind is equivalent to the information processing, the computations, supported by your brain." (108)
C. Mind Reading Typology
In addition to adopting a working definition of the mind-brain relationship, a distinction needs to be made between (1) the inferences to be drawn from brain data, and (2) the method of collecting the brain data. This distinction generates the two-by-three typology presented in Table 1.
The six categories generated by the typology are: (1) non-machine-aided mind reading; (2) non-machine-aided brain reading; (3) machine-aided mind reading; (4) machine-aided brain reading; (5) machine-aided mind reading with neuroimaging; and (6) machine-aided brain reading with neuroimaging.
The first category, non-machine-aided mind reading, consists of observing an individual's behavior and then inferring from that behavior the individual's mental state. This is the mind reading strategy most commonly used in everyday life.
The second category, non-machine-aided brain reading, consists of looking at an individual's brain to see what it looks like, but not drawing an inference about the individual's mental functioning. Since modern brain investigation almost always involves some machinery, this category is of relatively little practical importance.
The third category, machine-aided mind reading, includes the use of non-neuroimaging technology to improve mind reading. This category is now extensive, as non-neuroimaging technology is so much a part of modern life. For instance, this category includes a neuropsychologist administering a computer-based battery of questions to assess cognitive function, a polygrapher administering a polygraph during a police investigation, and an investigator who uses digital technology to study deception through eye movements. (109)
The fourth category, machine-aided brain reading, describes the use of machine technologies, such as the microscope, that allow for improved assessment of brain tissue. For instance, autopsies of some former NFL football players have used new techniques to uncover the presence of the degenerative brain disease Chronic Traumatic Encephalopathy (CTE). (110) Many autopsies fit into this category, for instance when the goal of the coroner is to draw an inference about how a bullet entered the brain, and how a bullet might have disrupted critical functions. The coroner is not aiming to make an inference about what the individual was feeling when shot, whether the individual still holds grudges, or how the individual's memory is functioning in the morgue.
The fifth category, machine-aided neuroimaging mind reading, is the use of fMRI, EEG, and related technologies to probe mental functioning. Part III discusses the use of these technologies for lie and memory detection, and in assessing mental capacity, mental health, and the like.
The sixth category, machine-aided neuroimaging brain reading, is the use of machine-generated data (for example, an MRI scan) to assess the brain for some purpose other than assessing the mind. An example of this category is using a CT or MRI scan to determine the presence of a tumor.
This typology helps us to see that the mental privacy panic is about only a subset of neuroimaging investigations (that is, those investigations aimed at drawing an inference about mental function), and the panic concerns only a subset of mind reading (namely, mind reading that utilizes new neuroimaging technologies). To emphasize this point, Category Five, which will be the focus of the rest of the Article, is shaded in gray.
The techniques in Category Five all require inference of mental functioning from neuroimaging data. At least one commentator has therefore argued that lie detection with neuroimaging "is best conceived of as a sense-enhancement of the observer, not as a 'mind reader' because it does not read thoughts, but merely manifestations of thoughts, which are recorded as electrical waves or oxygenated blood patterns." (111) To be sure, there are many inferential steps between the mental event of lying and the measurement of blood flow, (112) but just because mind reading is inferential does not mean it is not mind reading. Rather, it means that mind reading will only be as good as the inferential connections.
An emphasis on inference allows us to reevaluate the argument that "[i]f we view our minds as our 'selves' and our brains as enabling our minds, then technologies capable of uncovering cognitive information from the brain threaten to violate our sense of privacy in a new and profound way." (113) This argument may be true, but only if we know something very specific about the precise way our minds are enabled by our brains.
If we do not, then the inferential chain between brain activity and mental activity may be broken and our mind's privacy might be left intact (even if our brain's privacy is not). Perhaps fMRI is less akin to mind reading and akin instead "to trying to understand how an engine works by measuring the temperature of the exhaust manifold." (114) Maybe EEG is "like blind men trying to understand the workings of a factory by listening outside its walls."(115) Reasonable people, including reasonable experts, can--and do--disagree about how much a particular test tells us about a particular mental faculty and its relationship to a behavioral outcome. (116) Brain reading can tell us something meaningful about the mind, just as other non-brain data can. But brain data produced by advanced machinery is not inherently better or worse (for legal purposes) than data gathered by more traditional means.
The current limitations are reflected in neuropsychology and forensic psychiatry practices, where the mind and the brain are typically assessed without the use of neuroimaging tools. Neuropsychology, for example, "attempts to explain the way in which the activity of the brain is expressed in observable behavior." (117) Yet, it is introductory textbook material in neuropsychology to recognize that these attempts at explanation typically rely on chains of inference and not on actual brain monitoring. (118)
Similarly, defining and detecting mental disorders continues to be based on behavioral, not brain, observation. (119) As neuroscientist Steven Hyman observes:
The term "mental disorders" is an unfortunate anachronism, one retained from a time when these disorders were not universally understood to reflect abnormalities of brain structure, connectivity or function. Although the central role of the brain in these disorders is no longer in doubt, the identification of the precise neural abnormalities that underlie the different mental disorders has stubbornly defied investigative efforts. (120)
To be sure, there are indications that this may change. In 2012, Neuroimaging in Forensic Psychiatry was published, and the editor of the volume, psychiatrist Joseph Simpson, observed that although there are many cautions and concerns to be addressed, "neuroimaging holds great potential for the mental health field ... [and] also holds significant potential value in the legal domain." (121) Moreover, neuroimaging is sometimes used in assessing dementia, (122) psychopathy, (123) schizophrenia, (124) and depression, (125) among others. At present, however, there remains "[a] gaping disconnect ... between the brilliant discoveries informing genetics and neuroscience and their almost complete failure to elucidate the causes (and guide the treatment) of mental illness." (126)
These illustrations are instructive. They show that even in the fields of neuropsychology and psychiatry--which are both dedicated to studying the "mind"--one does not necessarily need assessment with neuroimaging. They also remind us that the substantive value of mind reading with neuroimaging lies in what it can tell us beyond what we can already learn from existing methods.
Title Annotation: Introduction through II. The Mental Privacy Panic, pp. 653-679; Privacy, Security, and Human Dignity in the Digital Age
Author: Francis X. Shen
Publication: Harvard Journal of Law & Public Policy
Date: March 22, 2013