
How Competency Examiners Should (and Often Don't) Assess for Malingering and Poor Effort

"There may be great fraud in this matter... (the judge) may do well to
inquire... whether it (incompetence) be real or counterfeit." (Hale,
1736) (1)


THE POSSIBILITY OF FAKING during legal proceedings has been recognized since ancient times. While forensic psychologists were among the first mental health professionals to investigate malingering, the sister discipline of neuropsychology has lately been much more active and has produced hundreds of publications in the past 20 years. One influential study concluded that whenever situations provide incentives for faking, roughly 40% of examinees will fake or present with poor effort to the extent that their presentation during the evaluation is not a reliable guide to their actual abilities. (2)

A recent survey of examiners across the US estimated that 24% of defendants referred for competency assessments were feigning, and a further 10% were not presenting validly in other ways. (3) Feigning is a general term that means "faking bad" without specifying a motive. Malingering is the intentional production or gross exaggeration of symptoms for a tangible benefit. There are a number of other conditions that also imply invalid responding: Factitious disorder is a condition in which a person intentionally fakes a disorder for the purpose of gaining attention and special treatment from treatment providers. It cannot be diagnosed if there are other significant benefits to the behavior, (4) as there almost always are in a criminal case or in jail. Somatoform disorders are conditions in which the patient complains of bodily dysfunction or pains that cannot be medically explained. It is believed that such reports are not deliberately inaccurate. It may simply be that some people are particularly sensitive to minor bodily sensations, over-interpret such experiences, or complain about them more than others do. Conversion disorders usually involve complaints of paralysis or cognitive dysfunction, such as amnesia, that cannot be medically explained. Such patients were also referred to as displaying hysterical paralysis or blindness. Pioneers such as Charcot and Freud interpreted their behavior as unconsciously determined. They noted that such patients often seemed oddly unconcerned about their sudden inability to, for example, use their left arm, and observed that these symptoms often functioned to excuse the patient from distasteful social obligations. This sounds a lot like malingering, and this is how such behavior was interpreted prior to the age of psychoanalysis. Charcot himself wrote, "Malingering is to be found in every phase of hysteria." (5) Recent authors question the existence of unconscious motivation in such presentations. (6)

While many CST examinees appear to malinger, lack of full cooperation, without a clear motive and deliberate intent to perform badly, is also a major concern. Many of the tests and procedures psychologists use assume full engagement and effort on the part of the test-taker. It is no more difficult to low-ball an IQ test than to do fewer push-ups than one's maximum. There is increasing evidence that assuming a test-taker will perform to the best of their ability is naive and unfounded: Poor effort has been found in groups of subjects, such as college volunteers (7) and children tested in school, (8) that were not thought to be at risk for underperformance. Such examinees have no clear motivation to perform badly, but neither are they especially motivated to do their best.

All of the above response styles potentially spoil the assessment. I refer to them by the broad term negative response bias, which makes no assumption about the motivation for the behavior. The crucial issue is that, given any evidence of less than full cooperation and honesty, one cannot put much weight on the defendant's presentation during the evaluation. Collateral sources will be required to validly assess the defendant's actual cognitive, psychiatric, and functional status.

As a court-appointed expert, I often encountered defense-obtained evaluations that provided second opinions on defendants I opined had feigned. It was not uncommon for the defense examiners to ignore the data from my investigation and attempt to approach the defendant "with a clean slate." Another examiner would routinely testify that he "saw no evidence of malingering" in cases in which a previous examiner had concluded this was the case, offering no facts or observations in support of his opinion. Prosecutors could reasonably conclude that such examiners simply write down whatever the defendant tells them and testify as if this were a meaningful assessment.

Although it may be hard to tell from such reports, there are professional standards that do guide such practices. Unfortunately, the two most prominent guidelines, the American Psychological Association Code of Ethics (APA, 2002/2010/2016) (9) and the American Academy of Psychiatry and the Law's practice guideline (2007), (10) do not provide strong recommendations on assessment of feigning. The Specialty Guidelines for Forensic Psychologists (11) contain firmer language, although recommendations about the need to assess for feigning are merely implicit.

Several neuropsychology professional societies have issued position statements declaring that assessment of an examinee's effort on cognitive testing (which includes assessment of intelligence) is medically necessary in ALL such assessments, not merely those conducted for psycholegal purposes. (12) These statements followed accumulating evidence that an examinee's motivation and effort during testing have a much larger effect on the test scores obtained than brain damage does. (13) As it turns out, mild brain injuries, by far the most common, have no significant effects on cognition three months after injury. (14) A mild traumatic brain injury (mTBI) is one that results in less than 30 minutes of unconsciousness, with no abnormality on CT or MRI brain scans and no complication in the recovery (such as bleeding into the brain). (15)

Some highly influential and well-known authors have provided very clear directives to assess feigning in forensic exams in general and competency to stand trial (CST) exams in particular. In their classic text Psychological Evaluations for the Court, Melton et al. (1997, p. 54) wrote, "Given the significant potential for deception and implications for the validity of their findings, mental health professional should develop a low threshold for suspecting deceptive responding." (16) In the Oxford Best Practices series book on assessing CST, the authors state, "Malingering must always be considered by any evaluator working within the forensic context" (p. 124). (17) Thomas Grisso, in his 1988 book on competency assessment, wrote: "Malingering must be considered whenever a pre-trial competency evaluation produces signs of psychotic or organic disorders, mental retardation, deficits in competency abilities, or special states like amnesia" (p. 35). (18) This statement remains in force for defendants who have a legitimate mental condition, because even examinees with schizophrenia, (19) serious head injury, (20) and intellectual disability (21) can exaggerate their disabilities. In fact, they are best-situated to do so: Defendants with no such history cannot support their claims and will usually lack knowledge of how to credibly portray the condition.

As demonstrated above, there is explicit endorsement from authoritative authors regarding the need to assess for possible feigning or poor cooperation in CST exams. Statistical surveys also support a high index of suspicion. A recent meta-analysis of 59 studies reported that an average of 27.5% of defendants referred for competency examination were found incompetent. (22) This can be compared with the proportion of defendants estimated to be feigning (24.1%) or uncooperative (8.3%) in a recent survey of competency to stand trial (CST) examiners. (23) From these numbers (a combined 32.4% presenting invalidly versus 27.5% found incompetent), it is apparent that a defendant who presents as impaired is about as likely to be feigning or uncooperative as to be legitimately incompetent. For this reason, I argue that validity assessment is the primary diagnostic task in CST assessments, and a primary competency of CST examiners.

Technically, Grisso's recommendation above to assess every defendant who presents with a mental or psychiatric impairment for feigning is overly inclusive: If a defendant presents with evidence of a mental disorder but without deficits in competence to stand trial, there may be no need to assess for feigning. For example, if a defendant presents as rational but reports hearing voices only at night--who cares? Court hearings are during the day. Mental illness or deficit should very rarely be equated with incompetency: There are only a few diagnoses that strongly imply impairment to the point of incompetence, such as delirium, severe dementia, or intellectual disability. A diagnosis of moderate intellectual disability, if accurate, suggests probable incompetency, while schizophrenia and bipolar disorder do not: About 50% of defendants with such diagnoses are found competent. (24)

For intellectually disabled (ID) defendants, prior IQ scores can provide guidance regarding CST status: ID defendants found competent have an average IQ of 63.7 across studies, whereas those found incompetent have an average IQ of 56.9. Scores of 65 and above suggest competence, absent other issues, while valid IQ scores below 60 increasingly suggest incompetence. As IQ scores dip to 55 and below, there is a low likelihood of competence or the capacity of being educated into competence. (25)

Feigning can take many forms, some of which have not been previously emphasized in the professional literature. (26) These are shown in Table 1.

These presentations can occur in multiple combinations. Unsophisticated defendants often fake multiple issues and conditions, including psychosis, amnesia for the crime, intellectual limitations, and ignorance of the court system. More sophisticated malingerers will often portray a more specific condition, such as dementia or severe depression. One such defendant passed two validity tests and had a credible treatment history of depression, but was shown to have defended himself in another legal matter during the time he was allegedly incompetent.

There is a myth among less sophisticated examiners that malingerers are a pretty dull lot and easy to catch. This may be true of the feigners they have caught, but those may be a small fraction of the feigners they have encountered. As in most endeavors, it is a mistake to underestimate one's opponent.

MEANS OF ASSESSING NEGATIVE RESPONSE BIAS

Some examinees will give such dramatic or implausible answers in the interview that one should immediately question their motivation. For example, a few defendants will claim not to know their age, birthdate, the colors in the American flag, or the role of their lawyer. They may report hearing voices all the time and that they have done so all their lives. Such answers are red flags, absent a compelling explanation (e.g., a defendant is an immigrant from a country in which birth records were lost). However, most examinees will be more subtle. There are a few behavioral clues to feigning that have been supported in multiple studies, as shown in Table 2.

VALIDITY TESTING

Validity testing refers to instruments and procedures designed to assess whether the examinee is presenting in a reliable, valid manner. There are two basic types of validity tests: Those that rely on the examinee's answers when asked about symptoms and problems, and those that rely on the examinee's performance on motor, cognitive, or knowledge tasks.

SYMPTOM REPORT TESTS

Many readers may already be familiar with the Minnesota Multiphasic Personality Inventory-2 (MMPI-2), (27) which is a 567-item true-false questionnaire about psychiatric symptoms. There is also a newer version, the MMPI-2-Restructured Form (MMPI-2-RF), (28) which is over 200 items shorter and contains other changes from the prior version. Both MMPI-2 editions are bristling with response style scales that detect inconsistent responding, over-reporting, exaggeration, and defensiveness.

On both tests, the first order of business is to determine if the examinee responded consistently and meaningfully. This is assessed through the consistency scales. Most scores on the MMPI and other similar tests are expressed as T scores, which have an average of 50 and a standard deviation of 10 in the general population. A score of 70 is quite high, typically at about the 98th percentile, while 80 is above the 99th percentile. Scores above 80 on the consistency scales invalidate the rest of the test.
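
For readers who want to see where those percentile figures come from, the conversion is simply the normal curve: a T score of 70 is two standard deviations above the mean, and 80 is three. A minimal sketch in Python, assuming scores are approximately normally distributed (individual tests and scales may deviate somewhat):

```python
# A minimal sketch (illustration only): converting T scores to approximate
# percentiles, assuming a normal distribution with mean 50 and SD 10.
from statistics import NormalDist

t_scale = NormalDist(mu=50, sigma=10)

for t_score in (50, 70, 80):
    percentile = t_scale.cdf(t_score) * 100
    print(f"T = {t_score}: about the {percentile:.1f}th percentile")

# T = 50: about the 50.0th percentile
# T = 70: about the 97.7th percentile  (the "about 98th" noted above)
# T = 80: about the 99.9th percentile  (above the 99th)
```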

On the MMPI-2, the primary "fake bad" scales of interest are the Infrequency scale, often labeled simply "F," and the Infrequency Psychopathology scale. The F scale is composed of items that are rarely answered in the scored direction by people without psychiatric problems. They include reports of odd beliefs, behaviors, and experiences. Very high scores (T score > 120) invalidate the rest of the test. Because psychiatric patients tend to endorse more of these items than "normals," another scale was subsequently developed to better distinguish true from exaggerated psychiatric symptoms. This is the Infrequency Psychopathology scale, often denoted "Fp." It is not much elevated by any known mental illness, and scores > 100 are strong evidence of feigning or exaggeration. The MMPI-2 has a half dozen more response style scales, although they are not all scored by the primary vendor.

On the MMPI-2-RF, there are five scales devoted to over-reporting in three distinct domains: Psychiatric symptoms, bodily and neurological complaints (e.g., pains, feelings of weakness, dizziness, blackouts), and cognitive complaints (reports of poor concentration and memory; see Table 3). The Infrequency and Infrequency Psychopathology scales were refined and carried over to the MMPI-2-RF, and are distinguished from their MMPI-2 counterparts by appending "-r" to their labels (e.g., F-r). All the scales in Table 3 are scored by the official vendor, so the MMPI-2-RF assesses a broader range of exaggerated presentations.

Lawyers should be aware that all these scores exist and may have been considered by the examiner, even if they do not appear in the written report. Psychologists may be reluctant to include them for various reasons, and even if contacted by an attorney, may decline to release them without a release from the examinee. They may be more agreeable to releasing them to a psychologist designated by the prosecutor. This can lead to discovery of scores that were not interpreted in the standard manner or were over-interpreted. For example, although cutoff scores are given for validity scales to indicate probable exaggeration, they are quite conservative. Suppose an examinee obtained high scores on multiple validity scales, but none quite exceeded the cutoff score? While a conscientious examiner might be cautious in describing the meaning of these data, a conclusion of "no evidence of feigning or exaggeration" would not be accurate.

A competitor of the MMPI-2/RF is the Personality Assessment Inventory (PAI), (29) which is 344 items long; its items are answered on a four-point scale, from False, not at all true to Very True. The PAI also has three strong validity indices, shown in Table 4.

The Negative Distortion Scale is new and has been shown to be superior to the two more established indices in three recent studies, (30) but is not scored by the publisher/vendor.

A final self-report validity test is the Structured Inventory of Malingered Symptomatology (SIMS), (31) a 75-item self-report inventory. Although billed and researched primarily as a screening test, very high scores (e.g., > 40) can nonetheless serve as evidence of feigning, both regarding traditional mental illness symptoms and cognitive issues such as memory complaints.

All these inventories are less frequently used by examiners outside of state hospitals, as the examinee must be supervised and the MMPI and PAI require from 45 minutes to over two hours to complete. Self-report tests should NEVER be given to the examinee to take home or to complete without supervision. An examiner who does so violates ethical proscriptions regarding test use and maintaining test security. (32)

Examiners who do not work in a hospital setting will usually employ one of several structured interviews. These resemble tests like the MMPI-2, but the items are read to the examinee and the examiner records and scores each response; some observations are also recorded and scored. The Structured Interview of Reported Symptoms (SIRS) (33) was introduced in 1992 and quickly became identified as the gold standard of malingering measures after initial, very promising results in forensic samples. Using standardized scoring and interpretive rules, it is able to identify about half of feigners with a fairly low false positive rate (about 5%). (34) It was recently updated and revised (35) after findings that it was prone to false positive errors in some samples, such as examinees with intellectual disabilities or dissociative disorders. Dissociative disorders are conditions that lack the normal continuity of memory and experience, as reported in patients with multiple personality disorder. New interpretive rules and categories were added, which did reduce false positives in problematic groups, but also significantly reduced sensitivity--the ability to successfully detect feigning. (36) Combined with some other problems, (37) the SIRS-2 has not achieved the gold standard status claimed by and often granted its predecessor. Still, it provides solid evidence of feigning and is the recommended instrument for intellectually disabled and dissociative patients suspected of feigning or exaggerating psychiatric symptoms. (38)

The Miller Forensic Assessment of Symptoms Test (M-FAST) (39) is marketed as a screening test, quite possibly to avoid direct competition with the SIRS, with which it shares a publisher. As a screening test, its role would be to identify possible feigners for further evaluation. However, several authors have pointed out that by simply using a higher cutoff score (e.g., > 11), the M-FAST can provide substantial evidence of over-reporting/exaggeration. (40) It is roughly one seventh the length of the SIRS/SIRS-2, is much quicker to score (1 minute vs. 20), and thus offers a huge advantage in terms of time efficiency. This is an important consideration, as CST exams are often poorly compensated.

PERFORMANCE VALIDITY TESTS (PVTS)

These tests require the examinee to "do" something, such as remember pictures or words, then provide answers that are objectively right or wrong. Memory testing is a common approach. There is a considerable range of tests available in this domain, so I will discuss the most common, and summarize others in Table 4.

One of the earliest, quickest, and most commonly used performance validity tests (PVT) is the Rey 15 Item Test. It takes about one minute and is presented as a memory task. It is actually quite easy, so most examinees can correctly recall at least 8 of the 15 items. However, very low functioning examinees, such as those with intellectual disabilities, severe head injuries, or dementia, may fail. (41) At the same time, because it is quite easy, some examinees may either perceive it as a validity test or pass it even though they exert little effort. The Rey has been cited as a test that could be unethically used as evidence of good effort by biased witnesses: (42) Because of its low sensitivity, passing it is not evidence that the person performed to the best of their ability.

The Test of Memory Malingering (TOMM) is probably the most widely used PVT in CST exams at present. (43) It consists of several booklets of line drawings, all of common objects. The examinee is shown the pictures and then tested on their memory of them. In the most-researched version of the test, the examinee is shown the pictures twice and tested for recall immediately after each presentation. The most common criterion is a score below 45 correct on Trial 2. Recent research suggests this is an overly conservative criterion for most examinees, and that a considerably higher cutoff score might strike a better balance of sensitivity and specificity. (44) As is, the TOMM is less sensitive (less likely to detect feigning) than several other modern PVTs, (45) and because it is so widely used, there is a risk that it has been compromised through internet articles and frequent exposure to defendants. Some repeat offenders may have seen their competency reports in which TOMM results were used to conclude they were faking, and are not likely to be fooled again. For these reasons, passing the TOMM is often not strong evidence of genuine responding.

Examinees with severe cognitive impairment may legitimately obtain scores below the TOMM cutoff scores. Recommended cutoff scores for examinees with intellectual disabilities have varied widely (from <35 to <45), (46) which is problematic. Unlike some recently developed PVTs, the TOMM does not have any internal validity checks to distinguish very low ability from poor effort: Either can produce failing scores, and they cannot be reliably distinguished in most cases.

However, it is possible to score so low on the TOMM, and many other such tests, that deficient ability alone can be ruled out. Many PVTs (although not the Rey 15 Item Test) require the test-taker to choose among two response options. For a 50-item test with two possible answers per question, even someone with no memory or mental capacity of any sort (other than to be able to point to their choice) should get approximately 25 correct just by guessing. Scores that are significantly below 25 suggest the person knew the correct answer and intentionally chose the wrong one. This is the strongest evidence of malingering that a test can provide. Unfortunately, few feigners will score below chance. (47)
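
To make "significantly below chance" concrete: the question is how likely a score that low would be if the examinee were purely guessing on every item, which is a simple binomial calculation. A minimal sketch in Python, using a hypothetical 50-item, two-choice test for illustration (actual instruments publish their own below-chance criteria):

```python
# A minimal sketch (hypothetical numbers): probability of getting k or fewer
# items correct on an n-item, two-choice test by pure guessing (p = .5 per item).
from math import comb

def prob_at_or_below(k: int, n: int = 50, p: float = 0.5) -> float:
    """Binomial probability of scoring k or fewer correct out of n by chance."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k + 1))

for score in (25, 20, 18, 15):
    print(f"{score}/50 correct: p = {prob_at_or_below(score):.4f}")

# 25/50: p = 0.5561  -- entirely consistent with guessing
# 20/50: p = 0.1013  -- low, but not yet compelling
# 18/50: p = 0.0325  -- unlikely from guessing alone
# 15/50: p = 0.0033  -- strong evidence the correct answers were recognized and avoided
```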

Most PVTs reach their limits with intellectually disabled defendants or those who are demented. Such test-takers may lack the mental capacity to complete even very easy cognitive tasks, and most PVTs cannot distinguish very poor ability from poor effort. However, several PVTs from the neuropsychological literature can in some cases. (48) These include the Word Memory Test, (49) the Medical Symptom Validity Test, (50) and the Nonverbal Medical Symptom Validity Test. (51) They work by comparing the examinee's performance on tasks that vary in difficulty--some that are very easy, and some that appear easy but are actually harder than they look. Examinees with very compromised abilities should score best on the easiest tasks and worse on the harder ones; malingerers often do not show this pattern.

THE INVENTORY OF LEGAL KNOWLEDGE (ILK) (52)

The ILK is a recently published validity test that attempts to assess whether the examinee is falsely portraying ignorance of the court system--a common strategy. Despite its recent arrival, it had already achieved widespread use by December 2012. (53) The ILK consists of 61 true-false questions about the court system, and it is reported to correlate substantially with other PVTs such as the TOMM. (54) However, the ILK is vulnerable to high false positive rates among intellectually disabled examinees (55) and those who are truly incompetent. (56) The manual reported that among a small sample of 17 incompetent defendants, 82% scored below the recommended cutoff score of 46. (57) Thus, the ILK alone cannot adequately distinguish between real and feigned incompetence unless the score is significantly below chance, which is a major limitation.

Tests have taken center stage in assessing negative response bias. However, their effectiveness relies largely on two factors: That they are not perceived to be malingering tests, and that their rationale and scoring rules are not known to the examinee. Psychologists are expected to list tests used in their assessment, and there is information available on the internet about validity tests. Further, attorneys may coach clients undergoing CST assessment about validity tests and how to respond to them. (58) Because of this, and their ethical obligation to preserve test security, psychologists should resist disclosure of test manuals to non-psychologists. Instead, disclosure to a psychologist retained by the defense attorney is preferable. If a court rules that test materials be turned over to the defense, an order that requires return of the materials at the end of the case, and forbids reproduction or distribution, should be sought.

COLLATERAL DATA

CST examiners usually have access to the police report and often, the defendant's criminal history. If the defendant has a psychiatric history, the examiner will want to review at least the most recent records. If the defendant presents as intellectually compromised, school records can be sought, although these are usually not retained by school districts after seven years.

The range of potential sources is very broad, and might include family members, treatment providers, jail staff, the arresting officer, probation or parole officers, and prior evaluations. If a defendant is in custody, it is often desirable to speak to jail security staff, as they observe defendants over many hours and occasions. In contrast, meetings with a nurse or physician at the jail may be brief, infrequent, and an opportunity for the defendant to falsely present a mental health issue. In US v. Gigante, (59) observations by a corrections officer and nurse were apparently more credible to the judge than the opinions of some very respected professionals.

Prior evaluations, particularly by government agencies like the Social Security Administration and Veterans Administration, may be given substantial credibility. Often, they should not be: The Social Security Administration has resisted the use of validity measures, usually does not pay for examiners to administer them, and has described their use as "not programmatically useful," (60) despite evidence of frequent feigning in their claimants. (61) Even administrative law judges have reported feeling pressured to approve claims. (62) The Veterans Administration system has been described as "uniquely pro-claimant," (63) and pressures discouraging validity assessment among disability claimants have been published, (64) despite high failure rates on validity tests and evidence of malingering. (65) While many VA disability examiners do use response style measures, Congress recently allocated $5.8 billion to private evaluation companies that rarely if ever address the possibility of malingering. Thus, representation or even proof that a defendant is considered disabled by the SSA or VA is not compelling proof of a disabling condition. Further, even legitimate inability to work should not be equated with incapacity to stand trial.

Most mental health treatment providers cannot be relied on to distinguish real from exaggerated presentations. This is simply not their role, and most lack adequate training or motivation to do so. In fact, diagnosing a patient as malingering (which is very rare) will likely bring the provider nothing but trouble, including possible complaints to the agency administration and to the state professional board, threats, and loss of income when the patient seeks future treatment elsewhere. One recent study found that 42.4% of mental health patients reported having agendas for their treatment beyond getting better, while only 9.5% informed their providers of these issues. (66) Finally, even if prior examiners or treatment providers addressed response style, the thoroughness and competence of this effort should be carefully considered and not assumed: Few mental health clinicians are competent in this area.

Mental Health and Veterans Courts have been created to deal with the special needs of these defendants. Because these settings may lead to more favorable treatment than a general criminal court, the possibility of feigning must be considered. While veterans with combat experience do appear to be at greater risk for subsequent legal problems, one should not assume this is a result of PTSD. While most veterans have a clean legal history, a substantial number report having gotten in trouble in school for fighting. Soldiers who seek or are selected for infantry and other combat roles may have a higher basal level of aggression than others even prior to any specialized training and combat. Further, episodes of violence may be triggered by use of alcohol, not PTSD-related symptoms, as is the case for many crimes.

CLAIMED AMNESIA

Defendants frequently claim amnesia for the offense. (67) Even legitimate amnesia is not an automatic bar to competence, (68) and often, amnesia is offered as an attempt to reduce responsibility. The most plausible cause for legitimate amnesia during a crime, based on sheer numbers, is heavy use of alcohol or alcohol combined with depressant or sleep-inducing drugs. Confusion following an epileptic seizure is also a plausible cause. (69) Alcohol use is frequently involved in crimes, (70) and it has been estimated that amnesia during a crime is roughly five million times more likely to be due to alcohol intoxication than to a sleep disorder. (71) While blackouts are usually associated with very high BACs (e.g., .30%), (72) some authors have reported them at BACs as low as .07%. (73) While one might assume that claims of alcohol-induced blackouts could be corroborated by observations of intoxication, some people, presumably chronic alcoholics, can reach very high BACs (e.g., .30%) without showing typical signs of intoxication. (74) Blackouts are not uncommon among alcoholics and even among samples of students. Most are "partial," in that some memories are encoded and recoverable, while those with a total lack of recall occur about one-third as often. (71)

Amnesia due to dissociation, anger, or other psychological processes is highly controversial, with some authors giving such claims credence, (76) and many others asserting that nearly all amnesia claims pertaining to crimes are feigned. (77) Although patients diagnosed with dissociative conditions, like multiple personality disorder, often claim to have no memory for events experienced in other personality states, experimental studies show normal levels of memory transfer, retention, and interference with similar material to be remembered. (78) One such defendant I evaluated claimed to have a dissociative disorder and no recollection of the offense. However, she signed papers in her own name, in her regular penmanship, at the time of her arrest, seemingly contradicting her claim.

Often collateral data, such as police reports and videos of the crime scene, can be important in disputing such claims. Validity tests, as described above, can also contribute. Finally, psychologists have developed a method to objectively assess for feigned amnesia of a crime. (79) Briefly, a two-alternative forced-choice knowledge test is created about the crime. Details of the crime should be culled from the police report and other sources. These should be details the defendant would have noticed but claims not to know. The response options should be equally plausible to a naive test-taker, such as, "What weapon did the robber use--a gun or a knife?" A test-taker who scores significantly below chance reveals knowledge of the crime. Conversely, because the procedure has modest sensitivity (typically less than 50%), (80) passing such a test does not rule out feigning.
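
Because these crime-knowledge tests are constructed case by case, their length varies, and the below-chance threshold has to be worked out for each one. A minimal sketch, using the same binomial logic as the earlier example and purely hypothetical item counts:

```python
# A minimal sketch (hypothetical item counts): for an n-item, two-choice
# crime-knowledge test, find the highest score that is still significantly
# below chance (cumulative guessing probability under alpha).
from math import comb

def below_chance_cutoff(n: int, alpha: float = 0.05, p: float = 0.5) -> int:
    """Largest k with P(score <= k) < alpha under pure guessing; -1 if none."""
    cumulative = 0.0
    cutoff = -1
    for k in range(n + 1):
        cumulative += comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
        if cumulative < alpha:
            cutoff = k
        else:
            break
    return cutoff

for n_items in (20, 30, 40):
    print(f"{n_items}-item test: {below_chance_cutoff(n_items)} or fewer correct "
          f"is unlikely (p < .05) from guessing alone")

# 20-item test: 5 or fewer correct
# 30-item test: 10 or fewer correct
# 40-item test: 14 or fewer correct
```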

BIAS

Bias in expert witnesses has long been recognized by legal professionals and, more recently, by investigators of forensic practice. (81) Murrie and colleagues (82) found that, across 60 clinicians who conducted a combined total of more than 7,000 CST evaluations, different examiners found widely differing proportions of their examinees incompetent: The figures ranged from 0 to 62%! Recently, over 100 psychologists and psychiatrists were randomly assigned and paid as consultants to score a measure of dangerousness. Even though they met only 15 minutes with the presumed referring attorney, the scores produced on the risk assessment measure depended on whether the examiner thought they were hired by the defense or the prosecution, and some of the effects observed were quite large. (83)

While the Specialty Guidelines for Forensic Psychologists are a bit oblique on the need to assess for feigning, they are clearer regarding issues of bias and distinguishing between facts, inferences, and conclusions:

1.02 Impartiality and Fairness

Forensic practitioners strive for accuracy, impartiality, fairness, and independence. Forensic practitioners recognize the adversarial nature of the legal system and strive to treat all participants and weigh all data, opinions, and rival hypotheses impartially.

"Rival hypotheses" means alternative ways of perceiving or interpreting the evidence, such as a defendant reporting he hears voices. Several hypotheses might be considered: 1) That the person is schizophrenic, 2) that the person is withdrawing from alcohol or drugs, or 3) the person is feigning.

9.01 Use of Appropriate Methods

Forensic practitioners seek to maintain integrity by examining the issue or problem at hand from all reasonable perspectives and seek information that will differentially test plausible rival hypotheses.

9.02 Use of Multiple Sources of Information

Forensic practitioners ordinarily avoid relying solely on one source of data, and corroborate important data whenever feasible... When relying upon data that have not been corroborated, forensic practitioners seek to make known the uncorroborated status of the data, any associated strengths and limitations, and the reasons for relying upon the data.

11.02: Differentiating Observations, Inferences, and Conclusions

In their communications, forensic practitioners strive to distinguish observations, inferences, and conclusions. Forensic practitioners are encouraged to explain the relationship between their expert opinions and the legal issues and facts of the case at hand.

Because they are presented as aspirational guidelines, not minimal standards of practice, a cross-examiner may wish to first establish that the expert regards him/herself as a well-credentialed forensic psychologist who practices at the highest level of the profession.

Reports often contain many clues about examiner bias. Some of these include:

* Use of the defendant's first name (for adults) rather than a more formal appellation (e.g., Mr. Smith); sympathetic reporting of life events.

* Reporting the defendant's (or other friendly sources') answers about personal history, perceptions, and feelings as if they are facts. (E.g., "Ms. X was born in Ann Arbor, MI and sexually abused by her father from the ages of 5 through 12.")

* Failure to comment on and fairly consider contradictions between the defendant's accounts and other sources.

* Accepting and reporting the defendant's demeanor and performance at face value and as representative. An examinee might project a very different persona during the evaluation than in other settings. I often observed defendants swagger into my office building, then act like a helpless, mistreated puppy during the exam. Such non-verbal behaviors can have a powerful influence on judgments, such as competency, on which they have no actual bearing. (84)

* Intermixing observations, facts, and inferences in the body of the report. A frequent example is, "Mr. Jones was unable to describe what a plea bargain is and was not able to benefit from tutoring." This is a conclusion, as it provides an interpretation of what was actually observed: Mr. Jones not answering the question. Another variant: "Mr. Jones acknowledged hearing voices and thoughts of suicide." The word "acknowledged" is loaded with additional meanings and suggests the author believes the account. For this reason, I stick to very neutral words such as "reported" and "said" that do not editorialize.

* Relying on subjective assessments of truthfulness or good effort. There is at best conflicting evidence that psychologists can detect feigning or poor effort without the aid of tests and collateral data. (85) A recent survey of neuropsychologists found the overwhelming majority believed validity testing is more accurate than subjective impressions about effort expended in an exam. (86)

* Failure to seek or obtain collateral data, such as offense reports or psychiatric records, or to speak with persons familiar with the defendant--especially those who may contradict the defendant's account or show him or her in a different light. Relying on family members and selective medical records or sources provided by the defense attorney is common.

* Failure to consider or sufficiently assess the possibility of malingering or poor effort. As described earlier, highly respected authors have stated since at least 1988 that such assessments are necessary.

* Use of weak or inappropriate validity tests, or discounting the significance of those that are failed. Because many validity tests have set cutoff scores to minimize false positive errors, they sacrifice the ability to catch those who are feigning. To compensate for the stringent cutoff scores, multiple validity tests should typically be used. (87) Sometimes examiners employ validity tests, but rationalize failures as due to depression, fatigue, or pain, none of which are plausible explanations. (88)

* Failure to weigh the importance of validity tests failed in a previous evaluation.

* Misrepresenting the meaning of a passed validity test. Passing a validity test with low sensitivity is not meaningful, and much less informative than failing the same test.

* Equating the presence of a legitimate mental condition with genuine presentation during the exam. These are two, entirely separate issues. A person with a mental condition can present genuinely or not, as can one without a mental condition. The presence of a legitimate mental condition tells you nothing about whether the examinee presented genuinely.

* Allowing the defense attorney or others to remain in the room during testing. Doing so violates two important principles: Maintaining standardization of test administration (APA Ethical Code 9.02) and protecting test security (APA Ethical Code 9.11). (89) No test has been standardized with the examinee's attorney looking over their shoulder.

* Offering facile and unsupportable explanations for apparent malingering. One defendant I examined was subsequently examined by a defense psychologist, who reported the defendant spoke incoherently throughout a nearly three-hour interview. Phone calls recorded from the jail revealed him speaking in a completely lucid and rational manner in lengthy conversations with family the week before and after the psychologist's interview. During testimony, the psychologist opined the discrepancy could be due to the defendant's comfort in talking with family vs. the psychologist.

WHAT'S A PROSECUTOR TO DO?

Prosecutors should be aware of systemic problems that contribute to poor CST assessments. Quality CST exams are facilitated when examiners are court-appointed and have adequate time and resources to complete their work. Unfortunately, the fee for a CST evaluation, conducted by a certified CST examiner, is as low as $170 in some locales. (90) Examiners often rely on defense attorneys to supply school or medical records, which may be redacted or thinned before being passed on. Even state examiners may have difficulty accessing corrections staff and recorded phone calls, and a defense-retained examiner has little chance of doing so. To the extent that prosecutors can obtain and provide such data to examiners, these important sources are more likely to be used. Similarly, court orders that direct the defendant to identify any facilities where psychiatric treatment was obtained (or schools, for defendants presenting as intellectually disabled), and that direct those facilities to release records to the examiner, are preferable to relying on the defense attorney.

Prosecutors can also improve the quality of CST reports by holding examiners to the standards set out in this article. The available evidence suggests judges have difficulty appraising the relative quality of conflicting reports, and when faced with conflicting professional opinions, overwhelmingly side with the majority. (91) Since examiners often avoid critiquing a colleague's report, prosecutors might consider hiring a separate expert to do so.

BY STEVE RUBENZER, PH.D., ABPP

Steve Rubenzer, Ph.D., ABPP is board certified in Forensic Psychology and has conducted nearly 3,800 CST exams on cases ranging from trespassing to the highest-profile capital murders. He is author of the upcoming book Assessing Negative Response Style in Competency to Stand Trial Evaluations from Oxford University Press. He offers reviews, consultation, second opinions, and testimony regarding CST evaluations, particularly whether issues of feigning or poor effort have been adequately addressed. He can be reached at CSTReviews.expert, rubenzer.steve@att.net, or 281-814-7743.

(1) Hale, M. (1736). Historia placitorum coronae. The history of the pleas of the crown. Edited by Sollom Emlyn. 2 vols. London, 1736. Reprint. Classical English Law Texts. London: Professional Books, Ltd., 1971.

(2) Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24(8), 1094-1102. http://dx.doi.org/10.1076/jcen.24.8.1094.8379

(3) Rubenzer, S.J. (in press). Assessing negative response bias in competency to stand trial evaluations. Oxford University Press.

(4) American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author. http://dx.doi.org/10.1176/appi.books.9780890425596

(5) Merskey, H. (1979). The analysis of hysteria. London: Bailliere Tindall.

(6) Merten, T., & Merckelbach, H. (2013). Symptom validity testing in somatoform and dissociative disorders: A critical review. Psychological Injury and Law, 6(2), 122-137. http://dx.doi.org/10.1007/s12207-013-9155-x

(7) An, K., Zakzanis, K., & Joordens, S. (2012). Conducting research with nonclinical healthy undergraduates: Does effort play a role in neuropsychological test performance? Archives of Clinical Neuropsychology, 27, 849-857. http://dx.doi.org/10.1093/arclin/acs085.

(8) Kirkwood, M.W., Kirk, J.W., Blaha, R. Z., & Wilson, P. (2010). Noncredible effort during pediatric neuropsychological exam: A case series and literature review. Child Neuropsychology, 16(6), 604-18. http://dx.doi.org/10.1080/09297049.2010.495059.

(9) American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57(12), 1060-1073.

(10) Mossman, D., Noffsinger, S. G., Ash, P., Frierson, R. L., Gerbasi, J., Hackett, M., ... & Wall, B. W. (2007). AAPL practice guideline for the forensic psychiatric evaluation of competence to stand trial. Journal of the American Academy of Psychiatry and the Law Online, 35(Supplement 4), S3-S72.

(11) American Psychological Association. (2013). Specialty guidelines for forensic psychology. American Psychologist, 68(1), 7-19. http://dx.doi.org/10.1037/a0029889.

(12) Bush, S. S., Ruff, R. M., Troster, A., Barth, J., Koffler, S. P., Pliskin, N. H., et al. (2005). NAN position paper: Symptom validity assessment: Practice issues and medical necessity. Archives of Clinical Neuropsychology, 20, 419-426. http://dx.doi.org/10.1016/j.acn.2005.02.002. Board of Directors. (2007). American Academy of Clinical Neuropsychology (AACN) practice guidelines for neuropsychological assessment and consultation. The Clinical Neuropsychologist, 21(2), 209-231. Heilbronner, R. L., Sweet, J. J., Morgan, J. E., Larrabee, G. J., Millis, S. R., & Conference Participants. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23(7), 1093-1129.

(13) Green, P., Rohling, M. L., Lees-Haley, P. R., & Allen, L. M. A. (2001). Effort has a greater effect on test scores than severe brain injury in compensation claimants. Brain injury, 15(12), 1045-1060. http://dx.doi.org/10.1080/02699050110088254. Green, P. (2007). The pervasive influence of effort on neuropsychological tests. Physical Medicine and Rehabilitation Clinics of North America, 18(1), 43-68. http://dx.doi.org/10.1016/j.pmr.2006.11.002. Fox, D. D. (2011). Symptom validity test failure indicates invalidity of neuropsychological tests. The Clinical Neuropsychologist, 25(3), 488-495. http://dx.doi.org/10.1080/13854046.2011.554443.

(14) Karr, J. E., Areshenkoff, C. N., & Garcia-Barrera, M. A. (2014). The neuropsychological outcomes of concussion: A systematic review of meta-analyses on the cognitive sequelae of mild traumatic brain injury. Neuropsychology, 28(3), 321-336. http://dx.doi.org/10.1037/neu0000037

(15) Ruff, R. M., Iverson, G. L., Barth, J.T., Bush, S. S., & Broshek, D. K. (2009). Recommendations for diagnosing a mild traumatic brain injury: a National Academy of Neuropsychology education paper. Archives of Clinical Neuropsychology, 24(1), 3-10. http://dx.doi.org/10.1093/arclin/acp006

(16) Melton, G. B., Petrila, J., Poythress, N. G., & Slobogin, C. (1997). Psychological evaluations for the courts: A handbook for mental health professionals and lawyers. New York, NY: Guilford Press.

(17) Pirelli, G., Gottdiener, W. H., & Zapf, P. A. (2011). A meta-analytic review of competency to stand trial research. Psychology, Public Policy, and Law, 17, 1-53. http://dx.doi.org/10.1037/a0021713.

(18) Grisso, T. (1988). Competency to stand trial evaluations: A manual for practice. Sarasota, FL: Professional Resource Exchange.

(19) Raffard, S., Capdevielle, D., Boulenger, J. P., Gely-Nargeot, M. C., & Bayard, S. (2014). Can individuals with schizophrenia be instructed to deliberately feign memory deficits? Cognitive Neuropsychiatry, 19(5), 414-426.

(20) Bianchini, K.J., Greve, K.W., & Love, J. M. (2003). Definite malingered neurocognitive dysfunction in moderate/severe traumatic brain injury. The Clinical Neuropsychologist, 17(4), 574-580. http://dx.doi.org/10.1076/clin.17.4.574.27946. Sweet, J. & Giuffre Meyer, D. (2011). Well-documented, serious brain dysfunction followed by malingering. In J. E. Morgan, I. S. Baron, J.H. Ricker (Eds.). Casebook of clinical neuropsychology (pp 200-212). New York, NY: Oxford University Press.

(21) Everington, C, Notario-Smull, H., & Horton, M. L. (2007). Can defendants with mental retardation successfully fake their performance on a test of competence to stand trial? Behavioral Sciences & the Law, 25(4), 545-560. DOI: 10.1002/bsl.735

(22) Pirelli, Gottdiener, and Zapf (2011).

(23) Rubenzer, S.J. (in press).

(24) Nicholson, R. A., & Kugler, K. E. (1991). Competent and incompetent criminal defendants: A quantitative review of comparative research. Psychological Bulletin, 109(3), 355-370. http://dx.doi.org/10.1037/0033-2909.109.3.355

(25) Rubenzer, S.J. (in press).

(26) Rubenzer, S.J. (in press).

(27) Butcher, J. N., Graham, J. R., Ben-Porath,Y. S.,Tellegen, A., & Dahlstrom, W. G. (2001). Minnesota Multiphasic Personality Inventory-2 (MMPI-2): Manual for administration and scoring (Rev. ed.). Minneapolis: University of Minnesota Press.

(28) Ben-Porath,Y. S., & Tellegen, A. (2008). Minnesota Multiphasic Personality Inventory--2 Restructured Form: Manual for administration, scoring, and interpretation. Minneapolis, MN: University of Minnesota Press. http://dx.doi.org/10.1002/9780470479216.corpsy0573

(29) Morey, L. C. (1991). Personality Assessment Inventory (PAI). Lutz, FL: Psychological Assessment Resources.

(30) Mogge, N. L., Lepage, J. S., Bell, T., & Ragatz, L. (2010). The Negative Distortion Scale: A new PAI validity scale. Journal of Forensic Psychiatry & Psychology, 21(1), 77-90. http://dx.doi.org/10.1080/14789940903174253. Thomas, K. M., Hopwood, C.J., Orlando, M. J., Weathers, F. W., & McDevitt-Murphy, M. E. (2012). Detecting feigned PTSD using the Personality Assessment Inventory. Psychological Injury and Law, 5(3-4), 192-201. http://dx.doi.org/10.1007/s12207-011-9111-6. Rogers. R., Gillard, N. D., Wooley, C. N., & Kelsey, K. R. (2013). Cross-validation of the PAI Negative Distortion Scale for feigned mental disorders: A research report. Assessment, 20(1), 36-42. http://dx.doi.org/10.1177/1073191112451493.

(31) Widows, M. R., & Smith, G. P. (2005). SIMS: Structured Inventory of Malingered Symptomatology: Professional Manual. Lutz, FL: Psychological Assessment Resources.

(32) American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57(12), 1060-1073.

(33) Rogers, R., Bagby, R. M., & Dickens, S. E. (1992). Structured Interview of Reported Symptoms (SIRS) and professional manual. Lutz, FL: Psychological Assessment Resources, Inc.

(34) Green, D., & Rosenfeld, B. (2011). Evaluating the gold standard: A review and meta-analysis of the Structured Interview of Reported Symptoms. Psychological Assessment, 23(1), 95-107. http://dx.doi.org/10.1037/a0021149

(35) Rogers, R., Sewell, K. W., & Gillard, N. D. (2010). Structured Interview of Reported Symptoms-2 and professional manual. Odessa, FL: Psychological Assessment Resources.

(36) Brand, B. L., Tursich, M., Tzall, D., & Loewenstein, R. J. (2014). Utility of the SIRS-2 in distinguishing genuine from simulated dissociative identity disorder. Psychological Trauma: Theory, Research, Practice, and Policy, 6(4), 308-317. http://dx.doi.org/10.1037/a0036064. DeClue, G. (2011). Harry Potter and the Structured Interview of Reported Symptoms? Open Access Journal of Forensic Psychology, 3, 1-18. Green, D., Rosenfeld, B., & Belfi, B. (2013). New and improved? A comparison of the original and revised versions of the Structured Interview of Reported Symptoms. Assessment, 20(2), 210-218. http://dx.doi.org/10.1177/1073191112464389. Tarescavage, A. M., & Glassmire, D. M. (2016, April 14). Differences between Structured Interview of Reported Symptoms (SIRS) and SIRS-2 sensitivity estimates among forensic inpatients: A criterion groups comparison. Law and Human Behavior. Advance online publication. http://dx.doi.org/10.1037/lhb0000191

(37) Rubenzer, S.J. (2010). Review of the Structured Inventory of Reported Symptoms-2. Open Access Journal of Forensic Psychology, 2, 273-286. DeClue, G. (2011). Harry Potter and the Structured Interview of Reported Symptoms? Open Access Journal of Forensic Psychology, 3, 1-18.

(38) Rubenzer, S. J. (in press).

(39) Miller, H. A. (2001). M-Fast: Miller Forensic Assessment of Symptoms Test professional manual. Odessa, FL: Psychological Assessment Resources.

(40) Boone, K. B. (2013). Clinical practice of forensic neuropsychology: An evidence-based approach. New York, NY: Guilford Press. Frederick, R. I. (2011). Clinical assessment of malingering and deception. American Academy of Forensic Psychology workshop, presented July 2011, Portland, OR. Gaines, M. V. (2009). An examination of the combined use of the PAI and the M-FAST in detecting malingering among inmates (Doctoral dissertation, Texas Tech University). http://dx.doi.org/2346/10347. Glassmire, D. M., Tarescavage, A. M., & Gottfried, E. D. (2016). Likelihood of obtaining Structured Interview of Reported Symptoms (SIRS) and SIRS-2 elevations among forensic psychiatric inpatients with screening elevations on the Miller Forensic Assessment of Symptoms Test. Psychological Assessment, 28(12), 1586-1596. http://dx.doi.org/10.1037/pas0000289. Tarescavage, A. M., & Glassmire, D. M. (2016, April 14). Differences between Structured Interview of Reported Symptoms (SIRS) and SIRS-2 sensitivity estimates among forensic inpatients: A criterion groups comparison. Law and Human Behavior. Advance online publication. http://dx.doi.org/10.1037/lhb0000191

(41) Reznek, L. (2005). The Rey 15-item memory test for malingering: A metaanalysis. Brain Injury, 19(7), 539-543. http://dx.doi.org/10.1080/02699050400005242

(42) Vallabhajosula, B., & Van Gorp. W. G. (2001). Post-Daubert admissibility of scientific evidence on malingering of cognitive deficits. Journal of the American Academy of Psychiatry and the Law Online, 29(2), 207-215.

(43) Rubenzer, S.J. (in press)

(44) Greve, K. W., Bianchini, K. J., & Doane, B. M. (2006). Classification accuracy of the Test of Memory Malingering in traumatic brain injury: Results of a known-groups analysis. Journal of Clinical and Experimental Neuropsychology, 28, 1176-1190. Mossman, D., Wygant, D. B., Gervais, R. O., & Hart, K. J. (2017). Trial 1 vs. Trial 2 of the Test of Memory Malingering: Evaluating accuracy without a "gold standard." Psychological Assessment, forthcoming. Smith, K., Boone, K., Victor, T., Miora, D., Cottingham, M., Ziegler, E., ... & Wright, M. (2014). Comparison of credible patients of very low intelligence and non-credible patients on neurocognitive performance validity indicators. The Clinical Neuropsychologist, 28(6), 1048-1070. http://dx.doi.org/10.1080/13854046.2014.931465. Stenchk, J. H., Miele, A. S., Silk-Eglit, G., Lynch, J. K., & McCaffrey, R. J. (2013). Can the sensitivity and specificity of the TOMM be increased with differential cutoff scores? Applied Neuropsychology: Adult, 20(4), 243-248.

(45) Armistead-Jehle, P., & Gervais, R. O. (2011). Sensitivity of the Test of Memory Malingering and the Nonverbal Medical Symptom Validity Test: A replication study. Applied Neuropsychology, 18(4), 284-290. http://dx.doi.org/10.1080/09084282.2011.595455. Gervais, R. O., Rohling, M. L., Green, P., & Ford, W. (2004). A comparison of WMT, CARB, and TOMM failure rates in non-head injury disability claimants. Archives of Clinical Neuropsychology, 19(4), 475-487. http://dx.doi.org/10.1016/j.acn.2003.05.001. Mossman, D., Wygant, D. B., & Gervais, R. O. (2012). Estimating the accuracy of neurocognitive effort measures in the absence of a "gold standard". Psychological Assessment, 24(4), 815-822. http://dx.doi.org/10.1037/a0028195. Green, P. (2011). Comparison between the Test of Memory Malingering (TOMM) and the Nonverbal Medical Symptom Validity Test (NV-MSVT) in adults with disability claims. Applied Neuropsychology, 18(1), 18-26. Mossman, D., Wygant, D. B., Gervais, R. O., & Hart, K. J. (in press). Trial 1 vs. Trial 2 of the Test of Memory Malingering: Evaluating accuracy without a "gold standard." Psychological Assessment. Tan, J. E., Slick, D. J., Strauss, E., & Hultsch, D. F. (2002). How'd they do it? Malingering strategies on symptom validity tests. The Clinical Neuropsychologist, 16(4), 495-505. http://dx.doi.org/10.1076/clin.16.4.495.13909. Whitney, K. A. (2013). Predicting Test of Memory Malingering and Medical Symptom Validity Test failure within a Veterans Affairs Medical Center: Use of the Response Bias Scale and the Henry-Heilbronner Index. Archives of Clinical Neuropsychology, 28(3), 222-235. http://dx.doi.org/10.1093/arclin/act012.

(46) Ray, C. L. (2012). Assessment of feigned cognitive impairment: A cautious approach to the use of the Test of Memory Malingering for individuals with intellectual disability. Open Access Journal of Forensic Psychology, 4, 24-50.

(47) Greve, K. W., Binder, L. M., & Bianchini, K. J. (2009). Rates of below-chance performance in forced-choice symptom validity tests. The Clinical Neuropsychologist, 23(3), 534-544. http://dx.doi.org/10.1080/13854040802232690

(48) Green, P., & Flaro, L. (2015). Results from three performance validity tests (PVTs) in adults with intellectual deficits. Applied Neuropsychology. Adult, 22(4), 293-303. http://dx.doi.org/10.1080/23279095.2014.925903. Green, P., & Flaro, L. (2016). Results from three performance validity tests in children with intellectual disability. Applied Neuropsychology: Child, 5(1), 25-34. http://dx.doi.org/10.1080/21622965.2014.935378.

(49) Green, P. (2005). Green's Word Memory Test user's manual. Edmonton: Green's Publishing, Inc.

(50) Green, P. (2004). Manual for the Medical Symptom Validity Test for Windows. Edmonton, Alberta, Canada: Green's Publishing.

(51) Green, P. (2008). Manual for the Nonverbal Medical Symptom Validity Test for Windows. Edmonton, Alberta, Canada: Green's Publishing.

(52) Otto, R. K., Musick, J. E., & Sherrod, C. B. (2010). Inventory of Legal Knowledge professional manual. Lutz, FL: Psychological Assessment Resources, Inc.

(53) Rubenzer, S. J. (in press).

(54) Otto, R. K., Musick, J. E., & Sherrod, C. B. (2010).

(55) Gottfried, E., & Carbonell, J. (2014). The role of intelligence on performance on the Inventory of Legal Knowledge (ILK). The Journal of Forensic Psychiatry & Psychology, 25(4), 380-396. http://dx.doi.org/10.1080/14789949.2014.920900. Watson, M. E., & Kivisto, A. J. (2017). The Inventory of Legal Knowledge (ILK) and adults with intellectual disabilities. Journal of Intellectual Disabilities and Offending Behaviour, 8(2),

(56) Rubenzer, S. J. (2011). Review of the Inventory of Legal Knowledge. Open Access Journal of Forensic Psychology, 3, 70-81.

(57) Rubenzer, S. J. (2011).

(58) Wetter, M., & Corrigan, S. (1995). Providing information to clients about psychological tests: A survey of attorneys' and law students' attitudes. Professional Psychology: Research and Practice, 26, 474-477. Youngjohn, J. R. (1995). Confirmed attorney coaching prior to neuropsychological evaluation. Assessment, 2(3), 279-283.

(59) United States v. Gigante, 982 F. Supp. 140 (E.D.N.Y. 1997).

(60) Chafetz, M. D. (2008). Malingering on the social security disability consultative examination: Predictors and base rates. The Clinical Neuropsychologist, 22(3), 529-546. http://dx.doi.org/10.1080/13854040701346104. Dunlop, T. (2005). Malingering [Speech to Administrative Law Judges]. Disability Determination Service, Louisiana, USA. Social Security Administration Program Operations Manual System (2013). DI 22510.006 When not to purchase a consultative examination (CE). Retrieved April 16, 2015, from https://secure.ssa.gov/apps10/poms.nsf/lnx/0422510006. Chafetz, M. (2015). Intellectual disability: Criminal and civil forensic issues. New York, NY: Oxford University Press.

(61) Chafetz, M. D. (2008). Chafetz, M., & Underhill, J. (2013). Estimated costs of malingered disability. Archives of Clinical Neuropsychology, 28(7), 633-639. http://dx.doi.org/10.1093/arclin/act038

(62) Ohlemacher, S. (2013). Judges: Social Security pushes approval of claims. Washington, DC: Associated Press.

(63) Hodge v. West, 153 F. 3d 1356 (Fed. Cir. 1998).

(64) Poyner, G. (2010). Psychological evaluations of veterans claiming PTSD disability with the Department of Veterans Affairs: A clinician's viewpoint. Psychological Injury and Law, 3(2), 130-132. http://dx.doi.org/10.1007/s12207-010-9076-x. Russo, A. C. (2014). Assessing veteran symptom validity. Psychological Injury and Law, 7(2), 178-190. http://dx.doi.org/10.1007/s12207-014-9190-2. Worthen, M. D., & Moering, R. G. (2011). A practical guide to conducting VA compensation and pension exams for PTSD and other mental disorders. Psychological Injury and Law, 4(3-4), 187-216. http://dx.doi.org/10.1007/s12207-011-9115-2

(65) Armistead-Jehle, P. (2010). Symptom validity test performance in U.S. veterans referred for evaluation of mild TBI. Applied Neuropsychology, 17(1), 52-59. http://dx.doi.org/10.1080/09084280903526182. Burkett, B. G., & Whitley, G. (1998). Stolen valor: How the Vietnam generation was robbed of its heroes and its history. Dallas, TX: Verity Press. Freeman, T., Powell, M., & Kimbrell, T. (2008). Measuring symptom exaggeration in veterans with chronic posttraumatic stress disorder. Psychiatry Research, 158(3), 374-380. http://dx.doi.org/10.1016/j.psychres.2007.04.002.

(66) Van Egmond, J., Kummeling, I., & Balkom, T. (2005). Secondary gain as hidden motive for getting psychiatric treatment. European Psychiatry, 20(5-6), 416-421. http://dx.doi.org/10.1016/j.eurpsy.2004.11.012

(67) Kopelman, M. D. (1995). The assessment of psychogenic amnesia. In A. D. Baddeley, B. A. Wilson, & F. N. Watts (Eds.), Handbook of memory disorders (pp. 427-448). West-Sussex: Wiley.

(68) Wilson v. United States, 391 F.2d 460 (1968).

(69) Mart, E. G., & Connelly, A. W. (2010). An unusual case of epileptic postictal violence: Implications for criminal responsibility. Open Access Journal of Forensic Psychology, 2, 49-58.

(70) Bradford, J. M. W., & Smith, S. M. (1979). Amnesia and homicide: The Padola case and a study of thirty cases. Journal of the American Academy of Psychiatry and the Law Online, 7(3), 219-231. Evans, J. R., Schreiber Compo, N., & Russano, M. (2009). Intoxicated witnesses and suspects: Procedures and prevalence according to law enforcement. Psychology, Public Policy, and Law, 15(3), 194-221. http://dx.doi.org/10.1037/a0016837.

(71) Pressman, M. R., Mahowald, M. W., Schenck, C. H., & Cramer Bornemann, M. (2007). Alcohol induced sleepwalking or confusional arousal as a defense to criminal behavior: A review of scientific evidence, methods, and forensic considerations. Journal of Sleep Research, 16(2), 198-212. http://dx.doi.org/10.1111/j.1365-2869.2007.00586.x

(72) Hartzler, B., & Fromme, K. (2003). Fragmentary and en bloc blackouts: Similarity and distinction among episodes of alcohol-induced memory loss. Journal of Studies on Alcohol, 64(4), 547-550. http://dx.doi.org/10.15288/jsa.2003.64.547. Perry, P. J., Argo, T. R., Barnett, M. J., Liesveld, J. L., Liskow, B., Hernan, J. M., ... & Brabson, M. A. (2006). The association of alcohol-induced blackouts and grayouts to blood alcohol concentrations. Journal of Forensic Sciences, 51(4), 896-899. http://dx.doi.org/10.1111/j.1556-4029.2006.00161.x

(73) Hartzler, B. & Fromme, K. (2003).

(74) Perper, J. A., Twerski, A., & Wienand, J. W. (1986). Tolerance at high blood alcohol concentrations: A study of 110 cases and review of the literature. Journal of Forensic Sciences, 31(1), 212-221. http://dx.doi.org/10.1520/JFS11873J. Rubenzer, S. J. (2011). Judging intoxication. Behavioral Sciences & the Law, 29(1), 116-137. http://dx.doi.org/10.1002/bsl.935. Sullivan, J. B., Hauptman, M., & Bronstein, A. C. (1987). Lack of observable intoxication in humans with high plasma alcohol concentrations. Journal of Forensic Sciences, 32(6), 1660-1665. http://dx.doi.org/10.1520/JFS11224J. Urso, T., Gavaler, J. S., & Van Thiel, D. H. (1981). Blood ethanol levels in sober alcohol users seen in an emergency room. Life Sciences, 28(9), 1053-1056. http://dx.doi.org/10.1016/0024-3205(81)90752-9

(75) Pressman, M. R., & Caudill, D. S. (2013). Alcohol-induced blackout as a criminal defense or mitigating factor: An evidence-based review and admissibility as scientific evidence. Journal of Forensic Sciences, 58(4), 932-940. http://dx.doi.org/10.1111/1556-4029.12134. White, A. M. (2003). What happened? Alcohol, memory blackouts, and the brain. Alcohol Research and Health, 27(2), 186-196.

(76) Bradford & Smith, 1979. Bourget, D., & Whitehurst, L. (2007). Amnesia and crime. Journal of the American Academy of Psychiatry and the Law Online, 35(4), 469-480. Kopelman, 1995, 2002. Parwatikar, S. D., Holcomb, W. R., & Menninger, K. A. (1985). The detection of malingered amnesia in accused murderers. Bulletin of the American Academy of Psychiatry & the Law, 13(1), 97-103. Pyszora, N. M., Barker, A. E., & Kopelman, M. D. (2003). Amnesia for criminal offences: A study of life sentence prisoners. The Journal of Forensic Psychiatry, 14(3), 475-490. http://dx.doi.org/10.1080/14789940310001599785. Pyszora, N. M., Fahy, T., & Kopelman, M. D. (2014). Amnesia for violent offenses: Factors underlying memory loss and recovery. Journal of the American Academy of Psychiatry and the Law Online, 42(2), 202-213.

(77) Centor, A. (1982). Criminals and amnesia: Comment on Bower. American Psychologist, 37(2), 240. http://dx.doi.org/10.1037/0003-066X.37.2.240. Cima, M., Nijman, H., Merckelbach, H., Kremer, K., & Hollnack, S. (2004). Claims of crime-related amnesia in forensic patients. International Journal of Law and Psychiatry, 27(3), 215-221. Frederick (2011). Marshall, W. L., Serran, G., Marshall, L. E., & Fernandez, Y. M. (2005). Recovering memories of the offense in "amnesic" sexual offenders. Sexual Abuse, 17(1), 31-38. Merckelbach, H., & Christianson, S. A. (2007). Amnesia for homicides as a form of malingering. In S. A. Christianson (Ed.), Offenders' memories of violent crimes (pp. 165-190). Chichester, England: Wiley. http://dx.doi.org/10.1002/9780470713082.ch7. Ornish, S. A. (2001). A blizzard of lies: Bogus psychiatric defenses. American Journal of Forensic Psychiatry, 22(1), 19-30. Peters, M. J., van Oorsouw, K. I., Jelicic, M., & Merckelbach, H. (2013). Let's use those tests! Evaluations of crime-related amnesia claims. Memory, 21(5), 599-607. http://dx.doi.org/10.1080/09658211.2013.771672.

(78) Boysen, G. A., & VanBergen, A. (2014). Simulation of multiple personalities: A review of research comparing diagnosed and simulated dissociative identity disorder. Clinical Psychology Review, 34(1), 14-28. http://dx.doi.org/10.1016/j.cpr.2013.10.008

(79) Denney, R. L. (1996). Symptom validity testing of remote memory in a criminal forensic setting. Archives of Clinical Neuropsychology, 11(7), 589-603. http://dx.doi.org/10.1093/arclin/11.7.589. Frederick, R. I., Carter, M., & Powel, J. (1995). Adapting symptom validity testing to evaluate suspicious complaints of amnesia in medicolegal evaluations. Bulletin of the American Academy of Psychiatry and the Law, 23(2), 231-237.

(80) Greve, K. W., Binder, L. M., & Bianchini, K. J. (2009). Rates of below-chance performance in forced-choice symptom validity tests. The Clinical Neuropsychologist, 23(3), 534-544. http://dx.doi.org/10.1080/13854040802232690

(81) Dror, I. E. (2015). Cognitive neuroscience in forensic science: Understanding and utilizing the human element. Philosophical Transactions of the Royal Society B, 370(1674), 20140255. Available online at http://rstb.royalsocietypublishing.org/content/370/1674/20140255

(82) Murrie, D. C., Boccaccini, M. T., Zapf, P. A., Warren, J. I., & Henderson, C. E. (2008). Clinician variation in findings of competence to stand trial. Psychology, Public Policy, and Law, 14(3), 177-193. http://dx.doi.org/10.1037/a0013578.

(83) Murrie, D. C., Boccaccini, M. T., Guarnera, L. A., & Rufino, K. A. (2013). Are forensic experts biased by the side that retained them? Psychological Science, 24(10), 1889-1897.

(84) Dror, I. (2015). Keynote address on the psychology and impartiality of forensic expert decision making [Video]. YouTube.

(85) Rubenzer, S. J. (in press).

(86) Green, D., & Rosenfeld, B. (2011). Evaluating the gold standard: A review and meta-analysis of the Structured Interview of Reported Symptoms. Psychological Assessment, 23(1), 95-107. http://dx.doi.org/10.1037/a0021149

(87) Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17(3), 410-425. http://dx.doi.org/10.1076/clin.17.3.410.189. Victor, T. L., Boone, K. B., Serpa, J. G., Buehler, J., & Ziegler, E. A. (2009). Interpreting the meaning of multiple symptom validity test failure. The Clinical Neuropsychologist, 23(2), 297-313. http://dx.doi.org/10.1080/13854040802232682

(88) Green, P., & Merten, T. (2013). Noncredible explanations for noncredible performance on symptom validity tests. In D. Carone & S. Bush (Eds.), Mild traumatic brain injury: Symptom validity assessment and malingering. New York, NY: Springer.

(89) "Psychologists make reasonable efforts to maintain the integrity and security of test materials and other assessment techniques consistent with law and contractual obligations, and in a manner that permits adherence to this Ethics Code."

(90) Gowensmith, W. N., Pinals, D. A., & Karas, A. C. (2015). States' standards for training and certifying evaluators of competency to stand trial. Journal of Forensic Psychology Practice, 15(4), 295-317.

(91) Gowensmith, W. N., Murrie, D. C., & Boccaccini, M. T. (2012). Field reliability of competence to stand trial opinions: How often do evaluators agree, and what do judges decide when evaluators disagree? Law and Human Behavior, 36(2), 130-139. http://dx.doi.org/10.1037/h0093958.

(92) Rubenzer, S. J. (in press).

(93) Texas, New Hampshire, and Alabama
Table 1
Types of Invalid Responding in CST Evaluations

Feigned Presentation                               Mean

Feigned ignorance of the court system              17.2%
Feigned amnesia for offense                        14.6%
Feigned or exaggerated intellectual limitations    14.5%
Feigned memory problems (NOT amnesia for offense)  12.8%
Feigned hallucinations                             10.5%
Feigned depression                                 10.2%
Feigned anxiety or PTSD                             8.2%
Feigned demeanor (a)                                7.5%
Feigned paranoia                                    6.7%
Feigned/exaggerated medical issues (b)              4.3%
Feigned agitation/mania                             2.3%
Feigned disorganized speech                         1.7%
Other feigned presentation (not listed above)       1.6%
ANY kind of feigning (all previous styles)         24.1%
Factitious disorder                                 1.2%
Somatoform or conversion disorder                   1.9%
Lack of cooperation WITHOUT
malingering, factitious or somatoform d/o           8.7%

Notes. (a) E.g., helplessness, vulnerability, child-like demeanor, or
speech impediment. (b) E.g., unneeded cane, wheelchair, oxygen tank, etc.

Reproduced from Assessing Negative Response Bias in Competency to Stand
Trial Evaluations (2018) with permission of Oxford University Press.

Table 2
Behavioral Indicators of Feigning

* Endorses bogus/unusual symptoms
* Positive but no negative symptoms (a)
* Unusual combinations of symptoms
* Very slow performance
* Inconsistent performance on similar tasks (b)
* Exaggerated behavior (c)
* Fails very easy items
* Gives improbable answers

Notes. (a) "Positive symptoms" include hearing voices and delusions,
while negative symptoms are problems with initiative and emotional
reactivity.

(b) For example, an examinee may perform poorly on a formal test of
attention or memory, but not show such deficits during the interview.
(c) Some malingerers grossly overact, for example by ducking and
cowering in response to alleged hallucinations.

Table 3
MMPI-2-RF Fake Bad Validity Index Cutoff Scores

Infrequency (F-r)
  Domain of over-reporting: Unusual experiences
  ≥ 120:  Invalid
  79-119: Possible over-reporting

Infrequency Psychopathology (Fp-r)
  Domain of over-reporting: Symptoms rare among psychiatric patients
  ≥ 100: Invalid
  70-99: Possible over-reporting

Infrequency Somatic (Fs)
  Domain of over-reporting: Unusual bodily and neurological symptoms
  ≥ 100: Scores on somatic scales may be invalid
  80-99: Possible over-reporting on somatic scales

Symptom Validity Scale (SVS/FBS-r)
  Domain of over-reporting: Unusual bodily, neurological, and cognitive symptoms
  ≥ 100: Some scales may be invalid
  80-99: Possible over-reporting on some scales

Response Bias Scale (RBS)
  Domain of over-reporting: Unusual cognitive symptoms
  ≥ 100

Note. All scores listed are T scores, which have a population average
of 50.
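
Read as decision rules, the Table 3 cutoffs are simple threshold bands on T scores. The sketch below is a minimal Python illustration of that logic; the function and dictionary names are hypothetical, the thresholds are copied from Table 3, and nothing here substitutes for the MMPI-2-RF manual or professional judgment.

    # Minimal sketch (hypothetical names): the Table 3 cutoffs encoded as
    # threshold bands on T scores. Illustrative only.

    # scale: (invalid_cutoff, invalid_text, band, band_text)
    TABLE_3_RULES = {
        "F-r": (120, "Invalid", (79, 119), "Possible over-reporting"),
        "Fp-r": (100, "Invalid", (70, 99), "Possible over-reporting"),
        "Fs": (100, "Scores on somatic scales may be invalid",
               (80, 99), "Possible over-reporting on somatic scales"),
        "SVS/FBS-r": (100, "Some scales may be invalid",
                      (80, 99), "Possible over-reporting on some scales"),
        "RBS": (100, "Elevated (Table 3 lists no interpretation text)",
                None, None),
    }

    def interpret_index(scale, t_score):
        """Return the Table 3 interpretation for one over-reporting index."""
        cutoff, cutoff_text, band, band_text = TABLE_3_RULES[scale]
        if t_score >= cutoff:
            return cutoff_text
        if band is not None and band[0] <= t_score <= band[1]:
            return band_text
        return "Below the Table 3 cutoffs for this index"

    # Hypothetical score profile, for illustration only
    for scale, score in [("F-r", 105), ("Fp-r", 72), ("Fs", 85)]:
        print(scale, score, "->", interpret_index(scale, score))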

Table 4
Validity Scales on the Personality Assessment Inventory

Negative Impression Management (NIM)
  Content: Unusual symptoms
  ≥ 77:  Probable exaggeration
  ≥ 100: Definite exaggeration

Malingering Index (MI)
  Content: Unusual combinations of symptoms
  ≥ 3: Probable exaggeration
  ≥ 4: Definite exaggeration

Negative Distortion Scale (NDS)
  Content: Symptoms rarely endorsed by psychiatric patients
  ≥ 19: Very likely exaggeration


Note. All scores listed are T scores, which have a population average
of 50.
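
The Table 4 cutoffs follow the same pattern, with two tiers (probable vs. definite exaggeration) for NIM and the Malingering Index and a single cutoff for the NDS. A companion sketch under the same caveats (hypothetical names; cutoffs copied from Table 4):

    # Minimal sketch (hypothetical names): Table 4 cutoffs for the PAI
    # negative-distortion indicators, highest tier checked first.
    PAI_RULES = {
        "NIM": [(100, "Definite exaggeration"), (77, "Probable exaggeration")],
        "MI": [(4, "Definite exaggeration"), (3, "Probable exaggeration")],
        "NDS": [(19, "Very likely exaggeration")],
    }

    def interpret_pai(scale, score):
        """Return the first Table 4 interpretation whose cutoff is met."""
        for cutoff, text in PAI_RULES[scale]:
            if score >= cutoff:
                return text
        return "Below the Table 4 cutoffs for this scale"

    print(interpret_pai("NIM", 82))  # Probable exaggeration
    print(interpret_pai("MI", 4))    # Definite exaggeration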

Table 5
Some PVTs That May be Used in CST Exams

Rey 15 Item Test
  Description: Subject is shown 15 numbers, letters, and designs, then asked to write them from memory.
  Strengths: Very fast, free; good validity if true cognitive impairment can be ruled out.
  Weaknesses: Too hard for the cognitively impaired; limited sensitivity.

TOMM
  Description: Subject is shown 50 pictures and asked to identify those that were seen.
  Strengths: Well-researched, widely accepted.
  Weaknesses: Limited sensitivity; widely exposed; the truly impaired may fail.

Dot Counting Test
  Description: Subject counts groups of dots, either scattered randomly or in orderly groups.
  Strengths: Inexpensive, brief.
  Weaknesses: Low sensitivity; too hard for some examinees.

Reliable Digit Span
  Description: Subject attempts to remember strings of digits, repeating them back to the examiner forward or in reverse order.
  Strengths: Free, brief.
  Weaknesses: Low sensitivity; too hard for some examinees.

Validity Indicator Profile (Verbal subtest)
  Description: Forced-choice vocabulary test.
  Strengths: Assesses an aspect of intelligence; uses the subject's own performance as an index of effort.
  Weaknesses: Expensive; too hard for some examinees.

Validity Indicator Profile (Nonverbal subtest)
  Description: Forced-choice puzzle-solving test.
  Strengths: Same as the Verbal subtest.
  Weaknesses: Expensive; too hard for some examinees; mentally demanding.

Word Memory Test
  Description: Multifaceted verbal memory test.
  Strengths: Highly sensitive; internal validity checks; suitable for mild ID; yields useful memory scores.
  Weaknesses: Relatively long.

Medical Symptom Validity Test
  Description: Multifaceted verbal memory test.
  Strengths: Internal validity checks; suitable for mild ID; yields useful memory scores; brief.
  Weaknesses: May be transparent as a validity test to brighter examinees.

Nonverbal Medical Symptom Validity Test
  Description: Multifaceted memory test for pictures.
  Strengths: Outstanding internal validity checks; suitable for mild ID; yields useful memory scores; brief.
  Weaknesses: Relatively little data in psychiatric and ID samples; not widely used by CST examiners.
COPYRIGHT 2017 National District Attorneys Association
No portion of this article can be reproduced without the express written permission from the copyright holder.
