
BELIEF AND DELUSION AS PALLIATIVE RESPONSES TO UNCERTAINTY.

In December 1954, the Chicago Tribune reported that Dr. Charles Laughead of Michigan foresaw the end of the world via tidal wave and volcano. He was speaking on behalf of Dorothy Martin, who was supposedly relaying a prophecy from extraterrestrials. The prophecy did not manifest. Martin was placed in psychiatric care to avoid legal charges for creating disturbances and scaring children with her prophecies. However, on leaving that care, she traveled to the Peruvian Andes and to Mount Shasta in California, and ultimately settled in Sedona, Arizona, where she lived until she was 92, continuing to proselytize about aliens and their ministrations on Earth while essentially evading interaction with psychiatric services. Did Laughead, Martin, and their followers have delusions? Their beliefs were certainly bizarre and firm. At times, being a follower and sharing those beliefs was distressing (although that distress usually arose when the beliefs were challenged, rather than when adherents considered the consequences of the beliefs--that the world would end). The beliefs were definitely outside the doxastic norms of the culture. However, something seems different about these followers compared to the clinical cases with which we are more familiar. One way to explore the overlap and distinctions between beliefs and delusions is to consider their function. I believe that healthy and unhealthy beliefs are responses to uncertainty or ambiguity. By explaining away the inexplicable, they permit continued engagement with the world.

I define uncertainty and ambiguity in terms of decision-making and probability distributions. When we have uncertainty with regard to a particular event, we can assign it some subjective probability based on what we know and believe. That probability (based on knowledge and beliefs) may differ greatly between individuals. Ambiguous situations are so uncertain that we do not have enough information to be sure that our particular belief--our specific prediction--is the correct one. In perception, an uncertain situation would involve listening to a friend speak at a noisy party (you can resolve the uncertainty by making predictions based on what you know about your friend). Listening to someone you have just met at the same noisy party, someone about whom you have no prior beliefs, would engender ambiguity. Both are at best frustrating and at worst distressing. We respond to both uncertainty and ambiguity by relying on prior beliefs. And we can respond strongly and sometimes counter-intuitively when those priors themselves are challenged.
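
To make the distinction concrete, here is a toy sketch in Python (my own illustration; the example distributions are invented, not drawn from any study cited here). Uncertainty is cast as the spread of a known predictive distribution; ambiguity as not knowing which distribution applies at all.

    import numpy as np

    def entropy(p):
        # Shannon entropy (in bits) of a known predictive distribution.
        p = np.asarray(p, dtype=float)
        return float(-np.sum(p * np.log2(p)))

    # Listening to a friend at a noisy party: strong priors yield a peaked prediction.
    friend = [0.85, 0.05, 0.05, 0.05]        # hypothetical next-word probabilities
    # Listening to a stranger: no priors, so the prediction is flat and uninformative.
    stranger = [0.25, 0.25, 0.25, 0.25]

    print("uncertainty, friend:   %.2f bits" % entropy(friend))
    print("uncertainty, stranger: %.2f bits" % entropy(stranger))
    # Ambiguity would add a further layer: uncertainty about which predictive
    # distribution applies in the first place (uncertainty about the uncertainty).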

Unbeknownst to Martin, some of her followers were imposters: social psychologists, led by Leon Festinger. The academics infiltrated the group as the end-times loomed. The result was a book, "When Prophecy Fails: A social psychological study of a modern group that predicted the destruction of the world" (Festinger et al. 1956). They developed the theory of cognitive dissonance, the internal discord felt from holding conflicting beliefs simultaneously (Festinger 1962)--in this case, between the prophecy and real-world events. People in the cult responded in a variety of ways to reduce their dissonance. Many relinquished their beliefs. In some cases, however, a dissonant experience actually increased conviction. For example, failed predictions were recontextualized as actually having come to fruition ("the aliens did come for us, but they were scared off by the crowds of press"). These deft sleights of mind (McKay et al. 2005) will be familiar to those who have spoken to patients with delusions (Garety 1991, 1992).

One major challenge for humans is to form and maintain a set of beliefs about the world that are sufficiently accurate and strong to guide decision-making, but flexible enough to withstand changes in the underlying contingencies. One way this might be achieved involves Bayesian learning: we sustain a set of prior beliefs based on past experience, and we combine them with new data. If those new data are highly precise and compelling, they drive updating of the prior. If they are not, those data can be discarded. However, sometimes people do not update their beliefs in this manner. For example, when confronted with evidence that challenges a deeply cherished belief, such evidence may backfire and strengthen the belief. Is such behavior contrary to the Bayesian model? Furthermore, in the face of uncertainty and ambiguity, people seem to adopt unrelated, sometimes contradictory, and extreme beliefs. This likewise seems to depart from Bayesian rationality. Here, I will describe some of the work and theorizing on belief and delusion and the palliative function they may both serve with regard to minimizing uncertainty. I will show that while some beliefs appear irrational, they can nevertheless arise from a system that conforms to Bayesian principles.
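
The core arithmetic of such updating can be written down in a few lines. The sketch below is a standard conjugate Gaussian update with invented numbers (not a model from any of the studies cited here): the posterior belief is a precision-weighted compromise between prior and data, so precise evidence shifts the belief while imprecise evidence is effectively discarded.

    def bayes_update(prior_mean, prior_precision, data_mean, data_precision):
        # Posterior precision is the sum of precisions; the posterior mean is a
        # precision-weighted average of the prior belief and the new data.
        posterior_precision = prior_precision + data_precision
        posterior_mean = (prior_precision * prior_mean +
                          data_precision * data_mean) / posterior_precision
        return posterior_mean, posterior_precision

    # Precise, compelling evidence shifts the belief substantially...
    print(bayes_update(prior_mean=0.0, prior_precision=1.0, data_mean=5.0, data_precision=10.0))
    # ...whereas noisy, imprecise evidence is effectively discarded.
    print(bayes_update(prior_mean=0.0, prior_precision=1.0, data_mean=5.0, data_precision=0.1))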

Bayesian brains

Hierarchical Bayesian approaches to belief in the brain are simple and intuitive: What we ultimately learn to believe (posterior belief) depends on the integration of previously held beliefs (prior beliefs, priors) with new information (evidence; Friston 2005a, Lee and Mumford 2003). We generate an internal model of the world (and of ourselves as agents who act in it). We use the model to generate predictions that are compared to incoming data. When predictions are violated, a prediction error (PE) signal can update the model. A PE may also be ignored, depending on its variance; highly variable PEs tend to be discounted. PEs thus enable flexible adaptation to changes in the environment (Corlett et al. 2010). Ultimately, the brain works to minimize uncertainty (or PE). It maintains a set of predictive associations (based on past experience) that is flexible enough to adapt yet robust enough to avoid superstitions and instabilities (Friston 2005a, 2009). PE minimization occurs at all levels, from the single neuron (Fiorillo 2008) up through the hierarchical neuroanatomy of the brain (Friston 2005a, 2009).
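
As a cartoon of this loop, consider the following sketch (my own toy example, not any specific published model): on each trial a prediction is compared with what actually happens, and the resulting prediction error nudges the internal model toward the data, so the model re-adapts when the environment changes.

    def pe_learning(observations, belief=0.0, learning_rate=0.2):
        # Each observation is compared with the current prediction; the resulting
        # prediction error (PE) nudges the internal model toward the data.
        trajectory = []
        for obs in observations:
            pe = obs - belief
            belief += learning_rate * pe
            trajectory.append(round(belief, 2))
        return trajectory

    # The environmental contingency changes halfway through; PEs drive re-adaptation.
    data = [1.0] * 15 + [0.0] * 15
    print(pe_learning(data))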

Prior expectations based on established associations are communicated from areas with more abstract representations downwards through the hierarchy (Mesulam 2008). PEs can either be canceled by top-down expectancy (i.e., something unexpected is ignored) or propagated and used to update associations with new learning (Friston 2005a, 2009). Whether PE is ignored or incorporated depends on its precision--consistent errors are precise and drive new learning, imprecise errors are less likely to garner updates. Precision is signaled by specific slower neuromodulators dedicated to the particular inference (e.g., acetylcholine for perceptual inference, dopamine for motor inference). And these slower neuromodulators are implicated in the pathophysiology of psychosis (Adams et al. 2013, Friston 2005b).
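
One crude way to caricature precision weighting (my own formulation, not the specific scheme in Friston 2005a or Adams et al. 2013) is to let the weight on each prediction error depend on how variable recent errors have been: consistent errors earn a high gain and drive learning, while erratic errors are discounted.

    import numpy as np

    def precision_weighted_learning(observations, belief=0.0):
        errors = []
        for obs in observations:
            pe = obs - belief
            errors.append(pe)
            # Estimate how variable recent errors have been; consistent errors are
            # precise, erratic errors are imprecise.
            pe_variance = np.var(errors[-10:]) if len(errors) > 1 else 10.0
            precision = 1.0 / (pe_variance + 1e-3)
            gain = precision / (precision + 1.0)    # bounded weight on the PE
            belief += 0.5 * gain * pe
        return belief

    rng = np.random.default_rng(0)
    consistent = 1.0 + 0.02 * rng.standard_normal(15)   # precise PEs drive learning
    erratic = 1.0 + 6.0 * rng.standard_normal(15)       # imprecise PEs are discounted
    print("belief after consistent input: %.2f" % precision_weighted_learning(consistent))
    print("belief after erratic input:    %.2f" % precision_weighted_learning(erratic))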

I have previously explained delusions, the fixed, false beliefs that characterize serious mental illnesses like schizophrenia, in terms of these Bayesian mechanisms (Corlett et al. 2007, Hemsley 1994, Miller 1976). Unexplained PE or uncertainty is stress-inducing. We do not like being surprised. Explaining surprise so that events can be better predicted in the future drives belief formation. And even if the belief is wrong, or delusional, it still explains away the uncertainty or resolves the ambiguity.

Bayesian biases?

This predictive coding model of mind and brain function and dysfunction seems to be committed to veracity; at its heart is an error-correcting mechanism that maximizes future rewards and minimizes punishments, like the agents of traditional microeconomics--econs (Padoa-Schioppa and Schoenbaum 2015)--theoretical agents whose decisions are focused only on rationally optimizing expected value. This seems at odds with predictive coding models of psychopathology, and in particular of psychotic symptoms like hallucinations and delusions (Corlett et al. 2010). Put simply, if delusions result from a noisy, maladaptive learning mechanism, why do individuals learn anything at all--let alone the complex and strongly held beliefs that characterize psychotic illness? We know from behavioral economists that humans can depart from econ-like responding (Kahneman et al. 1982). Can predictive coding depart likewise? We think so. Computational modeling of learning and perception allows us to test the consequences of specific changes in a model learner (Stephan and Mathys 2014). For example, some models produce biases--the spreading of erroneous rumors in a social network (Butts 1998), the tendency to ignore base rates when making probabilistic decisions (Soltani and Wang 2010), even habit formation (FitzGerald et al. 2014).

One particularly interesting example is the confirmation bias (Lord et al. 1979, Nickerson 1998), in which prior beliefs bias current decision-making; specifically, contradictory data are ignored if they violate a cherished hypothesis. At first, it is hard to think that maintaining a belief in the face of contradiction could be adaptive. However, Boorstin (1958) has argued that confirmation bias permitted the seventeenth-century New England Puritans to prosper: They had no doubts and allowed no dissent, so they were freed from religious argument to focus on practical matters. Their doctrine was so clear and strongly held that they had an all-encompassing explanation (Boorstin 1958). Confirmation bias may save energy and allow work on more pressing tasks. Confirmation bias also protects one's sense of self as a person with a consistent and coherent web of beliefs living in a predictable world.
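
Confirmation bias is easy to express as a small distortion of the updating scheme sketched earlier (again, a toy of my own; the weights are arbitrary): evidence that agrees with the current belief is given full weight, while contradictory evidence is heavily discounted, so a cherished hypothesis survives a run of disconfirmations.

    def biased_update(belief, evidence, rate=0.3, discount=0.2):
        # Evidence that agrees with the current belief gets the full learning rate;
        # contradictory evidence is weighted at only a fraction of it.
        confirms = (evidence > 0.5) == (belief > 0.5)
        weight = rate if confirms else rate * discount
        return belief + weight * (evidence - belief)

    belief = 0.8                                   # a cherished hypothesis
    for evidence in [0.1, 0.2, 0.1, 0.9, 0.1]:     # mostly disconfirming data
        belief = biased_update(belief, evidence)
    print(round(belief, 2))                        # the belief survives largely intact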

The confirmation bias has been tied to striatal PE learning through theoretical and quantitative computational models (Doll et al. 2009) as well as genetics (Frank et al. 2007, Heyser et al. 2000). Confirmation bias is increased in individuals with delusions (Balzan et al. 2013). The striatal protein DARPP-32 has been implicated in striatal PE signaling and in risk for schizophrenia (Meyer-Lindenberg et al. 2007). On the other hand, Doll et al. (2014) found that patients with chronic schizophrenia did not show an enhanced fronto-striatal confirmation bias; the relationship with delusions was not examined. It is possible that confirmation biases are specific to delusion contents (that they are encapsulated) rather than reflecting a general deficit. Indeed, Woodward and colleagues showed delusion-related confirmation biases (Balzan et al. 2013).

Examining people's beliefs about themselves and their future reveals other systematic biases. Most people evince superiority illusions, believing they are better and more skilled than most other people, more likely to receive an award, less likely to suffer lung cancer, and less likely to get divorced than their peers (Sharot 2011). It appears that desirable and undesirable information may be used differently to alter self-relevant beliefs (Sharot and Garrett 2016). Again, this seems the antithesis of Bayesian optimality--wherein you update beliefs in light of new information regardless of how that belief update impacts you, your self-image, and your well-being.

However, if we allow beliefs to have value in and of themselves, then Bayesian accounts can apply (Sharot and Garrett 2016). Positive beliefs elicit positive feelings, and negative beliefs elicit negative feelings. People thus maintain an optimistic view of themselves and discard negative information. By contrast, when there is no intrinsic or external advantage for holding a belief, asymmetry in updating may be less apparent (Sharot and Garrett 2016). A positivity bias is more likely when information is ambiguous. For instance, when people receive information on how others rated their appearance, the bias is greater than when updating beliefs about self-intelligence after receiving IQ scores (Sharot and Garrett 2016). Attractiveness ratings are more subjective and more open to dispute than relatively objective test scores. There may be more room for personal positivity bias when it comes to attractiveness compared to intelligence.
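
In the spirit of Sharot and Garrett (2016), the asymmetry can be sketched as two learning rates, one for desirable and one for undesirable news (the numbers below are mine, purely illustrative): good news about the self moves the belief a lot, bad news barely at all.

    def update_self_belief(risk_belief, new_estimate, rate_good=0.5, rate_bad=0.1):
        # A lower estimated risk is good news about the self; good news gets a
        # larger learning rate than bad news.
        desirable = new_estimate < risk_belief
        rate = rate_good if desirable else rate_bad
        return risk_belief + rate * (new_estimate - risk_belief)

    believed_risk = 0.30                                  # believed risk of some misfortune
    print(round(update_self_belief(believed_risk, 0.10), 2))   # good news: a large shift (0.20)
    print(round(update_self_belief(believed_risk, 0.50), 2))   # bad news: a small shift (0.32)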

In financial traders, overconfidence (ignoring potential negative consequences) can lead to market collapses. However, positive overexpectations can reduce stress and improve physical and mental health. In computational simulations, overconfident, biased agents outperform unbiased ones by persevering and claiming resources they could not otherwise attain. Better but less optimistic competitors acquiesce. However, if the costs of errors are raised, overconfidence becomes less adaptive (Sharot and Garrett 2016).
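
A stripped-down version of such a simulation (my own construction, loosely in the spirit of the resource-competition models alluded to above) is given below: an agent claims a contested resource whenever it judges itself the stronger party, and an overconfident agent inflates that judgment. With cheap conflict the bias pays; raise the cost of fighting losing battles and the advantage evaporates.

    import random

    def mean_payoff(bias, n=10000, resource=2.0, conflict_cost=1.0, seed=1):
        # An agent claims the resource whenever its (possibly inflated) self-estimate
        # exceeds the rival's ability; claiming while actually weaker costs a fight.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            own, rival = rng.random(), rng.random()
            if own + bias > rival:
                total += resource if own > rival else resource - conflict_cost
        return total / n

    print("unbiased agent:      %.3f" % mean_payoff(bias=0.0))
    print("overconfident agent: %.3f" % mean_payoff(bias=0.3))
    # Raising conflict_cost above the value of the resource reverses the advantage,
    # as noted above: when errors are costly, overconfidence becomes maladaptive.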

Indeed, not all biases are positive. Personal uncertainty threats (like thinking of a time in one's life when control was lacking) cause compensatory increases in zeal, particularly for self-relevant beliefs (Proulx et al. 2012). Getting spurious feedback about one's academic performance heightens one's religious conviction (Wichman 2010). Exposure to uncertainty causes participants to increase the extremity and certainty of their convictions about the death penalty and gun control. Furthermore, uncertainty causes people to place a premium on fairness. Participants denied a voice respond most negatively when they have first been made uncertain. Fair process seems to reassure people that the world is an orderly, predictable place when they are feeling uncertain. However, some uncertainty--possibly that which is endogenously generated and experienced through one's own senses as aberrant prediction errors (PEs)--may be resolved as psychotic symptoms: departures from consensual reality that manifest as hallucinations (percepts without stimuli) and delusions (fixed, false beliefs).

Delusion formation and maintenance

In the psychosis prodrome, attention is drawn toward irrelevant stimuli: People report feeling uncomfortable and confused (Kapur 2003, McGhie and Chapman 1961). This may reflect inappropriate PEs. Functional neuroimaging studies of drug-induced and endogenous early psychosis reveal PEs in frontal cortex in response to unsurprising events--PE intensity correlates with delusion severity (Corlett et al. 2006, 2007). During the prodrome, the stress mediator cortisol increases by up to 500 percent (Sachar et al. 1963). Heightened stress impairs goal-directed learning and promotes inflexible habit formation (Schwabe and Wolf 2009).

Confusion and stress ensue. Delusions appear in an "aha" moment: flexible processing is disabled, while habitual responses are preserved and possibly even enhanced (Corlett 2009, Corlett et al. 2010). Cortisol falls as delusions crystallize (Sachar et al. 1963); forming the delusion is associated with an "insight relief" that helps consolidate it in memory. Cortisol rises once more as delusions conflict with reality (Sachar et al. 1963). As people recover and relinquish their delusions, cortisol responses normalize (Sachar et al. 1963).

While many delusions have upsetting content, they may solve the overwhelming chaos of the prodrome (Kapur 2003); they may be inferences to the best explanation for that chaos (Coltheart et al. 2010). Delusions are also remarkably elastic: They expand and morph around contradictory data (Garety et al. 1991, Milton et al. 1978, Simpson and Done 2002). Of note, patients can learn other new things (they do not have an all-encompassing learning deficit) and can even critically evaluate others' delusions (Rokeach 1964). However, once a delusion is formed, subsequent PEs are explicable in the context of the delusion and serve to reinforce it (Corlett 2009, Corlett et al. 2010). Hence the seemingly paradoxical observation that challenging subjects' delusions can actually strengthen their conviction (Milton et al. 1978, Simpson and Done 2002).

Other theories

Cognitive two-factor explanations of delusions try to explain how delusions might arise from brain damage, like a stroke or closed head injury. They posit that a perceptual dysfunction (Factor 1), caused by damage to one region or set of regions, and a belief-evaluation deficit (Factor 2), caused by further damage, are both necessary for delusions. They make this suggestion because some people have Factor 1 damage but do not have delusions. The logic and evidence here are somewhat questionable: the Factor-1-only cases may in fact have damage to regions considered to underpin Factor 2. Nevertheless, the theory is influential and simple in its emphasis on the roles of perception and belief in delusion formation and maintenance. McKay and colleagues suggested that motivational processes could influence Factor 2 (McKay et al. 2007), that wishful thinking could change belief evaluations. On the other hand, people may actually sense things differently as a function of their motivated biases (McKay and Dennett 2009), so motivated beliefs may involve Factor 1 and sampling the data differently depending upon one's desires.

I believe the two factors, perception and belief, are strongly interrelated (Corlett and Fletcher 2015). Differentiating top-down (belief) and bottom-up (sensation) effects is a challenge, since, in a generative system, top-down and bottom-up effects and processes sculpt one another. Learned biases can alter perception; we see illusory stimuli that conform to our expectations rather than the sensory data incident on the retina (Pearson and Westbrook 2015).

Self-deception is also relevant to delusions. It entails simultaneously believing some proposition (p) and its antithesis (not-p; Sackeim and Gur 1979). Self-deceivers are often unaware of their conflicting beliefs (Sackeim and Gur 1979). Subjects may be psychologically motivated to state one belief but act according to another (Sackeim and Gur 1979). The relevance to delusions is clear, particularly in regard to the double-bookkeeping in which some delusional patients engage (believing that they are being poisoned but nevertheless consuming food provided to them). Clearly the self-deceptive, double-bookkeeping state is an ambiguous one.

Drazen Prelec and Danica Mijovic-Prelec tested self-deception in the lab (Mijovic-Prelec and Prelec 2010). In their task, subjects first predict an uncertain outcome (the gender of a character from the Korean alphabet), then describe the outcome when they see it. Some subjects (self-deceivers) stick with their initial prediction even when presented with a contrary outcome (as if they cannot see what is right in front of them). They are more likely to engage in this deception when incentivized for correct prediction. The Prelecs call on an actor-critic model, such as those proposed to explain instrumental learning with PE (Sutton and Barto 1998). For them, the mind is organized into multiple interacting agents, each operating on different information. The actor chooses an action and the critic gives that action a score. The critic tries to learn the actor's policy, and the actor tries to get the best possible score (perhaps even better than it deserves). This architecture permits self-deception--the actor tries to fool the critic (Mijovic-Prelec and Prelec 2010). In reinforcement learning, the critic learns the values of environmental states and the actor learns an action policy given those states. Prediction errors update both actor and critic--that is, the critic can update the actor's policy in order to maximize future reward. Actor and critic have been localized to different striatal subregions (actor: dorsal striatum; critic: ventral striatum; O'Doherty 2004).
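
For readers unfamiliar with the architecture, here is a minimal actor-critic loop in the standard reinforcement-learning form (Sutton and Barto 1998); the two-choice task and parameter values are my own illustrative choices. The critic maintains a value estimate, the prediction error scores each action, and that same error trains both critic and actor.

    import numpy as np

    rng = np.random.default_rng(0)
    reward_probs = [0.8, 0.2]          # two actions with different payoff rates
    value = 0.0                        # critic: expected reward in the current state
    preferences = np.zeros(2)          # actor: action preferences (policy parameters)
    alpha_critic, alpha_actor = 0.1, 0.1

    for _ in range(2000):
        policy = np.exp(preferences) / np.exp(preferences).sum()   # softmax policy
        action = rng.choice(2, p=policy)
        reward = float(rng.random() < reward_probs[action])
        pe = reward - value                      # the prediction error "scores" the action
        value += alpha_critic * pe               # critic updates its expectation
        preferences[action] += alpha_actor * pe  # actor shifts toward well-scored actions

    print(np.round(policy, 2))                   # the policy comes to favor the richer action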

Gur and Sackeim examined self-deception using the galvanic skin response (GSR; Sackeim and Gur 1979, 1985). GSR is a metric of salience: the skin sweats more in response to, or in anticipation of, salient events. In their examination of self-deception, Gur and Sackeim played individuals recordings of their own voice and others' voices and asked them to decide whether what they were hearing was their voice or another person's. They found that when people hear their own voice, they show an increased GSR. Some subjects made self-deceptive responses, misidentifying their own voice as another's despite evincing the GSR familiarity response. They did not recognize having made such errors and were more likely to self-deceive in the laboratory if they also gave self-deceptive responses on a personality scale (endorsing statements such as "I have never lied" or "I have never stolen," which are unlikely to be true).

What can we do about strong beliefs?

In the normative approach to delusions that dominates cognitive neuroscience and clinical practice, delusions are conceived of as a symptom to be eradicated. However, in describing her own experience of delusions and treatment, Amy Johnson has invoked W.B. Yeats--suggesting that clinicians and scientists ought to tread lightly in their work, as they tread on patients' delusions (Johnson and Davidson 2013).

The non-clinical situations in which people with radically different belief structures have clashed and come to a resolution may be instructive.

For example, confronting individuals who are against vaccination with reasons that they are wrong can backfire and strengthen their conviction that vaccines are harmful (Nyhan and Reifler 2015, Nyhan et al. 2013). We, and others, have argued that delusions and other beliefs are often grounded in personal experiences; to the credulous, personal experiences are the most reliable source (patients often remark, "I know it sounds crazy, but I saw it with my own eyes, Doctor"). Relinquishing those beliefs on the basis of others' testimony is strongly related to the credibility of the source (Nyhan et al. 2013): for example, do the individuals trying to change one's mind have a vested reason to disagree, given their professional status, roles, or affiliations? Perhaps large-scale anti-stigma educational activities in mental health have failed because they did not employ individuals with lived experience to spread the word about mental illness (Corrigan 2012). With regard to the issue at hand, fixed and distressing delusional beliefs, perhaps peer support might supplement our standard approaches to mollifying delusions. People with lived experience who have recovered from delusions, or learned how to manage them, might be better at helping their peers with current delusions.

There are other options. Sharot and colleagues used transcranial magnetic stimulation over the left inferior frontal gyrus (prefrontal cortex) to change the activity of neurons within a few centimeters of the scalp and skull (Sharot et al. 2012). This decreased the positivity bias: people who were stimulated were less overconfident in their own attributes (Sharot et al. 2012). This is a proof of principle. We have already discussed how such overconfidence can be adaptive; the importance of the observation is that we may be able to modulate other, less adaptive beliefs. Finally, there are psychological techniques that may help. Following uncertainty induction, people increase their religious zeal (Wichman 2010). However, if subjects engage in a self-affirmation exercise (writing about their positive values and attributes), the impact of uncertainty on religious zeal is mollified (Wichman 2010). This approach has proven useful in depression (Gortner et al. 2006). We posit that it may have utility for individuals in the prodrome who are about to convert to psychosis, since it would inoculate against the mounting psychological distress of that state.

With regard to possible drug interventions, antipsychotic drugs tend to block D2 dopamine receptors. While these drugs mollify delusions and hallucinations, they do not do so in all patients, suggesting that other neurochemical mechanisms may be involved. Dopamine neurons do signal prediction errors. However, there is no single prediction or prediction error signal in the brain but rather multiple hierarchies of inference that converge on a coherent multisensory percept (Apps 2014). There are also many ways in which prediction errors may be perturbed--they may be too precise or not precise enough, the impairment could occur bottom-up (pathologies of the error signal itself) or top-down (problems with priors), and so forth. The effects may not be consistent within a particular hierarchy or across hierarchies. For example, low-level sensory perturbations can have nonlinear effects on beliefs higher in the hierarchy--that is, weak sensory priors (and increased low-level prediction errors) may render cognitive priors (higher in the hierarchy) more rigid. This seems to be the case in challenging perceptual inference tasks with ambiguous stimuli, but perhaps less so with less demanding inferences (such as the repeated stimulus trains that give rise to the mismatch negativity, which is impaired in psychotic states but not in a manner that correlates with symptom severity).
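
One highly simplified way to caricature that last interaction (my own toy, not a model from the literature cited above) is a two-level updater in which the higher level is trained only by evidence passed up from the lower level; when the sensory level is imprecise, the ascending signal carries little weight and the higher-level belief stays rigid even though the world has changed.

    def two_level_update(observations, sensory_precision, prior_precision=1.0, rate=0.1):
        percept, belief = 0.0, 0.0
        for s in observations:
            # Level 1: the percept blends the higher-level prior with the sensory
            # sample, weighted by their respective precisions.
            percept = (prior_precision * belief + sensory_precision * s) / \
                      (prior_precision + sensory_precision)
            # Level 2: the higher-level belief is trained by the ascending error,
            # with a gain reflecting how reliable the low-level evidence is.
            gain = sensory_precision / (sensory_precision + prior_precision)
            belief += rate * gain * (percept - belief)
        return belief

    changed_world = [1.0] * 50          # the environment has in fact shifted to 1.0
    print("higher-level belief, precise senses:   %.2f" % two_level_update(changed_world, 10.0))
    print("higher-level belief, imprecise senses: %.2f" % two_level_update(changed_world, 0.1))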

In conclusion, then, uncertainty and ambiguity, whether internally or externally generated (by the brain and body or by the external world), can have rather toxic effects on the mind and brain. Beliefs (delusional and non-delusional) form to halt those effects on the individual, resolving the uncertainty and ambiguity. Ultimately, though, since the palliative beliefs do not reflect reality, they may butt up against it. In the case of overconfidence, this can have some advantages. However, aberrant beliefs typically generate more uncertainty and ambiguity, and thus cause more problems in the long run.

Works Cited

Adams, RA, KE Stephan, HR Brown, CD Frith, and KJ Friston, 2013, "The Computational Anatomy of Psychosis," Frontiers in Psychiatry 4, p. 47.

Balzan, R, P Delfabbro, C Galletly, and T Woodward, 2013, "Confirmation Biases Across the Psychosis Continuum: The Contribution of Hypersalient Evidence-Hypothesis Matches," The British Journal of Clinical Psychology 52(1), pp. 53-69.

Boorstin, DJ, 1958, The Americans: The Colonial Experience, New York: Vintage Books.

Butts, C, 1998, "A Bayesian Model of Panic in Belief," Computational & Mathematical Organization Theory 4(4), pp. 373-404.

Coltheart, M, P Menzies, and J Sutton, 2010, "Abductive Inference and Delusional Belief," Cognitive Neuropsychiatry 15(1), pp. 261-87.

Corlett, PR, 2009, "Why do Delusions Persist?," Frontiers in Human Neuroscience 3, p. 12.

Corlett, PR, and PC Fletcher, 2015, "Delusions and Prediction Error: Clarifying the Roles of Behavioural and Brain Responses," Cognitive Neuropsychiatry 20, pp. 95-105.

Corlett, PR, GD Honey, MRF Aitken, A Dickinson, DR Shanks, AR Absalom, M Lee, E Pomarol-Clotet, GK Murray, PJ McKenna, TW Robbins, ET Bullmore, and PC Fletcher, 2006, "Frontal Responses During Learning Predict Vulnerability to the Psychotogenic Effects of Ketamine: Linking Cognition, Brain Activity, and Psychosis," Archives of General Psychiatry 63(6), pp. 611-21.

Corlett, PR, GD Honey, and PC Fletcher, 2007, "From Prediction Error to Psychosis: Ketamine as a Pharmacological Model of Delusions," Journal of Psychopharmacology 21(3), pp. 238-52.

Corlett, PR, GK Murray, GD Honey, MRF Aitken, DR Shanks, TW Robbins, ET Bullmore, A Dickinson, and PC Fletcher, 2007, "Disrupted Prediction-Error Signal in Psychosis: Evidence for an Associative Account of Delusions," Brain 130(Pt 9), pp. 2387-400.

Corlett, PR, JR Taylor, X-J Wang, PC Fletcher, and JH Krystal, 2010, "Toward a Neurobiology of Delusions," Progress in Neurobiology 92(3), pp. 345-69.

Corrigan, PW, 2012, "Research and the Elimination of the Stigma of Mental Illness," British Journal of Psychiatry 201(1), pp. 7-8.

Doll, BB, WJ Jacobs, AG Sanfey, and MJ Frank, 2009, "Instructional Control of Reinforcement Learning: A Behavioral and Neurocomputational Investigation," Brain Research 1299, pp. 74-94.

Doll, BB, JA Waltz, J Cockburn, JK Brown, MJ Frank, and JM Gold, 2014, "Reduced Susceptibility to Confirmation Bias in Schizophrenia," Cognitive, Affective & Behavioral Neuroscience 14(2), pp. 715-28.

Festinger, L, 1962, "Cognitive Dissonance," Scientific American 207, pp. 93-102.

Festinger, L, HW Riecken, and S Schachter, 1956, When Prophecy Fails, Minneapolis: University of Minnesota Press.

Fiorillo, CD, 2008, "Towards a General Theory of Neural Computation Based on Prediction by Single Neurons," PLoS ONE 3(10), p. e3298.

FitzGerald, TH, RJ Dolan, and KJ Friston, 2014, "Model Averaging, Optimal Inference, and Habit Formation," Frontiers in Human Neuroscience 8, p. 457.

Frank, MJ, AA Moustafa, HM Haughey, T Curran, and KE Hutchison, 2007, "Genetic Triple Dissociation Reveals Multiple Roles for Dopamine in Reinforcement Learning," Proceedings of the National Academy of Sciences of the United States of America 104(41), pp. 16311-6.

Friston, K, 2005a, "A Theory of Cortical Responses," Philosophical Transactions of the Royal Society of London. Series B, Biological sciences 360(1456), pp. 815-36.

Friston, K, 2005b, "Hallucinations and Perceptual Inference," Behavioral and Brain Sciences 28(6), pp. 764-6.

Friston, K, 2009, "The Free-Energy Principle: A Rough Guide to the Brain?," Trends in Cognitive Sciences 13(7), pp. 293-301.

Garety, P, 1991, "Reasoning and Delusions," British Journal of Psychiatry Supplement 14, pp. 14-8.

Garety, PA, 1992, "Making Sense of Delusions," Psychiatry 55(3), pp. 282-91; discussion 292-6.

Garety, PA, DR Hemsley, and S Wessely, 1991, "Reasoning in Deluded Schizophrenic and Paranoid Patients - Biases in Performance on a Probabilistic Inference Task," Journal of Nervous and Mental Disease 179(4), pp. 194-201.

Gortner, EM, SS Rude, and JW Pennebaker, 2006, "Benefits of Expressive Writing in Lowering Rumination and Depressive Symptoms," Behavior Therapy 37(3), pp. 292-303.

Hemsley, DR, 1994, "Perceptual and Cognitive Abnormalities as the Basis for Schizophrenic Symptoms," in AS David, and JC Cutting, eds, The Neuropsychology of Schizophrenia, Hove, UK: Lawrence Erlbaum Associates, pp. 97-118.

Heyser, CJ, AA Fienberg, P Greengard, and LH Gold, 2000, "DARPP-32 Knockout Mice Exhibit Impaired Reversal Learning in a Discriminated Operant Task," Brain Research 867(1-2), pp. 122-30.

Johnson, A, and L Davidson, 2013, Recovery to Practice: Dear Amy & Larry, Available from: http://www.dsgonline.com/rtp/special.feamre/2013/2013_05_21/WH_2013_05_21_fullstory.html.

Kahneman, D, P Slovic, and A Tversky, 1982, Judgment Under Uncertainty: Heuristics and Biases, New York: Cambridge University Press.

Kapur, S, 2003, "Psychosis as a State of Aberrant Salience: A Framework Linking Biology, Phenomenology, and Pharmacology in Schizophrenia," American Journal of Psychiatry 160(1), pp. 13-23.

Lee, TS, and D Mumford, 2003, "Hierarchical Bayesian Inference in the Visual Cortex," Journal of the Optical Society of America A 20(7), pp. 1434-48.

Lord, CG, L Ross, and MR Lepper, 1979, "Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence," Journal of Personality and Social Psychology 37(11), pp. 2098-109.

McGhie, A, and J Chapman, 1961, "Disorders of Attention and Perception in Early Schizophrenia," British Journal of Medical Psychology 34, pp. 103-16.

McKay, R, R Langdon, and M Coltheart, 2005, ""Sleights of Mind": Delusions, Defences, and Self-Deception," Cognitive Neuropsychiatry 10(4), pp. 305-26.

McKay, R, R Langdon, and M Coltheart, 2007, "Models of Misbelief: Integrating Motivational and Deficit Theories of Delusions," Consciousness and Cognition 16(4), pp. 932-41.

McKay, RT, and DC Dennett, 2009, "The Evolution of Misbelief," The Behavioral and Brain Sciences 32(6), pp. 493-510; discussion 510-61.

Mesulam, M, 2008, "Representation, Inference, and Transcendent Encoding in Neurocognitive Networks of the Human Brain," Annals of Neurology 64(4), pp. 367-78.

Meyer-Lindenberg, A, RE Straub, BK Lipska, BA Verchinski, T Goldberg, JH Callicott, MF Egan, SS Huffaker, VS Mattay, B Kolachana, JE Kleinman, and DR Weinberger, 2007, "Genetic Evidence Implicating DARPP-32 in Human Frontostriatal Structure, Function, and Cognition," Journal of Clinical Investigation 117(3), pp. 672-82.

Mijovic-Prelec, D, and D Prelec, 2010, "Self-Deception as Self-Signalling: A Model and Experimental Evidence," Philosophical Transactions of the Royal Society of London. Series B, Biological sciences 365(1538), pp. 227-40.

Miller, R, 1976, "Schizophrenic Psychology, Associative Learning and the Role of Fore-brain Dopamine," Medical Hypotheses 2(5), pp. 203-11.

Milton, F, VK Patwa, and RJ Hafner, 1978, "Confrontation vs. Belief Modification in Persistently Deluded Patients," British Journal of Medical Psychology 51(2), pp. 127-30.

Nickerson, RS, 1998, "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises," Review of General Psychology 2(2), pp. 175-220.

Nyhan, B, and J Reifler, 2015, "Does Correcting Myths About the flu Vaccine Work? An Experimental Evaluation of the Effects of Corrective Information," Vaccine 33(3), pp. 459-64.

Nyhan, B, J Reifler, and PA Ubel, 2013, "The Hazards of Correcting Myths About Health Care Reform," Medical Care 51(2), pp. 127-32.

O'Doherty, J, 2004, "Dissociable Roles of Ventral and Dorsal Striatum in Instrumental Conditioning," Science 304(5669), pp. 452-4.

Padoa-Schioppa, C, and G Schoenbaum, 2015, "Dialogue on Economic Choice, Learning Theory, and Neuronal Representations," Current Opinion in Behavioral Sciences 5, pp. 16-23.

Pearson, J, and F Westbrook, 2015, "Phantom Perception: Voluntary and Involuntary Nonretinal Vision," Trends in Cognitive Sciences 19(5), pp. 278-84.

Proulx, T, M Inzlicht, and E Harmon-Jones, 2012, "Understanding all Inconsistency Compensation as a Palliative Response to Violated Expectations," Trends in Cognitive Sciences 16 (5), pp. 285-91.

Rokeach, M, 1964, The Three Christs of Ypsilanti, New York: Alfred Knopf.

Sachar, EJ, JW Mason, HS Kolmer, and KL Artiss, 1963, "Psychoendocrine Aspects of Acute Schizophrenic Reactions," Psychosomatic Medicine 25, pp. 510-37.

Sackeim, HA, and RC Gur, 1979, "Self-Deception, Other-Deception, and Self-Reported Psychopathology," Journal of Consulting and Clinical Psychology 47(1), p. 213.

Sackeim, HA, and RC Gur, 1985, "Voice Recognition and the Ontological Status of Self-Deception," Journal of Personality and Social Psychology 48(5), pp. 1365-72.

Schwabe, L, and OT Wolf, 2009, "Stress Prompts Habit Behavior in Humans," The Journal of Neuroscience 29(22), pp. 7191-8.

Sharot, T, 2011, "The Optimism Bias," Current Biology 21(23), pp. R941-5.

Sharot, T, and N Garrett, 2016, "Forming Beliefs: Why Valence Matters," Trends in Cognitive Sciences 20(1), pp. 25-33.

Sharot, T, R Kanai, D Marston, CW Korn, G Rees, and RJ Dolan, 2012, "Selectively Altering Belief Formation in the Human Brain," Proceedings of the National Academy of Sciences of the United States of America 109(42), pp. 17058-62.

Simpson, J, and DJ Done, 2002, "Elasticity and Confabulation in Schizophrenic Delusions," Psychological Medicine 32(3), pp. 451-8.

Soltani, A, and XJ Wang, 2010, "Synaptic Computation Underlying Probabilistic Inference," Nature Neuroscience 13(1), pp. 112-9.

Stephan, KE, and C Mathys, 2014, "Computational Approaches to Psychiatry," Current Opinion in Neurobiology 25, pp. 85-92.

Sutton, RS, and AG Barto, 1998, Reinforcement Learning: An Introduction, Cambridge, MA: MIT Press.

Wichman, AL, 2010, "Uncertainty and Religious Reactivity: Uncertainty Compensation, Repair, and Inoculation," European Journal of Social Psychology 40(1), pp. 35-42.