
Individual and institutional impediments to ethics: making ethical decisions under risk, threat, and stress.

Introduction

Kurzban and Houser (2005) report considerable success in accurately predicting the level of cooperation in groups. The key to this success (and the direction of research in the area) is in analyzing the psychological characteristics of the individual members. Evolution has created a polymorphic society made up of individuals with differing characteristics, risk preferences and impediments. We detail below what we feel is the true nature of ethical decision-making and the most important impediments, both individual and institutional, that influence it.

We use the recent advances in moral psychology, neuroethics, neuroeconomics, neuroscience, biological anthropology and behavioral economics to give us significant insights into the nature of moral decision-making. Further, we also use these sciences to show the similarities and the differences in ethical decision-making by individuals. We intend, in later papers, to incorporate these insights into a real-world training program that takes into account these differences and impediments.

Decision Making Under Risk and Uncertainty

Risk and uncertainty are often used interchangeably but there is an important difference. Risk is an objective, measurable concept while uncertainty is a subjective concept that reflects each individual's evolutionary heritage, experiences, knowledge and environment. For example, while the 'risk' or probability of dying in a commercial aircraft is far less than the 'risk' of dying in a car crash, many people feel more uncertainty and are less comfortable in an airplane than in a car.

There is much research supporting the view that, when making decisions, people weigh losses approximately twice as heavily as equivalent gains relative to the status quo (Tversky and Kahneman 1992). For example, people typically reject a 50/50 chance of losing money they already have unless the potential gain is about twice as large as the potential loss. In this classic formulation of 'prospect theory,' Amos Tversky and Nobel Laureate Daniel Kahneman explain humans' tendency toward loss aversion. Also, Kahneman, Knetsch and Thaler (1990) have shown that people require 'substantially' more money when selling objects they possess than they would pay to buy those same objects.

Additionally, loss aversion explains behavior outside the laboratory (Benartzi and Thaler 1995). More recent research shows similar behavior in children as young as five and even in capuchin monkeys, suggesting a biological basis supported by functional Magnetic Resonance Imaging (fMRI) (Chen and Lakshminarayanan 2006).

In business and government, individuals have different 'risk preferences.' That is, their decisions are influenced by whether they are risk takers or risk averters--a condition that may vary in different situations and can change over time. Political decisions frequently are heavily influenced by decision makers' differing intuitions for "playing it safe with our constituents" or "the future is in our hands, we must act swiftly and decisively!"

Hedonic Calculus

Jeremy Bentham, the father of Utilitarianism, was the first philosopher to use what has often been called the 'hedonic calculus' (Bentham 1781 [1988]). In the famous dictum of the Utilitarians, morality is creating "the greatest happiness for the greatest number." In making moral decisions, Bentham actually advocated that one could conduct a 'hedonic calculus,' literally adding up the happiness quotient per person times the number of people. However, Bentham stated that we need to measure hedonism (pleasure or pain) by seven circumstances: Purity, Intensity, Propinquity, Certainty, Fecundity, Extent and Duration. On a larger scale, of course, this kind of exercise is impossible, whether or not one agrees with the philosophy of utilitarianism.

In mathematical actuality, Bentham's 'calculus' is algebra, and advances in neuroscience and neuroeconomics call for a revision of this 'hedonic calculus.' For the individual, hedonic calculus is the weighting of the alternative outcomes of a decision or action (moral or financial) by their affective result on the person. We could actually measure this neuroscientifically by an fMRI of the ventral striatum, the area of the human brain that codes value and registers reward.

We use the term 'hedonic calculus' instead of 'hedonic algebra' because we are dealing with curvilinear geometry. In standard decision-making theory, the economic assumption of 'maximizing the utility of an outcome under uncertainty and subject to certain constraints' is the gold-standard model, and this requires differential calculus. This can either maximize a specific indifference curve between money and morality or maximize a 'utility of moral identity' curve. Traditionally, these curves are curvilinear rather than straight lines, so calculus is needed to compute the maximization of these equations.

In the real world, ethical decisions are made under uncertainty and have strong emotional content, so we need to factor both of these into our model. We factor uncertainty into the decision-making model by weighting the possible outcomes by their probabilities of occurrence, and we factor in the hedonic or affective utility of the outcomes (read 'reward') by adding another relative weighting factor ('H' for 'hedonic') to the outcomes equation.

1. Uncertainty: The traditional way to factor uncertainty into the model is through 'Bayesian Logic' (Kording 2007) or 'Fuzzy Logic' (Kosko 1993). That is, we must weight each of the possible outcomes by its probability of occurring:

(1) Outcome = (P1) X (Outcome1) + (P2) X (Outcome2)

E.g., if I invest $100,000 in a business, I could lose $50,000 or make $25,000:

Outcome = (.25) X (-$50,000) + (.75) X ($25,000)

Outcome = -$12,500 + $18,750

Outcome = $6,250

ROI = $6,250 / $100,000 = 6.25% return, which would make me rationally indifferent between making this investment and putting the money in a Certificate of Deposit, where I could likely earn a similar ROI.

2. Hedonic Affect: However, when we add the hedonic affect, the decision has a different outcome according to the hedonic calculus. We weight each of the outcomes not only by its probability but also by its relative affective impact on the individual. In the simplest formulation, the work of Kahneman and others has shown that individuals feel a loss twice as much as they feel an equivalent win. So, the hedonic calculus model would be:

(2) Outcome = (H1) X (P1) X (Outcome1) + (H2) X (P2) X (Outcome2)

Since Kahneman has shown the doubled affect of a loss (Layard 2005, p. 167), we could reformulate our decision example in (1) above as:

Outcome = (2) X (.25) X (-$50,000) + (1) X (.75) X (+$25,000)

Outcome = -$25,000 + $18,750

Outcome = -$6,250

ROI = -$6,250 / $100,000 = -6.25% return, which would make the Certificate of Deposit the optimal choice.
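The two calculations above fit in a few lines of code. The Python sketch below (the function name and structure are ours) reproduces equations (1) and (2), with the probabilities, dollar amounts and the hedonic weight of 2 on losses taken directly from the worked example:

    # Hedonic calculus sketch for the investment example above.
    # Each outcome is a (probability, dollar_value, hedonic_weight) triple;
    # the weight of 2 on the loss follows Kahneman's loss-aversion finding.

    def weighted_outcome(outcomes):
        """Sum of H x P x Outcome over all possible results."""
        return sum(h * p * value for (p, value, h) in outcomes)

    investment = 100_000
    # Equation (1): uncertainty only (all hedonic weights equal 1).
    ev = weighted_outcome([(0.25, -50_000, 1), (0.75, 25_000, 1)])
    # Equation (2): uncertainty plus hedonic affect (the loss weighted x2).
    hedonic = weighted_outcome([(0.25, -50_000, 2), (0.75, 25_000, 1)])

    print(f"Expected value: ${ev:,.0f}, ROI: {ev / investment:.2%}")           # $6,250, 6.25%
    print(f"Hedonic value: ${hedonic:,.0f}, ROI: {hedonic / investment:.2%}")  # $-6,250, -6.25%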

In making ethical decisions in a real-world institutional setting under threat, coercion and risk, there are complex emotional and neurological processes influencing our perception of the final outcome. In the simplest case, let us say your boss asks you to verify a false personal expense report for a trip you made together by signing your name to it. Let us apply the hedonic calculus to the outcomes.

3. Uncertainty and Hedonic Affect Combined: You have a binary decision choice: to sign or not to sign. If you don't sign, the (usually unspoken) outcome is that you might lose your job but will keep your moral identity or self-image. If you do sign, you will keep your job but lose some or all of your moral identity.

(3) Outcome(not sign) = (H1) X (P1) X (Lose Utility of Job) + (H2) X (P2) X (Keep Utility of Job)

(4) Outcome(sign) = (H3) X (P3) X (Lose Utility of Moral Identity) + (H4) X (P4) X (Keep Utility of Moral Identity)

We know that in the case of not signing, the hedonic affect of losing the utility of a job is twice as great (and the post-traumatic stress lasts about two years), so a simple reformulation of Outcome (3) above would be:

(5) Outcome(not sign) = (2) X (P1) X (Lose Utility of Job) + (1) X (P2) X (Keep Utility of Job)

Finally, if we assign simple equal probabilities to the two possible results of Outcome (3), then the equation becomes:

(6) Outcome(not sign) = (2) X (.5) X (Lose Utility of Job) + (1) X (.5) X (Keep Utility of Job)

It should be clear from the above formulation and discussion that the hedonic calculus of ethical decisions made in a work context is strongly weighted toward the unethical action.
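To make the comparison concrete, the sketch below assigns hypothetical utility numbers to equations (3) through (6). The equal probabilities and the doubled weight on losing the job come from the text; the utility values, and the assumption that a moral-identity loss receives no extra hedonic weight, are purely illustrative:

    # Equations (3)-(6) with hypothetical utility numbers.  The equal
    # probabilities and the x2 weight on job loss come from the text; the
    # utility values and the weights on moral identity are illustrative.

    P = 0.5               # equal probabilities, as in equation (6)
    H_JOB_LOSS = 2        # job loss felt twice as strongly (per the text)
    H_IDENTITY_LOSS = 1   # assumption: no extra weight on moral-identity loss

    U_JOB = U_IDENTITY = 100   # hypothetical utility units

    # (6) Not signing: risk losing the job, keep moral identity either way.
    outcome_not_sign = H_JOB_LOSS * P * (-U_JOB) + 1 * P * U_JOB
    # (4) Signing: keep the job, risk losing moral identity.
    outcome_sign = H_IDENTITY_LOSS * P * (-U_IDENTITY) + 1 * P * U_IDENTITY

    print(outcome_not_sign)   # -50.0: hedonically negative
    print(outcome_sign)       #   0.0: hedonically neutral, so signing 'wins'

Even with equal stakes on both sides, doubling the hedonic weight of the job loss alone tips the calculus toward signing.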

How rational are most business and government decisions?

For years, rational expected utility theory was the primary normative and descriptive theory used to explain decision making under uncertainty. Recent studies of monetary gains and losses, however, have distinguished between the anticipation of immediate outcomes (called anticipated utility) and the actual experience of gaining or losing money (called experienced utility) (Breiter et al. 2001). The results show a dramatic divergence between these utilities. Also, much research has supported the view that certain choice phenomena systematically violate the standard rational model of decision making. They include:

Framing issues

The standard model of rational decision making assumes that the description of the choices is uniform. However, evidence supports the view that the framing or reframing of the options, in terms of gains or losses, etc., often leads to different preferences (Tversky and Kahneman 1992). According to Tversky and Kahneman (1981), the risk preferences of decision makers change significantly when the options are framed in terms of potential losses as opposed to potential gains. How government and business leaders 'frame' the questions they pose to their constituents has an enormous impact on how an issue is perceived and therefore processed in the brain.

Also, marketing research on consumer behavior demonstrates the power of framing. Consumers greatly prefer to buy foods that are '95% fat free' rather than 'only 5% fat content.'

Since evidence shows that framing greatly influences buying habits, it is reasonable to believe framing also exerts a powerful influence on ethical decision making. For example, when asking a government official to misreport the cost of a project in order to gain legislative support, the one suggesting this unethical action might appeal to the patriotism of the group and frame the question "as a matter of national security." Similarly, in the corporate world, a suggestion to slightly alter financial statements in order to buy some time to fix the problem might be rationalized by appealing to the emotions: "We've got to save the company and our employees' livelihood!"

Decisions made by government officials and business managers often take place in situations where subjective factors (with a high level of uncertainty) predominate rather than objective factors whose downside risk may be reasonably measured. In such situations, framing plays a key role in the decision, and thought processes can be manipulated by a less-than-ethical leader or by an ethical one who is suffering from a temporary 'ethical lapse' (an ethical lapse can be defined as a mistake in judgment by an otherwise ethical person).

When issues are framed by focusing almost exclusively on one overriding factor, such as national security or survival of the organization, other relevant factors are minimized or eliminated from consideration. When business decisions are driven by one all-consuming value, the increase in the price of the corporation's stock, other key factors may be totally disregarded. The same dynamic occurs when government officials are driven by the all-consuming need to be reelected. Numerous examples from recent corporate scandals and government misbehavior can be cited that support our contention that careful re-framing is an essential element in all instances of ethical misconduct in institutions.

Linear vs. Nonlinear Preferences

Standard rational decision models assume that outcome probabilities enter decisions linearly. In contrast, research shows that the difference in probabilities between .99 and 1.00 has a greater impact on preferences than the difference between .10 and .11. In government and business decision making, statistics and other numerical summaries are used to influence behavior and can easily be manipulated to paint a rosier picture.

Additionally, Tversky and Kahneman (1992) used their now-famous 'prospect theory' to show that a person's utility does not increase in a linear fashion. When presented with options of potential loss vs. potential gain, subjects demonstrated the principle of diminishing marginal utility. As an example of this diminishing marginal utility, saving 600 lives will not feel three times as good as saving 200 lives, so decision makers tend not to take the risk of saving all 600 people. Alternatively, since losing 600 lives will not hurt the decision makers three times as much as losing 200, decision makers tend to take the risk in hopes of losing no one.
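Both nonlinearities can be illustrated with the functional forms and median parameter estimates published by Tversky and Kahneman (1992) for cumulative prospect theory; the parameters below are their estimates, while the code itself is our sketch:

    # Prospect-theory value and probability-weighting functions
    # (functional forms and median parameters from Tversky & Kahneman 1992).

    def value(x, alpha=0.88, lam=2.25):
        """Diminishing marginal utility for gains; losses loom ~2x larger."""
        return x ** alpha if x >= 0 else -lam * (-x) ** alpha

    def weight(p, gamma=0.61):
        """Decision weights: overweights small probabilities, underweights large ones."""
        return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

    # Nonlinear probabilities: the same .01 step matters far more near certainty.
    print(weight(1.00) - weight(0.99))   # ~0.089
    print(weight(0.11) - weight(0.10))   # ~0.009

    # Diminishing marginal value: saving 600 does not feel 3x as good as saving 200.
    print(value(600) / value(200))       # ~2.63, not 3.0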

The Power of Authority

While the defense, "I was only following orders," died at the post-World War II Nuremberg Trials, the power of authority plays a key role in government and business every day. This includes those decisions, actions and behaviors that are questionable on moral grounds. Psychologist Stanley Milgram's experiments asked people to administer shocks at increasingly higher levels to unseen subjects (actually actors) who failed to answer questions correctly. Despite cries of pain and pleadings for mercy, 65 percent obeyed due to the tendency to obey persons in positions of authority. The mantle of authority in this case was a white lab coat. Milgram (1974) states:
 "Ordinary people, simply doing their jobs, and without any particular
 hostility on their part, can become agents in a terrible destructive
 process. Moreover, even when the destructive effects of their work
 become patently clear and they are asked to carry out actions
 incompatible with fundamental standards of morality, relatively few
 people have the resources needed to resist authority."


Hierarchies in government and business are ripe for this abuse of authority by those in power who use others for their own purposes. How much stronger than a lab coat is the charisma of an elected official or key cabinet minister? How much stronger than a lab coat is the influence of a powerful business leader who promises to promote those who are loyal (an ethical value) to him or her? Tetlock (1991) found that employees are 'intuitive politicians' who can infer the wishes of those to whom they are accountable and act accordingly. Even without specific instructions, authority figures influence behavior through the expectations of their subordinates. Tetlock (1985) also described the 'acceptability heuristic' where subordinates often see their task as finding the answer that will be acceptable to their superiors rather than the honest answer or the one that will displease authority figures.

The Neuroscientific Basis of Decision Making.

People and Sound Judgment

Experimental research shows that the causal reasoning of average adults is highly fallible, yet they often act on these judgments. Moreover, these same subjects are highly certain that their judgments are correct (Kuhn 2007). One of the chief reasons for this is the lack of "decontextualization." Jurors must decontextualize, deciding only on the presented evidence, but in everyday life people use gossip, rumor and unrelated facts to make their judgments. The good news, however, is that critical thinking can be taught. It is not like teaching rote answers to the standardized tests of No Child Left Behind (it requires training and practice), but if learned, it is a true skill that will make a person's life better (Kuhn 2007).

Decision Making and Neuroscience

Social decision-making is the context in which we do ethics, not the laboratory nor the philosopher's armchair. Unfortunately, most work on decision-making (the study of our fundamental ability to process multiple alternatives and to choose the optimal course of action) is fragmented rather than integrated (Sanfey 2007). The interdisciplinary science of 'neuroeconomics,' however, shows the most promise in modeling real-world decisions among multiple alternatives in complex social interactions, according to Sanfey. It combines the precision and mathematical models of Game Theory in economics with the findings of neuroscience and brain scanning.

Economic Game Theory takes as its assumption the classical fundamentals of economics (that individuals are rational maximizers of utility), but the real-world results of Game Theory experiments show that individuals are generally less selfish and less strategic than the model predicts and also temper their decisions with social factors such as reciprocity and equity (Camerer 2003). The fundamental experiments of Game Theory are actually quite simple (the Prisoners' Dilemma, the Ultimatum Game, the Trust Game, the Coordination Game, etc.), yet they elicit profound and useful conclusions from observations of the behavior of the participants.
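As one concrete instance of how simple these games are, the sketch below encodes a conventional Prisoner's Dilemma payoff matrix (the payoff numbers are standard textbook values, not from any study cited here) and shows why a purely rational utility maximizer is predicted to defect even though mutual cooperation pays both players more:

    # A standard Prisoner's Dilemma payoff matrix (conventional values).
    # payoff[(my_move, their_move)] -> (my_payoff, their_payoff)
    payoff = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    # Whatever the other player does, defecting pays more for me...
    for their_move in ("cooperate", "defect"):
        best = max(("cooperate", "defect"),
                   key=lambda me: payoff[(me, their_move)][0])
        print(f"If they {their_move}, my best reply is to {best}")   # always 'defect'

    # ...yet mutual cooperation (3, 3) beats mutual defection (1, 1), which is
    # why the cooperation actually observed in humans contradicts the selfish model.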

Neuroeconomics has already begun to focus on the broad research themes of (1) social reward, (2) competition, cooperation and coordination and (3) strategic reasoning.

(1) Social Reward

Researchers have discovered that in the human brain the ventral striatum (composed of the caudate, the putamen and the nucleus accumbens) appears to be centrally involved in social decisions. This is also called the 'mesolimbic dopamine system,' since dopamine release and its effects are central to all brain reward mechanisms. This is the basis of decision theory: maximizing reward or 'utility.' Cromwell and Schultz (2003) have shown that the neural responses of dopamine cells scale reliably with the magnitude of the reward. As a non-scientific example of this scaling, we all know that a good meal 'feels' good but it doesn't 'feel' as good as sexual intercourse. The amount of dopamine released in each of these examples is consistent with the differential feelings of these activities. (See Figure 1.)

[FIGURE 1 OMITTED]

Furthermore, O'Doherty (2004) and Knutson and Cooper (2005) have shown that activity changes in the striatum scale directly with the magnitude of monetary reward or punishment. More important for ethics, the human striatum encodes the value of a social partner's decision to reciprocate or not reciprocate cooperation, regardless of the magnitude of the monetary reward. Reciprocated cooperation increases activity in the striatum and eventually leads to the build-up of trust. On the other hand, non-reciprocated cooperation leads to decreased activity in the striatum (Rilling et al. 2002). In addition, increased cooperation in subsequent rounds of an economic game proportionately increases the activity in the striatum. This supports the hypothesis that the striatum (especially the caudate) registers social prediction errors to guide decisions about reciprocity (King-Casas et al. 2005). The importance of the striatum response in social decisions is further supported by the fact that in experiments where the player is given prior information in the form of general personality profiles (we will later see that in biological anthropology terms we may call this 'gossip'), caudate activity was reduced in response to both positive and negative moral profiles and unchanged for neutral profiles (Delgado et al. 2005).

Finally, studies show that social reward exists in the form of dopamine release for both positive actions (mutual cooperation) and negative actions (costly punishment of cheaters). The caudate was activated in punishment, even when punishing cost the punisher points in the game, and was proportionately more activated when real money rather than symbolic money (points) was involved in the game (de Quervain et al. 2004).

As we will discuss in more detail below, the hallmarks of ethics are empathy (de Waal 2005, de Waal 2007/2008), cooperation (Nowak 2006), reciprocity (Dawkins 1989, Sanfey 2007), altruism (Dugatkin 2006), and costly punishment (Gurek et al. 2006, Hauert et al. 2006). We discussed above how the striatum is activated in both social cooperation and punishment. Further, understanding the brain when the individual is performing altruistic acts is also critical to our study of ethics. In fact, the caudate is activated both by the giving of money to charitable causes and by the receiving of it (Moll et al. 2006). Another study showed that the caudate was activated when observing another person donate to charity and also showed that, among givers, freely given donations activated the caudate more than donations forced by the experimenters (Harbaugh et al. 2007).

Since the dopamine limbic system is fundamentally important to goal-directed performance and the creation of habits, Wickens et al. (2007) reviewed the current research and reported findings with important implications for ethical training. In the early stages of learning, dopamine plays an essential role, and the corresponding brain area in the loop is the neostriatum, which is the primary recipient of glutamatergic input from almost all the cortical regions and also the primary recipient of the major dopaminergic input from the mid-brain dopamine neurons involved in reward processing. Thus, information processing in this circuit is controlled by dopamine. The dopamine creates synaptic plasticity in the corticostriatal pathway and also changes the neostriatal neurons. It then influences the choice of actions by the basal ganglia. This mechanism is so important that mice given a dopamine antagonist will not approach food even when hungry, but will approach despite the antagonist after strongly conditioned stimulus/response training (Choi et al. 2005).

However, as an activity becomes a habit, the role of dopamine in motivating the activity diminishes. In fact, the action becomes dopamine-independent with extended training. This means that well-established (habitual) behavior is no longer outcome-mediated (Wickens et al. 2007). Neurologically, the control of the habit moves from the striatum to the basal ganglia (the brain area of animal instinct and human habit). This suggests, according to Wickens et al. (2007), that during low dopaminergic transmission it would be difficult to initiate voluntary actions to obtain particular outcomes, whereas during high dopaminergic states it would be difficult to inhibit inappropriate responses. However, training goal-directed actions so that they become habits can render their performance much less dopamine-dependent.

The implications for ethical training should be obvious. In early training, a clear perception of rewards, positive feedback and positive reinforcement will help form the habit, making it, if possible, dopamine-independent. In the lab, rats can be hyper-sensitized to dopamine with amphetamines, thereby accelerating habit formation. Unfortunately, 'the brain on money' can create huge dopaminergic responses that can overwhelm the best of ethical habit formation. The solution is to so firmly embed the ethical habits that they become rote. While there are clear scientific limits to using the results of animal experiments to draw conclusions about human motivations, fMRI scans of human subjects engaged in economic exchanges show similar activity in the ventral striatum that scales with the amount of the reward (Cromwell and Schultz 2003).

The reward value of money presents a special challenge to creating ethical habits. A recent study of how the brain translates money into a motivational force (Pessiglione et al. 2007) used monetary rewards to motivate subjects to squeeze a hand grip. In addition, the experimenters measured skin conductance and imaged the reward centers of the brain with fMRI. The striking result of this study is that even when the monetary reward (an English pound vs. a penny) was flashed so quickly that the subjects' perception of it was subliminal, the results were the same as when the subjects could consciously perceive the reward. Activation was seen in the reward and motivational structures of the brain, including the ventral striatum, ventral pallidum and amygdala. Further, according to Pessiglione, the results were not just a behavioral stimulus/response mechanism, as that is localized in the basal ganglia, which was not activated in these studies.

The implications of all these studies for ethical decision-making are important and straightforward. Both social rewards (read 'ethics') and monetary rewards activate the same reward circuits in the brain, so they compete in the decision-making process. Even more importantly, the activation of the brain and the force exerted by the subjects were directly proportional to the amount of the monetary reward, whether it was subliminal or conscious!

(2) Competition, Cooperation and Coordination

The areas associated with emotional processing play the predominant role in cooperation and fairness. These are the striatum, of course, but also the ventromedial prefrontal cortex ('VMPC'), the orbitofrontal cortex and anterior cingulate cortex, as well as the insula. These areas react negatively to both inequitable and unreciprocated actions in games (Pillutla and Murnighan 1996). It is likely that they evolved to encourage reciprocal cooperation, to make reputation important in the social group and to encourage the punishment of cheaters (Nowak et al. 2000). These reactions to inequity are even seen in capuchin monkeys (Brosnan and de Waal 2003). The anterior insula seems to be particularly important to judgments of inequity, as its activation has been shown to be directly proportional to the magnitude of the unfairness (Sanfey et al. 2003). The anterior insula is also associated with physically painful and disgusting stimuli and with mapping visceral sensations of autonomic arousal. Its evolutionary role may be creating a sense of disgust at inequity and therefore future distrust of the unfair player (Sanfey 2007).

The dorsolateral prefrontal cortex also plays a role in the value coding of unfair offers. This is evidenced by the fact that transcranial magnetic inactivation of this region caused subjects to accept unfair offers, compared to the control group (Knoch et al. 2006). Finally, in this area, it is clear that the neurotransmitter oxytocin enhances feelings of trust in the brain. In one trust experiment involving investments, oxytocin was administered intranasally and led to increased trust by investors (Kosfeld et al. 2005).

(3) Strategic Thinking: Theory of Mind

An important aspect of social decision-making (and therefore ethics) is the processing of the actions and intentions of others. This capacity is collectively known as 'Theory of Mind' ('ToM'). ToM is crucial to strategy and response in social interactions but also to empathic responses, one of the hallmarks of ethics. The areas associated with ToM are primarily the medial prefrontal cortex and the anterior paracingulate cortex (Frith 2003, Gallagher & Frith 2003). Autistic individuals and some psychiatric patients exhibit severe ToM deficits, and this hampers their social abilities.

The Neuroscientific Basis of Decision Making Under Threat and Stress.

We are all well aware of the 'fight or flight' response of the human psyche and the autonomic/limbic system of the human body. However, scientists have been able, with virtual-reality projectors and fMRI scanners, to pinpoint the exact locations of the responses in the brain. Mobbs et al. (2007) set up subjects to perceive and feel both a distant and a proximate threat from an intelligent predator (a wolf). Under the distant threat, part of the prefrontal cortex and the lateral amygdala responded. The authors interpret this as the brain trying to come up with a strategy to evade the distant predator. On the other hand, when the virtual predator was up close ('proximate'), the logical brain was completely shut down and the only activity was in the central amygdala and the basal ganglia. These are areas of unconscious mental processing.

The implications of this are significant for making ethical decisions. Since most business ethics decisions are made under stress, the stress triggers the 'fight or flight' response, which turns off the logical, conscious brain. In business ethics classes, we use the analogy that the IQ of a person under stress drops 20 points. Since IQ scores are normalized around 100 and the 'mentally challenged' have IQs of 70 or below, we say that the effect of making a decision under stress is like making a decision in a childhood state.

Are Ethical Decisions Rational, Emotional or Unconscious?

Ethical Decisions Can Be Rational, Emotional and/or Unconscious

Hauser (2006), admittedly taking an idea from the MIT linguist Noam Chomsky, states decisively that humans have a moral instinct that is designed to generate rapid judgments about moral matters. It includes some universal principles such as:

* Killing is wrong.

* Helping is good.

* Breaking promises is bad.

* Gratuitously harming someone is evil.

Part of this moral instinct was designed by Darwinian natural selection, and parts evolved over eons of living in groups. The fundamental 'grammar' of it is inborn in the child, but social convention and learning give it specific parameters relevant to a particular moral system. Thus, his view is pluralistic.

The moral system creates expectations of norms of behavior and consequences, according to Hauser, such as obligations, promises and commitments. When an expectation is met, we react positively and these positive emotions are rewarding and reinforcing. When expectations are violated, negative emotions are the reaction. These are aversive and can cause us to shun or punish the violator.

The neurobiological correlates of these behaviors have received much scientific attention recently. From the standpoint of Biological Anthropology, ethics can be seen as encompassing Empathy, Cooperation, Reciprocity, Altruism (altruism here is defined in the biological sense, that is, giving up some of your individual desires/fitness to receive the benefits of living in a group) and Punishment of Cheaters, even when it is costly to the punisher. Each of these requires the ability of the individual to control her selfish impulses and to delay gratification. Impulse control is the province of the prefrontal cortex's interaction with and control of the amygdala (especially the ventromedial and orbitofrontal regions of the prefrontal cortex). Unfortunately, children, adolescents and those with brain damage to this neurological area find it impossible or very difficult to control their impulses.

Classic delay-of-gratification experiments offer marshmallows to children or money to adolescents. The subjects are told that they may have the reward now or, when the researcher returns in a while (time not specified), a larger reward. The consistent finding across cultures and economic classes (Hauser 2006) is that children under the age of four have little or no patience and choose the immediate reward. However, those pre-schoolers who wait even a few seconds longer before taking the current reward grow up to be more successful adolescents and adults. On the other hand, adolescents who take the immediate monetary reward are more likely to end up as youthful cheaters and juvenile delinquents and, as adults, as job-losers and abusers of their partners in romantic relationships.

Although the generalizability of these tests on children and adolescents to adults can be criticized, Ladouceur et al. (2000) have shown that individuals with higher anxiety cannot tolerate uncertainty and therefore usually rush to sub-optimal decisions. These individuals are also more likely than the general population to become addicted to something to alleviate their anxiety (Paulus 2007).

In order to test his ideas concerning innate moral universals, Hauser created a web-based questionnaire called the 'Harvard Moral Sense Test' (www.moral.wjh.harvard.edu). Hauser (2006) reports that during the first year, 60,000 people from 120 countries and all walks of life took the test. The vast majority of those participating agreed on what was moral and immoral concerning certain forms of harm. However, less than 10% were able to articulate a correct rational justification for their opinions.

In subsequent papers (Cushman, Young & Hauser 2006, Hauser, Cushman, Young, Jin & Mikhail 2007), Hauser and his colleagues analyze the purported rational justifications of the participants in the Harvard Moral Sense Test. The results show that some moral principles are intuitive and generally not available to consciousness, while others are. Thus, a two-system model of moral reasoning is justified, according to Hauser, wherein some moral principles are available to conscious reflection (permitting but not guaranteeing that the individual will use conscious cognition), whereas the intuitionist model characterizes other forms of moral decisions.

Jonathan Haidt (2007) agrees with this two-system model and has constructed a dualistic model of moral decision-making, which he terms the 'Social Intuitionist Model' (Haidt 2001). Further, Haidt shows that the multi-disciplinary field of moral research is converging on three shared principles:

1. The importance of moral intuitions

2. The socially functional (rather than truth-seeking) nature of moral thinking

3. The co-evolution of moral minds with cultural practices and institutions that create diverse moral communities.

Zajonc (1980) catalyzed the 'affective revolution' that began in the 1980s and ran in parallel to the 'cognitive revolution' that dominated psychology in the 1960s and 1970s. Zajonc showed that the human mind has both an ancient, automatic, very fast and optimal affective system that constantly and unconsciously evaluates its environment and a newer, slower and motivationally weaker cognitive system, which is preceded, permeated and influenced by the affective system. In moral matters, this affective system has primacy, but it is not deterministic. That is, the basis of morality is emotional (Trivers 1971, Hauser 2006) and is evidenced in behaviors we share with other mammals, such as kin-selection (Chapais & Berman 2004), sympathy for suffering (Miller 2006), and anger at and punishment of non-reciprocators and free-riders (Hauser et al. 2007).

According to Bargh & Chartrand (1999), the most functional distinction in moral psychology is not between affect and cognition, but between two different kinds of cognition: moral intuition and moral reasoning. Moral intuition is a fast, automatic, emotion-laden process in which a moral decision appears in consciousness without any iterative reasoning. Moral reasoning, on the other hand, is a conscious weighing of information in order to reach a moral judgment or decision. Moral reasoning, like many other types of reasoning, is often post hoc, searching for conceptual and verbal evidence to support our intuitive reaction (Haidt 2007, Hauser 2006).

Neurological studies and brain imaging confirm the importance of the medial prefrontal cortex, including the ventromedial prefrontal cortex and the medial frontal gyrus (Greene et al. 2001, Greene & Haidt 2002), which integrate affect (especially expectations of reward and punishment) into decisions. Also, the amygdala and the frontal insula (Luo et al. 2006, Sanfey et al. 2003) figure strongly in intuitive decisions. However, this affective impulse is not deterministic. We can override these intuitions with rational, discursive thinking. In that case, the prefrontal cortex, the area most involved in impulse control, which generates social emotions by integrating signals from the amygdala and other brain structures, plays the major role. Indeed, Koenigs et al. (2007) have shown that patients with damage to the ventromedial prefrontal cortex make inordinately unsocial, highly utilitarian moral judgments.

In practice, there are three main ways that we override our initial intuitions (Haidt 2007):

1. We can use conscious verbal reasoning, as in considering the costs and benefits of various courses of action

2. We can reframe the situation, thereby triggering a new moral intuition

3. We can talk to people whom we trust, who can raise new framing perspectives or new arguments, thereby triggering new moral intuitions.

Haidt reports that the first two are rarely used and that most moral change of thought and action comes about through social interaction.

William James said that 'thinking is for doing.' As a paraphrase of this, we may say that 'moral intuition is for social doing.' Our moral sense is not a truth-seeker, but a practical faculty that looks for 'what works' in the social groups that we frequent. Dunbar (2004) makes the case that language evolved for bonding social groups, as evidenced by the fact that two-thirds of human conversations are about social topics. He calls this talk, generically, 'gossip.' Haidt (2007) states three rules for surviving in gossip-filled social groups:

1. Be careful what you do.

2. Since what you do matters less than what people think you do, you'd better be good at framing your actions.

3. Given rule #2, be prepared for other people's attempts to deceive and manipulate you. That is, in a moral society it is not necessary to be moral, but only to be good at deceiving others into thinking that you are.

Of course, one benefit of being in a gossip group is that you can benefit from 'indirect reciprocity' (Nowak & Sigmund 2005), finding out someone's reputation for free-riding or cheating without actually having to be 'burnt' by them. Panchanathan & Boyd (2004) show that in mathematical models the free-rider problem, which doomed earlier simulations of the evolution of altruism, can be solved by the introduction of indirect reciprocity.
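A toy simulation can illustrate the logic, though it is only a cartoon of the idea and not Panchanathan and Boyd's actual evolutionary model; all parameter values here are arbitrary:

    # Toy indirect reciprocity: 'discriminators' help only partners in good
    # standing; 'free riders' never help.  Helping costs the helper C and
    # gives the recipient B.  Illustrative only, not the published model.
    import random

    B, C, ROUNDS = 3, 1, 10_000
    agents = ([{"type": "discriminator", "score": 0, "good": True} for _ in range(10)]
              + [{"type": "free_rider", "score": 0, "good": True} for _ in range(10)])

    for _ in range(ROUNDS):
        donor, recipient = random.sample(agents, 2)
        helps = donor["type"] == "discriminator" and recipient["good"]
        if helps:
            donor["score"] -= C
            recipient["score"] += B
        # Reputation ('gossip'): refusing a partner in good standing marks
        # you as bad, so free riders are quickly found out, without anyone
        # having to be 'burnt' twice by the same cheater.
        donor["good"] = helps or not recipient["good"]

    for kind in ("discriminator", "free_rider"):
        scores = [a["score"] for a in agents if a["type"] == kind]
        print(kind, sum(scores) / len(scores))   # discriminators end up well ahead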

Beyond the mammalian instincts of kin-selection, cooperation and reciprocal altruism, humans have evolved other behaviors that promote group solidarity by creating moral communities through culture and religion. The research of Haidt and Graham (2007) has delineated five psychological foundations, each with separate evolutionary origins, upon which human cultures construct their moral communities:

1. Prohibition against personal harm and promotion of care and altruism.

2. Promotion of fairness, reciprocity and justice

3. Ingroup-Outgroup dynamics and the importance of loyalty to the group

4. Importance of respect and obedience for authority.

5. Importance of bodily and spiritual purity and of living in a sanctified rather than a carnal way.

These are all social intuitions, and thus they surface in moral decisions from an unconscious, affective impulse rather than through verbal cognitive reasoning.

What Makes Us Want to be Good?

Pinker (2008) points out that morality is close to our conception of the meaning of life and carries so much weight in all human societies that it is "bigger than any of us and outside all of us." Like Hauser and Haidt, he also delineates a difference between human moral intuition and human rational ethical reasoning. The two hallmarks of moral intuition are that the rules it invokes are felt to be universal and that people feel those who commit immoral acts (that is, break the rules) deserve to be punished and, even further, need to be punished.

However, an interesting twist in Pinker's discussion of the rational moral faculty is his observation that rational moralization is a separate faculty of the brain, a different psychological mind-set that we can turn on and off, so there is variability on the rational moralizing side. For example, moral vegetarians, as opposed to health vegetarians, consider eating meat immoral in that they refuse to be complicit in the suffering of animals. Lately, smoking has been moralized. On the other hand, divorce, having illegitimate children, marijuana use and homosexuality have been removed from the list of moral failings in America and redefined as lifestyle choices.

Pinker points out, as others have done, that we have both an innate moral intuition and a rational moral cognitive faculty. Nowhere is this more evident than in the thought experiment that he affectionately calls 'trolleyology.' This is the Trolley Problem, extensively tested on subjects by Hauser and Mikhail (Hauser 2006). In short, you see a runaway trolley heading toward five workers on the tracks; you can throw a switch that will divert the trolley to an alternative track, where it will kill one man instead of the five. Hauser and Mikhail found that almost universally, everyone said it was moral to throw the switch. (By everyone we mean the more than 200,000 people from 100 countries and of all religions who took the Harvard Moral Sense Test online.) Alternatively, if you are on a bridge above the tracks and have the opportunity to push a fat man off in front of the trolley to save the five men, almost everyone said this was not moral, even though from a rational, utilitarian basis it is indistinguishable from throwing the switch. Also, when fMRI was used on subjects contemplating these dilemmas, different brain areas were activated in the two cases. (See Figure 2.)

[FIGURE 2 OMITTED]

Further, Koenigs, Young, Hauser et al. (2007) exposed subjects who had physical damage ('lesions') to the ventromedial prefrontal cortex ('VMPC', an area important for emotions, and in particular social emotions) to the 'Trolley Problem' and similar dilemmas, such as the well-known case of smothering your crying baby so the Nazis don't find the twenty people hiding with you. These subjects overwhelmingly chose a strictly utilitarian solution to all the problems; that is, 'push the fat man off' and 'smother the baby.' This is a clear confirmation of the importance of emotions in moral decisions, as these same subjects with VMPC lesions have diminished emotional responsiveness and greatly reduced social emotions. As further support for the innate nature of morality, in studies of identical twins raised separately (the gold standard for separating nature from nurture), adults who are diagnosed with 'antisocial personality disorder' or 'psychopathy' show psychopathic behavior from their earliest childhoods, torturing animals, bullying, lying and evidencing a complete lack of empathy or remorse (Pinker 2008).

Cognition and Emotion

The most successful theoretical model of the brain we have seen thus far (that is, one that integrates and explains the seemingly contradictory moral judgments we have heretofore discussed) is that proposed by Jonathan Cohen (Cohen 2005). Borrowing a term first used by Minsky (1986), Cohen posits that although the parts of the human brain are all interconnected and work together, the brain is best thought of not as a homogenous unit but as a 'society of minds,' with each part allocated a specific task or tasks. Thus there is competition between the faster, unconscious 'Emotional Brain,' whose neurological correlates are the ventral striatum, the brain stem and the amygdala, and the slower, deliberate 'Conscious Brain.' This latter system enables the person to consider and act on abstract goals and principles and has the capability of impulse control (with emphasis on the word 'capability'), but, as we will find, it thinks in words, can only focus on and compare two items at one time, and is located in the prefrontal cortex.

Further, the decision-making, moral psychology and economic literatures group the 'society of minds' into two general mechanisms. System 1 handles automatic processing and decisions; it works quickly and produces intuitive solutions to problems. System 2 corresponds to the conscious and controlled processes of our mind: rational thought, logic and rumination. It monitors the correctness of the System 1 answers and sometimes overrides them. In this System 2 mechanism resides what philosophers call 'free will,' and it resides in the most recently evolved sections of the brain, namely the neocortex and the prefrontal cortex, the seat of abstract ideas and actions based upon abstract ideas, impulse control over the amygdala, and transcendence.

Cohen contends that the seemingly 'irrational' behavior or decisions of individuals are explained by the outcome of the competition between these two systems, each presenting solutions to the problem at hand; unfortunately, the outcome is not always optimal. If we accept the standard assumption of economic theory (that people make decisions that maximize their utility), which is a central concept of modern decision theory, then we must posit that people always act rationally in accord with their long-term goals. Clearly they do not, and a crucial question in decision theory is, "Why not?" (By the way, Herbert Simon and Daniel Kahneman each won Nobel Prizes in Economics for demonstrating the irrationality or 'suboptimality' of human decisions.)

The answer, according to Cohen, is that people seek to optimize their utility subject to constraints. These constraints include limited information, specific existential circumstances, limited ability to learn from mistakes, limited ability to focus on the problem, limited ability to control one's own behavior, selfish vs. altruistic orientation, etc. We will discuss virtually all of these constraints below, as they are impediments to ethical decision making. As a result, our neural decision-making systems produce decisions that are locally optimal but not universally optimal. This is also the solution found by the process of evolution, which produces locally optimal but not necessarily universally optimal adaptive mechanisms. The extinction of the dinosaurs is an elegant demonstration of this principle. In the case of hominins, the environment that encouraged the development of the human brain hundreds of thousands of years ago has changed radically in the last 300 years, and some of the decision systems (as well as biological systems) that we still possess are mal-adapted to our current circumstances. A good example of a maladapted biological system is our body's stress response. It works for acute emergencies, but the allostatic load that modern life's stressors put on our 'fight-or-flight' system causes heart attacks, Type II diabetes, immune system deficiencies, reproductive problems and a host of other ailments (Sapolsky 2004).
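The 'locally but not universally optimal' point can be made concrete with a toy example: an optimizer that only ever takes improving steps (a crude stand-in for both neural decision systems and natural selection) stalls on the nearest peak of its utility landscape even when a higher peak exists:

    # Greedy hill-climbing stalls at a local optimum: a toy analogue of
    # decision systems (and evolution) that only take improving steps.
    def utility(x):
        # Two peaks: a local one at x = 2 (height 4), a global one at x = 8 (height 9).
        return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

    x, step = 0.0, 0.1
    while utility(x + step) > utility(x):   # accept only improvements
        x += step
    print(round(x, 1), round(utility(x), 1))   # 2.0 4.0: stuck well below the peak of 9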

This is not to say that the rational brain is optimal either. We will see later in this paper that the conscious, 'rational' brain also has severe constraints: it thinks in words, it can only focus on one problem at a time, and it can compare only two competing solutions at one time (Koechlin and Hyafil 2007). These limitations, we believe, are the cause of the artificial creation by philosophers, from Aristotle to the present day, of the 'ethical dilemmas' so popular in moral philosophical thought. (More on that later in this article.)

System 1 and System 2 have specific localized regions that perform their functions in the architecture of the human brain. Reactions to rewarding events, or to the anticipation of them, are localized in the brainstem (which releases the neurotransmitter dopamine) and the striatum (which reacts to the receipt of the released dopamine). These, by the way, are the sites affected by drugs of abuse (Paulus 2007).

Other subcortical structures respond to valenced events, that is, positive and negative utility. These include the medial prefrontal cortex, the orbitofrontal cortex, the insular cortex, the amygdala and the striatum. All of these structures have neuronal connections to areas associated with higher cognitive processing, primarily the anterior prefrontal cortex, the dorsolateral prefrontal cortex and the temporal lobe. The prefrontal cortex comprises one-third of the volume of the neocortex and is the area that has expanded most over our primate relatives in evolution. (See Figure 3.)

[FIGURE 3 OMITTED]

A Society of Minds

Greene et al. (2001) provide strong evidence that different brain areas are involved in the two different 'trolleyology' moral challenges. Using fMRI, these authors showed that in the switch-throwing scenario the dorsolateral prefrontal cortex was the most active. This area has been consistently shown to be involved in working memory, abstract reasoning and problem solving, but not emotional processing; it also handles non-moral rational problem solving. Further, Greene et al. (2004) show that in moral reasoning tasks, activity in the prefrontal cortex precedes and is directly associated with utilitarian moral judgments. This conclusion is also confirmed by recent fMRI studies of patients with damage to the ventromedial prefrontal cortex (VMPC) by Koenigs et al. (2007). The VMPC has been shown to be the seat of the generation of emotions, particularly social emotions. The six patients studied were given the 'trolleyology' test, both scenarios, and 20 other moral and non-moral conflicts. The control group consisted of normal subjects and brain-damaged subjects without VMPC lesions. The VMPC patients were overwhelmingly utilitarian ('push the fat man,' 'smother the baby') in high-conflict scenarios that pitted aggregate welfare against highly emotionally aversive personal behaviors (e.g., the Fat Man/Trolley scenario). However, the VMPC subjects matched the control groups in general intelligence, logical reasoning and declarative knowledge of social and moral norms.

On the other hand, in the control group, in the up-close-and-personal Fat Man scenario and other personal moral dilemmas, the medial prefrontal cortex, the area shown in many studies to be associated with emotional processing, was the most active. Cohen (2005) speculates that this aversion to doing personal harm to an up-close individual may have evolved to allow peaceful aggregation into stable social structures ('cooperation'), whereas the evolution of our moral apparatus did not anticipate today's modern world, where we can pull a trigger, push a button or throw a switch that causes harm over a long distance. Later, we will discuss not just the importance of proximity but that of perception in moral decisions. Suffice it here to point out that it has long been a strategy of military training to demonize, dehumanize and depersonalize the enemy.

Young and Koenigs (2007) performed a comprehensive meta-analysis of the recent scientific studies on the importance of emotion in moral cognition. They reviewed recent fMRI studies of moral reasoning, psychological studies of moral judgments, clinical studies of psychopaths and sociopaths, and studies of patients with damage to areas of the brain associated with emotional processing. They conclude that emotion plays an integral role in moral cognition. The studies show a consistent association between areas of the brain involved in emotional processing and certain types of moral reasoning, and the clinical findings show that individuals who exhibit abnormal emotional processing also exhibit abnormal moral judgment.

These authors also find in the studies a clear distinction between 'up-close and personal' harm and distant harm (see 'trolleyology' above). The studies also support the 'society of minds' hypothesis in that subjects who make the most utilitarian responses (the 'ends' vs. the 'means') are shown in the fMRI studies to be overriding the emotional decision of their ventromedial prefrontal cortex (VMPC) with activation of their anterior cingulate and dorsolateral prefrontal cortex, brain areas associated with cognitive conflict and abstract reasoning. (An often-used scenario that generates this emotional vs. utilitarian conflict is the 'crying baby and the Nazis' scenario, familiar to any student of ethical issues.) In other words, the subjects who make utilitarian judgments in moral dilemmas are shown by fMRI to be overriding their automatic emotional aversion to the harmful act by engaging in rational utilitarian reasoning, a process revealed by fMRI as a conflict between 'cognitive' and emotional processes.

This conclusion is supported by multiple studies of patients who have damage ('lesions') to the areas principally involved in emotional processing, collectively known as the ventromedial prefrontal cortex ('VMPC'). Patients with damage to this area are overwhelmingly utilitarian in their moral judgments in the case studies presented to them ('trolleyology,' 'crying baby and the Nazis' and many others).

As a matter of current interest related to brain injuries, the RAND Corporation recently estimated that approximately 900 U.S. veterans of Afghanistan and Iraq have serious Traumatic Brain Injuries ('TBI'). Even more tragic, however, is the RAND Corporation's estimate that over 300,000 U.S. soldiers returning from these conflicts have lesser versions of TBI that are affecting their cognition either somewhat or seriously (Bergner 2008).

Finally, Young and Koenigs remind us that psychological and neurological studies of moral reasoning are descriptive and not normative. That is, science cannot tell us what is moral and what is not. This is the domain of philosophers such as Kant, Hume, Bentham and Mill.

We agree with Young and Koenigs on this point. Even though we feel that neuroscience, neuroeconomics and neuroethics give us deeper insight into ethical decision-making, we also do not accept neurological determinism. We do believe that human beings who are not clinically insane do have free will.

As we discussed above, from the viewpoint of Biological Anthropology (formerly termed 'Sociobiology'), ethics is characterized by five hallmark behaviors:

* Empathy

* Cooperation

* Reciprocity/Fairness

* Altruism (Reciprocal and Non-reciprocal)

* Costly Punishment of Free-Riders/Cheaters.

In summary, these five hallmark behaviors, their psychological correlates and their brain or neurological correlates are listed in the following table.

(See Figure 4 for their Neurological Correlates.)

[FIGURE 4 OMITTED]

Figure 4. The Five Hallmark Behaviors of Ethics

1) Empathy

Empathy is the beginning of ethics and is an essential capacity if we are to behave ethically. We must be able to feel for another person, or imagine what they must be feeling, in order to take them into account (this capacity is called 'Theory of Mind' or 'ToM'). Needless to say, psychopaths find this impossible, and autistic individuals also find it very difficult. 'Mirror neurons' are what make us capable of empathy, but they also perform a great number of other important and related functions.

Mirror neurons allow us to mimic the actions of others, thereby learning from our parents and peers. They actually fire in an identical manner when we are observing someone else's actions as when we are performing the actions ourselves (Dinstein et al. 2007).

Further, they allow us to deduce the intentions of others from social, facial and physical cues (Nakahara et al. 2005, Blakeslee 2006, Miller 2005). This 'Theory of Mind' is radically different from simply remembering what another person has done, which is 'episodic memory.' Rosenbaum et al. (2007) have shown that people with severe impairment of their episodic memory perform indistinguishably from healthy controls on objective Theory of Mind tests.

Also, the mirror neurons are credited with allowing the evolution of song in songbirds (Miller 2008) and speech in humans (Holden 2004).

However, for the purposes of this paper, the most salient capability of mirror neurons is to allow us to have empathy, without which ethics would be impossible. The mirror neurons are located in the medial prefrontal cortex ('MPFC'), but, amazingly, they are segregated: people use one region of the MPFC to consider the mental state of someone they perceive as similar to themselves and a completely different area of the MPFC to consider someone perceived as dissimilar (Miller 2006). This is obviously an evolutionarily adaptive trait, as it is a matter of life or death to distinguish friend from foe; however, it is also an interesting area for future exploration, as this mechanism likely plays a role in prejudice.

These mirror neurons fire when we see another in pain. Experiments in which subjects were administered electric shocks in front of their spouses elicited the firing of the spouses' mirror neurons. Further, all social animals have these mirror neurons and numerous experiments have shown that animals feel empathy. Frans de Waal of Emory University, one of the world's preeminent biological anthropologists, summarizes these experiments in his recent book and article (de Waal, 2005. de Waal 2007/2008).

2) Cooperation

The emergence of cooperation via natural selection is nothing short of a miracle. This is because cooperation has a cost to it and therefore reduces the 'fitness' of the cooperator, while 'cheating,' or more specifically 'free-riding,' has no associated cost. Nowak (2006) summarizes the evolutionary rules for cooperation. A cooperator pays a cost and deals out a benefit to another individual. A defector does not pay a cost and does not deal out a benefit. Therefore, in any random population, defectors have a higher average fitness than cooperators, and thus natural selection acts to increase the number of defectors in a mixed population. Over time (given the laws of evolution), this would populate the entire world with defectors and eliminate the cooperators.
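
This dynamic is easy to see in a toy simulation. The following minimal Python sketch is our own illustration (the benefit, cost and baseline-fitness values are hypothetical, not taken from Nowak); it applies a standard replicator update to a well-mixed population and shows the cooperator share collapsing:

```python
# Minimal replicator-dynamics sketch of Nowak's point. Cooperators pay
# a cost c to deliver a benefit b to a random partner; defectors pay
# nothing. All parameter values are illustrative only.

b, c = 3.0, 1.0      # benefit delivered, cost paid (hypothetical)
base = 2.0           # baseline fitness, keeps fitness values positive
x = 0.9              # initial fraction of cooperators

for generation in range(100):
    fit_coop = base + b * x - c    # meets a cooperator with prob. x, pays c
    fit_defect = base + b * x      # same benefit received, no cost paid
    mean_fit = x * fit_coop + (1 - x) * fit_defect
    x = x * fit_coop / mean_fit    # types grow in proportion to fitness

print(f"Cooperator share after 100 generations: {x:.4f}")  # near zero
```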

However, the payoff in terms of fitness to a group of cooperators is the highest, while the payoff to a group of defectors is the lowest; thus, there is some incentive to cooperate. We therefore have to ask the question, "How did cooperation evolve?"

Nowak (2006) sets forth the five mechanisms by which cooperation evolved via natural selection. These are:

1. Kin selection (according to Hamilton's Law)

2. Direct Reciprocity

3. Indirect Reciprocity

4. Network Reciprocity

5. Group Selection

Each of these mechanisms works to identify and shun defectors, thereby establishing groups of cooperators (or more specifically, cooperators and reciprocators), who will over time outpopulate the cheaters.
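
The first of these mechanisms can be stated precisely: Hamilton's rule holds that an altruistic act is favored by selection when r x b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient and c the cost to the actor. The short Python sketch below is ours; the benefit and cost values are hypothetical, while the relatedness coefficients are the standard ones:

```python
# Hamilton's rule for kin selection: altruism is favored when r*b > c.
# Relatedness coefficients are the standard values; benefit and cost
# are hypothetical numbers chosen for illustration.

RELATEDNESS = {
    "identical twin": 1.0,
    "full sibling": 0.5,
    "half sibling": 0.25,
    "first cousin": 0.125,
    "stranger": 0.0,
}

def altruism_favored(r, benefit, cost):
    """Hamilton's rule: help when relatedness-weighted benefit exceeds cost."""
    return r * benefit > cost

benefit, cost = 5.0, 1.0
for kin, r in RELATEDNESS.items():
    print(f"{kin:15s} r = {r:5.3f}  favored = {altruism_favored(r, benefit, cost)}")
```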

The neural correlates of cooperation are the 'spindle neurons,' also known as 'Von Economo neurons' after their discoverer. (Balter 2006) These neurons exist in certain of the great apes (chimpanzees and gorillas) and in humans. Recently, it has been discovered that they also exist in the large-brained whales, who evolved these neurons between 22 million and 30 million years ago, 7 million years before the great apes evolved them. (Balter 2006)

The exclusive homes of the spindle neurons are the anterior cingulate cortex and the frontoinsular cortex. These spindle neurons, or 'VENs' (for 'Von Economo' neurons), have high-speed signal transmission rates and are essential to uniquely human aspects of social behavior.

They enable us to unconsciously and rapidly make social judgments that allow us to navigate in our group, tribe, pack or social surroundings. As further proof of this hypothesis, Seeley et al. (2006) found that Frontotemporal Dementia, a type of neurodegenerative disease, destroys these VENs and concomitantly destroys all the human social behavior of the patient. Seeley points out that this disease differs from Alzheimer's, in which the neuronal loss does not destroy the VENs.

3) & 5) Reciprocity and Costly Punishment of Free-Riders/Cheaters

The Dorsolateral Prefrontal Cortex is the brain area that reacts to fairness and unfairness. It is the area that lights up under fMRI scans in Ultimatum Games when subjects reject unfair offers. Also, when it is disrupted by transcranial magnetic stimulation in subjects playing Ultimatum Games, the subjects complain that the offer is unfair but accept it anyway. (Knoch et al. 2006)
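
For readers unfamiliar with the game, the Python sketch below shows its bare structure. The 30% rejection threshold is a stylized stand-in of our own choosing for the fairness norm these studies describe, not a figure from the literature:

```python
# A bare-bones ultimatum game. The proposer offers a split of the pie;
# the responder rejects offers below a fairness threshold even though
# rejection leaves both players with nothing. The threshold value is a
# hypothetical stand-in for the behavior described above.

def ultimatum(pie, offer, rejection_threshold=0.3):
    """Return (proposer, responder) payoffs for one ultimatum game."""
    if offer < rejection_threshold * pie:
        return 0, 0               # unfair offer rejected: both get nothing
    return pie - offer, offer     # offer accepted: the split stands

print(ultimatum(100, 50))   # (50, 50): a fair split is accepted
print(ultimatum(100, 10))   # (0, 0): an insulting offer is rejected
```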

When we perceive an unfair situation, such as the non-reciprocation of an altruistic act or free-riding, we punish the cheater. Groups that punish or expel free-riders stabilize cooperative behavior and outperform groups that do not. (Gurerk et al. 2006) The brain area that manages this punishment is the same reward area that receives dopamine when we cooperate or perform altruistic acts. De Quervain et al. (2004) show that the ventral striatum gets a shot of dopamine when we punish cheaters ("Revenge is sweet.")

Mathematical models of mixed societies filled with cooperators, reciprocators and free-riders do not work, unfortunately. The groups break down, even in the face of costly punishment, if all the reciprocators do not punish equally. (Haidt 2007) However, Panchanathan and Boyd (2004) and Hauert et al. (2007) show that if free-riders are excluded by the group through indirect reciprocity (that is, 'gossip' about their reputations), or can opt out of the task even though they may be part of a group, the mathematical models work very well and create a cooperative society.
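
A stylized public-goods round makes the logic of costly punishment concrete. In the Python sketch below (our illustration; the multiplier, fine and punishment-cost values are hypothetical), the free-rider earns the most when no one punishes and the least once the other group members are willing to pay to fine him:

```python
# One round of a public-goods game with optional costly punishment.
# Parameter values are hypothetical, chosen only to illustrate how
# punishment flips the free-rider's incentive.

def payoffs(contributions, multiplier=1.6, fine=4.0, punish_cost=1.0,
            punishers=()):
    """Return each player's payoff for one public-goods round."""
    share = sum(contributions) * multiplier / len(contributions)
    result = []
    for i, c in enumerate(contributions):
        p = share - c
        if c == 0:                  # free-rider: fined by every punisher
            p -= fine * len(punishers)
        if i in punishers:          # punishing is itself costly
            p -= punish_cost * sum(1 for x in contributions if x == 0)
        result.append(p)
    return result

group = [10, 10, 10, 0]                      # three contributors, one free-rider
print(payoffs(group))                        # [2.0, 2.0, 2.0, 12.0]: cheating pays
print(payoffs(group, punishers=(0, 1, 2)))   # [1.0, 1.0, 1.0, 0.0]: it no longer does
```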

4) Altruism

Altruism is the giving of a benefit to another or to your group without receiving an immediate, corresponding benefit. It is different from cooperation, in that there is a time delay involved. In most social situations, we expect 'reciprocal altruism,' which means we expect the receivers of the benefit to pay us back at some future time. If they do not, they are 'free-riders' and we punish them, or institutions set up by society punish them. (Gurerk et al. 2006) However, there is also 'non-reciprocal altruism,' which does not expect a payback. We do this for our kin, according to 'Hamilton's Law,' which says that we are willing to give non-reciprocal altruism to an individual in direct relation to the number of genes we share with them (that is, the closeness of our kinship). (Dugatkin 2006)

The neural correlates of altruism are the mirror neurons, which perceive the similarity or kinship of the individual or group and the ventral striatum, which gives us a dopamine reward for doing the altruism. Those who engage in philanthropy receive the dopamine reward in their ventral striatum.

Most importantly, we can also mentally create 'virtual kinship' via our perceptual mechanisms. (Dugatkin 2006) This is the basis of all religions' claim that "All men are brothers." All the Holy books emphasize the ('virtual') kinship of mankind. The writings of the Buddha say that we are all brothers because we all suffer equally. The Old Testament says that we are all 'Children of Abraham.' The New Testament and the Koran say that we are 'Children of God.' This gives us all a virtual kinship and brings forth our unconscious 'kin-selection' and therefore altruistic feelings.

Impediments to Ethical Decision Making - Individual and Institutional

(1) Individual Impediments

In economic game theory, significant advances have been made in accurately predicting cooperative behavior in groups by examining the individual characteristics of the group members. (Kurzban and Houser 2005) We contend that there are both individual and institutional impediments that interfere with correct ethical decision-making. We further believe that by examining and understanding these impediments and counteracting them, we can significantly improve group cooperation and teamwork.

First, let us examine the most important individual impediments.

Status and Hierarchical Influence

A social comparison of self to community expectations is consistent with what is found in other primates. Primates are genetically predisposed to focus on status and social hierarchy. The underpinnings of this behavior come from our evolutionary past, and it is likewise evolutionarily adaptive in a group setting. In humans, this shows up in our obsession with the rich and powerful members of our society. It also is the unconscious basis of our desire to 'keep up with the Joneses.'

A fascinating experiment at Duke University Medical School code-named 'Monkeys Pay-Per-View' (Deaner, Khera & Platt 2005) shows that primates are hard-wired to pay close attention to high-status individuals. Although there have been voluminous field studies of status among monkey troops, this is the first experimental evidence showing that primates automatically discriminate among images of other monkeys based on social status.

A favorite treat for rhesus macaque monkeys is a slurp of sweet cherry juice. Males were given the choice of pressing a lever to get a reward of the juice or to view images on a computer screen, either of the face of the high-status monkey in their troop or of the rears of female monkeys. The authors report that despite the fact that the monkeys were purposely made thirsty before the experiment, they always gave up the cherry juice to view the faces of high-status monkeys. However, the same monkeys had to be bribed with juice payment to view the faces of low-status monkeys. The paper's authors strongly believe that similar mental processes are at work in human primates because we evolved in the same kinds of social conditions. As a matter of fact, Dunbar (2004) reports that human speech evolved in order to manipulate increasingly complex social orders, and anthropological field research shows that two-thirds of our conversations are 'gossip,' exploring the interpersonal relationships of individuals with power and status in our lives and in our work. Our unconsciously programmed interest in status and power creates jobs for the gossip columnists, paparazzi and tabloid writers who satisfy our thirst for the doings of the rich and famous.

Obedience to Authority: Milgram Experiments

As stated earlier, the effect of authority on the behavior of subordinates is nowhere better seen than in the classic 'Milgram Experiments' and their re-enactment in 2007 by ABC News Prime Time. Stanley Milgram first performed his experiments in the Department of Psychology at Yale University in 1961. He used an experimenter dressed in a white lab coat, who directed paid volunteer 'subjects' to administer electric shocks (these were fake) of up to 450 volts to another 'volunteer' (actually, an actor) each time the 'volunteer' got an answer wrong in an analogies test. The 'volunteer' (actor), who was in another room (the 'remote' version of the experiment), protested or screamed in pain more loudly with each increasing voltage, but the vast majority (65%) of the subjects continued to shock the victim all the way up to the maximum of 450 volts, at which time the experimenter halted the experiment. Usually the subjects hesitated at some point in the experiment and looked towards the experimenter, who always said, "You must continue with the experiment," or a variation of that phrase. The subjects then returned to shocking the victim. (Milgram 1974)

In an interesting permutation of the experiment, Milgram performed the same procedure with 40 adults but had the victim in the same room as the subject. The subject then was able to observe the victim ('proximity') or, further, was ordered by the researcher to hold the victim's hand down on the electric shock plate ('touch proximity'). Sixty percent of the subjects defied the experimenter in the 'proximity' version and 70% defied the experimenter in the 'touch-proximity' version. (Milgram 1974)

The American Psychological Association banned all Milgram-type experiments immediately thereafter on ethical grounds. However, ABC News Prime Time got permission to redo a slightly modified Milgram experiment at the University of Santa Barbara in 2007. The conditions were almost identical to the original 'remote' Milgram experiments, except that the 'voltage' stopped at 120 volts. The results in the new study were almost identical to Milgram's: two-thirds of male subjects and 70% of female subjects 'shocked' the victim up to the maximum 120 volts. Most subjects protested at some point but were ordered to continue by the experimenter. (ABC News, Prime Time, 2007)

In both the original Milgram experiments and the Prime Time re-enactment, exit interviews with the subjects elicited the common response that they were "just following orders" or that the 'volunteer' was there at his own volition and could have left at any time, both of which are a relinquishment of personal responsibility for their actions. The role of a follower is the role of a child, and when any group forms, the followers relinquish some or all of their personal responsibility for decision-making to the leader. (Peck 1983) The military is especially subject to this relinquishment of responsibility, and the tragedies of the My Lai massacre during the Vietnam War and the torture at Abu Ghraib Prison are testimonies to this danger. "We were just following orders" was the refrain in both instances. The Military Conduct Manual states that a soldier is free to disobey an illegal order, but in reality group cohesiveness and the rigid enforcement of the chain of command make it almost impossible for a lower-status soldier to exert personal responsibility in decision-making.

Attachment Style

Our attachment style is influenced, or more appropriately determined, by our relationship to our primary caregiver. Research has shown that three basic emotional attachment styles ('secure,' 'avoidant' and 'anxious') show up in both babies and young adults in the following proportions (Begley 2007):

1. Secure attachment style: 55% of babies and young adults

2. Avoidant attachment style: 25% of babies and young adults

3. Anxious attachment style: 20% of babies and young adults.

Mikulincer & Shaver (2005) and Mikulincer, Dolev & Shaver (2004) have shown repeatedly that the insecure attachment styles (avoidant or anxious) are associated with low levels of altruism and empathy for strangers and acquaintances. Such individuals are emotionally overwhelmed by the sight of distress and cannot feel empathy (Begley 2007). Hence, individuals with either of the insecure attachment styles are especially prejudiced against out-groups, strangers and those in distress.

However, in a series of experiments with young adults, Mikulincer, Shaver and their colleagues have shown that if young adults were 'primed,' either consciously or subliminally, with words such as 'love,' 'hug,' or 'support,' all three groups significantly increased their empathetic feelings (Mikulincer, Gillath et al. 2001; Mikulincer & Shaver 2005), their willingness to actually perform altruistic actions to help others (Mikulincer, Shaver et al. 2005) and their tolerance of out-groups, in the case of these studies Israelis vs. Arabs and Israeli Orthodox Jews vs. Reformed Jews. (Mikulincer & Shaver 2001)

Thus, attachment priming can be an extremely useful tool in an ethics training program to create empathy, tolerance and cooperation among the members of an institution or company. Further, Begley (2007) reports evidence that priming and meditation can not only change your attachment style and your perception of out-groups but also can actually create 'neuroplasticity,' that is, physically change your brain.

Time Stress and Pressure

Everyone knows that people are less willing to help other people when they are under time stress. A broken-down motorist will get virtually no help at rush hour but likely will on a Sunday afternoon. One of the classic social psychology experiments in this area is the 'Good Samaritan' experiment performed in 1973 on 67 seminary students at Princeton Theological Seminary. (Darley & Batson 1973) The seminarian volunteers in the experiment were variously prepared by the researchers to give a speech on the "jobs that seminary students would be good at" or a talk on the Parable of the Good Samaritan. Each group was given the relevant written material to read. Next, the seminarians were sent to an adjacent building to give the prepared speech; some were told they were late, some that they were expected immediately and some that they were early but might as well go over. On the way, each student encountered an actor moaning and clearly in distress. Only 10% of the high-hurry students offered help, but 45% of the intermediate-hurry students helped and 63% of the low-hurry students helped. The subject they had studied, that is, the jobs or the Good Samaritan Parable, had no correlation at all with their aid to the victim.

The implications of this and other time-stressed research should be clear to us all. In our quest for productivity, in our statistical monitoring of output performance and in our overscheduled lives, there is a diminishment of altruism and even good neighborliness. This is an important but overlooked impediment in ethical decision-making.

Cheater Types in Society: Evidence from Economic Game Theory

Recent research in economic game theory has emphasized the polymorphic nature of the human population and the importance of individual differences in modeling group behavior and dynamics in economic games and decision problems. In a seminal article, Kurzban and Houser (2005) detail experiments they performed that were designed to identify and analyze these differences. They find that human subjects fall into three types: reciprocators, cooperators and free-riders.

1. Reciprocators, who make up 63% of the population, contribute to the public good as a positive function of their beliefs about others' contributions; in game theory parlance, they use a conditional strategy called 'Tit-for-tat.'

2. Cooperators, who make up 13% of the human population, always contribute to the public good at a cost to themselves, whether others do or not; in game theory parlance, they play the absolute strategy of 'Always cooperate.'

3. Free-riders, who make up 20% of the human population, do not contribute to the public good but take from it; again, in game theory parlance, these agents employ the absolute strategy of 'Always cheat.' (The remaining 3% could not be classified into any of the above categories.)

Both mathematical modeling and theoretical analysis show that despite their different strategies, the economic payoffs to each of these types are identical. (Dugatkin & Wilson 1991. Lomborg 1996. Aktipis 2004) Further, and extremely important to the ethical behavior of groups in institutions, the cooperative behavior of a group can be predicted with great accuracy if one knows the statistical makeup of the group in terms of these three types. (In the real world, there are plenty of simple psychological and game theory tests to determine this.) The implications of this and of the mathematical modeling of group behavior in economic games suggest that evolutionary dynamics has generated an evolutionarily stable, polymorphic (not homogeneous) population made up of individuals who vary in their degree of cooperation in group interactions. (Kurzban & Houser 2005)

Thus, the makeup of the members of an institution or corporation can be a serious impediment to (or, conversely, an aid to) the ethical behavior of the individual. More importantly, since institutions and corporations are not democracies but rather dictatorships, the ethical culture is always created from the top down, so the behavioral type of the CEO is critically important to the ethics of the entire organization (Enron is a prime example).

There is no doubt that having as many cooperators (or at least reciprocators) as possible in your organization is important. This is especially true given today's organizational emphasis on teamwork in the workplace. As proof of this, Kurzban and Houser (2005) found that although the payoffs to individuals in randomly composed groups were identical for each of the three behavioral types, in groups composed of three reciprocators and one cooperator, each individual earned approximately 40% more than in groups of three reciprocators and a free-rider. We saw above that free-riders not only reduce payoffs but also 'pollute' the group by reducing the cooperation of the reciprocators.
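
A toy repeated public-goods game shows how this 'pollution' works. In the Python sketch below (ours; the decision rules are simplified stand-ins, not Kurzban and Houser's estimated strategies), one free-rider drags a group of conditional cooperators steadily downward, while one unconditional cooperator keeps contributions at the maximum:

```python
# A stylized illustration of the Kurzban & Houser point that a group's
# cooperation level is predictable from its mix of types. Cooperators
# always contribute, free-riders never do, and reciprocators match the
# previous round's average contribution.

def run_group(types, endowment=10, rounds=10):
    """Return the final-round average contribution for a group."""
    last_avg = endowment               # reciprocators start out trusting
    for _ in range(rounds):
        contribs = []
        for t in types:
            if t == "cooperator":
                contribs.append(endowment)    # always contributes fully
            elif t == "free-rider":
                contribs.append(0)            # never contributes
            else:                             # reciprocator: conditional
                contribs.append(last_avg)     # matches last round's average
        last_avg = sum(contribs) / len(contribs)
    return last_avg

print(run_group(["reciprocator"] * 3 + ["cooperator"]))   # 10.0: stays high
print(run_group(["reciprocator"] * 3 + ["free-rider"]))   # ~0.56: decays away
```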

Richard Dawkins (1989) showed in his computer game simulations that 'Tit-for-tat' and 'Always cheat' are 'Evolutionarily Stable Strategies' ('ESS'); that is, if all the members of a group play either of these strategies, the other strategy cannot do well in it and will eventually be 'extinguished.' That is, if one or a number of 'Always cheats' joins a group dominated by 'Tit-for-tats,' they will be shunned or punished. On the other hand, if 'Tit-for-tats' join a group dominated by 'Always cheats,' they will become 'Always cheats' or die.

However, Dawkins also showed that the evolutionary payoffs to a group composed of 'Tit-for-tats,' both in fitness and procreation (group growth), were significantly larger than those to an 'Always cheat' group. Moreover, in a polymorphic group, if there are as few as 5% 'Tit-for-tats,' they can eventually overwhelm the 'Always cheats' so long as they are allowed to cooperate with each other (which they will) and exclude the 'Always cheats.' More recently, Gurerk et al. (2006) and Hauert et al. (2007) have shown in mathematical models that adding the punishment of cheaters and indirect reciprocity ('gossip') maintains cooperative behavior in the group. The implications of these findings for fostering teamwork and cooperative group behavior should be obvious.
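
The pairwise logic behind these simulations is easy to reproduce. The Python sketch below uses the standard Axelrod-style prisoner's dilemma payoffs (T=5, R=3, P=1, S=0) and shows why a population of 'Tit-for-tats' out-earns a population of 'Always cheats,' while a 'Tit-for-tat' loses to a cheater only on the first move:

```python
# Iterated prisoner's dilemma with the classic payoff values.
# (C = cooperate, D = defect; payoffs are (player_a, player_b).)

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=20):
    """Total payoffs for two strategies over an iterated game."""
    score_a = score_b = 0
    last_a = last_b = "C"            # Tit-for-tat opens by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)  # each strategy sees only the
        move_b = strategy_b(last_a)  # opponent's previous move
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opponent_last: opponent_last    # copy their last move
always_cheat = lambda opponent_last: "D"             # defect no matter what

print(play(tit_for_tat, tit_for_tat))    # (60, 60): sustained cooperation
print(play(always_cheat, always_cheat))  # (20, 20): mutual defection
print(play(tit_for_tat, always_cheat))   # (19, 24): exploited once, then even
```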

How can this cooperation of 'Tit-for-tats' come about in groups? In early mathematical models of altruism in moderately large groups, cooperation collapsed because of the existence of free-riders in the group. (Panchanathan & Boyd 2004) However, as Haidt (2007) reports, a big breakthrough in modeling reciprocal altruism in communities was the ability of individuals to know the reputation of members of the group through gossip ('indirect reciprocity'). (Nowak & Sigmund 2005) When behavioral economics games allow players to know each other's reputations, the rates of cooperation skyrocket. (Fehr & Henrich 2003) This gives the cooperators and the reciprocators the opportunity to shun the free-riders/cheaters (which they will) and deal only with each other. In the real world, Dunbar (2004) reports that anthropologists have found that two-thirds of our conversation is 'gossip' in the sense of 'indirect reciprocity.'

The Neurological and Psychological Consequences of Money

Many philosophers and religions hold that, "Money is the root of all evil." While we will not debate that topic in this paper, we will show how powerful a force money is. Therefore, it is critically important to take the power of money explicitly into account when thinking about business ethics. Pessiglione et al. (2007) devised experiments to show how the brain translates money into a force. The researchers had their subjects view pictures of money (a penny or a pound) and told them they could keep the amount shown, depending on how hard they squeezed a hand grip. The subjects received feedback in the form of a visual thermometer, and the researchers also measured subjects' skin conductance response ('SCR,' a measure of autonomic sympathetic arousal) and brain activity. Not surprisingly, the larger the amount shown, the stronger the grip force exercised by the subjects.

The brain scans showed activity in a specific basal forebrain area that includes the ventral striatum (the reward center of the brain, its dopamine-processing facility), the ventral pallidum and the extended amygdala. This research supports what other studies have found, namely, that this region creates the motivational effect of money and is a key node in the brain circuitry that enables expected rewards to energize behavior. More specifically, O'Doherty et al. (2004) and Pessiglione et al. (2006) have shown that ventral striatum activity is linked to reward prediction and reward prediction error during learning. However, the amazing results from this experiment occurred when the subjects were shown the money amounts for display times that were subliminal (17 and 50 ms.) and therefore below the conscious perception of the subjects. The grip force, SCR and brain activity were similar even when the subjects could not consciously 'see' the monetary display! Thus, expected rewards energize behavior without the need for the subjects' awareness.

As to the psychological consequences of money, a recent theory by Lea and Webley (2006) characterizes money as both a 'tool' (an interest in money for what it can be exchanged for) and a 'drug' (an interest in money for itself, a maladaptive function). This theory further emphasizes that people value money for its instrumentality; that is, money enables people to achieve goals without aid from others. On the other hand, Price et al. (2002) show that the physical and mental illness that follows financial strain due to job loss is triggered by reduced feelings of personal control.

To verify these findings, Vohs et al. (2006) devised nine experiments that tested their hypothesis that when people are reminded of money, they feel more self-sufficient, want to be free from dependency on other people and, conversely, do not want people to depend on them. Another amazing aspect of this experiment is that the subjects were mentally primed with money or neutral concepts subliminally, that is, below the level of their conscious awareness. They were then asked to perform certain tasks, some of which were actually impossible to perform.

Vohs et al. found that participants who were primed with the concept of money preferred to work alone, play alone and put more physical distance between themselves and a new acquaintance. Reminders of money led to reduced requests for help and reduced helpfulness towards others. These researchers conclude that the self-sufficiency pattern they found helps explain why people view money both as a great good and a great evil. As societies developed, they contend, the acquisition of money allowed people to pursue their goals with diminished reliance on friends and family.

That is, money enhanced individualism but diminished communal motivations, as it still does today. Grouzet et al. (2005) show that across 15 different cultures, 'financial success' as a goal stands in direct opposition to goals concerning 'community' (although less so in poorer cultures).

Many doctors and scientists speak of how our human bodies and systems are not evolutionarily adapted to the modern world. This is the broad explanation of why there are epidemics of obesity, arteriosclerosis and Type II diabetes in the developed world. Similarly, our brains are ill-adapted to handle money and materialism. This is not surprising, since humans are almost 1 million years old and money has been around for less than 3,000 years.

Finally, Fliessbach et al. (2007) show that even though money triggers dopamine reactions in the brain's ventral striatum, it is the relative reward rather than the absolute reward that matters in social comparisons. These experimenters used side-by-side brain imaging scanners and a behavioral task in which equal performance was rewarded inequitably. Blood oxygen level dependent (BOLD) responses were elevated in the ventral striatum, a brain area that has a central role in responding to and predicting rewards, and showed that this region was indeed sensitive to the relative amount of money that was paid. More importantly, this ventral striatum response occurred when no decisions were made, suggesting that the calculation of social standing, as indexed by payment, may be automatic.

Resistance to Change

Unfortunately for teachers and trainers in business ethics, people's resistance to change is huge. Arkowitz and Lilienfeld (2007) report that the Centers for Disease Control and Prevention have established that 20% of American adults continue to smoke, more than 30% are significantly overweight and 15% are binge drinkers. Further, over 50% of medical patients do not follow their doctor-prescribed regimens, even in situations where non-compliance could end in amputation (diabetes) or blindness (glaucoma). On a personal note, we all know what happened to our last New Year's resolution.

Why don't people change? Arkowitz and Lilienfeld (2007) cite these reasons:

1."Diablos Conocidos" ("The devils you know are better than the devils you don't know. The status quo is familiar, comfortable and predictable even though it may be extremely painful. Change is unpredictable and anxiety-producing.

2. Fear of Failure: If they fail, people fear they will feel worse.

3. Faulty Beliefs: Some feel they are failures if not 100% successful. Some perceive a push to change by another person as a threat and unconsciously resist (called 'reactance').

4. Reward of the Undesirable Behavior: The individual derives some reward from the behavior (e.g., the alcoholic drinks to relieve his anxiety) and this coping behavior may be the only thing the person knows.

Engle and Arkowitz (2006) contend that the only way to change people is to help them want to change, because those resistant to change are pulled in both directions: toward the change and toward the status quo. Their solution is 'motivational interviewing,' whereby the therapist enhances the individual's intrinsic motivation to change. This is done by exposing and resolving the ambivalent motivations, with the goal of making the client the advocate for the change. Once the ambivalence is dealt with, change often occurs.

Anxiety-Prone Individuals, Addiction and Decision Making

Uncertainty is necessarily an important component of decision-making. Unfortunately, individuals with generalized anxiety disorder, or those with anxiety caused by any of the stressors in society, have an intolerance of uncertainty and thus a tendency to make rapid decisions without adequate information. (Ladouceur et al. 2000) Moreover, ambiguous situations are upsetting to high-anxiety individuals, and the degree of upset is positively correlated with anterior cingulate cortex ('ACC') activity.

Genetic research has shown that 40 to 60% of anxiety in humans is heritable and that the genes involved negatively affect the serotonin receptor system. (Lesch et al. 1996. Parks et al. 1998. Chen et al. 2006) However, the genes are not always expressed; that is, it takes a complex interaction of genetic predisposition and environmental factors to trigger an anxiety-prone personality. Decision-making, be it moral or non-moral, involves uncertainty, and cognitive models of generalized anxiety disorder show these individuals to have a high intolerance of uncertainty (Ladouceur et al. 2000). Additionally, the decision-making of anxious individuals is more highly influenced by ambiguous stimuli (Blanchette & Richards 2003), is neurologically associated with increased activity in the ACC and the medial prefrontal cortex ('MPFC') (Paulus et al. 2004), and their intolerance of uncertainty is positively correlated with the magnitude of ACC activity (Krain et al. 2006).

As a consequence, anxiety-prone individuals exhibit reduced risk-taking, increased neuroticism and increased harm-avoidance. (Simmons et al. 2004. Paulus et al. 2003) They tend to interpret ambiguous stimuli during the assessment stage of decision-making as threatening and therefore are prone to make decisions that are sub-optimal, biased and increasingly subject to the 'framing effect.' (Paulus 2007) Martin Paulus (2007) presents a convincing model of decision-making as a homeostatic process: the human is moved out of homeostasis and is required to gather options (visually or cognitively), evaluate them as to their utility, that is, reward, via the limbic system and ventral striatum, and maximize that utility via the executive function in the medial prefrontal cortex. Anxiety-prone individuals exhibit dysfunction in this decision-making process.

Further, since the current model of addiction holds that there exists an addictive personality type or 'individual-at-risk' that exhibits altered brain activity similar to that of anxiety-prone individuals, Paulus (2007) holds that addicted individuals make sub-optimal and biased decisions in their systems' quest to achieve homeostasis. Dalley et al. (2007) report that chronic drug users have fewer D2/D3 dopamine receptors than non-users in the reward centers of their brains, and their experiments with rats show that this is true prior to the drug abuse, not caused by it. Consequently, such individuals are naturally less sensitive to normal pleasures such as food and sex and seek highs from drugs to replace the rewards that they cannot get naturally. All drugs of abuse bind to the dopamine receptors (Paulus 2007), and all addiction, whether it be to drugs, sex, shopping or gambling, creates a flood of the neurotransmitter dopamine in the brain. (Peoples 2002. Shizgal & Arvanitogiannis 2003)

From the perspective of ethical behavior within an organization, not just people who are addicted but also those who are prone to anxiety present a challenge to cooperation, as their wills and decision-making abilities are likely reduced.

The Limits of the Human Brain: The Frontopolar Cortex and Executive Function

We discussed briefly above the fallibility of human reasoning. This is likely related to the fact that the parts of our brain that engage in rational thought are relative newcomers in the history of evolution. Further, the rational facilities of our neocortex suffer from extreme limitations that not only bias but actually hamper our ability to do complex thought.

For example, Dijksterhuis et al. (2006) interviewed subjects who were choosing between kitchen accessories at a department store and another group who were choosing furniture at IKEA. For the accessory shoppers, the ones who consciously deliberated more were happier with their purchase. For the IKEA shoppers, those who deliberated more were less happy with their purchase than those who "went with their gut." Dijksterhuis and his colleagues conclude this was due to the limits of the conscious mind and its ability to consider only a few variables at a time.

To verify their hypothesis, the researchers had subjects read descriptions of four automobiles, each listing only four attributes. The subjects were asked to think about the cars for four minutes and choose the one that had the best features. Most of the participants got it right. Next, the researchers made the choice more complex by listing twelve attributes, and only 25 percent chose the best car, no better than chance. On the other hand, when Dijksterhuis performed the same twelve-attribute experiment and then distracted the participants by having them do anagram puzzles for four minutes, more than half picked the automobile with the best overall features.

Koechlin & Hyafil (2007) show exactly how the human conscious brain is limited in the making of complex decisions. The frontopolar cortex ('FPC') is the front-most part of the cortex and it forms the apex of the executive system underlying decision-making. However, these researchers show that the FPC is restricted to the processing of simple cognitive branching, whereby only a single task can be maintained in a pending state at any one time. Computer simulations by the authors show that the neurons in the FPC compare only the two most rewarding tasks at hand. The FPC then selects and maintains the most rewarding task in conscious attention and puts the second most rewarding task at hand in a pending state in other neurons in the FPC. It discards all the other options. If two or more tasks have the same two largest reward values, so that there are three or more tasks to consider, cognitive decision making by the FPC is dramatically impaired. Or, alternatively, if the FPC chooses task three to act on, it loses task one and two in pending. This hypothesis of the authors places severe serial and recursive constraints on human reasoning, problem solving and complex decision-making.

This limitation may seem surprising, given that humans do make complex decisions. However, Koechlin and Hyafil hypothesize that learned or trained expertise allows the navigation of memorized cognitive branching maps, a process performed by the parietal cortex and hippocampus.

Aristotle and other moral philosophers and psychologists are fond of presenting for our consideration and consternation moral dilemmas that seem to force us to choose between two bad alternatives. We hypothesize that the processing limits of the frontopolar cortex have forced these thinkers to create artificially bi-polar dilemmas. In our opinion, almost all ethical problems do not consist only of bi-polar choices; rather, there exist many possible courses of action to consider. We are just unable to consider them all at once. Weston (2007) presents procedures and practices for expanding the options considered in ethical problem-solving that we find very useful in teaching and training. His methods include practices in not rushing to judgment, in reframing the problem, in opening up additional options and in brainstorming with colleagues and friends.

The Influence of Organizational Culture and Executives on Individuals

Available literature provides clear indications that individuals are strongly influenced by the cultures of the organizations to which they belong and by the executives and managers they report to and/or observe. To summarize, here is what we know about individuals that might be within their organization's ability to influence:

1. Individuals look to executives and managers for cues on how to behave.

2. When they're in the midst of an ethical dilemma and under extreme stress, people are likely to be irrational and instead rely on emotions or ingrained patterns of behavior.

3. The threat of punishment (losing a job, for example) has a profound influence on how people behave in organizations. In fact, the notion of loss may have significantly more impact on individuals than any possibility of gain.

4. People resist change. ('Diablos Conocidos') (Arkowitz & Lilienfeld, 2007)

5. Framing, or how people rationalize a situation, can have profound implications for the ethics of their decisions.

6. Culture can have a profound effect on how we behave if the culture creates a sense of kinship among colleagues. This kind of close 'bonding' behavior has been exhibited in a number of organizations where a kind of 'artificial' or 'virtual' kinship has been encouraged and developed among members, such as the military, police departments, fire departments, medical staffs, etc.

7. Gossip within an organization (the 'grapevine' effect) is how employees judge the credibility of an organization. Members unconsciously compare 'formal' messages (what an organization says it stands for) with 'informal' messages (what an organization demonstrates in the way of values; in other words, what it really stands for). If there is a match, the organization is judged to be credible; if not, the organization is judged to be incredible. This phenomenon has a critical impact on behavior.

While these realities might seem daunting to executives who are charged with building an ethical culture within their organizations, there are steps they can take to improve the odds that people will do the right thing under pressure. Here are some of the most important institutional issues to be aware of:

1. Executives and managers need to understand their critical role in building ethical culture.

Organizational behaviorists have long understood that people joining an organization look for clues from their orientation process, initial work experiences, and what they are told about the culture ("How things are done around here") by peers and managers. New hires quickly deduce what is expected of them from what they hear and observe. ("What's rewarded here and what's punished?") Consequently, it's not enough for executives and managers to be passive about ethics and integrity, hoping that employees will somehow "see into executives' hearts" to discern their character and intentions.

Studies have shown that executives need to do much more than think ethically to have the desired effect on employees. (Trevino, 2000). Being a moral person is in fact only half of the equation. In addition, executives and managers need to actively 'manage morally' in order for employees to perceive that they are ethical. This means that executives need to first communicate openly about the importance of ethics and values to the organization. They also need to understand that they are important role models and their actions and words have an outsized effect on observers. And, of course, executives' communications must match their actions or the ethical culture will be seriously undermined. Finally, executives must hold people accountable for their behavior. If an employee (especially a visible high performer) behaves unethically, executives must find an appropriate punishment. Similarly, ethical behavior must be rewarded. An effective way to do this across an employee population is through a performance management system, which integrates ethical behavior competencies into its metrics. Of course, performance management systems must be administered fairly and communicated widely if the desired behavior is to become part of the organizational culture.

2. Ethics and integrity need to be strategically and holistically built into a culture as core values.

It's not enough for an organization to intend to be ethical and demonstrate integrity. Organizational cultures will not reflect integrity and ethics until those values are articulated and integrated into the full spectrum of organizational life. Studies indicate that organizations that try to build values-based compliance cultures are likely to be significantly more effective at growing ethical cultures than organizations where a strict 'check-the-box' compliance effort is in place. (Trevino et al. 1999) Specifically, research indicates that a holistic ethics culture is one that is rooted in the organization's culture and values, is proactive, involves executives in strategic ways, and is integrated into the organization's performance management system. In such a culture, employees are more likely to be aware of and report unethical and illegal activities they observe, seek help inside the organization, support decision-making across the board, and be committed to the organization; finally, they are more likely to be ethical themselves. In contrast, employees in strict compliance cultures are more likely to view the compliance program cynically, as an attempt by the organization to protect itself. In this type of culture, employees are also more likely to seek advice outside of the organization, less likely to report the unethical behavior they observe, and less committed to the organization.

3. Ethics programs need to recognize the importance of repetition in influencing employee behavior.

It is important that various elements in an organizational culture repeat the themes of ethics and integrity and that these themes are reinforced constantly. Only this kind of repetition, making ethics and integrity a 'habit,' can override the emotional, irrational response of an individual employee who feels pressured in an ethical dilemma. It's only when employees feel that noncompliance is riskier than compliance that they will be able to override their emotional response to 'go along to get along.'

4. People resist change. ('Diablos Conocidos')

We know from recent studies that people resist change and that peer pressure is highly influential in encouraging people to change. This is another factor that could be used to an organization's advantage in the creation of an ethical culture. If 'how business is done here' (the culture of an organization) is rooted in ethics and integrity, peer pressure will drive individuals in that direction.

5. Time constraints can result in pressure that profoundly influences even the most well-meaning individual. (Princeton Theological Seminary's 'Good Samaritan' experiment.) We know that time pressures can encourage workers to make inappropriate decisions, and yet the issue of time is one that organizations can do little to alleviate. What organizations can do, however, is train employees to ask for additional time, if possible, when making complex decisions. It is almost never necessary in business to make an instant decision, although many people (especially very young ones) are afraid to ask for additional time when making a decision. Often, just a little extra time can have a significant impact on outcomes.

6. Framing, or how people rationalize a situation, can have profound implications for the ethics of their decisions.

Skilled executives and coworkers with distinct 'agendas' can easily frame issues in ways that might significantly influence individuals to respond (and decide) in a particular way. This is difficult for organizations to address, although perhaps training could help workers understand that this framing phenomenon exists and how they can reframe a problem or issue as part of their decision-making process.

7. Gossip

Organizational grapevines are how most information in the workplace is communicated, and the news that travels on the grapevine greatly influences employee attitudes and behavior. Sommerfeld et al. (2007) show, through a series of economic game experiments, that players acted on gossip about other players' cooperation levels even when it was contradicted by objective data given to them about those same players. For many years, workers worldwide have indicated in communication industry surveys that their most favored communication source is their direct manager. However, due to its evolutionary power (Dunbar 2004), the grapevine will never disappear. Nevertheless, organizations can greatly influence what the grapevine contains by training their managers to communicate often and well with individuals.

Conclusion

A familiar toast expresses the desire that we may always "live in interesting times." Clearly, the advent of medical technology to examine the inner workings of the brain has greatly affected how we view ethical decision-making, resulting in very interesting times indeed. No longer do researchers need to guess at a subject's reasoning; we are able to see how a subject evaluates a dilemma and how emotions and reason work together to produce a decision. Improvements in technology and added research into the brain will no doubt explain other cognitive and emotional processes that now are mysterious. These discoveries are not just fascinating forays into how humans "work"; they also provide important clues to how organizations and societies might be able to influence individual decisions.

Questions for Further Study

* What effect does organizational culture have on the decisions of its members?

* How does organizational loyalty influence how we make decisions?

* Do effective ethics programs override individual proclivities?

* If individuals are more aware of the effect of emotions on decision making, how can they make ethical decisions more consistently?

* Since ethical decision-making is not entirely logical but largely emotional, what are the implications for ethics training programs in businesses and government agencies?

* Can functional MRIs be used to provide biofeedback to business and government leaders to improve their ethical decision-making under risk, threat and stress?

References

Aktipis, C.A. 2004. Know when to walk away: contingent movement and the evolution of cooperation. Journal of Theoretical Biology 231: 249.

Arkowitz, H., and D. Engle. 2006. Ambivalence in Psychotherapy: Facilitating Readiness to Change. New York: Guilford Press.

Arkowitz, H., and S.O. Lilienfeld. 2007. Why Don't People Change? Scientific American Mind 18: 82.

Balter, M. 2006. Social Dementia Decimates Special Neurons. Science Now (December 22, 2006).

Balter, M. 2006. Well-Wired Whales. Science Now (27 November, 2006).

Bargh, J. & T. Chartrand. 1999. The Unbearable Automaticity of Being. American Psychologist, 54: 462.

Begley, S. 2007. Train Your Mind, Change Your Brain. New York: Ballantine Books (Random House).

Benartzi, S., & R.H. Thaler. 1995. Myopic Loss Aversion and the Equity Premium Puzzle. Quarterly Journal of Economics 73.

Bentham, J. 1988 [1781]. The Principles of Morals and Legislation. Reprinted, New York: Prometheus Books.

Bergner, D. 2008. The Sergeant Lost Within. New York Times Magazine, May 28, p. 41.

Berns, G.S., J. Chappelow, C.F. Zink, G. Pagnoni, M.E. Martin-Skurski, and J. Richards. 2005. Neurobiological correlates of social conformity and independence during mental rotation. Biological Psychiatry 58: 245.

Blanchette, I., and A. Richards. 2003. Anxiety and the Interpretation of Ambiguous Information. Journal of Experimental Psychology: General 132: 294.

Blakeslee, S. 2006. Cells That Read Minds. New York Times January 10.

Breiter, H. C., I. Aharon, D. Kahneman, A. Dale, P. Shizgal. 2001. Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron, 30: 619.

Brosnan, S.F. and F.B.M. de Waal. 2003. Monkeys reject unequal pay. Nature 425: 297.

Camerer, C. 2003. Behavioral Game Theory. Princeton: Princeton University Press.

Chapais, B. & C. Berman. 2004. ed., Kinship and Behavior in Primates, New York: Oxford University Press.

Chen, M.K., and L.R. Lakshminarayanan. 2006. How Basic are Behavioral Biases? Evidence from Capuchin Monkey Trading Behavior. Journal of Political Economy 114: 517.

Chen, Z., D. Jing, K.G. Bath, A. Ieraci, T. Khan, C. Siao, D.G. Herrera, M. Toth, C. Yang, B. McEwen, B. Hempstead, and F. Lee. 2006. Genetic Variant BDNF (Val66Met) Polymorphism Alters Anxiety-Related Behavior. Science 314: 140.

Choi, W.Y., P.D. Balsam, and J.C. Horvitz. 2005. Extended habit training reduces dopamine mediation of appetitive response expression. Journal of Neuroscience 25: 6729.

Cohen, J.D. 2005. The Vulcanization of the Human Brain: A Neural Perspective on Interactions Between Cognition and Emotion. Journal of Economic Perspectives 19: 3.

Coffee, J. C. Jr. 1981. No Soul to Damn: No Body to Kick. Michigan Law Review 79: 386.

Cromwell, H.C., and W. Schultz. 2003. Effects of Expectations for Different Reward Magnitudes on Neuronal Activity in Primate Striatum. Journal of Neurophysiology 89: 2823.

Cushman, F., L. Young and M. Hauser. 2006. The Role of Conscious Reasoning and Intuition in Moral Judgment. Psychological Science 17: 1082.

Dalley, J.W., T.D. Fryer, L. Brichard, E.S.J. Robinson, D.E.H. Theobald, K. Laane, E.R. Murphy, Y. Shah, K. Probst, I. Abakumova, F.I. Aigbirhio, H.R. Richards, Y. Hong, J. Baron, B.J. Everitt, and T.W. Robbins. 2007. Nucleus Accumbens D2/D3 Receptors Predict Trait Impulsivity and Cocaine Reinforcement. Science 315: 1267.

Darley, J.M., and D. Batson. 1973. From Jerusalem to Jericho: A Study of Situational and Dispositional Variables in Helping Behavior. Journal of Personality and Social Psychology 27: 100.

Dawkins, R. 1989. The Selfish Gene. Oxford: Oxford University Press.

Deaner, R.O., A.V. Khera, and M.L. Platt. 2005. Monkeys Pay Per View: Adaptive Valuation of Social Images by Rhesus Macaques. Current Biology 15: 543.

de Quervain, D.J. et al. 2004. The Neural Basis of Altruistic Punishment. Science 305: 1254.

de Waal, Frans. 2007/2008. Do Animals Feel Empathy? Scientific American Mind 18: 28.

de Waal, Frans. 2005. Our Inner Ape. London: Penguin Books.

Delgado, M.R., R.H. Frank, and E.A. Phelps. 2005. Perceptions of moral character modulate the neural systems of reward during trust games. Nature Neuroscience 8: 1611.

Dobson, John. 2003. Why Ethics Codes Don't Work. Financial Analysts Journal 59: 29.

Dijksterhuis, A., M.W. Bos, L.F. Nordgren, and R.B. van Baaren. 2006. On Making the Right Choice: The Deliberation-Without-Attention Effect. Science 311: 1005.

Dinstein, I., U. Hasson, N. Rubin, and D. Heeger. 2007. Brain Areas Selective for Both Observed and Executed Movements. Journal of Neurophysiology 98: 1415.

Dugatkin, L. A. 2006. The Altruism Equation: Seven Scientists Search for the Origins of Goodness. Princeton: Princeton University Press.

Dugatkin, L.A. and D. Wilson. 1991. Rover: A Strategy for Exploiting Cooperators in a Patchy Environment. American Naturalist 138: 687.

Dunbar, R. I. M. 2004. Gossip in Evolutionary Perspective. Review of General Psychology 8: 100.

Fehr, E., and J. Henrich. 2003. In Genetic and Cultural Evolution of Cooperation, ed. P. Hammerstein. Cambridge: MIT Press.

Fliessbach, K., B. Weber, P. Trautner, T. Dohmen, U. Sunde, C.E. Elger, and A. Falk. 2007. Social Comparison Affects Reward-Related Brain Activity in the Human Ventral Striatum. Science 318: 1305.

Frith, U., and C.D. Frith. 2003. Philosophical Transactions of the Royal Society of London, Series B 358: 459.

Gallagher, H.L., and C.D. Frith. 2003. Functional imaging of 'theory of mind.' Trends in Cognitive Sciences 7: 77.

Greene, J.D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science 293: 2105.

Greene, J., and J. Haidt. 2002. How and Where Does Moral Judgment Work? Trends in Cognitive Sciences 6: 517.

Greene, J.D., L.E. Nystrom, A.D. Engell, J.M. Darley, and J.D. Cohen. 2004. The Neural Basis of Cognitive Conflict and Control in Moral Judgment. Neuron 44:389.

Grouzet, F.M., A. Ahuvia, Y. Kim, R.M. Ryan, P. Schmuck, T. Kasser, J.M.F. Dols, S. Lau, S. Saunders, and K.M. Sheldon. 2005. The Structure of Goal Contents Across 15 Cultures. Journal of Personality and Social Psychology 89(5): 800.

Gurerk, O., et al. 2006. The Competitive Advantage of Sanctioning Institutions. Science 312 (April 7).

Haidt, J. 2007. The New Synthesis in Moral Psychology. Science 316: 998.

Haidt, J. 2001. The Emotional Dog and its Rational Tail: The Social Intuitionist Approach to Moral Judgment. Psychological Review 108: 814.

Harbaugh, W.T., U. Mayr, and D.R. Burghart. 2007. Neural Responses to Taxation and Voluntary Giving Reveal Motives for Charitable Donations. Science 316: 1622.

Harvard Moral Sense Test. http://harvard-coeevlab.ors/MST/versionr/test.html.

Hauert, C., A. Traulsen, H. Brandt, M. Nowak, and K. Sigmund. 2007. Via Freedom to Coercion: The Emergence of Costly Punishment. Science 316: 1905.

Hauser, M.D. 2006. Moral Minds. New York: Harper Collins.

Hauser, M., F. Cushman, L. Young, R.K. Jin, and J. Mikhail. 2007. A Dissociation Between Moral Judgments and Justifications. Mind & Language 22: 1.

Herrmann, B., C. Thoni, and S. Gachter. 2008. Antisocial Punishment Across Societies. Science 319:1363.

Haidt, J., and J. Graham. 2007. When Morality Opposes Justice: Conservatives have Moral Intuitions that Liberals may not Recognize. Social Justice Research 20: 98.

Holden, C. 2004. The Origin of Speech. Science 303: 1316.

Janis, I.L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston: Houghton-Mifflin.

Kahneman, D., J.L. Knetsch, and R.H. Thaler. 1990. Experimental Tests of the Endowment Effect and the Coase Theorem. Journal of Political Economy 98:1325.

King-Casas, B., et al. 2005. Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange. Science 308: 78.

Knoch, D., A. Pascual-Leone, K. Meyer, V. Treyer, and E. Fehr. 2006. Diminishing Reciprocal Fairness by Disrupting the Right Prefrontal Cortex. Science 314: 329.

Knutson, B., and J.C. Cooper. 2005. Functional magnetic resonance imaging of reward prediction. Current Opinion in Neurology 18: 411.

Koechlin, E., and A. Hyafil. 2007. Anterior Prefrontal Function and the Limits of Human Decision-Making. Science 318: 594 (October 26).

Koenigs, M., L. Young, R. Adolphs, D. Tranel, F. Cushman, M. Hauser, and A. Damasio. 2007. Damage to the prefrontal cortex increases utilitarian moral judgments. Nature 446: 908.

Kording, K. 2007. Decision Theory: What 'Should' the Nervous System Do? Science 318: 606.

Kosfeld, M., M. Heinrichs, P.J. Zak, U. Fischbacher, and E. Fehr. 2005. Oxytocin Increases Trust in Humans. Nature 435: 673.

Kosko, B. 1993. Fuzzy Thinking, The New Science of Fuzzy Logic. New York: Hyperion.

Krain, A.L., S. Hefton, D.S. Pine, M. Ernst, F.X. Castellanos, R.G. Klein, and M.P. Milham. 2006. An fMRI examination of developmental differences in the neural correlates of uncertainty and decision-making. Journal of Child Psychology and Psychiatry 47: 1023.

Kuhn, D. 2007. Jumping to Conclusions. Scientific American Mind (February/March): 44.

Kurzban, R. and D. Houser. 2005. Experiments Investigating Cooperative Types in Humans. Proceedings of the National Academy of Sciences 102: 1803.

Ladouceur, R., P. Gosselin, and M.J. Dugas. 2000. Behaviour Research and Therapy 38: 933.

Layard, R., 2005. Happiness: Lessons from a New Science. New York: Penguin Press.

Lea, S.E.G., and P. Webley. 2006. Money as a tool and a drug. Behavioral and Brain Sciences 29: 161.

Lesch, K.P., D. Bengel, A. Heils, S.Z. Sabol, B.D. Greenberg, S. Petri, J. Benjamin, C.R. Muller, D.H. Hamer, and D.L. Murphy. 1996. Association of Anxiety-Related Traits with a Polymorphism in the Serotonin Transporter Regulatory Region. Science 274: 1527.

Lomborg, B. 1996. Nucleus and Shield: The Evolution of Social Structure in the Iterated Prisoners' Dilemma. American Sociological Review 61: 278.

Luo, Q., et al. 2006. The neural basis of implicit moral attitudes. NeuroImage 30: 1449.

Milgram, S. 1974. Obedience to Authority: An Experimental View. New York: HarperCollins.

Miller, G. 2005. Reflecting on Another's Mind. Science 308: 945.

Miller, G. 2006. Probing the Social Brain. Science 312: 838.

Miller, G. 2008. Mirror Neurons May Help Songbirds Stay in Tune. Science 319: 269.

Minsky, M. 1986. The Society of Mind. New York: Simon and Schuster.

Mikulincer, M., and R. Shaver. 2001. Attachment Theory and Intergroup Bias: Evidence that Priming the Secure Base Schema Attenuates Negative Reactions to Out-groups. Journal of Personality and Social Psychology 81: 97.

Mikulincer, M., O. Gillath, V. Halevy, N. Avihou, S. Avidan, and N. Eshkoli. 2001. Attachment Theory and Reactions to Others' Needs: Evidence That Activation of the Sense of Attachment Security Promotes Empathic Responses. Journal of Personality and Social Psychology 81: 1205.

Mikulincer, M., T. Dolev, and R. Shaver. 2004. Attachment-Related Strategies During Thought Suppression: Ironic Rebounds and Vulnerable Self-Representations. Journal of Personality and Social Psychology 87: 940.

Mikulincer, M., R. Shaver, O. Gillath, and R.A. Nitzberg. 2005. Attachment, Caregiving and Altruism: Boosting Attachment Security Increases Compassion and Helping. Journal of Personality and Social Psychology 89: 817.

Mobbs, D., P. Petrovic, J.L. Marchant, D. Hassabis, N. Weiskopf, B. Seymour, R. Dolan, and C. Frith. 2007. When Fear Is Near: Threat Imminence Elicits Prefrontal-Periaqueductal Gray Shifts in Humans. Science 317: 1079.

Moll, J., F. Krueger, R. Zahn, M. Pardini, R. de Oliveira-Souza, and J. Grafman. 2006. Human fronto-mesolimbic networks guide decisions about charitable donations. Proceedings of the National Academy of Sciences, USA 103: 15623.

Nakahara, K., and Y. Miyashita. 2005. Understanding Intentions: Through the Looking Glass. Science 308: 644.

Nowak, M.A., K.M. Page, and K. Sigmund. 2000. Fairness vs. Reason in the Ultimatum Game. Science 289: 1773.

Nowak, M.A., and K. Sigmund. 2005. The Evolution of Indirect Reciprocity. Nature 437: 1291.

Nowak, M.A. 2006. Five Rules for the Evolution of Cooperation. Science 314: 1560.

Panchanathan, K., and R. Boyd. 2004. Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432: 499.

Parks, C.L., P.S. Robinson, E. Sibille, T. Shenk, and M. Toth. 1998. Increased Anxiety of Mice Lacking the Serotonin1A Receptor. Proceedings of the National Academy of Sciences 95: 10734.

Paulus, M.P. 2007. Decision-Making Dysfunctions in Psychiatry--Altered Homeostatic Processing? Science 318: 603.

Paulus, M.P., J.S. Feinstein, A. Simmons, and M.B. Stein. 2004. Anterior cingulate activation in high-trait anxious subjects is related to altered error processing during decision-making. Biological Psychiatry 55: 1179.

Peck, M. Scott. 1983. People of the Lie. New York: Simon & Schuster.

Peoples, L. 2002. Will, Anterior Cingulate Cortex and Addiction. Science 296: 1623.

Pessiglione, M., L. Schmidt, B. Draganski, R. Kalisch, H. Lau, R. Dolan, and C.D. Frith. 2007. How the Brain Translates Money into Force: A Neuroimaging Study of Subliminal Motivation. Science 316: 904.

Pillutla, M.M., and J.K. Murnighan. 1996. Unfairness, Anger and Spite: Emotional Rejections of Ultimatum Offers. Organizational Behavior and Human Decision Processes 68: 208.

Pinker, S. 2008. The Moral Instinct. New York Times Magazine, January 13: p. 32.

Price, R.H., J.N. Choi, and A.D. Vinokur. 2002. Links in the chain of adversity following job loss. Journal of Occupational Health Psychology 7: 302.

Rilling, J.K., D.A. Gutman, T.R. Zeh, G. Pagnoni, G.S. Berns, and C.D. Kilts. 2002. A Neural Basis for Social Cooperation. Neuron 35: 395.

Rosenbaum, R.S., D.T. Stuss, B. Levine, and E. Tulving. 2007. Theory of Mind is Independent of Episodic Memory. Science 318: 1257.

Sanfey, A. 2007. Social Decision-Making: Insights from Game Theory and Neuroscience. Science 318: 598.

Sanfey, A., J.K. Rilling, J.A. Aronson, L.E. Nystrom, and J.D. Cohen. 2003. The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science 300: 1755.

Sapolsky, R.M. 2004. Why Zebras Don't Get Ulcers. New York: Henry Holt and Company.

Seeley, W. et al. 2006. Early Frontotemporal Dementia Targets Neurons Unique to Apes and Humans. Annals of Neurology 60: 660.

Schneyer, T. 1991. Professional Discipline for Law Firms? Cornell Law Review 77: 1.

Shizgal, P., and A. Arvanitogiannis. 2003. Gambling on Dopamine. Science 299: 1856.

Sommerfeld, R.D., H. Krambeck, D. Semmann, and M. Milinski. 2007. Gossip as an Alternative for Direct Observation in Games of Reciprocity. Proceedings of the National Academy of Sciences 104: 17435.

Tetlock, P.E. 1985. Accountability--The Neglected Social Context of Judgment and Choice. In Research in Organizational Behavior, edited by B. Staw and L.L. Cummings. Greenwich, CT: JAI Press.

Tetlock, P.E. 1991. An Alternative Metaphor in the Study of Judgment and Choice--People as Politicians. Theory & Psychology 1: 451.

Trevino, L. 2000. Moral Person and Moral Manager. California Management Review, Summer 2000.

Trevino, L. 1999. Corporate Compliance--What Helps and What Hurts. California Management Review, Fall 1999.

Trivers, R.L. 1971. The Evolution of Reciprocal Altruism. Quarterly Review of Biology 46: 35.

Tversky, A., and D. Kahneman. 1992. Advances in Prospect Theory: Cumulative Representation of Uncertainty. Journal of Risk and Uncertainty 5: 297.

Tversky, A., and D. Kahneman. 1981. The framing of decisions and the psychology of choice. Science 211: 453.

Vohs, K.D., N.L. Mead, and M.R. Goode. 2006. The Psychological Consequences of Money. Science 314: 1154.

Weston, Anthony. 2007. Creative Problem-Solving in Ethics. New York: Oxford University Press.

Wickens, J.R., J.C. Horvitz, R.M. Costa, and S. Killcross. 2007. Dopaminergic Mechanisms in Actions and Habits. Journal of Neuroscience 27: 8181.

Young, L. and M. Koenigs. 2007. Investigating emotion in moral cognition: a review of evidence from functional neuroimaging and neuropsychology. British Medical Bulletin 84: 69.

Zajonc, R.B. 1980. Feeling and Thinking. American Psychologist 35: 151.

Donald T. Wargo, Norman A. Baglini and Katherine A. Nelson

Donald T. Wargo, Department of Economics, College of Liberal Arts

Norman A. Baglini, Department of Risk Management and Insurance, Fox School of Business and Management

Katherine A. Nelson, Department of Human Resource Management, Fox School of Business and Management
Scenarios, Questions, and Percentage of Respondents Answering 'Yes'

Scenario 1 (Denise). Denise is a passenger on a train whose driver has fainted. On the main track ahead are 5 people. The main track has a side track leading off to the left, and Denise can turn the train onto it. There is 1 person on the left-hand track. Denise can turn the train, killing the 1; or she can refrain from turning the train, letting the 5 die.
Question: Is it morally permissible for Denise to turn the train? ('Yes': 85%)

Scenario 2 (Frank). Frank is on a footbridge over the train tracks. He sees a train approaching the bridge out of control. There are 5 people on the track. Frank knows that the only way to stop the train is to drop a heavy weight into its path. But the only available, sufficiently heavy weight is a large man, also watching the train from the footbridge. Frank can shove the 1 man onto the track in the path of the train, killing him, or he can refrain from doing this, letting the 5 die.
Question: Is it morally permissible for Frank to shove the man? ('Yes': 12%)

Scenario 3 (Ned). Ned is walking near the train tracks when he notices a train approaching out of control. Up ahead on the track are 5 people. Ned is standing next to a switch, which he can throw to turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, giving the 5 people time to escape. The heavy object is 1 man, standing on the side track. Ned can throw the switch, preventing the train from killing the 5 people, but killing the 1 man. Or he can refrain from doing this, letting the 5 die.
Question: Is it morally permissible for Ned to throw the switch? ('Yes': 56%)

Scenario 4 (Oscar). Oscar is walking near the train tracks when he notices a train approaching out of control. Up ahead on the track are 5 people. Oscar is standing next to a switch, which he can throw to turn the train onto a side track. There is a heavy object on the side track. If the train hits the object, the object will slow the train down, giving the 5 people time to escape. There is 1 man standing on the side track in front of the heavy object. Oscar can throw the switch, preventing the train from killing the 5 people, but killing the 1 man. Or he can refrain from doing this, letting the 5 die.
Question: Is it morally permissible for Oscar to throw the switch? ('Yes': 72%)

The Five Hallmark Behaviors of Ethics

Hallmark -- Psychological Correlate -- Brain Correlate

1) Empathy -- Theory of Mind (Social Cognition) -- Mirror Neurons; Spindle Neurons

2) Cooperation -- Perception of Community -- Right Dorsolateral Prefrontal Cortex

3) Reciprocity/Fairness -- Sensitivity to Fairness -- Mirror Neurons (reciprocal)

4) Altruism -- Kin Selection or 'Virtual' Kinship -- Ventral Striatum (non-reciprocal)

5) Punishment of Cheaters ('Free Riders') -- Disgust or Anger -- Ventral Striatum (Dopamine Reward)

(See Figure 4 for their Neurological Correlates.)
