From Gadfly to Nudge: The Genesis of Libertarian Paternalism.

Nearly a decade has passed since Richard H. Thaler and Cass R. Sunstein published Nudge: Improving Decisions About Health, Wealth, and Happiness (hereinafter, "Nudge"). (1) The bestselling book drew popular attention to "libertarian paternalism," a policy approach Thaler and Sunstein had previously proposed in their academic writing. (2)

Though somewhat controversial from the start, (3) the notion of libertarian paternalism quickly gained traction among policymakers all over the world. In the United States, President Barack Obama tapped Sunstein to serve as the administrator of the White House Office of Information and Regulatory Affairs, a position often referred to as the nation's "regulatory czar." (4) The U.S. Congress created a new agency, the Consumer Financial Protection Bureau, which was proposed by academics who were strongly influenced by the behavioral economics underlying Nudge (more about behavioral economics below). (5) The British government went so far as to create a Behavioural Insights Team, commonly referred to as the "Nudge Unit." (6) And in Denmark, the Applied Behavioural Science Group--a.k.a. the Danish Nudge Unit--began operating a popular website, inudgeyou.com. (7)

Given Nudge's success over the last decade in capturing the attention of policymakers and generating concrete policy proposals, it is worth pausing to assess how the libertarian paternalist project is faring. What is working? What is not? How, if at all, should the libertarian paternalist project be adjusted going forward?

On October 21, 2016, a group of prominent law professors and economists--some Nudge enthusiasts (including Sunstein himself), some skeptics--convened at the University of Missouri School of Law for a symposium addressing those questions. The bulk of this issue of the Missouri Law Review consists of articles based on the ideas presented at that symposium, Evaluating Nudge: A Decade of Libertarian Paternalism.

To provide context for the articles that follow, the remainder of this Foreword explains where libertarian paternalism came from--that is, what are its intellectual underpinnings, and how did they arise?

I. FROM GADFLY TO BEHAVIORAL ECONOMICS

Before there was libertarian paternalism, there was Gadfly. A thorn in the side of his economics professor, Gadfly often sticks in the memory of those who have taken a college-level economics course. He majored in something liberal artsy (philosophy, English?) or maybe another of the social sciences (psychology, sociology?). He was not a back-row student; he sat toward the front of the lecture hall, and he was an active participant in class discussion. But he was assuredly not buying what his economics professor was selling. Whenever the professor would suggest that the government should do X to induce people to do Y, or that well-meaning policy A is bad because it will just lead people to take undesirable action B, Gadfly's hand would shoot up. "Real people don't behave that way," he would say. "You're assuming people always act rationally. They often don't."

The economics professor generally gave Gadfly's remarks short shrift. "Yes, people sometimes act irrationally," she replied. "But most people act rationally most of the time. And we can't build a predictive theory of human behavior if we assume people just dart around making irrational, unpredictable decisions."

It turns out both Gadfly and his professor were right. Gadfly was correct in observing that people often act irrationally. The professor was right that people usually act rationally and that economists might as well close up shop if people act in unpredictable ways. But what if people are, to borrow the title of economist Dan Ariely's bestselling book, predictably irrational? (8) That is, what if they generally act rationally but, in certain identifiable contexts, make the same sorts of mistakes over and over again?

In recent years, many social scientists--including scores of economists--have come to believe that this is indeed how humans behave. (9) Numerous studies purporting to document people's systematically irrational behavior have led many scholars to jettison the so-called "rational choice" model of human behavior, under which humans are assumed to act as rational, self-interest maximizers. In its place, they have adopted a "behavioral" model under which people usually act rationally but occasionally, in systematic ways, make irrational decisions. (10)

Libertarian paternalism arose as a policy response to this behavioral model of human decision-making. Its goal is to help people make "better" choices--i.e., choices more in line with the decisions they would make were they not subject to the cognitive and volitional frailties behavioral economists and cognitive psychologists (collectively, "behavioralists") claim to have identified. To understand the libertarian paternalist project, then, one must have some sense of what those purported frailties are.

II. THE BEHAVIORAL MODEL OF HUMAN BEHAVIOR

Speaking generally, we can group the cognitive and volitional limitations identified by behavioralists into three categories: imperfect optimization, bounded self-control, and non-standard preferences. (11)

A. Imperfect Optimization

Imperfect optimization refers to people's purported tendencies to make systematic mistakes in choosing among alternative courses of action. The rational choice model of human behavior assumes that people make choices that maximize their welfare given their resource constraints. (12) To say that people are prone to imperfect optimization is to say that even when they have fixed preferences and plenty of willpower, they tend to make decisions that fail to wring the greatest possible value (as judged by them) from their resources. People make these mistakes, behavioralists say, because they are boundedly rational, inclined to use heuristics, and subject to systematic biases. (13)

1. Bounded Rationality

Coined by Nobel prize-winning economist Herbert Simon, the term "bounded rationality" refers to the fact that humans face limits on their memories, computational skills, and other mental abilities, and those limits in turn restrict their capacity to gather and process information. (14) To say that people are boundedly rational is not to say that they are irrational; it is simply to acknowledge that humans are not computers. Even non-behavioral economists concede that fact. The notion of bounded rationality is, however, the launching pad for behavioral economics, which has purported to demonstrate what people do in light of the fact that they are not computers.

2. Heuristics

One thing they do, behavioralists say, is employ mental shortcuts or "heuristics." (15) People frequently need to make quick judgments about things yet lack the time or mental resources to gather all available information or even to process carefully all the facts they already know. To use the terminology employed by Daniel Kahneman, they must rely on their reflexive "system one" method of thinking (an approach that is fast, instinctive, subconscious, and often emotional--the sort of thinking one uses when he dodges an object hurtling toward him). (16) They cannot engage in reflective "system two" thinking (an approach that is slower, effortful, conscious, and logical--the approach one uses when she is buying a new car). (17) People therefore tend to use rules of thumb or other mental shortcuts to help them make quick decisions.

One such mental shortcut, the "availability heuristic," involves assessing the probability of an occurrence based on how easily instances of the event may be called to mind; the more available past instances of the event are in one's recollection, the more probable one will deem the event to be. (18) At first glance, this seems like a sensible mental strategy. The more memories one has of past events occurring, the more probable it is that the event will recur, right?

Not always. Some events, though quite common, fail to stick in people's memories and thus are not available to them when they are assessing probabilities. This can lead people to make irrational judgments. For example, people asked to estimate how many words in a document end in "ing" give higher numbers than those asked to estimate how many of the writing's words have "n" as the next-to-last letter. (19) As a logical matter, the number of words with "n" as the penultimate letter must exceed (or equal) the number of words ending in "ing." But it is easier to call to mind "ing" words than words with a penultimate "n." It is also easier to call to mind "Detroit murders" than "Michigan murders," so it is not surprising that people tend to estimate that there were more murders in Detroit during some time period than there were in the state of Michigan--a logical impossibility. (20)

Closely related to the availability heuristic is a cognitive feature behavioralists call "salience bias." (21) Big, dramatic events command notice. They stick in our minds and are therefore more available to us than are regular, day-to-day events, especially when they engage our emotions. That implies that when the availability heuristic is operating, people will tend to overestimate the likelihood of noticeable (salient) events relative to occurrences that are less noteworthy but perhaps more common. Consistent with this theory, people tend to think that deaths from vehicle accidents are more common than deaths from lung cancer and that more people die from homicide than from emphysema, when in fact more people die from lung cancer than from vehicle accidents and from emphysema than from homicide. (22) Car wrecks and shootings are bloody and newsworthy (under the old newspaper adage, "if it bleeds, it leads"). Lung cancer and emphysema, though horrible, are usually neither gory--eliciting a visceral reaction--nor widely reported. Instances of them are less available.

A second well-documented heuristic involves what Kahneman and his longtime co-author, Amos Tversky, called "anchoring" and "adjustment." (23) When people are called upon to reach some conclusion--say, the number of pennies in a jar or the amount they are willing to pay for a new gadget--they often form an initial judgment, dubbed an anchor, based on some simple feature and then adjust that estimate to reach a final conclusion. (24) The adjustment is often quite conservative, causing the final judgment to be biased toward the anchor. (25) In addition, the anchor often has little or nothing to do with the subject matter of the judgment. This can result in some bizarre patterns of judgment.

For example, Kahneman and Tversky famously asked people to estimate what percentage of the countries in the United Nations ("UN") were located in Africa. (26) Before they did so, though, they had respondents spin a wheel of fortune that was secretly rigged to land on either ten or sixty-five. (27) They first asked whether the percentage of African UN members was above or below that number, a relative question, and then proceeded to ask the absolute question (i.e., What precise percentage of UN members are African?). (28) The result of the wheel spin was, of course, wholly unrelated to the matter under consideration, but it still appeared to affect respondents' answers. For subjects who spun a ten, the median answer to the latter question was twenty-five percent; for those spinning a sixty-five, it was forty-five percent. (29) The spin result, the researchers suggested, served as an anchor used by respondents in answering the absolute question. (30)

Kahneman and Tversky observed a similar result when they asked two groups to solve the same math problem within five seconds, but then phrased the problem differently for each group. (31) One group was asked to solve the equation 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = __; the other, 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 = __. (32) The former group's median answer (2,250) was more than four times as large as the latter's (512). (33) Both were way off the actual answer of 40,320, but Kahneman and Tversky focused on the marked difference between the two sets of answers. (34) They contended that respondents, required to make a quick (non-reflective, "system one") judgment, anchored on the first numbers in the problem when forming their estimates. (35)

Experimental evidence suggests that anchoring and adjustment may also be at play when people form reservation prices--i.e., when they determine how much they are willing to pay for something. In an experiment with students at the Massachusetts Institute of Technology ("MIT"), economists Dan Ariely, Drazen Prelec, and George Loewenstein held up various items--a bottle of wine, a cordless trackball, a textbook--and described each. (36) They then had the MIT students say whether they would purchase each item at a price equal to the last two digits of their social security numbers. (37) Having contemplated a purchase at a pretend price (the social security digits), the subjects were then invited to bid on the items. (38) Consistent with behavioralists' claims about anchoring and adjustment, the wholly irrelevant social security numbers appeared to influence the prices bid. (39) Subjects with high social security numbers bid substantially more than those with low numbers. (40) For example, subjects whose last two digits were between 80 and 99 bid an average of twenty-six dollars for the cordless trackball, whereas those with numbers from 00 to 19 bid an average of nine dollars. (41) This hardly seems rational.

A third mental shortcut, the "representativeness heuristic," involves something that looks like stereotyping. When asked to judge how likely it is that X belongs to category Y, people tend to ignore evidence about the magnitude of category Y--that is, how common it is for something to fall within that category--and rely more heavily on the degree to which X resembles a prototypical member of category Y. Such thinking leads to what statisticians call "base rate neglect." (42)

For example, in an experiment by Kahneman and Tversky, respondents were divided into three groups. (43) One, the "base rate" group, was asked to estimate the percentages of graduate students enrolled in nine different fields of study (business administration, computer science, engineering, humanities and education, law, library science, medicine, physical and life sciences, and social science and social work). (44) The second, the "similarity" group, was given a detailed description of a young man, Tom W., and was asked to rank the nine fields of study in terms of how prototypical Tom was of a graduate student in each field. (45) The third, the "prediction" group, was given the same description of Tom W., was told that it had been written during Tom's senior year of high school by a psychologist who had subjected Tom to projective tests, and was asked to rank the nine fields of graduate study in terms of the likelihood that Tom was now a student specializing in each area. (46)

One might expect that the prediction group's judgments about the likelihood that Tom would pursue a particular course of study would reflect the estimated popularity of that course of study. But that is not how things turned out. Instead, the prediction group's judgments of likelihood were much closer to the similarity group's rankings of representativeness than to the base rate group's rankings of overall popularity of academic concentrations. (47) For example, ninety-five percent of respondents said Tom was more likely a computer science student than an education or humanities student, even though the base rate group had estimated that far more graduate students concentrate in education and the humanities than in computer science. (48) The findings are consistent with the claim that people make predictions based on how representative (similar) something is, and not so much on what relative base rates are.

The representativeness heuristic misleads people when similarity and frequency diverge, and it may lead to absurd judgments. Consider, for example, an experiment involving a hypothetical woman named Linda. Subjects were told the following: "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." (49) Subjects were then asked to rank, in order of likelihood, possible futures for Linda, including "bank teller" and "bank teller and is active in the feminist movement." (50) In multiple iterations of this experiment, large majorities of respondents say the latter option (bank teller and feminist) is more probable than the former (bank teller). (51) But that logically cannot be! Because any feminist bank teller is also a bank teller, there is no way that it could be more likely that Linda would end up as a feminist bank teller than as a bank teller. Behavioralists point to respondents' irrational judgment as another example of humans' reflexive, system one mode of thinking overwhelming their thoughtful, system two mode.

3. Biases

In addition to using heuristics, behavioralists say, people tend to suffer from biases that may lead them to make suboptimal decisions. (52) Perhaps the most important of these is the optimism bias. Most humans, it seems, think they are more likely than average to experience good outcomes and less likely than average to suffer bad ones. For example, about ninety percent of drivers rate themselves as above-average behind the wheel. (53) Similarly, while about half of all marriages fail, most new spouses estimate their chances of divorcing as very low. (54) Entrepreneurs routinely think they are especially likely to succeed. (55) In one survey of people starting new businesses, the most common answer to the question, "What do you think is the chance of success for a new business like yours?," was fifty percent; the most common response to "What is your chance of success?" was ninety percent. (56) It seems most of us believe we are children living in Lake Wobegon. (57) And that, behavioralists say, affects our decisions: We are more likely to take risks if we (irrationally) believe we are especially unlikely to experience a bad outcome.

B. Bounded Self-Control

The cognitive frailties discussed above prevent people from knowing what, given their preferences, are their best courses of action. Behavioralists also maintain that people face volitional constraints--limits on their willpower. We know from common experience how hard it can be to forego current consumption in order to secure something better in the future. Indeed, many of us are so aware of our limited willpower that we voluntarily restrict our options so as not to succumb to temptation. (58)

According to behavioralists, many of our volitional frailties stem from our tendency to engage in "hyperbolic discounting." (59) To understand what that is, consider how people trade off present consumption opportunities against opportunities to consume in the future. (60) In general, people prefer to consume things sooner rather than later. Economists have thus long understood that when people are choosing between courses of action that will provide benefits at different times, they implicitly "discount" the value of future consumption opportunities. (61) A person might, for example, deem the right to receive $110 one year from now as worth only $100 today. That person's "discount rate" is ten percent.

Different people exhibit different discount rates. Those who really prefer present over future consumption have high discount rates, meaning that the future consumption opportunity must be significantly better than the current one in order for the person to forego current consumption. Others, by contrast, have low discount rates; they need not receive much compensation for holding off on consumption.

While economists have long understood that discount rates vary among individuals, they have generally assumed that whatever discount rate a person applies to a decision is constant over time. (62) So, for example, a person who would be indifferent between $100 today and $110 one year from now would also be indifferent between $100 one year from now and $110 two years from now. Her ten percent discount rate is the same for both one-year time delays. Economists refer to this as "exponential" discounting. (63)

In recent years, researchers have amassed a significant body of empirical data suggesting that this is not how people really make tradeoffs across time. Evidence suggests that discount rates are not constant but that people instead discount future rewards at a greater rate when the delay occurs sooner in time. (64) For example, while many people would rather have $100 today than $110 tomorrow, few would prefer $100 in thirty days to $110 in thirty-one. A one-day delay that would require little compensation if experienced a month from now would, for lots of people, be less tolerable--and would therefore require greater compensation--if experienced today. People are said to engage in "hyperbolic" discounting if the rate at which they discount future rewards is not constant but instead rises for earlier and earlier portions of the delay period. (65) A good bit of evidence suggests that this is, in fact, how people make many intertemporal tradeoffs. (66)
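To see the contrast concretely, compare the discount factors the two approaches apply to a reward t years away (here δ denotes the discount factor, k the annual discount rate, and t the delay in years). The simple one-parameter hyperbolic form below is offered only as an illustrative sketch; neither the functional form nor the ten percent rate is drawn from the studies cited above:

\[ \delta_{\text{exp}}(t) = \frac{1}{(1+k)^{t}} \qquad\qquad \delta_{\text{hyp}}(t) = \frac{1}{1+kt} \]

With k = .10, the exponential discounter applies the same factor (1/1.10 ≈ .909) to every one-year wait, no matter when it begins. The hyperbolic discounter applies that factor to a one-year wait beginning today but a gentler factor (.833/.909 ≈ .917) to the identical one-year wait beginning a year from now--which is just another way of saying that delays bite hardest when they are imminent.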

Consider, for example, overweight people who come to believe that they need to eat better and exercise more. Gazing into the future, they decide they are willing to forego fattening foods to secure a healthier body. When they compare next month's desserts to next year's better body, they apply a low discount rate, which leads them to ascribe greater value to the better body than to the joy from next month's desserts. But when dessert time rolls around tonight, the discount rate they apply to a future better body shoots up so that consuming dessert tonight seems like the value-maximizing option. After they blow it, they may start thinking again about future consumption tradeoffs--say, enjoying next month's holiday parties versus feeling good in a swimsuit next summer--and jump back on the diet. But, when another immediate consumption opportunity arises, the value of looking fit next summer suddenly seems awfully small. If a person engages in hyperbolic discounting of this sort, then even when her reasoning abilities enable her to ascertain the course of action that will bring her the greatest happiness (given her preferences), she may find herself lacking the willpower to stay on course.

C. Non-Standard Preferences

In addition to assuming that people possess the cognitive and volitional abilities to identify and follow the course of action that maximizes their welfare in light of their preferences, the rational choice model assumes that people's preferences--the degree to which they value one thing over another--are independent of the context in which choices are presented (or, to use some jargon, are "exogenous" rather than "endogenous"). (67) Behavioralists dispute that assumption. They point to evidence suggesting that institutional arrangements, particularly the allocation of property rights and other entitlements, help determine the value people attach to various outcomes.

Most notably, behavioralists claim that people exhibit an "endowment effect" under which the value they ascribe to a thing--a piece of property, a service, a legal right--depends in part on whether or not they own it. (68) Most of the evidence for this effect has been experimental. In dozens of experiments, the average minimum price that would be charged by a group of people who have been given an item (the average "willingness-to-accept" or "WTA") exceeds the average maximum price that a similarly situated group of people who have not been given the item would pay for it (the average "willingness-to-pay" or "WTP"). (69) Both measures should reflect the subjective valuation a person ascribes to the thing at issue: WTA is the minimum amount an owner would have to be paid to part with the thing; WTP is the maximum amount a non-owner would be willing to give up to get it. If owners' WTA for a thing routinely exceeds non-owners' WTP for the exact same thing, behavioralists contend, then the mere fact of ownership must enhance the subjective value attributed to the thing. (70)

In a typical experiment, half the students in a class were given coffee mugs bearing their school's insignia, and the others were directed to examine their neighbors' mugs so that all students would have an idea of the mugs' quality. (71) Mug owners were then invited to sell, and non-owners to buy, the mugs that had been distributed. (72) Specifically, each student was asked to state his or her reservation price--the student's subjective valuation of the mug--by responding to the following prompt: "At each of the following prices, indicate whether you would be willing to (give up your mug / buy a mug)." (73) On average, those who had been given mugs demanded roughly twice as much to sell them (WTA) as non-owners were willing to pay to acquire them (WTP). (74)

In a similar experiment, half the students in a class were given coffee mugs (for some reason, the standard item for these sorts of experiments), and the other half received big chocolate bars that cost roughly the same amount as the mugs. (75) In tests conducted before the experiment, students were as likely to pick one of the items as the other. (76) After they owned one of the items and were given an opportunity to trade it for the other, however, very few made the trade. (77) Only one in ten switched from the item they were given. (78) The suggestion is that owning the items distributed to them caused students to value those items more than they otherwise would.

There is evidence suggesting that the apparent endowment effect may be a function of experimental design. For example, after economists Charles Plott and Kathryn Zeiler conducted standard coffee mug experiments and observed the usual result (WTA > WTP), they repeated the experiments using best practice for experimental design, and the apparent endowment effect disappeared. (79) Although the work by Plott and Zeiler has generated its own controversy, (80) it does call into question the stronger claims about the endowment effect. Most behavioral legal scholars, however, deem the evidence settled: Endowing a person with a thing causes him or her to value it more. (81)

Closely related to the endowment effect is a tendency behavioralists refer to as "loss aversion." In their "prospect theory" of human behavior, Kahneman and Tversky asserted that people tend to evaluate outcomes not in isolation but relative to an initial reference point, and they noted empirical evidence that people weigh losses from a reference point more heavily than correlative gains. (82) (Consistent with the endowment effect, the pain a person feels from giving up something she owns exceeds the pleasure she would feel from gaining that same thing in the first place.)

Again, much of the evidence for loss aversion is experimental. In a typical experiment, subjects were asked to imagine a coin toss in which they will win some amount of money (X dollars) if the coin lands on heads but will have to pay $100 if it lands on tails. (83) When asked how large X must be in order for them to participate in the coin toss, most subjects responded with a number near $200. (84) This implies that people hate losses so much that they will give up opportunities worth up to twice the amount of the losses in order to avoid them. (In refusing to engage in the coin toss if the payment for heads is, say, $180, a subject is effectively saying that an expected loss of $50 hurts him more than an expected gain of $90 benefits him.)
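Spelling out the arithmetic behind that parenthetical may be useful; the expected-value figures follow directly from the fifty-fifty odds of a fair coin:

\[ E[\text{gain}] = 0.5 \times \$180 = \$90 \qquad\qquad E[\text{loss}] = 0.5 \times \$100 = \$50 \]

A subject who turns down this bet forgoes a gamble with a positive expected value ($90 - $50 = $40), and the reported break-even figure of roughly $200 implies that losses are weighted about twice as heavily as equivalent gains ($200/$100 = 2), consistent with the "twice the amount" observation above.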

The endowment effect and loss aversion, behavioralists assert, give rise to two other predictable quirks. One is a "status quo bias." If people tend to attach extra value to their initial set of entitlements, and if the losses they experience from changing things weigh heavier than the gains they experience from change, they will tend to leave things as they are. (85) In addition, say the behavioralists, people are subject to "framing effects"; since people perceive losses as weighing more than correlative gains, whether an opportunity is framed as a gain or a loss matters. (86)

Consider, for instance, two statements that convey the same information about the risks of a surgical procedure:
Statement A: "Of 100 patients who have this operation, ninety are alive
after five years."

Statement B: "Of 100 patients who have this operation, ten are dead
after five years."


In numerous experiments, people presented with Statement A, which focuses on gains, are much more likely to select the procedure than are people presented with Statement B, which emphasizes losses. (87) Indeed, even experts may be subject to framing effects. Doctors deciding whether to recommend a procedure are more likely to do so if they are told "90 of 100 are alive" after some period of time rather than "10 of 100 are dead." (88)

To the extent they really exist, the endowment effect, loss aversion, status quo biases, and framing effects imply that people's preferences--and the outcomes that follow from them--are largely constructed by government policy. How government allocates entitlements influences people's preferences for those entitlements (endowment effect). People often will not give up what they have in order to get something that they would have perceived as better had it been initially allocated to them (loss aversion / status quo bias). And people's decisions about what outcomes to pursue may turn on whether those outcomes are presented as potential gains or losses (framing effects).

If the rational choice model's homo economicus is Superman, behavioralism's prototypical human is Ralph Hinkley, the hapless protagonist of the early 1980s television show The Greatest American Hero. Hinkley was often confused. He relied on hunches. He had remarkable powers, but he could not control them. He was more confident than he should have been, given his limitations. He was reluctant to disrupt the status quo. And he made lots of blunders. The same goes for us humans, behavioralists say. As a result, we routinely fail to maximize our welfare, and we suffer regret.

III. THE LIBERTARIAN PATERNALIST RESPONSE

So what should the government do about this unhappy situation? One response would be to have government planners override individual decision-making in areas in which people's cognitive and volitional limitations are likely to lead them to make decisions other than those they would make if they were fully rational and had boundless self-control. Under such an approach--hard paternalism--the government could force people to save more, eat better, or otherwise act or forbear as government planners believe people would do were they not so limited in reason and willpower.

There are two obvious drawbacks to such an approach. First, it requires a tremendous amount of knowledge on the part of the planners, who are not privy to individuals' true preferences and values. We may call that difficulty paternalism's "knowledge problem." (89) Second, hard paternalism creates "public choice concerns" by endowing government planners with vast discretionary authority that may be manipulated by private interests for ends that do not maximize social welfare. (90)

In light of these two difficulties, governments in liberal societies have generally eschewed paternalistic solutions to judgment errors resulting from decision-makers' cognitive and volitional limitations. (91) While one finds occasional examples of such paternalism--for example, the ban on soft drugs whose use entails few obvious third-party effects--most seemingly paternalistic policies are conceivably justifiable on grounds of preventing harm to others. Absent a threat of significant externalities, there seems to be little enthusiasm in liberal societies for government meddling in individual decision-making, even when personal choices may be influenced by the choosers' cognitive and volitional constraints. (92)

Consider, for example, people's career decisions. Planning and preparing for one's career involves making a host of predictions (where heuristics and biases play a role) and may demand a great deal of volitional fortitude (which hyperbolic discounting may impair). Thus, as behavioralists would predict, people regularly make career decisions that they end up regretting. Yet, it is virtually unheard of in liberal societies for government officials to coerce people--or even to cajole them--into one career over another. The same goes for the rest of life's major decisions, many of which may be less than optimal because of people's various human frailties. Governments typically leave those decisions to individuals themselves, knowing full well that people will often choose poorly.

Indeed, in most areas, the approach liberal governments have taken to personal decision-making resembles that promoted by John Stuart Mill, who wrote that
the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right. (93)


Mill's libertarian approach avoids the knowledge problem and public choice concerns arising from paternalism, but it does nothing to address the harm (regret, etc.) that results when individuals, because of their cognitive and volitional limitations, make decisions that are different from those they would have made if they were not so limited.

The central claim of libertarian paternalism is that there is middle ground between paternalism, which employs bans and commands to override individual decision-making even absent third-party effects, and libertarianism, under which government stays its hand in influencing individual decision-making unless intervention is needed to prevent third-party harm. Within that territory, libertarian paternalists say, lies an approach under which planners construct "choice architecture" that steers people toward ends that are best for them (as they themselves would judge were they operating free of cognitive and volitional limitations), while simultaneously protecting people's freedom of choice by allowing them to opt out of specified arrangements should they choose to do so. The approach is paternalistic in that its stated goal is to help people do what is best for them, not to prevent harm to others. It is libertarian, proponents contend, in that it ultimately leaves people free to choose; it "nudges" rather than coerces.

Perhaps the classic example of a libertarian paternalist nudge is switching the default rule on participation in employer-sponsored savings plans--401(k)s and the like--from opt-in (where the employee must sign up for the plan) to opt-out (where the employee is automatically enrolled but may withdraw if she chooses). Behavioralism predicts that the status quo bias and people's tendency to engage in hyperbolic discounting will prevent many employees from enrolling in savings plans even when they would "really" prefer to save more. Altering the choice architecture from an opt-in to an opt-out default rule, libertarian paternalists assert, harnesses the status quo bias to nudge employees in the direction that is, by employees' own considered lights, good for them. (94) But, because participation at the standard savings rate is merely a default rule, any employee who prefers another course of action may freely take it. Nudge presents a number of similar ideas for implementing libertarian paternalism.

IV. THE ARTICLES THAT FOLLOW: SECOND GENERATION SCHOLARSHIP

So how is libertarian paternalism faring? Does actual experience suggest that it is a viable regulatory strategy? Has it led to unintended consequences, and how, if at all, should it be tweaked going forward? Addressing these and similar questions, the articles that follow represent a second generation of scholarship on libertarian paternalism. First generation scholarship, produced around or soon after the time of Nudge's publication, considered libertarian paternalism's theoretical promise and limitations. (95) The articles that follow supplement that first generation scholarship with insights based on actual experience. Such experience has shown, for example, that automatic enrollment in employer-sponsored savings plans--libertarian paternalism's poster child--may increase employee participation rates but reduce the overall amounts saved by locking participating employees into default savings rates. (96)

The remaining contributions to this symposium issue highlight a number of additional lessons (and raise many issues for future debate):

* Professor Sunstein himself kicks things off by revisiting "forced choosing," a form of choice architecture endorsed in Nudge. (97) He contends that forced choosing is itself paternalistic (though not necessarily bad). (98)

* Jacob Goldin draws a distinction between consistent and inconsistent choosers and considers the effect of choice architecture on each. (99) He then introduces the notion of "quasi-paternalistic" nudges and shows how their existence may influence the policy choice between nudges and mandates. (100)

* Jonathan Lee and Hengchen Dai explain the notion of "fresh starts" and describe some fascinating empirical findings that may suggest ways for policymakers to craft more efficacious nudges. (101)

* Gregory Mitchell observes that the term "nudge" has expanded to encompass all sorts of interventions that could in no way be deemed libertarian paternalism. (102) He queries whether the nudge label is being "used opportunistically, as cover for run-of-the-mill paternalism," (103) and he sets forth criteria for identifying which nudges could be accurately labeled as libertarian.

* Arden Rowell addresses a potential problem resulting from libertarian paternalism's success in achieving traction among policymakers. (104) Observing that the proliferation of nudges will entail "nudge-nudge interactions," she counsels policymakers to develop strategies for managing situations in which different nudges conflict with, undermine, and/or strengthen one another. (105)

* Victoria Shaffer reviews evidence on the effectiveness of nudges in the health care arena. (106) She concludes that nudging, while no panacea, is one effective tool for generating significant and salutary behavioral changes in the area of human health. (107)

* Adam Smith addresses the issue of public choice, contending that the structure of the institutions crafting choice architecture matters a great deal for libertarian paternalism's success. (108) He draws lessons from a comparison of two behaviorally informed policymaking institutions: the U.S. Consumer Financial Protection Bureau and the U.K. Behavioural Insights Team. (109)

* Todd Zywicki, Geoffrey Manne, and Kristian Stout counsel caution in the use of behavioral law and economics in judicial proceedings. (110) Focusing on a recent Supreme Court case, the authors highlight what they perceive to be a misuse of behavioral theories in constitutional argument. (111) They maintain that proponents of behavioral law and economics have often paid too little attention to conflicting empirical evidence and have given short shrift to rational explanations for observed behavior. (112)

With that, the stage is set. Let the Nudge Fest commence.

Thomas A. Lambert (*)

(*) Thomas A. Lambert holds the Wall Family Chair in Corporate Law and Governance at the University of Missouri School of Law. This article is adapted from chapter nine of THOMAS A. LAMBERT, HOW TO REGULATE: A GUIDE FOR POLICYMAKERS (2017) (Cambridge University Press).

(1.) RICHARD H. THALER & CASS R. SUNSTEIN, NUDGE: IMPROVING DECISIONS ABOUT HEALTH, WEALTH, AND HAPPINESS (2008).

(2.) See, e.g., Richard H. Thaler & Cass R. Sunstein, Libertarian Paternalism, 93 AM. ECON. REV. 175 (2003); Cass R. Sunstein & Richard H. Thaler, Libertarian Paternalism Is Not an Oxymoron, 70 U. CHI. L. REV. 1159 (2003).

(3.) See, e.g., Gregory Mitchell, Libertarian Paternalism Is an Oxymoron, 99 NW. U. L. REV. 1245 (2005); Mario J. Rizzo & Douglas Glen Whitman, The Knowledge Problem of the New Paternalism, 2009 BYU L. REV. 905 (2009); Mario J. Rizzo & Douglas Glen Whitman, Little Brother is Watching You: New Paternalism on the Slippery Slopes, 51 ARIZ. L. REV. 685 (2009); Joshua D. Wright & Douglas H. Ginsburg, Behavioral Law and Economics: Its Origins, Fatal Flaws, and Implications for Liberty, 106 NW. U. L. REV. 1033 (2012); Jonathan Klick & Gregory Mitchell, Government Regulation of Irrationality: Moral and Cognitive Hazards, 90 MINN. L. REV. 1620 (2006).

(4.) Jonathan Weisman & Jess Bravin, Obama's Regulatory Czar Likely to Set a New Tone, WALL ST. J. (Jan. 8, 2009, 12:01 AM), https://www.wsj.com/arti-cles/SB123138051682263203.

(5.) See Tim Chen, The Soft Power of the Consumer Financial Protection Bureau, FORBES (June 17, 2011, 10:30 AM), https://www.forbes.com/sites/money-builder/2011/06/17/the-soft-power-of-the-consumer-financial-protection-bu-reau/#21db6d553705.

(6.) See Katrin Bennhold, Britain's Ministry of Nudges, N.Y. TIMES (Dec. 7, 2013), http://www.nytimes.com/2013/12/08/business/international/britains-ministry-of-nudges.html.

(7.) See INUDGEYOU, http://inudgeyou.com/ (last visited Aug. 19, 2017).

(8.) DAN ARIELY, PREDICTABLY IRRATIONAL: THE HIDDEN FORCES THAT SHAPE OUR DECISIONS (2008).

(9.) See generally RICHARD H. THALER, MISBEHAVING: THE MAKING OF BEHAVIORAL ECONOMICS (2015).

(10.) See generally id.

(11.) WILLIAM J. CONGDON, JEFFREY R. KLING, & SENDHIL MULLAINATHAN, POLICY AND CHOICE: PUBLIC FINANCE THROUGH THE LENS OF BEHAVIORAL ECONOMICS 7 (2011). We cannot, of course, delve deeply into all the quirks and limitations behavioral economists purport to observe. For a more exhaustive overview, see BEHAVIORAL LAW AND ECONOMICS (Cass R. Sunstein ed., 2000) and THALER, supra note 9.

(12.) Robert C. Ellickson, Bringing Culture and Human Frailty to Rational Actors: A Critique of Classical Law and Economics, 65 CHI.-KENT L. REV. 23, 23 (1989) (summarizing the rational choice model).

(13.) See Jeffrey J. Rachlinski, Selling Heuristics, 64 ALA. L. REV. 389, 390 (2012).

(14.) Herbert A. Simon, Rational Choice and the Structure of the Environment, in HERBERT A. SIMON, MODELS OF MAN: SOCIAL AND RATIONAL 261, 271-73 (Herbert A. Simon, ed. 1957).

(15.) Rachlinski, supra note 13, at 390 ("One of the basic lessons of cognitive psychology over the last four decades has been that people use simple mental shortcuts, known as heuristics, to manage complexity and uncertainty.").

(16.) DANIEL KAHNEMAN, THINKING, FAST AND SLOW 19-30 (2011).

(17.) Id.

(18.) Amos Tversky & Daniel Kahneman, Availability: A Heuristic for Judging Frequency and Probability, 5 COGNITIVE PSYCHOL. 207, 208 (1973).

(19.) Jon D. Hanson & Douglas A. Kysar, Taking Behavioralism Seriously: The Problem of Market Manipulation, 74 N.Y.U. L. REV. 630, 663 n. 141 (1999) (citing Amos Tversky & Daniel Kahneman, Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment, 90 PSYCHOL. REV. 293, 295 (1983)).

(20.) Daniel Kahneman & Shane Frederick, Representativeness Revisited: Attribute Substitution in Intuitive Judgment, in HEURISTICS AND BIASES: THE PSYCHOLOGY OF INTUITIVE JUDGMENT 49, 78 (Thomas Gilovich, Dale Griffin & Daniel Kahneman eds., 2002). The result here may also be influenced by the "representativeness" heuristic discussed later in the text. See Norbert Schwarz & Leigh Ann Vaughn, The Availability Heuristic Revisited: Ease of Recall and Content of Recall as Distinct Sources of Information, in HEURISTICS AND BIASES: THE PSYCHOLOGY OF INTUITIVE JUDGMENT, supra, 103, 103-19.

(21.) See generally Shelley E. Taylor, The Availability Bias in Social Perception and Interaction, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES 190, 192 (Daniel Kahneman, Paul Slovic & Amos Tversky eds., 1982) ("Salience biases refer to the fact that colorful, dynamic, or other distinctive stimuli disproportionately engage attention and accordingly disproportionately affect judgments."); Paul Slovic, Perception of Risk, 236 SCIENCE 280, 283 (1987) (discussing "dread risk").

(22.) Paul Slovic, Baruch Fischhoff & Sarah Lichtenstein, Fact Versus Fears: Understanding Perceived Risk, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES, supra note 21, at 463, 469.

(23.) Amos Tversky & Daniel Kahneman, Judgment Under Uncertainty: Heuristics and Biases, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES, supra note 21, at 3, 14.

(24.) See id. at 14.

(25.) See id.

(26.) Id.

(27.) Id.

(28.) Id.

(29.) Id.

(30.) Id.

(31.) Id. at 15.

(32.) Id.

(33.) Id.

(34.) Id.

(35.) Id. at 15-16.

(36.) Dan Ariely, George Loewenstein & Drazen Prelec, "Coherent Arbitrariness": Stable Demand Curves Without Stable Preferences, 118 Q.J. ECON. 73, 76 (2003).

(37.) Id.

(38.) Id.

(39.) Id.

(40.) Id.

(41.) Id.

(42.) Suppose, for example, that a disease afflicts one in a thousand people. A test for the disease correctly detects every instance of infection and is ninety-five percent accurate when the person does not have the disease. If you test positive for the disease, what is the chance that you actually have it? Most people say something like ninety-five percent, reflecting the test's high degree of accuracy. In reality, your chance of having the disease is less than two percent. Out of 1,000 people tested, fifty-one would test positive--one accurately, and fifty falsely. A positive test result, then, would reflect infection in only one of fifty-one cases (1.96% of the time). People tend to err here because they focus only on the test's high accuracy rate and ignore the low base rate of infection.
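For readers who prefer the formula to the frequency count, the same result follows from Bayes' rule (the notation here is standard but is supplied by way of illustration rather than drawn from any cited source):

\[ P(\text{disease} \mid +) = \frac{P(+ \mid \text{disease})\,P(\text{disease})}{P(+ \mid \text{disease})\,P(\text{disease}) + P(+ \mid \text{no disease})\,P(\text{no disease})} = \frac{(1)(.001)}{(1)(.001) + (.05)(.999)} \approx .0196 \]

That is, roughly a two percent chance of infection despite the positive result, matching the one-in-fifty-one figure above.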

(43.) Daniel Kahneman & Amos Tversky, On the Psychology of Prediction, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES, supra note 21, at 48, 49-50.

(44.) Id. at 49-50.

(45.) Tom W. was described as follows:
Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.


Id. at 49.

(46.) Id. at 49-50.

(47.) Id. at 50-51.

(48.) Id. at 50.

(49.) See, e.g., Amos Tversky & Daniel Kahneman, Judgments of and by Representativeness, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES, supra note 21, at 84, 92.

(50.) Id.

(51.) Id. at 93.

(52.) See, e.g., Hillel J. Einhorn, Learning from Experience and Suboptimal Rules in Decision Making, in JUDGMENT UNDER UNCERTAINTY: HEURISTICS AND BIASES, supra note 21, at 268, 268-83.

(53.) THALER & SUNSTEIN, supra note 1, at 32.

(54.) Id.

(55.) Id.

(56.) Id. (citing Arnold C. Cooper, Carolyn Y. Woo & William C. Dunkelberg, Entrepreneurs' Perceived Chances for Success, 3 J. BUS. VENTURING 97, 97-108 (1988)).

(57.) Garrison Keillor, host of the popular radio program A Prairie Home Companion, regularly described the show's setting, Lake Wobegon, as a place "where all the women are strong, all the men are good-looking and all the children are above average." Sarah Begley, Garrison Keillor to Say So-Long to Lake Wobegon, TIME (July 20, 2015), http://time.com/3965277/garrison-keillor-retiring/.

(58.) This author, for example, strictly limits the number of roasted almonds he carries to his easy chair when he's reading the newspaper before dinner. Even though he knows he will ultimately experience more pleasure if he eats just a few nuts and saves room for a well-balanced supper, experience has taught him that he simply cannot avoid eating too many nuts if he takes the whole can to the lounger!

(59.) George Ainslie, Procrastination: The Basic Impulse, in THE THIEF OF TIME: PHILOSOPHICAL ESSAYS ON PROCRASTINATION 11, 12-13 (Chrisoula Andreou & Mark D. White eds., 2010).

(60.) Id. at 18.

(61.) See PAUL HEYNE, PETER J. BOETTKE & DAVID L. PRYCHITKO, THE ECONOMIC WAY OF THINKING 169-71, 191-96 (10th ed. 2003).

(62.) See THALER, supra note 9, at 89-94 (explaining how notion of a constant discount rate became dominant in economics).

(63.) The term exponential discounting is used because the time delay in the formula for assessing the present value of a future consumption opportunity is an exponent. A future reward is adjusted by a factor of 1/(1+k)^t, where k is the discount rate and t is the number of years until consumption. For example, for a person with a ten percent discount rate, receiving $110 one year in the future requires adjusting $110 by a factor of 1/(1+.10)^1, or .9091. (To complete the math, $110 * .9091 = $100.) See Colin F. Camerer & George Loewenstein, Behavioral Economics: Past, Present, Future, in ADVANCES IN BEHAVIORAL ECONOMICS 3, 22-27 (Colin F. Camerer, George Loewenstein & Matthew Rabin eds., 2004).

(64.) Id. at 22.

(65.) The term hyperbolic discounting is used because the formula for the factor by which a future reward must be adjusted is the same as the generalized function for a hyperbola. (The math is beyond our scope.) Id. at 23.

(66.) Id. at 23-27 (summarizing empirical evidence that people engage in hyperbolic discounting).

(67.) Id. at 12 ("Standard preference theory... assumes that preferences are 'reference independent'--i.e., they are not affected by the individual's transient asset position... [and] are invariant with respect to superficial variations in the way that options are described....").

(68.) Id. at 15.

(69.) THALER & SUNSTEIN, supra note 1, at 33.

(70.) Camerer & Loewenstein, supra note 63, at 15-16.

(71.) THALER & SUNSTEIN, supra note 1, at 33.

(72.) Id.

(73.) Id.

(74.) Daniel Kahneman, Jack L. Knetsch & Richard H. Thaler, Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias, 5 J. ECON. PERSPS. 193, 196 (1991).

(75.) THALER & SUNSTEIN, supra note 1, at 34.

(76.) Id.

(77.) Id.

(78.) Id.

(79.) Charles R. Plott & Kathryn Zeiler, The Willingness to Pay-Willingness to Accept Gap, the "Endowment Effect," Subject Misconceptions, and Experimental Procedures for Eliciting Valuations, 95 AM. ECON. REV. 530, 530 (2005); see also Charles R. Plott & Kathryn Zeiler, Exchange Asymmetries Incorrectly Interpreted as Evidence of Endowment Effect Theory and Prospect Theory?, 97 AM. ECON. REV. 1449, 1450 (2007) (finding same).

(80.) See Andrea Isoni, Graham Loomes & Robert Sugden, The Willingness to Pay-Willingness to Accept Gap, the "Endowment Effect," Subject Misconceptions, and Experimental Procedures for Eliciting Valuations: Comment, 101 AM. ECON. REV. 991 (2011); Charles R. Plott & Kathryn Zeiler, The Willingness to Pay-Willingness to Accept Gap, the "Endowment Effect," Subject Misconceptions, and Experimental Procedures for Eliciting Valuations: Reply, 101 AM. ECON. REV. 1012 (2011).

(81.) In the five years following the publication of the Plott and Zeiler studies in the high-profile American Economic Review, fewer than ten percent of legal publications referring to the endowment effect bothered to cite Plott and Zeiler's work. Joshua D. Wright & Douglas H. Ginsburg, Behavioral Law and Economics: Its Origins, Fatal Flaws, and Implications for Liberty, 106 NW. U. L. REV. 1033, 1047-48 (2012).

(82.) Daniel Kahneman & Amos Tversky, Prospect Theory: An Analysis of Decision Under Risk, 47 ECONOMETRICA 263, 277-79 (1979); Amos Tversky & Daniel Kahneman, Loss Aversion in Riskless Choice: A Reference-Dependent Model, 106 Q.J. ECON. 1039, 1041-42 (1991).

(83.) THALER & SUNSTEIN, supra note 1, at 33-34.

(84.) Id. at 34.

(85.) Id. at 34-35.

(86.) Id. at 36-37; Camerer & Loewenstein, supra note 63, at 12-14.

(87.) THALER & SUNSTEIN, supra note 1, at 36.

(88.) Id.

(89.) See generally F. A. Hayek, The Use of Knowledge in Society, 35 AM. ECON. REV. 519 (1945).

(90.) See William F. Shughart II, Public Choice, in THE CONCISE ENCYCLOPEDIA OF ECONOMICS 427, 427-30 (David R. Henderson ed., 2008). Public choice theory assumes that the motivations of actors in the political process--from voters to lobbyists to bureaucrats to politicians--are no different from those of people participating in grocery, housing, or car markets. Id. at 428. Voters "vote their pocketbooks"; lobbyists seek the money and prestige that comes from securing competitive advantages for their clients; bureaucrats strive for job advancement, with its enhanced power and income; and politicians seek election and re-election. Id. While all these parties may seek to mask this "crass" self-interest by paying lip service to altruistic considerations, self-interest lurks beneath the surface. See Bruce Yandle, Bootleggers and Baptists: The Education of a Regulatory Economist, 7 REGULATION 12 (1983). In the memorable words of Nobel laureate James Buchanan, public choice is simply "politics without romance." James M. Buchanan, Politics Without Romance: A Sketch of Positive Public Choice Theory and Its Normative Implications, in THE THEORY OF PUBLIC CHOICE--II 11 (James M. Buchanan & Robert D. Tollison eds., 1984).

(91.) See Buchanan, supra note 90, at 15.

(92.) See id. at 20.

(93.) John Stuart Mill, On Liberty, in ON LIBERTY 5, 13 (Stefan Collini ed., Cambridge Univ. Press 1989) (1859).

(94.) See THALER & SUNSTEIN, supra note 1, at 108-09 (describing automatic enrollment as a salutary libertarian paternalist intervention).

(95.) See sources cited supra notes 2-3 (citing early scholarship advocating and criticizing libertarian paternalism).

(96.) See, e.g., James J. Choi et al., For Better or for Worse: Default Effects and 401(k) Savings Behavior 2 (Nat'l Bureau of Econ. Research, Working Paper No. 8651, 2002); Ryan Bubb & Richard H. Pildes, How Behavioral Economics Trims Its Sails and Why, 127 HARV. L. REV. 1593, 1618-25 (2014); see also Anne Tergesen, 401(k) Law Suppresses Saving for Retirement, WALL ST. J. (July 7, 2011), https://www.wsj.com/articles/SB10001424052702303365804576430153643522780.

(97.) THALER & SUNSTEIN, supra note 1, at 109-10.

(98.) See Cass R. Sunstein, Forcing People to Choose Is Paternalistic, 82 MO. L. REV. 643 (2017).

(99.) Jacob Goldin, Libertarian Quasi-Paternalism, 82 MO. L. REV. 669 (2017).

(100.) Id.

(101.) Jonathan Lee & Hengchen Dai, The Motivating Effects of Temporal Landmarks: Evidence from the Field and Lab, 82 MO. L. REV. 683 (2017).

(102.) Gregory Mitchell, Libertarian Nudges, 82 MO. L. REV. 695 (2017).

(103.) Id. at 696.

(104.) Arden Rowell, Once and Future Nudges, 82 MO. L. REV. 709 (2017).

(105.) Id. at 719-22.

(106.) Victoria A. Shaffer, Nudges for Health Policy: Effectiveness and Limitations, 82 MO. L. REV. 727 (2017).

(107.) Id. at 736.

(108.) Adam C. Smith, Utilizing Behavioral Insights (Without Romance): An Inquiry into the Choice Architecture of Public Decision-Making, 82 MO. L. REV. 737 (2017).

(109.) Id. at 752-61.

(110.) Todd J. Zywicki, Geoffrey A. Manne & Kristian Stout, Behavioral Economics Goes to Court: The Fundamental Flaws in the Behavioral Law & Economics Arguments Against No-Surcharge Laws, 82 MO. L. REV. 769 (2017).

(111.) Id. at 800-08.

(112.) Id. at 840-41.