Introduction

The advancement of science in large part depends upon observation of behavior that has either never been encountered before or, if previously encountered, remains inadequately explained. Observation of what is presently inscrutable propels scientific inquiry. In the fields of cosmology and physics, for instance, observations of particles and the “arrow of time” have propelled the search for our universe’s origin and evolution (Hawking, 2017). In the field of neuroscience, observation of the genetic barcode of a mouse’s brain cells has enlightened our understanding of how human cells mature with age, how tissues regenerate, and how disease impacts these processes (Pennisi, 2018). And so it is with what has come to be known as behavioral economics, a field of inquiry melding psychology’s long-running exploration of human cognition and social norms with the long-standing axioms of omniscient rationality that economists have traditionally ascribed to human choice behavior. Behavioral economics is the long-awaited advancement in economic theory and experimentation that involves both deconstructing and reconstructing the economist’s rational-choice, neoclassical model to better explain the choices individuals actually make on a daily basis, and ultimately to better inform public policy. Through their keen observations of human choice behavior in a wide variety of contexts, behavioral economists have propelled scientific inquiry.

As aptly pointed out by Samson (2019), observations of choice behavior in both private and social settings demonstrate the extent to which human decisions are influenced by context, including how choices are presented to us. The observations demonstrate ways in which our choice behavior is subject to cognitive biases, emotions, heuristics, and social influences. Because these biases, emotions, and influences have, in turn, been shown in a myriad of well-designed laboratory and field experiments and empirical studies to govern choice behavior in ways unpredicted by economists’ rational-choice models, we cannot help but celebrate the emergence of behavioral economics as a separate field of inquiry. In some sense, behavioral economics can be thought of as an overt partnership between the complementary fields of psychology and economics—a natural blending of the former’s insights on human cognition and the latter’s focus on choice behavior. As we will learn in this textbook, behavioral economics is a beacon, not only for the revision and generalization of key features of the economist’s rational-choice model of human behavior but also for what Thaler and Sunstein (2009) have popularized as “nudges” that can improve the outcomes of public policymaking.

Five examples depict the reach of behavioral economics as a separate field of inquiry and illustrate its emergence as a canon of human choice behavior. The first two examples demonstrate precisely how this behavior deviates from the economist’s rational-choice model in the confines of laboratory and field experimentation. The third example demonstrates how policymakers have leveraged these experimental findings to nudge private decisions toward more preferable social outcomes. The fourth example shows how researchers have tested the findings with real-world data obtained from unexpected places. And the fifth example demonstrates what is known as “behavioral game theory,” outcomes of well-known economic games that depart from theoretical predictions, sometimes in rather dramatic fashion.

Example 1

The Invariance Axiom is central to expected utility theory, i.e., rational choice behavior under uncertainty. Simply put, the axiom holds that an individual’s preference ordering of different lotteries (e.g., ranking from most to least preferred lottery) does not depend upon (i.e., is invariant to) how the lotteries are described to the individual. Kahneman and Tversky (1984) test this axiom with a simple experiment involving two subject groups, each group totaling roughly 150 students.

Group 1 was presented with the following lottery:

Imagine that your hometown is preparing for the outbreak of an unusual disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which of the two programs would you favor?

Group 2’s lottery was this:

Imagine that your hometown is preparing for the outbreak of an unusual disease that is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program C is adopted, 400 people will die.

If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

Which of the two programs would you favor?

If you look closely at the two lotteries, you will note that they are identical. Program A from Group 1’s lottery is identical to Program C from Group 2’s lottery, and Group 1’s Program B is identical to Group 2’s Program D.[1] Thus, we expect the percentages of Group 1 students choosing between Programs A and B in their lottery to be roughly equal to the corresponding percentages of Group 2 students choosing between Programs C and D in their lottery. This would be in keeping with the Invariance Axiom.
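The equivalence of the two framings can be checked with a quick expected-value calculation. This is a standard illustration of the arithmetic; the variable names below are our own, not part of Kahneman and Tversky’s experiment:

```python
# Expected number of lives saved under each program, out of the
# 600 people at risk. Programs A and C are certain outcomes;
# Programs B and D are gambles.

TOTAL_AT_RISK = 600

program_a = 200                                  # 200 saved for certain
program_b = (1/3) * TOTAL_AT_RISK + (2/3) * 0    # 600 saved w.p. 1/3
program_c = TOTAL_AT_RISK - 400                  # 400 die, so 200 saved
program_d = (1/3) * TOTAL_AT_RISK + (2/3) * 0    # nobody dies w.p. 1/3

# Both framings describe the same pair of lotteries.
assert program_a == program_c == 200
assert program_b == program_d == 200
```

In expected-value terms, every program saves 200 of the 600 people at risk; only the wording changes.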

Instead, Kahneman and Tversky (1984) found that 72% of Group 1 students chose Program A and 28% chose Program B, while only 22% of Group 2 students chose Program C and 78% chose Program D, a dramatic refutation of the Invariance Axiom. The authors concluded that because the “reference points” of the two lotteries differed in this experiment—Group 1’s is that people are “saved” and Group 2’s is that people “die”—the Invariance Axiom was not necessarily destined to hold in this context, which runs counter to the rational-choice model’s presumption that the axiom holds in any context. As we will see, this insight led to Kahneman and Tversky’s notions of “reference dependence” and “framing” in human choice behavior, notions that had been ignored by the rational-choice model yet are crucial to our understanding of how humans make decisions under uncertainty. In short, context matters.

Example 2

Heath and Tversky (1991) engaged roughly 200 subjects in the following lottery:

Choose between lotteries A and B:

A    A stock is selected at random from the New York Stock Exchange. You guess whether its price will go up or down at close tomorrow. If your guess is correct you win $100.

B    A stock is selected at random from the New York Stock Exchange. You guess whether its price went up or down at close yesterday. You cannot check the newspaper or online. If your guess is correct you win $100.

Bearing in mind that the internet was not yet in widespread use in 1991, and thus lottery B was indeed failsafe, we would expect the subjects to be indifferent between the two lotteries, resulting in a 50-50 split of those choosing A versus B.[2] Instead, 67% of the subjects chose lottery A and 33% chose B, which supports what the authors labeled a “competency effect.” The supermajority of subjects preferred the future bet because their “relative ignorance” was easier to defend this way. In a sense, they appeared less incompetent by choosing lottery A.

Example 3

This example highlights a nudge to public policy (in the form of a single company’s benefits policy) that leverages our understanding of framing from Example 1. In particular, the example explores how framing a new retirement-savings program appropriately can overcome what is known as “status quo bias” among a company’s employees.

As Thaler and Benartzi (2004) point out, US companies have been switching their retirement plans over time from defined-benefit to defined-contribution plans. Under defined-contribution plans, employees bear more responsibility for making decisions about how much of their salaries to save. Employees who participate in a given plan at a very low level save at less-than-predicted life-cycle (i.e., rational) savings rates. One explanation for this irrational behavior is a lack of self-control among low-saving employees, suggesting that at least some of these workers are making a mistake and would welcome help in making decisions about their retirement savings. It could also be that some employees suffer from the competency effect portrayed in Example 2. Either way, employees tend to exhibit status quo bias when it comes to optimizing their retirement-savings plans.

To counteract this problem, Thaler and Benartzi (2004) devised a new savings plan for a large company called the Save More Tomorrow (SMarT) plan. The essence of the plan is straightforward: people commit now to increasing their savings rate later (i.e., each time they get a pay raise). As will be explained further in Section 4 of this textbook, the authors found that the average saving rates for SMarT participants increased from 3.5% to 13.6% over the course of the plan’s first 40 months, while employees who chose an alternative retirement plan increased their saving rate to a lesser extent. Those who declined both the SMarT and alternative plans saw no increase in their savings rates.

The question naturally arose as to how the company might entice more of its employees to enroll in the SMarT plan. One suggestion was to frame the choice of retirement plans as an “opt-out” rather than an “opt-in” decision. Under opt-out, new employees are automatically enrolled in the SMarT plan and therefore must take it upon themselves to switch to another plan. Opt-out ingeniously harnesses employees’ natural tendencies toward status quo bias for their own betterment (at least regarding retirement savings decisions).[3]

Example 4

Pope and Schweitzer (2011) explore whether reference dependence (such as that described in Example 1) and “loss aversion” (one of behavioral economics’ most renowned laboratory discoveries) are present in the behavior of professional golfers.[4] Loss aversion governs choice behavior when an individual perceives the pain of losing as more powerful than the pleasure of winning (or gaining). Loss-averse individuals are more willing to take risks or behave dishonestly to avoid a loss than to achieve a gain (behavioraleconomics.com, 2019).

As Pope and Schweitzer (2011) point out, golf provides a natural setting to test for loss aversion because golfers are rewarded for the total number of strokes they take during a tournament, yet each hole has a salient reference point: par. Loss-averse golfers suffer more psychologically from scoring “over par” (bogeying) on any given hole than they gain from scoring “under par” (birdying). The authors analyzed over 2.5 million putts measured by laser technology and found evidence that even the best golfers—including Tiger Woods in his heyday—show evidence of loss aversion. Specifically, when PGA golfers are under par on any given hole (i.e., putting for a birdie), they are 2% less likely to make the putt than when they are putting for par or are over par (i.e., putting for a bogey).

Example 5

The Ultimatum Bargaining game is one of the most widely tested games in the history of behavioral game theory. It has been tested with students in the US and Europe, as well as tribes in Africa, the Amazon, Papua New Guinea, Indonesia, and Mongolia. The game is described as follows:

Two players – a Proposer and a Responder – bargain over $10. The Proposer offers some portion, x, of the $10 to the Responder, leaving the Proposer with $(10-x). If the Responder accepts the offer, then she gets $x and the Proposer gets $(10-x). If the Responder rejects the offer, both players get nothing.

Camerer (2003) points out that by going first, and because the game is played in “one shot,” the Proposer has all of the bargaining power. Therefore, we should expect, per the rational-choice model, that the Proposer will exploit the fact that a similarly self-interested Responder will take whatever is offered. The Proposer should thus offer an $x very close to $0.

Instead, in a multitude of experiments conducted worldwide, Proposers typically offer roughly half of the total. Offers of roughly 20% are rejected about half of the time as punishment for what Responders interpret as Proposers not having behaved fairly. Variants of the game have considered more than one Proposer, repeated play between a Proposer and Responder with “stranger matching” (i.e., new pairings among the pool of subjects), higher stakes, and added risk associated with the Responder not knowing for certain what the stakes are. Again and again, the behavior of participants in the game deviates from the expected, rational outcome.
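The rational-choice benchmark for the one-shot game can be sketched as a simple backward-induction argument in code. This is a minimal sketch under assumptions noted in the comments (e.g., offers come in one-cent increments); the function names are illustrative, not drawn from the literature:

```python
# Backward-induction sketch of the one-shot Ultimatum game under
# the rational-choice model. Assumes offers come in one-cent
# increments (an illustrative simplification).

def responder_accepts(offer: float) -> bool:
    # A purely self-interested Responder accepts any positive
    # offer, since something beats nothing.
    return offer > 0

def rational_proposal(pot: float, increment: float = 0.01) -> float:
    # The Proposer, anticipating the Responder's acceptance rule,
    # offers the smallest amount that will be accepted.
    offer = increment
    while not responder_accepts(offer):
        offer += increment
    return offer

pot = 10.00
x = rational_proposal(pot)
print(f"Predicted offer: ${x:.2f}; Proposer keeps ${pot - x:.2f}")
```

In the laboratory, of course, offers cluster near half the pot and low offers are routinely rejected, so this prediction fails on both sides of the bargaining table.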

To reiterate and sum up our introductory remarks: human beings do not always behave as the self-interested, net-benefit-maximizing individuals with stable preferences that the traditional rational-choice model of economic decision making would have us believe. Let’s face it. Most of our choices are not the result of careful deliberation. We are influenced by readily available information in our memories and by automatically generated, salient information in the environment. We live in the moment and thus tend to resist change, are poor predictors of future behavior, are subject to distorted memory, and are affected by physiological and emotional states of mind. We are social animals with social preferences, susceptible to social norms and a need for self-consistency (Samson, 2019). All of this we sense intuitively; these are normal human behaviors. Behavioral economics studies how this normality plays out in economic and social contexts, and in the process identifies where traditional rational-choice theory has fallen short of correctly predicting individual and social choice behavior.


  1. The latter identity results because a one-third probability that 600 people will “be saved” under Program B means (1/3) x 600 = 200 people are expected to be saved, which is the same number of people who are not expected “to die” under Program D. Similarly, the (2/3) x 600 = 400 people who are not expected to be saved under Program B is the same number of people who are expected to die under Program D. Hence the two lotteries are indeed identical.
  2. The technical terminology for the rational-choice axiom, in this case, is “additivity of subjective probability.”
  3. The opt-out approach has been shown to work in other instances as well, most famously for organ donor programs. Davidai et al. (2012) point out that Spain, Belgium, Austria, and France have among the highest organ-donation consent rates worldwide, precisely because they use opt-out defaults (known as “presume consent”) when it comes to registering citizens in their respective programs. To not donate their organs upon death, citizens must take it upon themselves to opt out (i.e., they must overcome status quo bias with respect to donating their organs).
  4. If you are wondering why professional golfers, it is because of the plethora of data that exists from the various Professional Golfers Association (PGA) tournaments held each year.

License

A Practicum in Behavioral Economics Copyright © 2022 by Arthur J. Caplan is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.