Prospect Theory for Policy Readers: Reference Points Matter

How Kahneman and Tversky's 1979 paper overturned expected utility theory — and why loss aversion, reference dependence, and probability weighting reshape how we think about tax policy, insurance, and retirement savings.

Reckonomics Editorial

The Paper That Changed the Question

For most of the twentieth century, economics had a clean story about how people make decisions under uncertainty. They maximize expected utility — a framework descended from Daniel Bernoulli’s 1738 insight that people care about the usefulness of money, not its raw amount, and formalized by John von Neumann and Oskar Morgenstern in 1944. The theory was elegant, axiomatically grounded, and wrong in systematic, predictable ways that two Israeli psychologists would spend a decade documenting.

Daniel Kahneman and Amos Tversky published “Prospect Theory: An Analysis of Decision under Risk” in Econometrica in 1979. It was, and remains, the most cited paper in that journal’s history — a remarkable fact given that neither author was an economist. What they proposed was not a minor patch to expected utility but a different architecture for understanding choice. Where expected utility theory starts from final wealth states, prospect theory starts from changes relative to a reference point. Where expected utility treats gains and losses symmetrically, prospect theory insists they are psychologically different categories. Where expected utility assumes people weight probabilities linearly, prospect theory shows they do not.

The paper mattered not because it said people are “irrational” — a word Kahneman grew to dislike — but because it offered a precise, testable alternative to the reigning model. It predicted specific patterns of behavior that expected utility could not: why people simultaneously buy insurance and lottery tickets, why they hold losing stocks too long and sell winners too quickly, why the same policy framed as a gain or a loss elicits dramatically different public reactions.

Understanding prospect theory is now essential for anyone who designs, evaluates, or votes on public policy. But the theory is also frequently oversimplified, over-applied, and invoked as a vague gesture toward “people are biased.” This essay lays out what the theory actually says, where it applies, and where it does not.

Reference Dependence: The First Revolution

Expected utility theory evaluates outcomes in terms of final states of wealth. If you have $100,000 and are offered a gamble, the theory says you should evaluate the possible outcomes — say, $110,000 or $95,000 — against your utility function over total wealth. The shape of that function (typically assumed to be concave, meaning diminishing marginal utility) determines your risk preferences.

Kahneman and Tversky noticed something that should have been obvious but wasn’t: people don’t think that way. They think in terms of gains and losses relative to where they are now — their reference point. A person who started the day with $100,000 and gained $10,000 feels differently from a person who started with $120,000 and lost $10,000, even though both end the day at $110,000. Expected utility theory says they should feel the same, because they have the same final wealth. Prospect theory says they feel very different, because one experienced a gain and the other a loss.

This is reference dependence, and it is the foundation of everything else in the theory. The reference point is usually the status quo — what you have now, what you expected to have, what you feel entitled to. But it can shift. If you were promised a $5,000 bonus and received $3,000, the reference point is the promise, and the $3,000 feels like a $2,000 loss even though you are objectively richer. If house prices in your neighborhood have been rising and yours stagnates, the reference point is the neighbor’s trajectory, and your unchanged wealth feels like deprivation.

Reference dependence matters for policy because policy changes are always changes from something. A tax increase is a loss relative to the current rate. A tax cut is a gain. A new regulation is a loss of freedom for the regulated party and a gain in protection for the beneficiary. The same objective outcome — say, a carbon tax that costs the median household $500 per year but returns $600 in dividends — can feel like a net loss or a net gain depending on which component people attend to first and what they treat as their baseline.

The Value Function: Losses Loom Larger

The most famous feature of prospect theory is loss aversion: losses hurt more than equivalent gains please. Kahneman and Tversky estimated the ratio at roughly 2:1 — losing $100 feels about as bad as gaining $200 feels good. Later studies have found ratios ranging from about 1.5:1 to 2.5:1 depending on the domain, the stakes, and the method of measurement, but the basic asymmetry is robust across dozens of experiments and field studies.

The value function in prospect theory is S-shaped. For gains (above the reference point), it is concave — showing diminishing sensitivity, just as classical utility theory predicts. The difference between gaining $100 and $200 feels larger than the difference between gaining $1,100 and $1,200. For losses (below the reference point), the function is convex — also showing diminishing sensitivity, but in the loss domain. The difference between losing $100 and $200 feels larger than between losing $1,100 and $1,200.
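The arithmetic above can be checked directly. The sketch below uses the standard power-form value function with the median parameter estimates from Tversky and Kahneman's 1992 paper (curvature α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25); the functional form is theirs, but the script itself is purely illustrative.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function (power form, Tversky & Kahneman 1992
    median parameters): concave for gains, convex for losses, and steeper
    on the loss side because lam > 1."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Diminishing sensitivity: the gap between gaining $100 and $200 feels
# larger than the gap between gaining $1,100 and $1,200.
assert value(200) - value(100) > value(1200) - value(1100)

# Loss aversion: a $100 loss outweighs a $100 gain in absolute terms.
assert abs(value(-100)) > value(100)
```

The same two assertions hold for the loss domain by symmetry of the power form, scaled by λ.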

Crucially, the loss side of the curve is steeper than the gain side. This asymmetry generates a cluster of real-world behaviors:

The endowment effect. Once people own something, giving it up is a loss, and they demand more to part with it than they would have paid to acquire it. Richard Thaler’s famous mug experiments showed that people given a coffee mug demanded roughly twice as much to sell it as non-owners were willing to pay. This is not sentimentality; it is reference-dependent valuation. Ownership shifts the reference point.

Status quo bias. Changing from the current state means accepting a sure loss of what you have for an uncertain gain of something new. Because losses loom larger, people stick with defaults even when alternatives are objectively better. This has enormous consequences for policy design, as we will see.

The disposition effect. Investors sell winning stocks (locking in a gain, which feels good) and hold losing stocks (avoiding the realization of a loss, which would feel terrible). This is irrational by any standard financial theory, but it is one of the most reliably documented patterns in household finance.

Risk attitudes that flip. In the gain domain, diminishing sensitivity makes people risk-averse — they prefer a sure $500 to a 50% chance of $1,000. In the loss domain, the same diminishing sensitivity makes people risk-seeking — they prefer a 50% chance of losing $1,000 to a sure loss of $500. This reversal is precisely what expected utility theory cannot explain if risk aversion is a stable trait.
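The flip in the last bullet falls straight out of the shape of the value function. A minimal check, again using the power form with Tversky and Kahneman's 1992 median parameters, and setting probability weighting aside for the moment (with weighting included, the pattern survives):

```python
def value(x, alpha=0.88, lam=2.25):
    # Power value function: concave for gains, convex and steeper for losses.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Gain domain: a sure $500 beats a 50% chance of $1,000 (risk-averse).
sure_gain = value(500)           # ~ 237
risky_gain = 0.5 * value(1000)   # ~ 218
assert sure_gain > risky_gain

# Loss domain: a 50% chance of losing $1,000 beats a sure loss of $500
# (risk-seeking).
sure_loss = value(-500)          # ~ -534
risky_loss = 0.5 * value(-1000)  # ~ -491
assert risky_loss > sure_loss
```

The same concavity that makes the sure gain attractive makes the sure loss repellent, which is why no single stable "risk-aversion" parameter can reproduce both choices.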

Probability Weighting: Small Chances Are Overweighted

The second major departure from expected utility is probability weighting. In standard theory, a 10% chance of winning $100 contributes exactly 10% of the utility of $100 to the expected value calculation. Kahneman and Tversky found that people do not weight probabilities linearly. Instead, they apply a weighting function that overweights small probabilities and underweights moderate-to-high probabilities.

This explains the simultaneous purchase of lottery tickets and insurance — a combination that expected utility theory struggles with. Buying a lottery ticket means overweighting a tiny probability of a large gain (risk-seeking in the gain domain for small probabilities). Buying insurance means overweighting a small probability of a large loss (risk-aversion in the loss domain for small probabilities). Prospect theory predicts both behaviors from the same model, without needing to assume people are confused or inconsistent.

The probability weighting function has a specific shape: it is steepest near the endpoints (0 and 1), reflecting the psychological power of certainty and impossibility. Moving from 0% to 5% — from impossibility to possibility — has a disproportionate psychological impact. So does moving from 95% to 100% — from almost certain to certain. This “certainty effect” explains why people pay a premium for guarantees and why “zero risk” messaging is so powerful in public health and consumer safety, even when reducing risk from 5% to 0% is far more expensive per unit of risk reduction than reducing it from 30% to 25%.
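The shape described above can be made concrete with the one-parameter weighting function Tversky and Kahneman fitted in 1992 (γ ≈ 0.61 for gains); the parameter value is their median estimate, and the checks below are illustrative rather than definitive:

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function for gains:
    overweights small probabilities, underweights moderate-to-large ones,
    and is steepest near 0 and 1 (possibility and certainty effects)."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

assert w(0.01) > 0.01       # tiny chances loom large: lotteries, insurance
assert w(0.50) < 0.50       # moderate chances are underweighted
assert 1.0 - w(0.95) > 0.05  # the last 5% toward certainty is "worth" far
                             # more than 5 percentage points elsewhere
```

The third assertion is the certainty effect in miniature: under these parameters the weight jump from 95% to 100% is roughly four times the nominal five-point probability change.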

For policymakers, these distortions mean that how risks are communicated matters enormously. Telling people a medical procedure has a “95% survival rate” versus a “5% mortality rate” produces different decisions, even though the information is identical — a framing effect that changes whether the outcome is coded as a gain or a loss. And because certainty is overweighted, framing a policy as eliminating a risk entirely versus reducing it substantially generates different levels of public support, controlling for the actual magnitude of the benefit.

How Prospect Theory Differs from Expected Utility

It is worth being precise about the structural differences, because pop-behavioral-economics often muddles them.

Expected utility theory says: (1) people evaluate outcomes in terms of final wealth states; (2) they weight outcomes by their objective probabilities; (3) they are globally risk-averse if their utility function is concave. The theory is normative — it describes how a rational agent should decide — and it was also widely treated as descriptive, as a reasonable approximation of how people actually decide.

Prospect theory says: (1) people evaluate outcomes as gains or losses relative to a reference point; (2) they weight outcomes by a nonlinear transformation of probabilities; (3) they are risk-averse for gains and risk-seeking for losses, with losses weighted more heavily than gains. The theory is explicitly descriptive — it aims to predict actual behavior, not prescribe ideal behavior.

The difference is not merely philosophical. Expected utility generates clean predictions: risk-averse agents buy insurance, diversify portfolios, and prefer certain outcomes to risky ones of equal expected value. Prospect theory generates messier but more accurate predictions: the same agent may be risk-averse in one frame and risk-seeking in another, depending on whether the decision is coded as involving gains or losses relative to the reference point.

Kahneman and Tversky later developed cumulative prospect theory (1992), which extended the model to handle gambles with many outcomes and addressed some technical problems with the original formulation — specifically, the original theory could violate first-order stochastic dominance in certain edge cases. Cumulative prospect theory applies the weighting function to cumulative probabilities rather than individual ones, preserving the key insights while improving formal rigor.
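The cumulative construction can be sketched in a few lines. For a prospect over gains only, outcomes are ranked best-first and each outcome's decision weight is the weighted probability of doing at least that well, minus the weighted probability of doing strictly better. The weighting function below reuses the 1992 form with γ ≈ 0.61; this is an illustrative sketch of the mechanism, not the paper's own code.

```python
def w(p, gamma=0.61):
    # Tversky-Kahneman (1992) weighting function for gains.
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

def cumulative_weights(probs):
    """Rank-dependent decision weights for a pure-gain prospect whose
    outcome probabilities are listed best-outcome-first. Applying w to
    cumulative rather than individual probabilities is what restores
    respect for stochastic dominance."""
    weights, cum = [], 0.0
    for p in probs:
        weights.append(w(cum + p) - w(cum))
        cum += p
    return weights

# A three-outcome gamble over gains, probabilities listed best-first.
probs = [0.1, 0.3, 0.6]
dw = cumulative_weights(probs)

# The sum telescopes to w(1) = 1 for a pure-gain prospect, yet each
# individual outcome's weight is still distorted: the small-probability
# best outcome is overweighted, the middle outcome underweighted.
assert abs(sum(dw) - 1.0) < 1e-9
assert dw[0] > probs[0] and dw[1] < probs[1]
```

Because the weights telescope, dominated gambles can no longer be ranked above dominating ones, which was the technical flaw in the 1979 formulation.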

Policy Applications: Where Prospect Theory Earns Its Keep

The practical payoff of prospect theory for policy comes in several domains.

Tax policy and framing. Whether a tax provision is framed as a bonus or as the avoidance of a penalty changes compliance rates and public acceptance. The Affordable Care Act’s individual mandate was framed as a penalty for not having insurance — a loss — rather than a bonus for having it. Behavioral research suggests the penalty frame may have been more motivating for some populations (because losses loom larger) but also generated more political resentment. Framing effects are not just communication tricks; they reflect genuine differences in how people encode policy changes relative to their reference points.

Retirement savings and defaults. The power of defaults is partly a status quo bias story and partly a loss aversion story. Opting out of a 401(k) plan requires actively choosing to lose employer matching contributions — a salient loss. Auto-enrollment, which makes participation the default, exploits this asymmetry. Thaler and Benartzi’s “Save More Tomorrow” program went further: it asked employees to commit to increasing their savings rate with future raises, so the increase never felt like a loss from current take-home pay. The reference point was the current paycheck, and the savings increase came out of money people had not yet received. Participation rates in these programs have been dramatically higher than in opt-in designs.

Insurance design. Prospect theory predicts that people will overpay for low-deductible insurance (overweighting small probabilities of loss) and underinsure against catastrophic but rare events (unless those events are salient — after a flood, flood insurance purchases spike, then decay). This has direct implications for how governments structure disaster insurance, health insurance deductibles, and deposit insurance communication.

Regulation and loss framing. When new environmental or safety regulations impose costs on existing businesses, those costs are coded as losses relative to the pre-regulation status quo. When the same regulations prevent future harms, the benefits are coded as avoided losses — but only if the potential harm was already salient. This asymmetry helps explain why it is politically easier to block a new regulation (framed as preventing a loss of the current situation) than to repeal an existing one (framed as imposing a loss of current protections). The “endowment effect” operates on policy entitlements as well as coffee mugs.

Negotiations and diplomacy. Loss aversion makes concessions in negotiations feel disproportionately painful. Each side evaluates proposed deals relative to their current position or their aspirations, and giving something up looms larger than what they receive in return. This dynamic helps explain why peace negotiations, trade deals, and labor contracts often stall even when mutually beneficial agreements exist in principle.

What Prospect Theory Does Not Predict

The theory’s influence has been so large that it is now routinely invoked far beyond its proper domain. A few clarifications are in order.

Prospect theory is a theory of risky choice — decisions with known probabilities. It is not a general theory of irrationality, cognitive error, or human foolishness. It does not explain all biases; it does not say people are stupid; it does not claim that markets are inefficient (though it has implications for certain market anomalies). Using “loss aversion” as a catch-all explanation for any behavior you find puzzling is not prospect theory — it is hand-waving.

The theory does not specify where reference points come from. It tells you that people evaluate outcomes relative to a reference point, but it does not have a fully developed theory of reference point formation. Are reference points set by expectations, entitlements, social comparisons, recent experiences, or aspirations? Probably all of these, in different contexts. This is a genuine limitation: if you can choose the reference point freely, you can “explain” almost anything after the fact.

Prospect theory also does not by itself tell you what to do. It is descriptive, not prescriptive. Knowing that people are loss-averse does not automatically tell a policymaker whether to frame a policy as a gain or a loss, because the ethical question of whether to exploit psychological asymmetries for policy goals is separate from the empirical question of whether those asymmetries exist.

Criticisms and Replication

No theory in behavioral economics has been more scrutinized than prospect theory, and the scrutiny has produced both confirmations and qualifications.

The core finding of loss aversion has been replicated many times, but its magnitude varies. A review by Yechiam (2019) argued that loss aversion is domain-specific rather than a universal psychological constant — it is strong for large stakes and mixed gambles but weaker or absent in some small-stakes contexts. Some researchers, notably Gal and Rucker (2018), have gone further, arguing that loss aversion as a general principle is overstated and that many phenomena attributed to it can be explained by other mechanisms. Their critique sparked vigorous debate but has not overturned the consensus that losses and gains are psychologically asymmetric in most economically relevant situations.

Probability weighting has proven harder to pin down precisely. Different elicitation methods produce different weighting functions, and the degree of overweighting of small probabilities varies across populations, cultures, and experimental designs. The qualitative pattern — overweighting of small probabilities, underweighting of moderate ones — is fairly robust, but the exact parameters are less stable than early presentations suggested.

More broadly, behavioral economics as a field has faced replication challenges. Some specific demonstrations — the “Asian disease” framing effect, certain endowment-effect magnitudes — have proven sensitive to experimental details. This does not invalidate prospect theory’s framework, but it does caution against treating any single experiment as gospel.

The deepest criticism may be methodological: prospect theory, like expected utility theory, is a model of individual choice in laboratory-like settings. Translating its predictions to markets, organizations, and political systems requires additional assumptions about aggregation, learning, and institutional constraints that the theory itself does not provide. People may behave one way in a one-shot lab gamble and quite differently after years of market experience, professional training, or institutional feedback.

Why It Still Matters

Despite these qualifications, prospect theory’s core contributions are durable. Reference dependence is real: people evaluate outcomes relative to where they started, not in terms of absolute levels. Loss aversion is real, even if its precise magnitude is debatable: taking something away from people provokes a stronger reaction than giving them something of equivalent value. Probability weighting is real: people overreact to vivid, low-probability events and underreact to diffuse, high-probability ones.

These facts reshape how we think about policy design, political communication, and institutional architecture. A policymaker who ignores reference dependence will be surprised when a revenue-neutral tax reform provokes fury from those who lose deductions but little gratitude from those who gain lower rates. A regulator who ignores loss aversion will underestimate the political cost of imposing new requirements on existing industries. A public health official who ignores probability weighting will struggle to communicate risks effectively.

Kahneman and Tversky did not claim to have discovered that people are foolish. They claimed to have discovered that people are predictably different from the agents in standard economic models — and that those differences are systematic enough to be modeled, tested, and, occasionally, designed around. Nearly fifty years after its publication, that claim holds up.