Tversky and Kahneman, in “The Framing of Decisions and the Psychology of Choice”, “describe decision problems in which people systematically violate the requirements of consistency and coherence, and […] trace these violations to the psychological principles that govern the perception of decision problems and the evaluation of options.”
They start with the following example:
Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
[one set of respondents, condition 1]
If Program A is adopted, 200 people will be saved. [72 percent]
If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. [28 percent]
[second set of respondents, condition 2]
If Program C is adopted 400 people will die. [22 percent]
If Program D is adopted there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. [78 percent]
They add:
“[I]t is easy to see that the two problems are effectively identical. The only difference between them is that the outcomes are described in problem 1 by the number of lives saved and in problem 2 by the number of lives lost. The change is accompanied by a pronounced shift from risk aversion to risk taking.”
Given the empirical result, they propose "prospect theory," which we can summarize as follows: people exhibit risk aversion when faced with gains and risk seeking when faced with losses.
Why is Program A less "risky" than Program B?
The expected utility of A can be seen as equal to that of B over repeated draws. However, over the next draw, which we can assume to be the question's intent, Program A provides a certain outcome of 200 saved, while Program B is a toss-up between 0 and 600. Hence, Program B can be seen as risky.
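Here is a minimal sketch of that arithmetic, using the outcomes as stated in the problem (the function names are mine):

```python
# Expected value and spread of each program, per the problem's stated outcomes.
program_a = [(1.0, 200)]             # 200 saved for certain
program_b = [(1/3, 600), (2/3, 0)]   # 1/3 chance all 600 saved, else none

def expected(outcomes):
    return sum(p * x for p, x in outcomes)

def spread(outcomes):
    mu = expected(outcomes)
    return sum(p * (x - mu) ** 2 for p, x in outcomes) ** 0.5

print(expected(program_a), spread(program_a))  # 200.0, 0.0
print(expected(program_b), spread(program_b))  # 200.0, ~282.8
# Same expectation; only B carries any spread over a single draw.
```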
Looked at more closely, however, Program B is still harder to interpret:
Probability is commonly understood over repeated draws. Here, given infinite draws, 1/3 of the time Program B will yield 600 saved, and the remaining 2/3 of the time it will yield 0. (~ Frequentist) Tversky and Kahneman share the frequentist take on probability, though they frame it differently: "The utility of a risky prospect is equal to the expected utility of its outcomes, obtained by weighting the utility of each possible outcome by its probability." (This takes directly from statistical decision theory, which defines risk as the integral of the loss function. The calculation is inapplicable to any one draw.)
What is the meaning of probability for the next draw? If it is a random event, then we have no knowledge of the next toss. The way it is used here, however, is different: we know that it isn't a 'random event', we have some knowledge of the outcome of the next toss, and we are expressing 'confidence' in that outcome. (~ Bayesian) Transcribing Program B's description into a Bayesian framework, we are one-third confident that all 600 will be saved and two-thirds confident that we will fail utterly. (N.B. The probability distribution for the predicted event likely emanates from a threshold process, an all-or-nothing kind of gambling event. Alternate processes may entail that the counterfactual to utter failure is a slightly less than utter failure, and so on, along a continuum.) Two-thirds confidence in utter failure (all die) makes the decision task 'risky'.
Argument about rationality and equality of utility (between A, B, C, and D)
According to Tversky and Kahneman, the utility of Program A is the same as that of Program B. As we can infer from the above, if we constrain the estimation of utility to the next draw, which is in line with the way the question is put forth, Program A is superior to Program B. An alternate way to put the question could have been: "Over the next 100,000 draws, the programs provide these outcomes. Which one would you prefer?" Looked at in that light, the significant majority who chose Program A over B can be seen as rational.
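A small simulation sketch of that "repeated draws" framing (the probabilities and outcomes are from the problem text; everything else is illustrative):

```python
import random

random.seed(0)

def draw_program_b():
    """One draw of Program B: 1/3 chance 600 saved, 2/3 chance none saved."""
    return 600 if random.random() < 1/3 else 0

draws = [draw_program_b() for _ in range(100_000)]
print(sum(draws) / len(draws))  # ~200, i.e., Program A's certain outcome
print(sorted(set(draws)))       # [0, 600] -- no single draw ever yields 200
```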
However, the central finding of Tversky and Kahneman is the "preference reversal" between battery 1 (the gains story) and battery 2 (the losses story). We see a reversal from a majority preferring 'risk aversion' to a majority preferring 'risk taking' between the two 'conditions'. Looked at independently, the majority's support in each condition seems logical, but why is that the case? We have already made a case for battery 1; for battery 2, the case would run something like this: given the overwhelming number of fatalities, one would want to try a risky option. Except, of course, that the mortality figures in A and C, and in B and D, are the same, and so is the risk calculus.
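A quick check of that equivalence, restating the certain outcomes as (saved, died) out of 600 (the numbers are from the problem text; the representation is mine):

```python
TOTAL = 600

a_saved = 200                      # Program A: "200 people will be saved"
c_died = 400                       # Program C: "400 people will die"

print((a_saved, TOTAL - a_saved))  # (200, 400)
print((TOTAL - c_died, c_died))    # (200, 400) -- same outcome, different frame

# B and D are literally the same lottery: a 1/3 chance that nobody dies
# (600 saved) and a 2/3 chance that everybody dies (0 saved).
```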
For Tversky and Kahneman's findings to be seen as evidence of human irrationality, Program A should basically be seen as equivalent to Program C, and Program B to Program D, with the lack of 'consistency' between choices as the indicator of irrationality. In condition 1, our attention is selectively moored to the positive, while in condition 2 it is moored to the negative, and respondents evaluate risk based on different dependent variables (even though they are the same). The findings are unequivocally normatively problematic, and provide a manual for strategic actors on how to "frame" policy choices in ways that will garner support.
Brief points about measurement and experiment design
1) There is no "control" group. One imagines that the rational split would be the one obtained in condition 1, or in condition 2, or, as the authors indicate, some version of a 50-50 split. There is reason to believe that a 50-50 split is not the rational split in either condition (with perhaps a 100-0 split in either condition being 'rational'). This doesn't overturn the findings but merely provides an interpretation of the control. Definitions of the control are important because they allow us to see the direction of bias. Here, it allows us to see that condition 1 allows more people to reach the 'correct decision' than condition 2.
2) To what extent is the finding an artifact of the way the question is posed? It is hard to tell.
- A 50-50 split would be expected if respondents think that both choices are equivalent and hence pick one randomly. But respondents are liable to imagine that a 'unique' solution exists, given they have been brought into a university laboratory and asked a question, and so are likely to try to read the tea leaves. Of course, people systematically reading the tea leaves in one way means something else is perhaps going on, but it is still very likely that deviations from a 50-50 split would be much smaller if one were to provide a third response option stating that the two choices are equivalent. Some number of respondents would choose that option, and one could then either eliminate that sub-sample and recalculate deviations from 50-50 within the two remaining choices (which will likely yield a larger percentage swing), or include everyone and find a smaller percentage swing (see the arithmetic sketch after this list).
- The stump for condition 2 (which offers Program C or Program D) is the same as the stump for condition 1: the disease is 'expected to kill 600 people'. In light of that, the description of Program C ("If Program C is adopted 400 people will die") offers no information about the other 200. A respondent can imagine that those 200 will be saved but isn't particularly sure of their fate. On the other hand, with the same stump, the information that '200 people will be saved' allows us to weakly infer that 400 people will die. This biases the swing in favor of the results that we see.
- If we are to imagine that the results are driven entirely by differential risk attitudes over losses and gains, and not some other cognitive malaise, then it would have been nice to see a clearer enunciation of the outcomes. For example, the description of Program A could have been reworded as 'If Program A is adopted, 200 people will be saved, and 400 people will not' or 'Only 200 out of the 600 people will be saved.' This would likely have attenuated the swing that we see, though that is open to empirical investigation. The larger point, perhaps, is that there are multiple 'manipulations', and the results we see may be artifactual, coming instead from another source. For example, Kühberger (1995) noted that the outcomes in the Asian disease problem are inadequately specified. When Kühberger made the outcomes explicit (e.g., stating that 200 will be saved and 400 will die), the 'framing' effects vanished.
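To make the first point above concrete, here is a small sketch of the denominator arithmetic, using the observed 72/28 split and a purely hypothetical take-up of the third "equivalent" option:

```python
# Hypothetical illustration: 100 respondents, the observed 72/28 split, and a
# third "the options are equivalent" response that 30 respondents take up,
# drawn proportionally from both camps (the 30 is an assumption).
n = 100
equivalent = 30
a_left = 72 * (1 - equivalent / n)   # 50.4 still pick A
b_left = 28 * (1 - equivalent / n)   # 19.6 still pick B

# Including everyone in the denominator: a smaller percentage swing.
print(a_left / n - b_left / n)                # 0.308 (~31 points)

# Dropping the "equivalent" sub-sample and renormalizing: a larger swing.
print((a_left - b_left) / (a_left + b_left))  # 0.44 (44 points)
```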
Further Reading
Bless, H., Betsch, T. & Franzen, A. (1998). Framing the framing effect: The impact of context cues on solutions to the ‘Asian disease’ problem. European Journal of Social Psychology, 28, 287–291.
Druckman, J. N. (2001). Evaluating framing effects. Journal of Economic Psychology, 22, 91–101.
Kahneman, D. & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kühberger, A. (1995). The framing of decisions: A new look at old problems. Organizational Behavior and Human Decision Processes, 62, 230–240.
Levin, I. P., Schneider, S. L. & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76, 149–188.