Prejudice is the bane of humanity. Measuring prejudice, in turn, is the bane of social scientists. Self-reports are unsatisfactory: like talk, they are cheap, and thus biased and noisy. Implicit measures don’t even clear the basic hurdle of measurement—reliability. Against this grim background, economic games as measures of prejudice seem promising—they are realistic and they capture costly behavior. Habyarimana et al. (HHPW for short), for instance, use the dictator game (along with a neat variation of it that they call the ‘discrimination game’) to measure ethnic discrimination. Since then, many others have used the design, including, prominently, Iyengar and Westwood (IW for short). But there are some issues with how these games have been set up, analyzed, and interpreted:
- Revealing identity upfront gives you a ‘no personal information’ estimand: One common aspect of how economic games are set up is that party/tribe is revealed upfront. Revealing the trait upfront, however, may be sub-optimal. In the world, and especially as we move online, the likelier sequence is regular interaction first, with party/tribe discovered later. To that end, a game in which players interact for a few rounds before an ‘irrelevant’ trait is revealed about them is plausibly more generalizable (a design sketch follows this list). What we learn from such games can be provocative: discrimination after a history of fair economic transactions seems dire.
- Using data from subsequent movers can bias estimates. “For example, Burnham et al. (2000) reports that 68% of second movers primed by the word “partner” and 33% of second movers primed by the word “opponent” returned money in a single-shot trust game. Taken at face value, the experiment seems to show that the priming treatment increased by 35 percentage-points the rate at which second movers returned money. But this calculation ignores the fact that second movers were exposed to two stimuli, the partner/opponent prime and the move of the first player. The former is randomly assigned, but the latter is not under experimental control and may introduce bias.” (Green and Tusicisny) IW smartly sidestep the concern: “In both games, participants only took the role of Player 1. To minimize round-ordering concerns, there was no feedback offered at the end of each round; participants were told all results would be provided at the end of the study.” (A toy simulation after this list illustrates the concern.)
- The AMCE in conjoint experiments is subtle and rests on assumptions. The experiment in IW is a conjoint experiment: “For each round of the game, players were provided a capsule description of the second player, including information about the player’s age, gender, income, race/ethnicity, and party affiliation. Age was randomly assigned to range between 32 and 38, income varied between $39,000 and $42,300, and gender was fixed as male. Player 2’s partisanship was limited to Democrat or Republican, so there are two pairings of partisan similarity (Democrats and Republicans playing with Democrats and Republicans). The race of Player 2 was limited to white or African American. Race and partisanship were crossed in a 2 × 2, within-subjects design totaling four rounds/Player 2s.” The first subtlety is that the AMCE for partisanship is identified against the distribution of the other attributes (gender, age, race, etc.). For generalizability, we may want that distribution to be close to the real world. As Hainmueller et al. write: “…use the real-world distribution (e.g., the distribution of the attributes of actual politicians) to improve external validity. The fact that the analyst can control how the effects are averaged can also be viewed as a potential drawback, however. In some applied settings, it is not necessarily clear what distribution of the treatment components analysts should use to anchor inferences. In the worst-case scenario, researchers may intentionally or unintentionally misrepresent their empirical findings by using weights that exaggerate particular attribute combinations so as to produce effects in the desired direction.” The second subtlety is that there is always a chance that a particular higher-order combination, e.g., race–PID, ‘explains’ the main effect. (A small numerical illustration follows this list.)
- Skew in outcome variables means that the mean is not a good summary statistic. As you can see in the last line of the first panel of Table 4 (Republican—Republican Dictator Game), if you take out the 20% of people who give $0, the average allocation among the rest is $4.2; the mean thus conflates the share of people who give nothing with how much the givers give. HHPW handle this with a variable called ‘egoist’ and IW handle it with a separate column tallying the people who give precisely $0. (A short decomposition follows this list.)
- The presence of ‘white foreigners’ can make people behave more generously. As Dube et al. find, “the presence of a white foreigner increases player contributions by 19 percent.” The point is more general, of course: who runs and who observes the game can shift how generously people play.
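Here is a minimal sketch of the ‘interact first, reveal later’ design flagged in the first point above. It is a hypothetical protocol, not a description of HHPW’s or IW’s designs; the number of rounds, the reveal round, and the toy allocation rule are all placeholders.

```python
# Minimal sketch of an 'interact first, reveal later' dictator game.
# All parameters and the toy allocation rule are illustrative placeholders.
N_ROUNDS = 6        # total rounds per pair
REVEAL_ROUND = 4    # round at which the 'irrelevant' trait (party/tribe) is revealed

def play_pair(allocate):
    """allocate(round_number, trait_visible) -> amount sent that round."""
    pre, post = [], []
    for round_number in range(1, N_ROUNDS + 1):
        trait_visible = round_number >= REVEAL_ROUND
        (post if trait_visible else pre).append(allocate(round_number, trait_visible))
    # Quantity of interest: the change in allocations once the trait is revealed,
    # after a history of (possibly fair) economic transactions.
    return sum(post) / len(post) - sum(pre) / len(pre)

# Example: a player who sends $3 each round until the trait is revealed, then $2.
print(play_pair(lambda r, visible: 2 if visible else 3))  # -1.0
```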
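The second-mover concern is easy to see in a toy simulation. The data-generating process below is invented (it is not Burnham et al.’s data): the prime is randomized, first movers’ transfers respond to it, second movers return money only when they receive a transfer, and the prime has no direct effect on second movers. The naive comparison of second-mover return rates across primes still produces a large ‘effect.’

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Invented DGP: the prime is randomized; the first mover's transfer responds to it;
# second movers return money only if money was sent; the prime has NO direct
# effect on second movers.
partner_prime = rng.integers(0, 2, n)                       # 1 = "partner", 0 = "opponent"
first_sent = rng.binomial(1, 0.30 + 0.40 * partner_prime)   # first mover's (non-randomized) move
second_returned = rng.binomial(1, 0.50 * first_sent)        # return only possible if money arrived

naive = (second_returned[partner_prime == 1].mean()
         - second_returned[partner_prime == 0].mean())
print(f"naive difference in second-mover return rates: {naive:.2f}")  # ~0.20, all via first movers
```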
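On the conjoint point, a small numerical illustration. The cell means below are invented, not IW’s estimates; they simply show that the AMCE of copartisanship is an average of race-specific effects, so it moves with the weights placed on the other attribute, and that a single AMCE can mask a race–PID interaction.

```python
# Invented cell means for a 2 x 2 conjoint (Player 2 party match x Player 2 race).
mean_allocation = {
    ("copartisan", "white"): 5.0,
    ("copartisan", "black"): 4.4,
    ("outpartisan", "white"): 3.4,
    ("outpartisan", "black"): 3.2,
}

def amce_copartisan(race_weights):
    """AMCE of copartisanship: race-specific effects averaged with the given weights."""
    return sum(
        w * (mean_allocation[("copartisan", race)] - mean_allocation[("outpartisan", race)])
        for race, w in race_weights.items()
    )

# Uniform weights (the design distribution in a 2 x 2 conjoint) vs. 'real-world' weights.
uniform = amce_copartisan({"white": 0.5, "black": 0.5})
real_world = amce_copartisan({"white": 0.75, "black": 0.25})
print(f"uniform weights:    {uniform:.2f}")     # 1.40
print(f"real-world weights: {real_world:.2f}")  # 1.50
```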
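And on skew: the overall mean mixes two margins, the share of people giving $0 and the average gift among those who give something. Plugging in the Republican-to-Republican dictator game numbers cited above:

```python
# Mean of a skewed allocation = (share giving > $0) * (mean allocation | gave > $0).
share_zero = 0.20      # share allocating exactly $0, as cited above
mean_if_giving = 4.2   # mean allocation among those who gave something
overall_mean = (1 - share_zero) * mean_if_giving
print(f"{overall_mean:.2f}")  # 3.36: the headline mean understates what typical givers give
```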
With that, here are some things we can learn from economic games in HHPW and IW:
- People are very altruistic. In HHPW: “The modal strategy, employed in 25% of the rounds, was to retain 400 USh and to allocate 300 USh to each of the other players. The next most common strategy was to keep 600 USh and to allocate 200 USh to each of the other players (21% of rounds). In the vast majority of allocations, subjects appeared to adhere to the norm that the two receivers should be treated equally. On average, subjects retained 540 shillings and allocated 230 shillings to each of the other players. The modal strategy in the 500 USh denomination game (played in 73% of rounds) was to keep one 500 USh coin and allocate the other to another player. Nonetheless, in 23% of the rounds, subjects allocated both coins to the other players.” In IW, “[of the $10, players allocated] nontrivial amounts of their endowment—a mean of $4.17 (95% confidence interval [3.91, 4.43]) in the trust game, and a mean of $2.88 (95% confidence interval [2.66, 3.10])” [in the dictator game]. (Note: These numbers are hard to reconcile with the numbers in Table 4. One plausible explanation is that they are computed over the entire sample, while the Table 4 numbers subset on partisans, and independents are somewhat less generous than partisans.)
- There is no co-ethnic bias. Both HHPW and IW find this. HHPW: “we find no evidence that this altruism was directed more at in-group members than at out-group members. [Table 2]” IW: “From Figure 8, it is clear that in comparison with party, the effects of racial similarity proved negligible and not significant—coethnics were treated more generously (by eight cents, 95% confidence interval [–.11, .27]) in the dictator game, but incurred a loss (seven cents, 95% confidence interval [–.34, .20]) in the trust game.”
- A modest proportion of people discriminate against partisans. IW: “The average amount allocated to copartisans in the trust game was $4.58 (95% confidence interval [4.33, 4.83]), representing a “bonus” of some 10% over the average allocation of $4.17. In the dictator game, copartisans were awarded 24% over the average allocation.” But it is less dramatic than that. The key change in the dictator game is in the share of people giving $0: among Democrats, the percentage giving $0 drops by about 7 points when the recipient is a copartisan. The average amount given to Republicans and Democrats by Democrats who didn’t give $0 is $4.1 and $4.4, respectively, a difference of roughly 7%. (A decomposition sketch follows this list.)
- More Republicans than Democrats act like ‘homo-economicus.’ I am just going by the proportion of respondents giving $0 in dictator games.
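To unpack the copartisan ‘bonus’ point above: the headline premium mixes an extensive margin (whether a subject gives anything at all) with an intensive margin (how much givers give). The sketch below uses the Democrat numbers discussed above, plus an assumed baseline share of $0 allocations; only the roughly 7-point change, not the level, is given above.

```python
# Decomposing the copartisan 'bonus' into extensive and intensive margins.
# Intensive-margin means are from the discussion above; the baseline share of $0
# allocations is an assumption, since only the ~7-point change is reported.
mean_given_to_outpartisan = 4.1  # mean allocation to Republicans among Democrats giving > $0
mean_given_to_copartisan = 4.4   # mean allocation to Democrats among Democrats giving > $0
share_zero_outpartisan = 0.27    # assumed (hypothetical) share giving $0 to outpartisans
share_zero_copartisan = 0.20     # ~7-point drop for copartisan recipients, per the text

overall_out = (1 - share_zero_outpartisan) * mean_given_to_outpartisan
overall_co = (1 - share_zero_copartisan) * mean_given_to_copartisan
print(f"intensive-margin gap: {mean_given_to_copartisan / mean_given_to_outpartisan - 1:.0%}")  # ~7%
print(f"headline gap:         {overall_co / overall_out - 1:.0%}")  # larger, driven by the $0 share
```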
p.s. I was surprised that there are no replication scripts or even a codebook for IW. The data had been downloaded 275 times when I checked.