Nothing to See Here: Statistical Power and “Oversight”

13 Aug

“Thus, when we calculate the net degree of expressive responding by subtracting the acceptance effect from the rejection effect—essentially differencing off the baseline effect of the incentive from the reduction in rumor acceptance with payment—we find that the net expressive effect is negative 0.5%—the opposite sign of what we would expect if there was expressive responding. However, the substantive size of the estimate of the expressive effect is trivial. Moreover, the standard error on this estimate is 10.6, meaning the estimate of expressive responding is essentially zero.”

https://journals.uchicago.edu/doi/abs/10.1086/694258

(Note: This is not a full review of all the claims in the paper. There is more data in the paper than in the quote above. I am merely using the quote to clarify a couple of statistical points.)

There are two main points:

  1. The fact that the estimate is close to zero and the fact that the s.e. is very large are technically unrelated. The last sentence of the quote, however, seems to draw a connection between the two.
  2. The estimated effect sizes of expressive responding in the literature are much smaller than the s.e. Bullock et al. (Table 2) estimate the effect of expressive responding at about 4% and Prior et al. (Figure 1) at about 5.5% (“Figure 1(a) shows, the model recovers the raw means from Table 1, indicating a drop in bias from 11.8 to 6.3.”). Thus, one reasonable inference is that the study is underpowered to detect effect sizes of the expected magnitude (see the sketch below).
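
To make point 2 concrete, here is a rough back-of-the-envelope power sketch. It assumes the reported standard error of 10.6 is on the percentage-point scale, that a normal approximation to the sampling distribution of the net expressive effect holds, and a two-sided test at the conventional .05 level; the 4- and 5.5-point benchmarks are the Bullock et al. and Prior et al. estimates above.

```python
# Back-of-the-envelope power calculation under a normal approximation.
# Assumptions: the reported s.e. (10.6) is on the percentage-point scale,
# and we test H0: net expressive effect = 0 at the two-sided .05 level.
from scipy.stats import norm

SE = 10.6
ALPHA = 0.05
z_crit = norm.ppf(1 - ALPHA / 2)

def power(true_effect, se=SE):
    """Probability of rejecting H0 when the true effect is `true_effect`."""
    z = true_effect / se
    return norm.cdf(-z_crit - z) + 1 - norm.cdf(z_crit - z)

for effect in (4.0, 5.5):  # Bullock et al. and Prior et al. benchmarks
    print(f"Power to detect a {effect:.1f}-point effect: {power(effect):.2f}")

# Smallest effect detectable with 80% power (two-sided, alpha = .05)
mde = (z_crit + norm.ppf(0.80)) * SE
print(f"Minimum detectable effect at 80% power: ~{mde:.1f} points")
```

Under these assumptions, the power to detect a 4- to 5.5-point effect is below 10%, and the smallest effect detectable with 80% power is roughly 30 points.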

What Academics Can Learn From Industry

9 Aug

At its best, industry focuses people. It demands that people use everything at their disposal to solve an issue. It puts a premium on being lean, humble, agnostic, creative, and rigorous. Industry data scientists use qualitative methods, e.g., directly observing processes and people; run lean experiments; build novel instrumentation; explore relationships between variables; and “dive deep” to learn about the problem. As a result, at any moment, they have a numerical account of the problem space, an idea about the blind spots, the next five places they want to dig, the next five ideas they want to test, and the next five things they want the company to build—things that they know work.

The social science research economy also focuses its participants. Except the focus is on producing broad, novel insights (which may or may not be true) and demonstrating intellectual heft, not on producing cost-effective solutions to urgent problems. The result is a surfeit of poor theories, a misunderstanding of how much those theories explain and how widely they apply, a poor understanding of core social problems, and very few working solutions.

The tide is slowly turning. Don Green, Jens Hainmueller, Abhijit Banerjee, and Esther Duflo, among others, form the avant-garde. Poor Economics by Banerjee and Duflo, in particular, comes the closest in spirit to how industry works. It reminds me of how the best start-ups iterate to product-market fit.

Self-Diagnosis

Ask yourself the following questions:

  1. Do you have in your mind a small set of numbers that explain your current understanding of the scale of the problem and some of its solutions?
  2. If you were to get a large sum of money, could you give a principled account of how you would spend it on research?
  3. Do you know what you are excited to learn about the problem (or potential solutions) in the next three months, year, …?

If you are committed to solving a problem, the answer to all three questions should be an unhesitating yes. Why? A numerical understanding of the problem is needed to judge where to invest your time and money. It also guides what you would do if you had more money. And a focus on the problem means you have broken the problem down into solved and unsolved portions and know which unsolved portions you want to tackle next.

How to Solve Problems

Here are some rules of thumb (inspired by Abhijit Banerjee and Esther Duflo):

  1. What Problems to Solve? Work on Important Problems. The world is full of urgent social problems. Pick one. Calling whatever you are working on important when it has only a vague, multi-hop relation to an important problem doesn’t make it so. This decision isn’t without trade-offs. It is reasonable to fear the consequences of trading endless breadth for some focus. But we have tried the former, and it is probably as good a time as any to try something else.
  2. Learn About The Problem: Social scientists seem to have more elaborate theory and “original” experiments than descriptions of data. It is time to switch that around. Take, for instance, malnutrition. Before you propose selling cut-rate rice, take a moment to learn whether the key problem the poor face is that they can’t afford the necessary calories, or that they don’t get enough calories because they prefer tastier, more expensive calories to a full quota of cheap ones. (This is an example from Poor Economics.)
  3. Learn Theories in the Field: Judging by the output—books and articles—the production of social science seems to be fueled mostly by the flash of insight. But there is only so much you can learn sitting in an armchair. Many key insights will go undiscovered if you don’t go to the field and closely listen and think. Abhijit Banerjee writes: “We then ran a similar experiment across several hundred villages where the goal was now to increase the number of immunized children. We found that gossips convince twice as many additional parents to vaccinate their children as random seeds or “trusted” people. They are about as effective as giving parents a small incentive (in the form of cell-phone minutes) for each immunized child and thus end up costing the government much less. Even though gossips proved incredibly successful at improving immunization rates, it is hard to imagine a policy of informing gossips emerging from conventional policy analysis. First, because the basic model of the decision to get one’s children immunized focuses on the costs and benefits to the family (Becker 1981) and is typically not integrated with models of social learning.”
  4. Solve Small Problems And Earn the Right to Say Big General Things: The mechanism for deriving big theories in academia is the opposite of that used in industry. In much of social science, insights are declared “general” at the outset, and important contextual dependencies are discovered only over years of subsequent research. In industry, a solution is first tested in a narrow area. And then another. And if it works, we scale. The underlying hunch is that coming up with successful applications teaches us more about theory than the current model: come up with theory first, then produce post hoc rationalizations and add nuances when faced with failed predictions and applications. Going yet further, you could think of the purpose of social science as finding ways to fix problems; progress on understanding the problem, and on theory, is a positive externality.

Suggested Reading + Sites

  1. Poor Economics by Abhijit Banerjee and Esther Duflo
  2. The Economist as Plumber by Esther Duflo
  3. The Immigration Lab, which asks, among other questions, why immigrants who are eligible for citizenship do not get it, especially when the economic benefits of citizenship are so large.
  4. Cronbach (1975) highlights the importance of observation and context. A couple of memorable quotes:

    “From Occam to Lloyd Morgan, the canon has referred to parsimony in theorizing, not in observing. The theorist performs a dramatist’s function; if a plot with a few characters will tell the story, it is more satisfying than one with a crowded stage. But the observer should be a journalist, not a dramatist. To suppress a variation that might not recur is bad observing.”

    “Social scientists generally, and psychologists, in particular, have modeled their work on physical science, aspiring to amass empirical generalizations, to restructure them into more general laws, and to weld scattered laws into coherent theory. That lofty aspiration is far from realization. A nomothetic theory would ideally tell us the necessary and sufficient conditions for a particular result. Supplied the situational parameters A, B, and C, a theory would forecast outcome Y with a modest margin of error. But parameters D, E, F, and so on, also influence results, and hence a prediction from A, B, and C alone cannot be strong when D, E, and F vary freely.”

    “Though enduring systematic theories about man in society are not likely to be achieved, systematic inquiry can realistically hope to make two contributions. One reasonable aspiration is to assess local events accurately, to improve short-run control (Glass, 1972). The other reasonable aspiration is to develop explanatory concepts, concepts that will help people use their heads.”

Unsighted: Why Some Important Findings Remain Uncited

1 Aug

Poring over the first 500 of the over 900 citations for Fear and Loathing across Party Lines on Google Scholar (7/31/2020), I could not find a single study citing the paper for racial discrimination. You may think the reason is obvious—the paper is about partisan prejudice, not racial prejudice. But a more accurate description is that the paper is best known for documenting partisan prejudice yet also contains powerful evidence on the lack of racial discrimination among white Americans; in fact, one study offers reasonable evidence of positive discrimination. (I exclude the IAT results, which show a Cohen’s d of ~.22, weaker than Banaji’s results, because they don’t speak directly to discrimination.)

The paper offers two independent pieces of evidence on racial discrimination.

Candidate Selection Experiment

“Unlike partisanship where ingroup preferences dominate selection, only African Americans showed a consistent preference for the ingroup candidate. Asked to choose between two equally qualified candidates, the probability of an African American selecting an ingroup winner was .78 (95% confidence interval [.66, .87]), which was no different than their support for the more qualified ingroup candidate—.76 (95% confidence interval [.59, .87]). Compared to these conditions, the probability of African Americans selecting an outgroup winner was at its highest—.45—when the European American was most qualified (95% confidence interval [.26, .66]). The probability of a European American selecting an ingroup winner was only .42 (95% confidence interval [.34, .50]), and further decreased to .29 (95% confidence interval [.20, .40]) when the ingroup candidate was less qualified. The only condition in which a majority of European Americans selected their ingroup candidate was when the candidate was more qualified, with a probability of ingroup selection at .64 (95% confidence interval [.53, .74]).”

Evidence from Dictator and Trust Games

“From Figure 8, it is clear that in comparison with party, the effects of racial similarity proved negligible and not significant—coethnics were treated more generously (by eight cents, 95% confidence interval [–.11, .27]) in the dictator game, but incurred a loss (seven cents, 95% confidence interval [–.34, .20]) in the trust game. There was no interaction between partisan and racial similarity; playing with both a copartisan and coethnic did not elicit additional trust over and above the effects of copartisanship.”

There are two plausible explanations for the lack of citations. Both are easily ruled out. The first is that the quality of evidence for racial discrimination is worse than that for partisan discrimination. Given that both claims rest on the same data and research design, that explanation doesn’t work. The second is a difference in the base rates of production of research on racial and partisan discrimination. A quick Google search debunks that theory. Between 2015 and 2020, I get 135k results for racial discrimination and 17k for partisan polarization. The comparison isn’t exact, but it is good enough to rule out base rates as the explanation for the pattern I see. This likely leaves us with two explanations: a) researchers hesitate to cite results that run counter to their priors or their own results, and b) people are simply unaware of these results.

Gaming Measurement: Using Economic Games to Measure Discrimination

31 Jul

Prejudice is the bane of humanity. Measurement of prejudice, in turn, is a bane of social scientists. Self-reports are unsatisfactory. Like talk, they are cheap and thus biased and noisy. Implicit measures don’t even pass the basic hurdle of measurement—reliability. Against this grim background, economic games as measures of prejudice seem promising—they are realistic and capture costly behavior. Habyarimana et al. (HHPW for short), for instance, use the dictator game (they also have a neat variation of it, which they call the ‘discrimination game’) to measure ethnic discrimination. Since then, many others have used the design, including, prominently, Iyengar and Westwood (IW for short). But there are some issues with how economic games have been set up, analyzed, and interpreted:

  1. Revealing identity upfront gives you a ‘no personal information’ estimand: One common aspect of how economic games are set up is that the party/tribe is revealed upfront. Revealing the trait upfront, however, may be sub-optimal. The likelier sequence of interaction and discovery of party/tribe in the world, especially as we move online, is regular interaction followed by discovery. To that end, a game where players interact for a few cycles before an ‘irrelevant’ trait is revealed about them is plausibly more generalizable. What we learn from such games can be provocative—discrimination after a history of fair economic transactions seems dire.
  2. Using data from subsequent movers can bias estimates. “For example, Burnham et al. (2000) reports that 68% of second movers primed by the word “partner” and 33% of second movers primed by the word “opponent” returned money in a single-shot trust game. Taken at face value, the experiment seems to show that the priming treatment increased by 35 percentage-points the rate at which second movers returned money. But this calculation ignores the fact that second movers were exposed to two stimuli, the partner/opponent prime and the move of the first player. The former is randomly assigned, but the latter is not under experimental control and may introduce bias.” (Green and Tusicisny) IW smartly sidestep the concern: “In both games, participants only took the role of Player 1. To minimize round-ordering concerns, there was no feedback offered at the end of each round; participants were told all results would be provided at the end of the study.”
  3. The AMCE of conjoint experiments is subtle and subject to assumptions (see the sketch after this list). The experiment in IW is a conjoint experiment: “For each round of the game, players were provided a capsule description of the second player, including information about the player’s age, gender, income, race/ethnicity, and party affiliation. Age was randomly assigned to range between 32 and 38, income varied between $39,000 and $42,300, and gender was fixed as male. Player 2’s partisanship was limited to Democrat or Republican, so there are two pairings of partisan similarity (Democrats and Republicans playing with Democrats and Republicans). The race of Player 2 was limited to white or African American. Race and partisanship were crossed in a 2 × 2, within-subjects design totaling four rounds/Player 2s.” The first subtlety is that the AMCE for partisanship is identified against the distribution of the other attributes (gender, age, race, etc.). For generalizability, we may want a distribution close to the real world. As Hainmueller et al. write: “…use the real-world distribution (e.g., the distribution of the attributes of actual politicians) to improve external validity. The fact that the analyst can control how the effects are averaged can also be viewed as a potential drawback, however. In some applied settings, it is not necessarily clear what distribution of the treatment components analysts should use to anchor inferences. In the worst-case scenario, researchers may intentionally or unintentionally misrepresent their empirical findings by using weights that exaggerate particular attribute combinations so as to produce effects in the desired direction.” Second, there is always a chance that it is a particular higher-order combination, e.g., race–PID, that ‘explains’ the main effect.
  4. Skew in outcome variables means that the mean is not a good summary statistic. As you can see in the last line of the first panel of Table 4 (Republican—Republican Dictator Game), if you take out the 20% of people who give $0, the average allocation from the rest is $4.2. HHPW handle this with a variable called ‘egoist’ and IW handle it with a separate column tallying the people who give precisely $0.
  5. The presence of ‘white foreigners’ can make people behave more generously. As Dube et al. find, “the presence of a white foreigner increases player contributions by 19 percent.” The point is more general, of course. 
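
To make the subtlety in point 3 concrete, here is a toy sketch with made-up potential outcomes (not IW’s data or design). When copartisanship and coethnicity interact, the AMCE of copartisanship shifts with the assumed distribution of coethnicity, even though individual behavior never changes.

```python
# Toy AMCE illustration with hypothetical potential outcomes (not IW's data).
def allocation(copartisan, coethnic):
    # Made-up dollar allocations: a copartisan bonus that is larger when the
    # other player is also a coethnic (an interaction).
    return 3.0 + 1.0 * copartisan + 0.2 * coethnic + 0.8 * copartisan * coethnic

def amce_copartisan(p_coethnic):
    """AMCE of copartisanship when coethnics make up p_coethnic of profiles."""
    effect_if_coethnic = allocation(1, 1) - allocation(0, 1)      # 1.8
    effect_if_noncoethnic = allocation(1, 0) - allocation(0, 0)   # 1.0
    return p_coethnic * effect_if_coethnic + (1 - p_coethnic) * effect_if_noncoethnic

print(amce_copartisan(0.50))  # uniform design distribution: 1.4
print(amce_copartisan(0.13))  # a skewed, more 'real-world' distribution: ~1.1
```

The copartisan “main effect” is 1.4 when coethnics make up half the profiles and about 1.1 when they make up 13%, which is the sense in which the AMCE is anchored to whatever distribution the analyst chooses.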

With that, here are some things we can learn from economic games in HHPW and IW:

  1. People are very altruistic. In HHPW: “The modal strategy, employed in 25% of the rounds, was to retain 400 USh and to allocate 300 USh to each of the other players. The next most common strategy was to keep 600 USh and to allocate 200 USh to each of the other players (21% of rounds). In the vast majority of allocations, subjects appeared to adhere to the norm that the two receivers should be treated equally. On average, subjects retained 540 shillings and allocated 230 shillings to each of the other players. The modal strategy in the 500 USh denomination game (played in 73% of rounds) was to keep one 500 USh coin and allocate the other to another player. Nonetheless, in 23% of the rounds, subjects allocated both coins to the other players.” In IW, “[of the $10, players allocated] nontrivial amounts of their endowment—a mean of $4.17 (95% confidence interval [3.91, 4.43]) in the trust game, and a mean of $2.88 (95% confidence interval [2.66, 3.10])” (Note: These numbers are hard to reconcile with the numbers in Table 4. One plausible explanation is that these numbers are computed over the entire sample while the Table 4 numbers are for the subset of partisans, and independents are somewhat less generous than partisans.)
  2. There is no co-ethnic bias. Both HHPW and IW find this. HHPW: “we find no evidence that this altruism was directed more at in-group members than at out-group members. [Table 2]” IW: “From Figure 8, it is clear that in comparison with party, the effects of racial similarity proved negligible and not significant—coethnics were treated more generously (by eight cents, 95% confidence interval [–.11, .27]) in the dictator game, but incurred a loss (seven cents, 95% confidence interval [–.34, .20]) in the trust game.”
  3. A modest proportion of people discriminate against partisans. IW: “The average amount allocated to copartisans in the trust game was $4.58 (95% confidence interval [4.33, 4.83]), representing a “bonus” of some 10% over the average allocation of $4.17. In the dictator game, copartisans were awarded 24% over the average allocation.” But it is less dramatic than that. The key change in the dictator game is in the number of people giving $0. The change in the share of people giving $0 is 7 percentage points among Democrats. So the average amount of money given to R and D by people who didn’t give $0 is $4.1 and $4.4 respectively, which is a ~7% difference (see the sketch after this list).
  4. More Republicans than Democrats act like ‘homo economicus.’ I am just going by the proportion of respondents giving $0 in the dictator games.
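
The arithmetic in point 3 is easier to see if the mean is written as a mixture of the share giving $0 and the average among everyone else. This is a minimal sketch with approximate figures pulled from the discussion above; the $0 shares are illustrative stand-ins consistent with a 7-point gap, not numbers read off IW’s Table 4.

```python
# The overall mean decomposes as (1 - share_zero) * mean_among_givers.
def mean_allocation(share_zero, mean_among_givers):
    """Overall mean allocation when a share of players gives exactly $0."""
    return (1 - share_zero) * mean_among_givers

# Hypothetical $0 shares chosen only to reflect a 7-point gap; $4.4 and $4.1
# are the approximate averages among those who gave something, as noted above.
to_copartisan = mean_allocation(share_zero=0.13, mean_among_givers=4.4)
to_outpartisan = mean_allocation(share_zero=0.20, mean_among_givers=4.1)

print(round(to_copartisan, 2), round(to_outpartisan, 2))   # 3.83 3.28
print(f"gap among givers alone: {(4.4 - 4.1) / 4.1:.0%}")  # 7%
```

Most of the gap in the headline means comes from the change in the share of people giving $0; among those who give anything, the difference is about 7%.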

p.s. I was surprised that there are no replication scripts or even a codebook for IW. The data had been downloaded 275 times when I checked.

Rocks and Scissors for Papers

17 Apr

Zach and Jack* write:

What sort of papers best serve their readers? We can enumerate desirable characteristics: these papers should

(i) provide intuition to aid the reader’s understanding, but clearly distinguish it from stronger conclusions supported by evidence;

(ii) describe empirical investigations that consider and rule out alternative hypotheses [62];

(iii) make clear the relationship between theoretical analysis and intuitive or empirical claims [64]; and

(iv) use language to empower the reader, choosing terminology to avoid misleading or unproven connotations, collisions with other definitions, or conflation with other related but distinct concepts [56].

Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:

1. Failure to distinguish between explanation and speculation.

2. Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning.

3. Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.

4. Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.

Funnily enough, Zach and Jack fail to take their own advice: they claim a ‘troubling trend’ without presenting systematic evidence for it, relying instead on anecdote. But the points they make are compelling. The second and third points are especially applicable to economics, though they apply to a lot of scientific production.


* It is Zachary and Jacob.

Citing Working Papers

2 Apr

Public versions of working papers are increasingly the norm. So are citations to them. But there are three concerns with citing working papers:

  1. Peer review: Peer review improves the quality of papers, but often enough it doesn’t catch serious, basic issues. Thus, a lack of peer review is not as serious a problem as is often claimed.
  2. Versioning: Which version did you cite? Often, there is no canonical versioning system. The best we have is tracking which conference the paper was presented at. This is not good enough.
  3. Availability: Can I check the paper, code, and data for a version? Often enough, the answer is no.

The solution to the latter two is to increase transparency through the entire pipeline. For instance, people can check how my paper with Ken has evolved on GitHub, including any coding errors that have been fixed between versions. (Admittedly, the commit messages can be improved. Better commit messages—plus descriptions—can make it easier to track changes across versions.)

The first point doesn’t quite deserve addressing, in that the current system already draws an optimistic line on the quality of published papers. Peer review ought not to end when a paper is published in a journal. If we accept that, then concerns flagged by peers and non-peers alike can be addressed in commits or responses to issues and appropriately credited.

Stemming Link Rot

23 Mar

The Internet gives many things. But none that are permanent. That is about to change. Librarians got together and recently launched https://perma.cc/, which provides permanent links to stuff.

Why is link rot important?

Here’s an excerpt from a paper by Gertler and Bullock:

“more than one-fourth of links published in the APSR in 2013 were broken by the end of 2014”

If what you are citing evaporates, there is no way to check the veracity of the claim. Journal editors: pay attention!

Sometimes Scientists Spread Misinformation

24 Aug

To err is human. Good scientists are aware of that, painfully so. The model scientist obsessively checks everything twice over and still keeps eyes peeled for loose ends. So it is a shock to learn that some of us are culpable for spreading misinformation.

Ken and I find that articles with serious errors, even articles based on fraudulent data, continue to be approvingly cited—cited without any mention of any concern—long after the problems have been publicized. Using a novel database of over 3,000 retracted articles and over 74,000 citations to these articles, we find that at least 31% of the citations to retracted articles happen a year after the publication of the retraction notice. And that over 90% of these citations are approving.

What gives our findings particular teeth is the role citations play in science. Many, if not most, claims in a scientific article rely on work done by others. And scientists use citations to back such claims. Readers rely on scientists to note any concerns that impinge on the underlying evidence for the claim. And when scientists cite problematic articles without noting any concerns, they very plausibly misinform their readers.

Though 74,000 is a large enough number to be deeply concerning, retractions are relatively infrequent. And that may lead some people to discount these results. Retractions may be infrequent, but citations to retracted articles post-retraction are extremely revealing. Retraction is a low, low bar. Retractions are often the result of convincing evidence of serious malpractice, generally fraud or serious error. Anything else, for example, a serious error in data analysis, is usually left to self-correct. And if scientists are approvingly citing retracted articles after they have been retracted, it means that they have failed to clear that low, low bar. Such failure suggests a broader malaise.

To investigate the broader malaise, Ken and I exploited data from an article published in Nature that notes a statistical error in a series of articles published in prominent journals. Once again, we find that approving citations to erroneous articles persist after the error has been publicized. After the error has been publicized, the rate of citation to erroneous articles is, if anything, higher, and 98% of the citations are approving.

In all, it seems, we are failing.

The New Unit of Scientific Production

11 Aug

One fundamental principle of science is that there is no privileged observer. You get to question what people did. But to question, you first must know what people did. So part of good scientific practice is to make it easy for people to understand how the sausage was made—how the data were collected, transformed, and analyzed—and, ideally, why you chose to make the sausage that particular way. Papers are OK places for describing all this, but we now have better tools: version-controlled repositories with notebooks and readme files.

The barrier to understanding is not just lack of information, but also poorly organized information. There are three different arcs of information: cross-sectional (where everything is and how it relates to each other), temporal (how the pieces evolve over time), and inter-personal (who is making the changes). To be organized cross-sectionally, you need to be macro organized (where is the data, where are the scripts, what do each of the scripts do, how do I know what the data mean, etc.), and micro organized (have logic and organization to each script; this also means following good coding style). Temporal organization in version control simply requires you to have meaningful commit messages. And inter-personal organization requires no effort at all, beyond the logic of pull requests.

The obvious benefits of this new way are well known. What is less discussed is that it allows you to critique specific pull requests and decisions made in particular commits. This provides an entirely new way to make progress in science. The new unit of science also means that we don’t just dole out credit in a crude currency like journal articles; we can also provide lower denominations. We can credit each edit, each suggestion. And why not? The third big benefit is that we can build epistemological trees where the logic of disagreement is clear.

The dead tree edition is dead. It is also time to retire the e-version of the dead tree edition.

Sigh-tations

1 May

In 2010, Google estimated that approximately 130M books had been published.

As a species, we still know very little about the world. But what we know already far exceeds what any of us can learn in a lifetime.

Scientists are acutely aware of this point. They must specialize, as the chances of learning all the key facts about anything but the narrowest of domains are slim. They must also resort to shorthand to communicate what is known and what is new. The shorthand they use is the citation. However, this vital building block of science is often rife with problems. The three key problems with how scientists cite are:

1. Cite in an imprecise manner. “This broad claim is supported by X.” Or, “our results are consistent with XYZ.” (“Our results are consistent with” reflects directional thinking rather than thinking in terms of effect size. That means all sorts of effects are ‘consistent,’ even those 10x as large.) For an example of how I think work should be cited, see Table 1 of this paper.

2. Do not carefully read what they cite. This includes misstating key claims and citing retracted articles approvingly (see here). The corollary is that scientists do not closely scrutinize the papers they cite, with the extent of scrutiny explained by how much they agree with the results (see the next point). For a provocative example, see here.

3. Cite in a motivated manner. Scientists ‘up’ the thesis of articles they agree with, for instance, misstating correlation as causation. And they blow up minor methodological points in articles whose results are ‘inconsistent’ with their own. (A brief note on motivated citations: here.)