What’s the Next Best Thing to Learn?

10 Oct

With Gaurav Gandhi

Recommendation engines are everywhere. These systems recommend what shows to watch on Netflix and what products to buy on Amazon. Since at least the Netflix Prize, the conventional wisdom has been that recommendation engines have become very good. Except that they are not. Some of the deficiencies are deliberate. Netflix has made a huge bet on its shows, and it makes every effort to highlight its Originals over everything else. Other deficiencies are a result of a lack of content. The point is easily proved: how often have you done a futile extended search for something “good” to watch?

Take the above two explanations out, and still, the quality of recommendations is poor. For instance, YouTube struggles to recommend high-quality, relevant videos on machine learning. It fails on relevance because it recommends videos that are either too difficult or too easy. And it fails on quality: the opaqueness of the explanations makes most points hard to understand. When I look back, most of the high-quality content on machine learning that I have come across is a result of subscribing to the right channels—human curation. Take another painful aspect of most recommendations: a narrow understanding of our interests. You watch a few food travel shows, and YouTube will recommend twenty others.

Problems

What is the next best thing to learn? It is an important question to ask. To answer it, we need to know the objective function. But the objective function is too hard to formalize and yet harder to estimate. Is it money we want, or is it knowledge, or is it happiness? Say we decide it's money. For most people, after accounting for risk, the answer may be: learn to program. But what would the equilibrium effects be if everyone did that? Not great. So we ask a simpler question: what is the next reasonable unit of information to learn?

Meno’s paradox states that we cannot be curious about something that we already know, and, using a Rumsfeld-ism, we cannot be curious about things we don’t know we don’t know. The domain of things we can be curious about, hence, is things we know that we don’t know. For instance, I know that I don’t know enough about dark matter. But the complete set of things worth learning includes things we don’t know we don’t know.

The set of things that we don’t know is very large. But that is not the set from which we can usefully learn next. The set of relevant information is described by the frontier of our knowledge. The unit of information we are ready to consume is not a random unit from the set of things we don’t know but one from the set of things we can learn given what we know. As I note above, a bunch of the ML lectures that YouTube recommends are beyond me.

There is a further constraint on ‘relevance.’ Of the relevant set, we are only curious about things we are interested in. But it is an open question how we entice people to learn about things that they will find interesting. It is the same challenge Netflix faces when trying to educate people about movies they haven’t heard of or seen.

Conditional on knowing the next best substantive unit, we care about quality.  People want content that will help them learn what they are interested in most efficiently. So we need to solve for the best source to learn the information.

Solutions

Known-Known

For things we know, the big question is how to optimally retain them. It may be through flashcards or what have you.

Exploring the Unknown

  1. Learn What a Person Knows (and Doesn’t Know): The first step in learning the set of information that a person doesn’t know is to learn what the person knows. The best way to do that is to build a system that monitors all the information we consume on the Internet.
  2. Classify: Next, use ML to split the information into broad areas.
  3. Estimate the Frontier of Knowledge: To establish the frontier of knowledge, we need to put what people know on a scale. We can derive that scale by exploiting syllabi and class structure (101, 102, etc.) and associated content and then scaling all the content (YouTube videos, books, etc.) by estimating similarity with the relevant level of content. (We can plausibly also establish a scale by following the paths people take–videos that they start but don’t finish are good indications of content being too easy or too hard, for instance.)

    We can also use tools like quizzes to establish the frontier, but the quizzes will need to be built from a system that understands how information is stacked.
  4. Estimate the Quality of Content: Rank content within each topic and each level by quality. Infer quality through both explicit and implicit measures. Use this to build the relevant set.
  5. Recommend From the Relevant Set: Recommend a wide variety of content from the relevant set.
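To make the pipeline concrete, here is a minimal sketch in R of steps 3 to 5. The consumed and catalog data frames, the level scores, and the quality ratings are all made up for illustration; in practice the levels would come from scaling content against syllabi as described above.

# Hypothetical data: what the user has consumed, and a scored catalog.
# 'level' is the difficulty score (101, 102, ...) estimated by scaling
# content against syllabi; 'quality' combines explicit and implicit ratings.
consumed <- data.frame(
  topic = c("ml", "ml", "stats"),
  level = c(101, 102, 101)
)
catalog <- data.frame(
  id      = 1:6,
  topic   = c("ml", "ml", "ml", "stats", "stats", "stats"),
  level   = c(101, 103, 104, 102, 102, 105),
  quality = c(4.1, 4.8, 3.9, 4.5, 3.2, 4.9)
)

# Step 3: estimate the frontier -- the highest level consumed per topic.
frontier <- aggregate(level ~ topic, data = consumed, FUN = max)
names(frontier)[2] <- "frontier_level"

# Steps 4-5: keep content just beyond the frontier and rank it by quality.
relevant <- merge(catalog, frontier, by = "topic")
relevant <- relevant[relevant$level > relevant$frontier_level &
                     relevant$level <= relevant$frontier_level + 1, ]
relevant[order(-relevant$quality), c("id", "topic", "level", "quality")]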

Distance Function For Matched Randomization

6 Oct

What is the right distance function for creating greater balance before we randomize? One way is to not think too much about the distance function at all. For instance, this paper takes all the variables, treats them the same (you can normalize if you want to), and uses the Mahalanobis distance, or what have you. There are two decisions here: about the subspace and about the weights.

Surprisingly, these ad hoc choices don’t have serious pitfalls except that the balance we finally get in Y_0 (which is the quantity of interest) may not be great. There is also one case where the method will fail. The point is best illustrated with a contrived example. Imagine there is just one observed X and say it is pure noise. If we were to match on noise and then randomize, matching will increase the imbalance in Y_0 half the time and decrease it the other half. In all, the benefit of spending a lot of energy on improving balance in an ad hoc space, which may or may not help the true objective function, is likely overstated.

If we have a baseline survey with baseline Ys, and we assume that lagged Y predicts Y_0, then the optimal strategy is to match on lagged Y. If we have surveys from multiple time periods, we can build a supervised learning model to predict Y in the next time period and match on the Y_hat. The same logic applies when we don’t have lagged Y for all the rows: we can impute it with supervised learning.
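Here is a minimal sketch in R of the lagged-Y strategy, with made-up data: predict Y_hat from the lagged outcome with a simple supervised model (here, OLS), pair units that are adjacent on the sorted Y_hat, and randomize within pairs.

set.seed(1)

# Hypothetical baseline data: two lagged outcomes per unit.
n  <- 100
df <- data.frame(y_lag2 = rnorm(n), y_lag1 = rnorm(n))
df$y_lag1 <- df$y_lag1 + .7 * df$y_lag2            # some persistence

# Learn the one-period-ahead relationship from the two baseline waves ...
fit <- lm(y_lag1 ~ y_lag2, data = df)
# ... and apply it one period forward to predict the (unobserved) Y_0.
df$y_hat <- predict(fit, newdata = data.frame(y_lag2 = df$y_lag1))

# Pair adjacent units on the sorted y_hat (optimal for a one-dimensional score).
ord      <- order(df$y_hat)
pair_ids <- matrix(ord, ncol = 2, byrow = TRUE)

# Randomize within pairs: one unit to treatment, one to control.
df$treat <- NA
flip <- rbinom(nrow(pair_ids), 1, .5)
df$treat[pair_ids[, 1]] <- flip
df$treat[pair_ids[, 2]] <- 1 - flip

table(df$treat)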

Unmatched: The Problem With Comparing Matching Methods

5 Oct

In many matching papers, the key claim proceeds as follows: our matching method is better than others because on this set of contrived data, treatment effect estimates are closest to those from the ‘gold standard’ (experimental evidence).

Let’s side-step concerns related to an important point: evidence that a method works better than other methods on some data is hard to interpret as we do not know if the fact generalizes. Ideally, we want to understand the circumstances in which the method works better than other methods. If the claim is that the method always works better, then prove it.

There is a more fundamental concern here. Matching changes the estimand by pruning some of the data as it takes out regions with low support. But the regions that are taken out vary by the matching method. So, technically, the estimands that rely on different matching methods are different—treatment effects over different sets of rows. And if the estimate from method X comes closer to the gold standard than the estimate from method Y, it may be because the set of rows method X selects produces a treatment effect that is closer to the gold standard. It doesn’t, however, mean that method X’s inference on the set of rows it selects is the best. (And we do not know how either estimate technically relates to the ATE.)

Optimal Recruitment For Experiments: Using Pair-Wise Matching Distance to Guide Recruitment

4 Oct

Pairwise matching before randomization reduces s.e. (see here, for instance). Generally, the strategy is used to create balanced control and treatment groups from available observations. But we can use the insight for optimal sample recruitment, especially in cases where we have a large panel of respondents with baseline data, like YouGov. The algorithm is similar to what YouGov already uses, except it is tailored to experiments:

  1. Start with a random sample.
  2. Come up with optimal pairs based on whatever criteria you have chosen.
  3. Reverse sort pairs by distance with the pairs with the largest distance at the top.
  4. Find the best match in the rest of the panel file for one of the randomly chosen points in the pair. (If you have multiple equivalent matches, pick one at random.)
  5. Proceed as far down the list as needed.

Technically, we can go straight from step 1 to step 4 if we choose a random sample that is half the size we want for the experiment; we just need to find the best match in the panel for each sampled respondent.
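Here is a rough sketch in R of that shortcut version. The panel file and the two baseline covariates are made up, and I use Mahalanobis distance as the pairing criterion purely for illustration.

set.seed(1)

# Hypothetical panel file with baseline covariates.
panel <- data.frame(age = rnorm(5000, 45, 15), ideology = rnorm(5000))
X     <- as.matrix(panel)
S_inv <- solve(cov(X))

n_target <- 200                                 # experiment size
seed_ids <- sample(nrow(panel), n_target / 2)   # step 1: random half-sample

# Step 4 (shortcut): for each seed, find its nearest unused panelist.
mates <- integer(length(seed_ids))
used  <- rep(FALSE, nrow(panel))
used[seed_ids] <- TRUE
for (i in seq_along(seed_ids)) {
  d <- mahalanobis(X, center = X[seed_ids[i], ], cov = S_inv, inverted = TRUE)
  d[used] <- Inf                                # don't recruit anyone twice
  mates[i] <- which.min(d)
  used[mates[i]] <- TRUE
}

sample_ids <- c(seed_ids, mates)                # recruit these respondents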

Rent-seeking: Why It Is Better to Rent than Buy Books

4 Oct

It has taken me a long time to realize that renting books is the way to go for most books. The frequency with which I go back to a book is so low that I don’t really see any returns on permanent possession that accrue from the ability to go back.

Renting also has the virtue of disciplining me: I rent when I am ready to read, and it incentivizes me to finish the book (or graze and assess whether the book is worth finishing) before the rental period expires.

For e-books, my format of choice, buying a book is even less attractive. One reason why people buy a book is for the social returns from displaying the book on a bookshelf. E-books don’t provide that, though in time people may devise mechanisms to do just that. Another reason why people prefer buying books is that they want something ‘new.’ Once again, the concern doesn’t apply to e-books.

From a seller’s perspective, renting has the advantage of expanding the market. Sellers get money from people who would otherwise not buy the book. These people may, instead, substitute it by copying the book or borrowing it from a friend or a library or getting similar content elsewhere, e.g., Youtube or other (cheaper) books, or they may simply forgo reading the book.

STEMing the Rot: Does Relative Deprivation Explain Low STEM Graduation Rates at Top Schools?

26 Sep

The following few paragraphs are from Sociation Today:


Using the work of Elliot (et al. 1996), Gladwell compares the proportion of each class which gets a STEM degree compared to the math SAT at Hartwick College and Harvard University.  Here is what he presents for Hartwick:

Students at Hartwick College

STEM Majors     Top Third    Middle Third    Bottom Third
Math SAT        569          472             407
STEM degrees    55.0%        27.1%           17.8%

So the top third of students with the Math SAT as the measure earn over half the science degrees. 

    What about Harvard?   It would be expected that Harvard students would have much higher Math SAT scores and thus the distribution would be quite different.  Here are the data for Harvard:

Students at Harvard University

STEM Majors     Top Third    Middle Third    Bottom Third
Math SAT        753          674             581
STEM degrees    53.4%        31.2%           15.4%

     Gladwell states the obvious, in italics, “Harvard has the same distribution of science degrees as Hartwick,” p. 83. 

    Using his reference theory of being a big fish in a small pond, Gladwell asked Ms. Sacks what would have happened if she had gone to the University of Maryland and not Brown. She replied, “I’d still be in science,” p. 94.


Gladwell focuses on the fact that the bottom third at Harvard has roughly the same Math SAT score as the top third at Hartwick and points to the fact that the two groups earn STEM degrees at very different rates. It is a fine point. But there is more to the data. The top third at Harvard has much higher SAT scores than the top third at Hartwick. Why does it graduate with a STEM degree at the same rate as the top third at Hartwick? One answer is that STEM degrees at Harvard are harder. So harder coursework at Harvard (vis-a-vis Hartwick) is another explanation for the pattern we see in the data and, in fact, fits the data better as it also explains the performance of the top third at Harvard.

Here’s another way to put the point: if preferences for graduating in STEM are solely and almost deterministically explained by Math SAT scores, as Gladwell implicitly assumes, and the major headwinds come from relative standing, then we should see a much higher STEM graduation rate for the top third at Harvard. We should see an intercept shift across schools with a common differential between the top and the bottom third, and we don’t see that intercept shift (see the sketch below).
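A rough back-of-the-envelope in R makes the point. Assume, purely for illustration, that P(STEM degree) is a logistic function of Math SAT alone, calibrate it to the Hartwick terciles, and see what it predicts for Harvard:

# Calibrate logit(p) = a + b * SAT from Hartwick's top and bottom terciles.
sat_hartwick <- c(569, 407)
p_hartwick   <- c(.550, .178)
b <- diff(qlogis(p_hartwick)) / diff(sat_hartwick)
a <- qlogis(p_hartwick[1]) - b * sat_hartwick[1]

# Predicted STEM rates at Harvard's tercile means under the SAT-only model.
sat_harvard <- c(753, 674, 581)
round(plogis(a + b * sat_harvard), 2)
# ~ .90, .79, .58 -- far above the observed .53, .31, .15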

Campaigns, Construction, and Moviemaking

25 Sep

American presidential political campaigns, big construction projects, and big-budget moviemaking have a lot in common. They are all complex enterprises with lots of moving parts, they all bring together lots of people for a short period, and they all need people to hit the ground running and execute in lockstep to succeed. Success in these activities relies a lot on great software and the ability to hire competent people quickly. It remains an open opportunity to build great software for these industries, software that allows people to plan and execute together.

Dismissed Without Prejudice: Evaluating Prejudice Reduction Research

25 Sep

Prejudice is a blight on humanity. How to reduce prejudice, thus, is among the most important social scientific questions. In the latest assessment of research in the area, a follow-up to the 2009 Annual Review article, Betsy Paluck et al., however, paint a dim picture. In particular, they note three dismaying things:

Publication Bias

Table 1 (see below) makes for grim reading. While one could argue that the pattern is explained by the fact that lab research tends to have smaller samples and especially powerful treatments, the numbers suggest—see the average s.e. of the first two rows (it may have been useful to produce a $\sqrt{1/n}$-adjusted s.e.)—that publication bias very likely plays a large role. It is also shocking to know that just a fifth of the studies have treatment groups with 78 or more people.
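For concreteness, here is the kind of adjustment I have in mind, in R, with made-up numbers. Because the s.e. of a mean scales roughly with 1/sqrt(n), multiplying each row's average s.e. by sqrt(n) puts rows with very different sample sizes on a common scale:

# Hypothetical Table 1 rows: average s.e. and average sample size.
rows <- data.frame(setting = c("lab", "field"),
                   avg_se  = c(.12, .03),
                   avg_n   = c(60, 900))

# s.e. of a mean ~ sd / sqrt(n), so s.e. * sqrt(n) recovers a size-free scale.
rows$adj_se <- rows$avg_se * sqrt(rows$avg_n)
rows
# Comparing adj_se across rows separates what sample size explains from what
# stronger treatments or selective publication must explain.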

Light Touch Interventions

The article is remarkably measured when talking about the rise of ‘light touch’ interventions—short exposure treatments. I would have described them as ‘magical thinking,’ for they seem to be founded on the belief that we can make profound changes in people’s thinking on the cheap. This isn’t to say light-touch interventions can’t be worked into a regime that effects profound change—repeated light touches may work. However, as far as I can tell, no study tried multiple touches to see how the effect cumulates.

Near Contemporaneous Measurement of Dependent Variables

Very few papers judged the efficacy of the intervention a day or more after the intervention. Given the primary estimate of interest is longer-term effects, it is hard to judge the efficacy of the treatments in moving the needle on the actual quantity of interest.   

Beyond what the paper notes, here are a couple more things to consider:

  1. Perspective-getting works better than perspective-taking. It would be good to explore this further in inter-group settings.
  2. One way to categorize ‘basic research interventions’ is by decomposing the treatment into its primary aspects and then slowly building back up bundles based on data:
    1. channel: f2f, audio (radio, etc.), visual (photos, etc.), audio-visual (tv, web, etc.), VR, etc.
    2. respondent action: talk, listen, see, imagine, reflect, play with a computer program, work together with someone, play together with someone, receive a public scolding, etc.
    3. source: peers, strangers, family, people who look like you, attractive people, researchers, authorities, etc.
    4. message type: parable, allegory, story, graph, table, drama, etc.
    5. message content: facts, personal stories, examples, Jonathan Haidt style studies that show some of the roots of our morality are based on poor logic, etc.

everywhere: meeting consumers where they are

1 Sep

Content delivery is not optimized for the technical stack used by an overwhelming majority of people. The technical stack of people who aren’t particularly tech-savvy, especially those who are old (over ~60 years), is often a messaging application like FB Messenger or WhatsApp. They currently do not have a way to ‘subscribe’ to Substack newsletters or podcasts or YouTube videos in the messaging application that they use (see below for an illustration of how this may look in the iPhone messaging app). They miss content. And content producers have an audience hole.

Credit: Gaurav Gandhi

A lot of content is distributed only via email or within a specific application. There are good strategic reasons for that—you get to monitor consumption, recommend accordingly, control monetization, etc. But the reason why platforms like Substack, which enable independent content producers, limit distribution to email is not as immediately clear. It is unlikely to be a deliberate decision. It is more likely a consequence of a lack of infrastructure that connects publishing to the various messaging platforms. The future of messaging platforms looks like Slack—a platform that integrates as many applications as possible. As WhatsApp rolls out its business API, there is a potential to build an integration that allows producers to deliver premium content, leverage other kinds of monetization, like ads, and even build a recommendation stack. Eventually, it would be great to build that kind of integration for each of the messaging platforms, including iMessage, FB Messenger, etc.

Let me end by noting that there is something special about WhatsApp. No one has replicated its phone-number-based messaging platform. And the idea of enabling a larger stack based on phone numbers remains unplumbed. Duo and FaceTime are great examples, but there is potential for so much more. For instance, a calendar app that runs on the phone-number-as-identity architecture.

The (Mis)Information Age: Provenance is Not Enough

31 Aug

The information age has brought both bounty and pestilence. Today, we are deluged with both correct and incorrect information. If we knew how to tell apart correct claims from incorrect ones, we would have inched that much closer to utopia. But the lack of nous in telling apart generally ‘obvious’ incorrect claims from correct claims has brought us close to the precipice of disarray. Thus, improving people’s ability to identify untrustworthy claims as such takes on urgency.

http://gojiberries.io/2020/08/31/the-misinformation-age-measuring-and-improving-digital-literacy/

Inferring the Quality of Evidence Behind the Claims: Fact Check and Beyond

One way around misinformation is to rely on an expert army that assesses the truth value of claims. However, assessing the truth value of a claim is hard. It needs expert knowledge and careful research. When validating, we have to identify which parts are wrong, which parts are right but misleading, and which parts are debatable. All in all, vetting even a few claims is a noisy and time-consuming process. Fact-check operations, hence, cull a small number of claims and try to validate those. As the rate of production of information increases, thwarting misinformation by checking all the claims seems implausibly expensive.

Rather than assess the claims directly, we can assess the process. Or, in particular, the residue of one part of the process of making the claim—the sources. Except for claims based on private experience, e.g., religious experience, claims are based on sources. We can use the features of these sources to infer credibility. The first feature is the number of sources cited to make a claim. All else equal, the greater the number of sources saying the same thing, the greater the chance that the claim is true. None of this is to undercut a common observation: lots of people can be wrong about something. A second, harder test for veracity is whether a diverse set of people say the same thing. The third test is checking the credibility of the sources.

Relying on the residue is not a panacea. People can simply lie about the source. We want the source to verify what they have been quoted as saying. And in the era of cheap data, this can be easily enabled. Quotes can be linked to video interviews or automatic transcriptions electronically signed by the interviewee. The same system can be scaled to institutions. The downside is that the system may prove onerous. On the other hand, commonly, the same source is cited by many people so a public repository of verified claims and evidence can mitigate much of the burden.

But will this solve the problem? Likely not. For one, people can still commit sins of omission. For two, they can still draft things in misleading ways. For three, trust in sources may not be tied to correctness. All we have done is build a system for establishing provenance. And establishing provenance is not enough. Instead, we need a system that incentivizes both correctness and a presentation that makes correct interpretation highly likely. It is a high bar. But it is the bar—correct and liable to be correctly interpreted.

To create incentives for publishing correct claims, we need to either 1. educate the population, which brings me to the previous post, or 2. find ways to build products and recommendations that incentivize correct claims. We likely need both.

The (Mis)Information Age: Measuring and Improving ‘Digital Literacy’

31 Aug

The information age has brought both bounty and pestilence. Today, we are deluged with both correct and incorrect information. If we knew how to tell apart correct claims from incorrect ones, we would have inched that much closer to utopia. But the lack of nous in telling apart generally ‘obvious’ incorrect claims from correct claims has brought us close to the precipice of disarray. Thus, improving people’s ability to identify untrustworthy claims as such takes on urgency.

Before we find fixes, it is good to measure how bad things are and what things are bad. This is the task the following paper sets itself by creating a ‘digital literacy’ scale. (Digital literacy is an overloaded term. It means many different things, from the ability to find useful information, e.g., information about schools or government programs, to the ability to protect yourself against harm online (see here and here for how frequently people’s accounts are breached and how often they put themselves at risk of malware or phishing), to the ability to identify incorrect claims as such, which is how the paper uses it.)

Rather than build a skill assessment kind of a scale, the paper measures (really predicts) skills indirectly using some other digital literacy scales, whose primary purpose is likely broader. The paper validates the importance of various constituent items using variable importance and model fit kinds of measures. There are a few dangers of doing that:

  1. Inference using surrogates is dangerous as the weakness of surrogates cannot be fully explored with one dataset. And they are liable not to generalize as underlying conditions change. We ideally want measures that directly measure the construct.
  2. Variable importance is not the same as important variables. For instance, it isn’t clear why “recognition of the term RSS,” the “highest-performing item by far” has much to do with skill in identifying untrustworthy claims.

Some other work builds uncalibrated measures of digital literacy (conceived as in the previous paper). As part of an effort to judge the efficacy of a particular way of educating people about how to judge untrustworthy claims, the paper provides measures of trust in claims. The topline is that educating people is not hard (see the appendix for the description of the treatment). A minor treatment (see below) is able to improve “discernment between mainstream and false news headlines.”

Understandably, the effects of this short treatment are ‘small.’ The ITT short-term effect in the US is: “a decrease of nearly 0.2 points on a 4-point scale.” Later in the manuscript, the authors provide the substantive magnitude of the .2 pt. net swing using a binary indicator of perceived headline accuracy: “The proportion of respondents rating a false headline as “very accurate” or “somewhat accurate” decreased from 32% in the control condition to 24% among respondents who were assigned to the media literacy intervention in wave 1, a decrease of 7 percentage points.” A .2 pt. net swing on a 4-point scale leading to a 7-percentage-point difference is quite remarkable and generally suggests that there is a lot of ‘reverse’ intra-category movement that the crude dichotomization elides. But even if we take the crude categories as the quantity of interest, a month later in the US, the 7-percentage-point swing is down to 4 points:

“…the intervention reduced the proportion of people endorsing false headlines as accurate from 33 to 29%, a 4-percentage-point effect. By contrast, the proportion of respondents who classified mainstream news as not very accurate or not at all accurate rather than somewhat or very accurate decreased only from 57 to 55% in wave 1 and 59 to 57% in wave 2.”

Guess et al. 2020

The opportunity to mount more ambitious treatments remains sizable. So does the opportunity to more precisely understand what aspects of the quality of evidence people find hard to discern. And how we could release products that make their job easier.

Another ANES Goof-em-up: VCF0731

30 Aug

By Rob Lytle

At this point, it’s well established that the ANES CDF’s codebook is not to be trusted (I’m repeating “not to be trusted” to include a second link!). Recently, I stumbled across another example of incorrect coding in the cumulative data file, this time in VCF0731 – Do you ever discuss politics with your family or friends?

The codebook reports 5 levels:

Do you ever discuss politics with your family or friends?

1. Yes
5. No

8. DK
9. NA

INAP. question not used

However, when we load the variable and examine the unique values:

# pulling the ANES CDF from a GitHub repository
cdf <- rio::import("https://github.com/RobLytle/intra-party-affect/raw/master/data/raw/cdf-raw-trim.rds")

# list the distinct values of VCF0731
unique(cdf$VCF0731)
## [1] NA  5  1  6  7

We see a completely different coding scheme. We are left adrift, wondering “What is 6? What is 7?” Do 1 and 5 really mean “yes” and “no”?

We may never know.
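A quick tabulation, continuing the snippet above, at least shows how many respondents fall in the undocumented categories:

# how common are the undocumented codes 6 and 7?
table(cdf$VCF0731, useNA = "ifany")
round(prop.table(table(cdf$VCF0731, useNA = "ifany")), 3)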

For a survey that costs several million dollars to conduct, you’d think we could expect a double-checked codebook (or at least some kind of version control to easily fix these things as they’re identified).

AFib: Apple Watch Did Not Increase Atrial Fibrillation Diagnoses

28 Aug

A new paper purportedly shows that the 2018 release of the Apple Watch that supported the ECG app did not cause an increase in AFib diagnoses (mean = −0.008).

They make the claim based on 60M visits to 1,270 practices across two years.

Here are some things to think about:

  1. Expected effect size. Say the base AF rate is .41%. Let’s say 10% of visits are by people with the ECG app + Apple Watch. (You have to make some assumptions about how quickly people downloaded the app. I am making a generous assumption that 10% do it the day of release.) For that 10%, say the rate is .51%. Additional diagnoses expected = (.51% − .41%) * 10% * 30M ≈ 3k. (See the sketch after this list.)
  2. Time trend. 2018-19 line is significantly higher (given the baseline) than 2016-2017. It is unlikely to be explained by the aging of the population. Is there a time trend? What explains it? More acutely, diff. in diff. doesn’t account for that.
  3. Choice of the time period. When you have observations over multiple time periods pre-treatment and post-treatment, the inference depends on which time period you use. For instance,  if I do an “ocular distortion test”, the diff. in diff. with observations from Aug./Sep. would suggest a large positive impact. For a more transparent account of assumptions, see diff.healthpolicydatascience.org (h/t Kyle Foreman).
  4. Clustering of s.e. There is some correlation in diagnoses within a facility (doctor), which is unaccounted for.
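Here is the back-of-the-envelope from point 1 as a few lines of R; all the inputs are the assumptions stated above:

# Back-of-the-envelope for point 1 (all inputs are assumptions from above).
visits_post <- 30e6     # roughly half of the 60M visits fall after release
uptake      <- 0.10     # share of visits by ECG-app + Watch owners
base_rate   <- 0.0041   # AF diagnosis rate without the app
app_rate    <- 0.0051   # assumed rate among app owners

extra <- visits_post * uptake * (app_rate - base_rate)
extra                   # ~ 3,000 additional diagnoses expected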

Survey Experiments With Truth: Learning From Survey Experiments

27 Aug

Tools define science. Not only do they determine how science is practiced but also what questions are asked. Take survey experiments, for example. Since the advent of online survey platforms, which made conducting survey experiments trivial, the lure of convenience and internal validity has persuaded legions of researchers to use survey experiments to understand the world.

Conventional survey experiments are modest tools. Paul Sniderman writes,

“These three limitations of survey experiments—modesty of treatment, modesty of scale, and modesty of measurement—need constantly to be borne in mind when brandishing term experiment as a prestige enhancer.”

Paul Sniderman

Note: We can collapse these three concerns into two— treatment (which includes ‘scale’ as Paul defines it— the amount of time) and measurement.

But skillful artisans have used this modest tool to great effect. Famously, Kahneman and Tversky used survey experiments, e.g., the Asian Disease Problem, to shed light on how people decide. More recently, Paul Sniderman and Tom Piazza have used survey experiments to shed light on an unsavory aspect of human decision making: discrimination. Aside from shedding light on human decision making, researchers have also used survey experiments to understand what survey measures mean, e.g., Ahler and Sood.

The good, however, has come with the bad; insight has often come with irreflection. In particular, Paul Sniderman implicitly points to two common mistakes that people make:

  1. Not Learning From the Control Group. The focus on differences in means means that we sometimes fail to reflect on what the data in the Control Group tells us about the world. Take the paper on partisan expressive responding, for instance. The topline from the paper is that expressive responding explains half of the partisan gap. But it misses the bigger story—the partisan differences in the Control Group are much smaller than what people expect, just about 6.5% (see here). (Here’s what I wrote in 2016.)
  2. Not Putting the Effect Size in Context. A focus on significance testing means that we sometimes fail to reflect on the modesty of effect sizes. For instance, providing people $1 for a correct answer within the context of an online survey interview is a large premium. And if providing a dollar each on 12 (included) questions nudges people from an average of 4.5 correct responses to 5, it suggests that people are resistant to learning or impressively confident that what they know is right. Leaving $7 on the table tells us more than the .5, around which the paper is written. 

    More broadly, researchers sometimes miss that what the results show is how impressively modest the movement is even when you ratchet up the dosage. For instance, if an overwhelming number of African Americans favor Whites who have scored just a few points more than a Black student, it is a telling testament to their endorsement of meritocracy.

Amartya Sen on Keynes, Robinson, Smith, and the Bengal Famine

17 Aug

Sen in conversation with Angus Deaton and Tim Besley (pdf and video).

Excerpts:

On Joan Robinson

“She took a position—which has actually become very popular in India now, not coming from the left these days, but from the right—that what you have to concentrate on is simply maximizing economic growth. Once you have grown and become rich, then you can do health care, education, and all this other stuff. Which I think is one of the more profound errors that you can make in development planning. Somehow Joan had a lot of sympathy for that position. In fact, she strongly criticized Sri Lanka for offering highly subsidized food to everyone on nutritional grounds. I remember the phrase she used: “Sri Lanka is trying to taste the fruit of the tree without growing it.”

Amartya Sen

On Keynes:

“On the unemployment issue I may well be, but if I compare an economist like Keynes, who never took a serious interest in inequality, in poverty, in the environment, with Pigou, who took an interest in all of them, I don’t think I would be able to say exactly what you are asking me to say.”

Amartya Sen

On the 1943 Bengal Famine, the last big famine in India in which ~ 3M people perished:

“Basically I had figured out on the basis of the little information I had (that indeed everyone had) that the problem was not that the British had the wrong data, but that their theory of famine was completely wrong. The government was claiming that there was so much food in Bengal that there couldn’t be a famine. Bengal, as a whole, did indeed have a lot of food—that’s true. But that’s supply; there’s also demand, which was going up and up rapidly, pushing prices sky-high. Those left behind in a boom economy—a boom generated by the war—lost out in the competition for buying food.”

“I learned also—which I knew as a child—that you could have a famine with a lot of food around. And how the country is governed made a difference. The British did not want rebellion in Calcutta. I believe no one of Calcutta died in the famine. People died in Calcutta, but they were not of Calcutta. They came from elsewhere, because what little charity there was came from Indian businessmen based in Calcutta. The starving people kept coming into Calcutta in search of free food, but there was really not much of that. The Calcutta people were entirely protected by the Raj to prevent discontent of established people during the war. Three million people in Calcutta had ration cards, which entailed that at least six million people were being fed at a very subsidized price of food. What the government did was to buy rice at whatever price necessary to purchase it in the rural areas, making the rural prices shoot up. The price of rationed food in Calcutta for established residents was very low and highly subsidized, though the market price in Calcutta—outside the rationing network—rose with the rural price increase.”

Amartya Sen

On Adam Smith

“He discussed why you have to think pragmatically about the different institutions to be combined together, paying close attention to how they respectively work. There’s a passage where he’s asking himself the question, Why do we strongly want a good political economy? Why is it important? One answer—not the only one—is that it will lead to high economic growth (this is my language, not Smith’s). I’m not quoting his words, but he talks about the importance of high growth, high rate of progress. But why is that important? He says it’s important for two distinct reasons. First, it gives the individual more income, which in turn helps people to do what they would value doing. Smith is talking here about people having more capability. He doesn’t use the word capability, but that’s what he is talking about here. More income helps you to choose the kind of life that you’d like to lead. Second, it gives the state (which he greatly valued as an institution when properly used) more revenue, allowing it to do those things which only the state can do well. As an example, he talks about the state being able to provide free school education.”

Amartya Sen

Nothing to See Here: Statistical Power and “Oversight”

13 Aug

“Thus, when we calculate the net degree of expressive responding by subtracting the acceptance effect from the rejection effect—essentially differencing off the baseline effect of the incentive from the reduction in rumor acceptance with payment—we find that the net expressive effect is negative 0.5%—the opposite sign of what we would expect if there was expressive responding. However, the substantive size of the estimate of the expressive effect is trivial. Moreover, the standard error on this estimate is 10.6, meaning the estimate of expressive responding is essentially zero.”

https://journals.uchicago.edu/doi/abs/10.1086/694258

(Note: This is not a full review of all the claims in the paper. There is more data in the paper than in the quote above. I am merely using the quote to clarify a couple of statistical points.)

There are two main points:

  1. The fact that the estimate is close to zero and the fact that the s.e. is super fat are technically unrelated. The last line of the quote, however, seems to draw a relationship between the two.
  2. The estimated effect sizes of expressive responding in the literature are much smaller than the s.e. Bullock et al. (Table 2) estimate the effect of expressive responding at about 4% and Prior et al. (Figure 1) at about ~ 5.5% (“Figure 1(a) shows, the model recovers the raw means from Table 1, indicating a drop in bias from 11.8 to 6.3.”). Thus, one reasonable inference is that the study is underpowered to reasonably detect expected effect sizes.
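A quick normal-approximation check of the power point: with a standard error of 10.6 percentage points, the power to detect effects in the 4 to 5.5 point range at the 5% level is tiny.

# Two-sided power under a normal approximation, alpha = .05.
power_normal <- function(effect, se, alpha = .05) {
  z <- qnorm(1 - alpha / 2)
  pnorm(effect / se - z) + pnorm(-effect / se - z)
}

round(power_normal(effect = c(4, 5.5), se = 10.6), 2)
# ~ .07 and .08 -- far below the conventional 80% target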

Casual Inference: Errors in Everyday Causal Inference

12 Aug

Why are things the way they are? What is the effect of something? Both of these reverse and forward causation questions are vital.

When I was at Stanford, I took a class with a pugnacious psychometrician, David Rogosa. David had two pet peeves, one of which was people making causal claims with observational data. And it is in David’s class that I learned the pejorative for such claims. With great relish, David referred to such claims as ‘casual inference.’ (Since then, I have come up with another pejorative phrase for such claims—cosal inference—as in merely dressing up as causal inference.)

It turns out that despite its limitations, casual inference is quite common. Here are some fashionable costumes:

  1. 7 Habits of Successful People: We have all seen business books with such titles. The underlying message of these books is: adopt these habits, and you will be successful too! Let’s follow the reasoning and see where it falls apart. One stereotype about successful people is that they wake up early. And the implication is that if you wake up early, you can be successful too. It *seems* right. It agrees with folk wisdom that discomfort causes success. But can we reliably draw inferences about what less successful people should do based on what successful people do? No. For one, we know nothing about the habits of less successful people. It could be that less successful people wake up *earlier* than the more successful people. Certainly, growing up in India, I recall daily laborers waking up much earlier than people living in bungalows. And when you think of it, the claim that servants wake up before masters seems uncontroversial. It may even be routine enough to be canonized as a law—the Downton Abbey law. The upshot is that when you select on the dependent variable, i.e., only look at cases where the variable takes certain values, e.g., only look at the habits of financially successful people, even correlation is not guaranteed. This means that you don’t even get to mock the claim with the jibe that “correlation is not causation.”

    Let’s go back to Goji’s delivery service for another example. One of the ‘tricks’ that we had discussed was to sample failures. If you do that, you are selecting on the dependent variable. And while it is a good heuristic, it can lead you astray. For instance, let’s say that most of the late deliveries are early morning deliveries. You may infer that delivering at another time may improve outcomes. Except, when you look at the data, you find that the bulk of your deliveries are in the morning. And the rate at which deliveries run late is *lower* in the early morning than during other times.

    There is a yet more famous example of things going awry when you select on the dependent variable. During World War II, statisticians were asked where armor should be added on planes. Of the aircraft that returned, the damage was concentrated in a few areas, like the wings. The top-of-head answer is to suggest we reinforce the areas hit most often. But if you think about the planes that didn’t return, you get to the right answer, which is that we need to reinforce the areas that weren’t hit. In the literature, people call this kind of error survivorship bias. But it is, at bottom, a problem of selecting on the dependent variable (whether or not a plane returned).

  2. More frequent system crashes cause people to renew their software license. It is a mistake to treat correlation as causation. There are many different reasons why doing so can lead you astray. The rarest reason is that lots of odd things are correlated in the world because of luck alone. The point is hilariously illustrated by a set of graphs showing a large correlation between conceptually unrelated things, e.g., there is a large correlation between total worldwide non-commercial space launches and the number of sociology doctorates awarded each year.

    A more common scenario is illustrated by the example in the title of this point. Commonly, there is a ‘lurking’ or ‘confounding’ variable that explains both sides. In our case, the more frequently a person uses the software, the more crashes they experience. And it makes sense that people who use the software most frequently also need it the most and renew the license most often.

    Another common but more subtle reason is called Simpson’s paradox. Sometimes the correlation you see is “wrong.” You may see a correlation in the aggregate, but the correlation runs the opposite way when you break it down by group. Gender bias in U.C. Berkeley admissions provides a famous example. In 1973, 44% of the men who applied to graduate programs were admitted, whereas only 35% of the women were. But when you split by department, which ultimately controlled admissions, women generally had a higher batting average than men. The reason for the reversal was that women applied more often to more competitive departments, like (wait for it) English, and men were more likely to apply to less competitive departments, like Engineering. None of this is to say that there isn’t bias against women. It is merely to point out that the pattern in aggregated data may not hold when you split the data into relevant chunks. (The reversal is easy to verify in R; see the sketch after this list.)

    It is also important to keep in mind the opposite of correlation is not causation—lack of correlation does not imply a lack of causation.

  3. Mayor Giuliani brought the NYC crime rate down. There are two potential errors here:
    • Forgetting about ecological trends. Crime rates in other big US cities went down at the same time as they did in NY, sometimes more steeply. When faced with a causal claim, it is good to check how ‘similar’ people fared. The Difference-in-Differences estimator builds on this intuition.
    • Treating the temporally proximate as causal. Say you had a headache, you took some medicine, and your headache went away. It could be the case that your headache went away by itself, as headaches often do.

  4. I took this homeopathic medication and my headache went away. For real ailments, placebo effects are a bit mysterious. And mysterious they may be, but they are real enough. Not accounting for placebo effects misleads us into ascribing the total effect to the medicine.

  5. Shallow causation. We give too much weight to immediate causes and too little to causes that are a few layers deeper.

  6.  Monocausation: In everyday conversations, it is common for people to speak as if x is the only cause of y.

  7.  Big Causation: Another common pitfall is reading x causes y as x causes y to change a lot. This is partly a consequence of mistaking statistical significance for substantive significance, and partly a consequence of not paying close enough attention to the numbers.

  8. Same Effect: Lastly, many people take causal claims to mean that the effect is the same across people. 
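On the Berkeley example in point 2: the admissions data ship with base R, so the aggregation reversal is easy to verify.

# UCBAdmissions ships with base R: Admit x Gender x Dept, Berkeley 1973.
# Aggregate: men appear to be admitted at a higher rate ...
round(prop.table(margin.table(UCBAdmissions, c(1, 2)), margin = 2), 2)

# ... but within most departments, women do as well or better.
round(prop.table(UCBAdmissions, margin = c(2, 3))["Admitted", , ], 2)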

Routine Maintenance: How to Build Habits

11 Aug

With Mark Paluta

Building a habit means trying to maximize the probability of doing something at some regular cadence.

max [P(do the thing)]

This is difficult because we have time-inconsistent preferences. When asked if we would prefer to run or watch TV next Wednesday afternoon, we are more likely to say run. Arrive Wednesday, and we are more likely to say TV.

Willpower is a weak tool for most of us, so we are better served thinking systematically about what conditions maximize the probability of doing the thing we plan to do. The probability of doing something can be modeled as a function of accountability, external motivation, friction, and awareness of other mental tricks:

P(do the thing) ~ f(accountability, external motivation, friction, other mental tricks)

Accountability: To hold ourselves accountable, at the minimum, we need to transparently record data. Without an auditable record of performance, we are liable to either turn a blind eye to failures or to rationalize them away. There are a couple of ways to amplify accountability pressures:

  • Social Pressure: We do not want to embarrass ourselves in front of people we know. This pressures us to do the right thing. So publicly record your commitments and how you follow up on them. Or make a social commitment. “Burn the boats” and tell all your friends you are training for a marathon.
  • Feel the Pain: Donate to an organization you dislike whenever you fail.
  • Enjoy the Rewards: The flip side of feeling the pain is making success sweeter. One way to do that is to give yourself a nice treat if you finish X days of Y.
  • Others Are Counting on You: If you have a workout partner, you are more likely to go because you want to come through for your friend (besides it is more enjoyable to do the activity with someone you like). 
  • Redundant Observation Systems: You can’t just rely on yourself to catch yourself cheating (or just failing). If you have a shared fitness worksheet, others will notice that you missed a day. They can text you a reminder. Automated systems like what we have on the phone are great as well.

Rely on Others: We can rely on our friends to motivate us. One way to capitalize on that is to create a group fitness spreadsheet and to encourage each other. For instance, if your friend did not fill in yesterday’s workout, you can text them a reminder or a motivational message.

Nudge: Reduce friction for doing the planned activity. For example, place your phone outside your bedroom before bed or sleep in your running clothes.

Other Mental Tricks: There are two other helpful mental models for building habits. One is momentum, and the other is error correction. 

Momentum: P(do the thing_t+1 | do the thing_t)

Error correction: P(do the thing_t+1 | !do the thing_t)

The best way to build momentum is to track streaks (famously used by Jerry Seinfeld). Not only do you get a reward every time you successfully complete the task, but the longer your streak, the less you want to break it.

Error correction on the other hand is turning a failure into motivation. Don’t miss two days in a row. Failure is part of the process, but do not let it compound. View the failure as step 0 of the next streak.

What Academics Can Learn From Industry

9 Aug

At its best, industry focuses people. It demands that people use everything at their disposal to solve an issue. It puts a premium on being lean, humble, agnostic, creative, and rigorous. Industry data scientists use qualitative methods, e.g., directly observe processes and people, do lean experimentation, build novel instrumentation, explore relationships between variables, and “dive deep” to learn about the problem. As a result, at any moment, they have a numerical account of the problem space, an idea about the blind spots, the next five places they want to dig, the next five ideas they want to test, and the next five things they want the company to build—things that they know work.

The social science research economy also focuses its participants. Except the focus is on producing broad, novel insights (which may or may not be true) and demonstrating intellectual heft, not on producing cost-effective solutions to urgent problems. The result is a surfeit of poor theories, a misunderstanding of how much those theories explain the issue at hand and how widely they apply, a poor understanding of core social problems, and very few working solutions.

The tide is slowly turning. Don Green, Jens Hainmueller, Abhijit Banerjee, and Esther Duflo, among others, form the avant-garde. Poor Economics by Banerjee and Duflo, in particular, comes the closest in spirit to how the industry works. It reminds me of how the best start-ups iterate to product-market fit.

Self-Diagnosis

Ask yourself the following questions:

  1. Do you have in your mind a small set of numbers that explain your current understanding of the scale of the problem and some of its solutions?
  2. If you were to get a large sum of money, could you give a principled account of how you would spend it on research?
  3. Do you know what you are excited to learn about the problem (or potential solutions) in the next three months, year, …?

If you are committed to solving a problem, the answer to all the questions would be an unhesitant yes. Why? A numerical understanding of the problem is needed to make judgments about where you need to invest your time and money. It also guides what you would do if you had more money. And a focus on the problem means you have broken down the problem into solved and unsolved portions and know which unsolved portions of the problem you want to solve next. 

How to Solve Problems

Here are some rules of thumb (inspired by Abhijit Banerjee and Esther Duflo):

  1. What Problems to Solve? Work on Important Problems. The world is full of urgent social problems. Pick one. Calling whatever you are working on important when it has only a vague, multi-hop relation to an important problem doesn’t make it so. This decision isn’t without trade-offs. It is reasonable to fear the consequences when we substitute endless breadth with some focus. But we have tried it that way, and it is probably as good a time as any to try something else.
  2. Learn About The Problem: Social scientists seem to have more elaborate theories and “original” experiments than descriptions of data. It is time to switch that around. Take, for instance, malnutrition. Before you propose selling cut-rate rice, take a moment to learn whether the key problem the poor face is that they can’t afford the necessary calories or that they don’t get enough calories because they prefer tastier, more expensive calories to a full quota of calories. (This is an example from Poor Economics.)
  3. Learn Theories in the Field: Judging by the output—books, and articles—the production of social science seems to be fueled mostly by the flash of insight. But there is only so much you can learn sitting in an armchair. Many key insights will go undiscovered if you don’t go to the field and closely listen and think. Abhijit Banerjee writes: “We then ran a similar experiment across several hundred villages where the goal was now to increase the number of immunized children. We found that gossips convince twice as many additional parents to vaccinate their children as random seeds or “trusted” people. They are about as effective as giving parents a small incentive (in the form of cell-phone minutes) for each immunized child and thus end up costing the government much less. Even though gossips proved incredibly successful at improving immunization rates, it is hard to imagine a policy of informing gossips emerging from conventional policy analysis. First, because the basic model of the decision to get one’s children immunized focuses on the costs and benefits to the family (Becker 1981) and is typically not integrated with models of social learning.”
  4. Solve Small Problems And Earn the Right to Say Big General Things: The mechanism for deriving big theories in academia is the opposite of that used in the industry. In much of social science, insights are declared and understood as “general,” and important contextual dependencies are discovered over the years with research. In the industry, a solution is first tested in a narrow area. And then another. And if it works, we scale. The underlying hunch is that coming up with successful applications teaches us more about theory than the current model: come up with theory first, and produce post hoc rationalizations and add nuances when faced with failed predictions and applications. Going yet further, you could think that the purpose of social science is to find ways to fix problems, that doing so leads to more progress on understanding the problems, and that theory is a positive externality.

Suggested Reading + Sites

  1. Poor Economics by Abhijit Banerjee and Esther Duflo
  2. The Economist as Plumber by Esther Duflo
  3. Immigration Lab that asks, among other questions, why immigrants who are eligible for citizenship do not get citizenship especially when there are so many economic benefits to it. 
  4. Get Out the Vote by Don Green and Alan Gerber
  5. Cronbach (1975) highlights the importance of observation and context. A couple of memorable quotes:

    “From Occam to Lloyd Morgan, the canon has referred to parsimony in theorizing, not in observing. The theorist performs a dramatist’s function; if a plot with a few characters will tell the story, it is more satisfying than one with a crowded stage. But the observer should be a journalist, not a dramatist. To suppress a variation that might not recur is bad observing.”

    “Social scientists generally, and psychologists, in particular, have modeled their work on physical science, aspiring to amass empirical generalizations, to restructure them into more general laws, and to weld scattered laws into coherent theory. That lofty aspiration is far from realization. A nomothetic theory would ideally tell us the necessary and sufficient conditions for a particular result. Supplied the situational parameters A, B, and C, a theory would forecast outcome Y with a modest margin of error. But parameters D, E, F, and so on, also influence results, and hence a prediction from A, B, and C alone cannot be strong when D, E, and F vary freely.”

    “Though enduring systematic theories about man in society are not likely to be achieved, systematic inquiry can realistically hope to make two contributions. One reasonable aspiration is to assess local events accurately, to improve short-run control (Glass, 1972). The other reasonable aspiration is to develop explanatory concepts, concepts that will help people use their heads.”

Unsighted: Why Some Important Findings Remain Uncited

1 Aug

Poring over the first 500 of the over 900 citations for Fear and Loathing across Party Lines on Google Scholar (7/31/2020), I could not find a single study citing the paper for racial discrimination. You may think the reason is obvious—the paper is about partisan prejudice, not racial prejudice. But a more accurate description is that the paper is best known for describing partisan prejudice yet has powerful evidence on the lack of racial discrimination among white Americans–in fact, there is reasonable evidence of positive discrimination in one study. (I exclude the IAT results, which are weaker than Banaji’s results, showing a Cohen’s d of ~ .22, because they don’t speak directly to discrimination.)

There are two independent pieces of evidence in the paper about racial discrimination.

Candidate Selection Experiment

“Unlike partisanship where ingroup preferences dominate selection, only African Americans showed a consistent preference for the ingroup candidate. Asked to choose between two equally qualified candidates, the probability of an African American selecting an ingroup winner was .78 (95% confidence interval [.66, .87]), which was no different than their support for the more qualified ingroup candidate—.76 (95% confidence interval [.59, .87]). Compared to these conditions, the probability of African Americans selecting an outgroup winner was at its highest—.45—when the European American was most qualified (95% confidence interval [.26, .66]). The probability of a European American selecting an ingroup winner was only .42 (95% confidence interval [.34, .50]), and further decreased to .29 (95% confidence interval [.20, .40]) when the ingroup candidate was less qualified. The only condition in which a majority of European Americans selected their ingroup candidate was when the candidate was more qualified, with a probability of ingroup selection at .64 (95% confidence interval [.53, .74]).”

Evidence from Dictator and Trust Games

“From Figure 8, it is clear that in comparison with party, the effects of racial similarity proved negligible and not significant—coethnics were treated more generously (by eight cents, 95% confidence interval [–.11, .27]) in the dictator game, but incurred a loss (seven cents, 95% confidence interval [–.34, .20]) in the trust game. There was no interaction between partisan and racial similarity; playing with both a copartisan and coethnic did not elicit additional trust over and above the effects of copartisanship.”

There are two plausible explanations for the lack of citations. Both are easily ruled out. The first is that the quality of evidence for racial discrimination is worse than that for partisan discrimination. Given both claims use the same data and research design, that explanation doesn’t work. The second is that it is a difference in base rates of production of research on racial and partisan discrimination. A quick Google search debunks that theory. Between 2015 and 2020, I get 135k results for racial discrimination and 17k for partisan polarization. It isn’t exact but good enough to rule it out as a possibility for the results I see. This likely leaves us with just two explanations: a) researchers hesitate to cite results that run counter to their priors or their results, b) people are simply unaware of these results.