Causality and Generalization in Qualitative and Quantitative Methods

19 Nov

Science deals with the fundamental epistemological question of how we can claim to know something. Any claim of epistemic superiority that the scientific method makes over competing ways of gleaning knowledge from data rests on the quality of that system, which remains forever open to challenge.

The extent to which claims are arbitrated solely on scientific merit is limited by a variety of factors, as outlined by Lakatos, Kuhn, and Feyerabend, resulting in, at best, an inefficient process and, at worst, something far more pernicious. I set such issues aside and focus narrowly on methodological questions around causality and generalizability in qualitative methods.

In science, the inquiry into generalizable causal processes is greatly privileged, and for good reason: causality and generalizability can provide the basis for intervention. However, not all kinds of data lend themselves readily to imputing causality, or even to making generalizable descriptive statements. For example, causal inference in most historical research remains out of bounds. Keeping this in mind, I analyze how qualitative methods within the Social Sciences (can) interrogate causality and generalizability.

Causality

Hume thought that there was no place for causality within empiricism. He argued that the most we can find is that “the one [event] does actually, in fact, follow the other.” Causality, for Hume, is nothing but an illusion occasioned when events follow each other with regularity. That formulation, however, didn’t prevent him from believing in scientific theories: he felt that regularly occurring constant conjunctions were a sufficient basis for scientific laws. Theoretical advances in the 200 or so years since Hume have provided a deeper understanding of causality, including a process-based understanding and an experimental understanding.

Donald Rubin defines causal effect as follows: “Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from t1 to t2 is the difference between what would have happened at time t2 if the unit had been exposed to E initiated at t1 and what would have happened at t2 if the unit had been exposed to C initiated at t1: ‘If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,’ or ‘Because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.’ Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning.”

Note that the Rubin Causal Model (RCM), as presented above, depicts an elementary causal connection between two Boolean variables: a binary treatment (two aspirins or not) and a single binary effect (headache gone or not). Often, the variables take multiple values, and estimating the effect of each change requires a separate experiment. Likewise, to estimate the effect of a treatment to a particular degree of precision in different subgroups, for example, the effect of aspirin on women and on men, the sample size of each group needs to be increased.
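To make the counterfactual logic concrete, here is a minimal potential-outcomes simulation (a sketch of my own, with invented effect sizes, not anything from Rubin): every unit has two potential outcomes, only one of which is ever observed, and random assignment lets the difference in group means estimate the average causal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical potential outcomes per unit:
# y0 = headache intensity at t2 under C (glass of water),
# y1 = headache intensity at t2 under E (two aspirins).
y0 = rng.normal(loc=5.0, scale=1.0, size=n)
y1 = y0 - 2.0 + rng.normal(scale=0.5, size=n)   # aspirin lowers intensity by ~2

# The fundamental problem of causal inference: only one outcome is observed.
treated = rng.integers(0, 2, size=n).astype(bool)   # random assignment
y_obs = np.where(treated, y1, y0)

# Randomization makes the difference in means an unbiased estimate of the ATE.
ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()
print(f"true ATE: {(y1 - y0).mean():.2f}, estimated ATE: {ate_hat:.2f}")
```

Splitting the sample to estimate subgroup effects, for women and men separately, leaves fewer observations per estimate, which is why the sample size of each subgroup must grow to hold precision constant.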

RCM formulation can be expanded to include a probabilistic understanding of causation. A probabilistic understanding of causality means accepting that certain parts of the explanation are still missing; hence, a necessary and sufficient condition is absent, though attempts have been made to include necessary and sufficient clauses in probabilistic statements. David Papineau (Probabilities and Causes, Journal of Philosophy, 1985) writes, “Factor A is a cause of some B just in case it is one of a set of conditions that are jointly and minimally sufficient for B. In such a case we can write A&X → B. In general, there will also be other sets of conditions minimally sufficient for B. Suppose we write their disjunction as Y. If now we suppose further that B is always determined when it occurs, that it never occurs unless one of these sufficient sets (let’s call them B’s full causes) occurs first, then we have (A&X) ∨ Y ↔ B. Given this equivalence, it is not difficult to see why A’s causing B should be related to A’s being correlated with B. If A is indeed a cause of B, then there is a natural inference to Prob(B/A) > Prob(B/-A): for, given A, one will have B if either X or Y occurs, whereas without A one will get B only with Y. And, conversely, it seems that if we do find that Prob(B/A) > Prob(B/-A), then we can conclude that A is a cause of B: for if A didn’t appear in the disjunction of full causes which are necessary and sufficient for B, then it wouldn’t affect the chance of B occurring.”
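Stripped of prose, Papineau’s argument compresses to the following (my notation, not his):

```latex
% Needs amsmath. A, together with background conditions X, is minimally
% sufficient for B; Y is the disjunction of all other minimally sufficient
% sets (B's "full causes").
\begin{align*}
  (A \land X) \lor Y \;&\leftrightarrow\; B \\
  \Rightarrow \quad \Pr(B \mid A) \;&>\; \Pr(B \mid \neg A)
\end{align*}
```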

Papineau’s definition is a bit archaic and doesn’t entirely cover the set of cases we define as probabilistically causal. John Gerring (Social Science Methodology: A Criterial Framework, 2001: 127, 138; emphasis in original) provides a definition of probabilistic causality: “[c]auses are factors that raise the (prior) probabilities of an event occurring. [Hence] a sensible and minimal definition: X may be considered a cause of Y if (and only if) it raises the probability of Y occurring.”

A still more sensible yet minimal definition of causality can be found in Gary King et al. (Designing Social Inquiry: Scientific Inference in Qualitative Research, 1994: 81–82): “the causal effect is the difference between the systematic component of observations made when the explanatory variable takes one value and the systematic component of comparable observations when the explanatory variable takes on another value.”
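In notation (mine, not the authors’), the two minimal definitions amount to:

```latex
% Needs amssymb for \mathbb.
% Gerring: X is a cause of Y iff it raises the probability of Y occurring.
\[ \Pr(Y \mid X) \;>\; \Pr(Y \mid \neg X) \]

% King et al.: the causal effect is the difference in the systematic
% component E[Y | X = x] across two values of the explanatory variable.
\[ \beta \;=\; \mathbb{E}[Y \mid X = x_1] \;-\; \mathbb{E}[Y \mid X = x_0] \]
```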

Causal Inference in Qualitative and Quantitative Methods

While the above formulations of causality—the Rubin Causal Model, Gerring, and King—seem more quantitative, they can be applied to qualitative methods. We discuss how below.

A parallel understanding of causality, used much more frequently in qualitative social science, is a process-based one, wherein you trace the causal process to construct a theory. Put simplistically, in quantitative methods in the Social Sciences one often deduces the causal process, while in qualitative methods the understanding of the causal process is induced from deep and close interaction with the data.

Both deduction and induction, however, are rife with problems. Deduction privileges formal rules (statistics) that straitjacket the deductive process so that the deductions are systematic, conditional on the veracity of assumptions like the normal distribution of the data, the linearity of the effect, the lack of measurement error, etc. The formal deductive process bestows a host of appealing qualities, like generalizability when an adequate random sample of the population is taken, and even a systematic handle on causal inference. In quantitative methods, the methodological assumptions for deduction are cleanly separated from the data. That same separation between the data and a rather arbitrarily chosen statistical model, however, makes the discovery process less than optimal and sometimes deeply problematic. Recent research on matching (Ho, Imai, King, and Stuart, Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference, Political Analysis, 2007) and methods like Bayesian Model Averaging (Hoeting, Madigan, Raftery, and Volinsky 1999) have gone some way toward mitigating problems with model selection.
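The intuition behind matching as preprocessing: prune the data until treated and control units have similar covariates before fitting any model, so that the estimate depends less on an arbitrarily chosen model. A toy nearest-neighbor sketch of my own (not the Ho, Imai, King, and Stuart implementation; their MatchIt software does this properly):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated observational data: covariate x drives both treatment and outcome.
x = rng.normal(size=n)
treated = rng.random(n) < 1 / (1 + np.exp(-2 * x))   # treatment likelier at high x
y = 1.0 * treated + 2.0 * x + rng.normal(size=n)     # true treatment effect = 1.0

# Naive difference in means is confounded by x.
naive = y[treated].mean() - y[~treated].mean()

# Preprocessing: match each treated unit to the nearest control on x.
x_t, y_t = x[treated], y[treated]
x_c, y_c = x[~treated], y[~treated]
nearest = np.abs(x_t[:, None] - x_c[None, :]).argmin(axis=1)
matched = (y_t - y_c[nearest]).mean()

print(f"naive: {naive:.2f}, matched: {matched:.2f} (truth: 1.00)")
```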

No such sharp delineation between method and data exists in qualitative research, where data is collected iteratively—in studies using iterative abstraction (Sayer 1981, 1992; Lawson 1989, 1995) or grounded theory (Glaser 1978; Strauss 1987; Strauss and Corbin 1990)—until it explains the phenomenon singled out for explanation. Grounded, data-driven qualitative methods often run the risk of modeling particularistic aspects of the data, which reduces the reliability with which they can arrive at a generalizable causal model. This is only one kind of qualitative research, however. Others do qualitative analysis in the vein of experiments, for example with a 2×2 design, and yet others test a priori assumptions by analytically controlling for variables in a verbal regression equation to get at the systematic effect of the explanatory variable on the explanandum. Perhaps more than the grounded theory method, this pseudo-quantitative style of qualitative analysis runs the risk of reaching deeply problematic conclusions based on the cases used.

As King et al. (1994: 75, note 1) put it: “[a]t its core, real explanation is always based on causal inferences.”

Limiting Discussion to Positivist Qualitative Methods

Qualitative methods can be roughly divided into positivist methods, e.g., case studies, and interpretive methods. I will limit my comments to positivist qualitative methods.

The differences between positivist qualitative and quantitative methods “are only stylistic and are methodologically and substantively unimportant” (King et al. 1994: 4). Both methods share “an epistemological logic of inference: they all agree on the importance of testing theories empirically, generating an inclusive list of alternative explanations and their observable implications, and specifying what evidence might infirm or affirm a theory” (King et al. 1994: 3).

Empirical Causal Inference

To impute causality, we either need evidence about the process or an experiment that obviates the need to know the process, though researchers are often encouraged to have a story to explain the process and test variables implicated in the story.

Experimentation provides one of the best ways to reliably impute causality. However, for experiments to have value outside the lab, the treatment must be ecologically valid: it should reflect the typical values that the variables take in the world. For instance, the effect of televised news is best measured with real-life news clips shown in a realistic setting where the participant has control of the remote. Ideally, we also want to elicit our measures in a realistic way, in the form of votes, campaign contributions, or expressions online. The problem is that many questions in social science cannot be studied experimentally. Brady et al. (2001: 8) write, “A central reason why both qualitative and quantitative research are hard to do well is that any study based on observational (i.e., non-experimental) data faces the fundamental inferential challenge of eliminating rival explanations.” I would phrase this differently. It doesn’t make social science hard to do; it just means that we have to be at peace with the fact that we cannot know certain things. Science is an exercise in humility, not denial.

Learning from Quantitative Methods

  1. Making Assumptions Clear: Quantitative methods make a variety of assumptions in order to draw inferences. For instance, empiricists often invoke ceteris paribus—all other things being equal, which in an experiment may mean assigning everything ‘else’ away to randomization. Others assume that the error term is uncorrelated with the independent variables in order to infer that the correlation between an explanatory variable x and a dependent variable y can only be explained as x’s effect on y. Regression models come with a variety of such assumptions, each with a known penalty for its violation. Analytical claims work the same way: we can, for example, analytically think through how education will affect (or not affect) racist attitudes. Analytical claims are based on deductive logic and a priori assumptions or knowledge; hence their success is contingent upon the accuracy of that knowledge and the correctness of the logic.
  2. Controlling for Things: Quantitative methods often ‘control’ for variables as a way to eliminate rival explanations. If gender is a ‘confounder,’ check whether the relationship holds within men and within women separately. In qualitative methods, one can either analytically (or, where possible, empirically) control for variables, or trace the process. (The first sketch after this list shows what confounding does to a naive estimate and what controlling recovers.)
  3. Sampling: Traditional probability sampling theories are built on the highly conservative assumption that we know nothing about the world, and that the only systematic way to come to know it is through random sampling, a process that delivers ‘representative’ data on average. Newer sampling theories, however, acknowledge that we do know some things about the world and exploit that knowledge, selectively over-sampling the things (or people) we are truly clueless about and under-sampling where we have a good idea. For example, polling organizations under-sample self-described partisans and over-sample non-partisans (see the second sketch after this list). This provides a window for positivist qualitative methods to make generalizable claims: qualitative methods can overcome their limitations and make legitimate generalizable claims if their sampling reflects the extent of prior knowledge about the world.
  4. Moderators: Get a handle on the variables that ‘moderate’ the effect of the particular variable we are interested in studying.
  5. Sample: One oft-noted problem in qualitative research is the habit of selecting on the dependent variable. Selecting on the dependent variable leaves out cases where, for example, the dependent variable doesn’t take extreme values. Selection bias can lead to misleading conclusions not only about causal effects but also about causal processes; hence it is essential not to do one’s analysis on a truncated dependent variable (the third sketch after this list shows the attenuation that truncation produces). One way to systematically drill down to causal processes in qualitative research is to start with the broadest palette, in prior research or elsewhere, to grasp the macro-processes and other variables that may affect the case, and then, cognizant of the particularistic aspects of the case, analyze the microfoundations or micro-processes present in the system.
  6. Reproducibility: One central tenet of the empirical method is that there is no privileged observer. What you observe, another should be able to reproduce. So whatever data you collect, and however you draw your inferences, all of it needs to be clearly stated and explained.
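To make items 1 and 2 concrete, here is a toy simulation of my own (the numbers are invented): a binary confounder, standing in for gender, shifts both x and y, so the pooled estimate of x’s effect is biased; stratifying on the confounder recovers the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Toy data: a binary confounder (say, gender) shifts both x and y,
# so the error term of a naive y-on-x regression is correlated with x.
g = rng.integers(0, 2, size=n)
x = rng.normal(size=n) + 2.0 * g
y = 0.5 * x + 3.0 * g + rng.normal(size=n)   # true effect of x on y = 0.5

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

pooled = slope(x, y)                                            # biased by g
within = np.mean([slope(x[g == k], y[g == k]) for k in (0, 1)])
print(f"pooled: {pooled:.2f}, within-group: {within:.2f} (truth: 0.50)")
```

Stratifying is the simplest form of ‘controlling’; a regression with the confounder as a covariate accomplishes the same thing parametrically.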
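Item 3, in the same toy spirit (the population shares and group means are invented): over-sample the volatile group, under-sample the predictable one, and weight each group back to its population share.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented population: 70% partisans (predictable), 30% non-partisans (volatile).
share = {"partisan": 0.7, "nonpartisan": 0.3}   # population shares
mean  = {"partisan": 0.9, "nonpartisan": 0.5}   # true group means of some measure
sd    = {"partisan": 0.1, "nonpartisan": 0.4}   # non-partisans vary far more

# Design: spend most interviews on the group we know least about.
n_sampled = {"partisan": 200, "nonpartisan": 800}

estimate = 0.0
for grp, n in n_sampled.items():
    draws = rng.normal(mean[grp], sd[grp], size=n)
    estimate += share[grp] * draws.mean()        # weight back to population share
print(f"weighted estimate: {estimate:.3f} (truth: {0.7 * 0.9 + 0.3 * 0.5:.3f})")
```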
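And item 5, again as a toy of my own construction: truncate the sample on the dependent variable and watch the estimated relationship attenuate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)   # true slope = 1.0

full = np.polyfit(x, y, 1)[0]

# Select only cases where the outcome is extreme (e.g., study only revolutions).
keep = y > 1.5
truncated = np.polyfit(x[keep], y[keep], 1)[0]

print(f"full sample: {full:.2f}, selected on y: {truncated:.2f}")
```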