Flawed Analyses in Deliberative Polls

A Deliberative Poll works as follows: a random sample of people is surveyed. Out of the initial sample, a random subset is invited to deliberate; those who come are given balanced briefing materials, randomly assigned to small groups led by trained moderators, given the opportunity to quiz experts, and, at the end, surveyed again.

Reports and papers on Deliberative Polls often carry comparisons of participants to non-participants on a host of attitudinal and demographic variables (e.g., see here and here). The analysis purports to answer whether the people who came to the Deliberative Poll were different from those who didn't, and to compare the participant sample to the population. This sounds about right, except that the comparison is made between participants and a pool of two likely distinct sub-populations: people who were never invited (probably a representative, random set) and people who were invited but never came. Since the never-invited group likely resembles the full sample, pooling it with the decliners dilutes any difference between participants and those who declined. Under plausible assumptions, then, such pooling biases against finding a difference.

The key thing we want to measure is self-selection bias: was there a difference between the people who accepted the invitation and those who did not? The right way to estimate the bias would be as follows:

Participated (1/0) ~ gender + education + income + party ID + age + knowledge + attitude extremity

Effect sizes can be reported to summarize the extent of the bias. This kind of analysis can account for the fact that bias may show up not at the first marginals (e.g., gender) but at the second marginals (e.g., less educated women). (All of this can be theory-driven or more descriptive in purpose.) The analysis also allows smaller effects to be detected, as variance within cells is reduced.
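
A minimal sketch of such a model, in Python with statsmodels; the data and every column name (participated, female, education, and so on) are hypothetical stand-ins, not anything from an actual Deliberative Poll:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the invitee data; all column names are hypothetical.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "female": rng.binomial(1, 0.5, n),
    "education": rng.integers(8, 21, n),  # years of schooling
    "income": rng.normal(50, 15, n),      # thousands of dollars
    "age": rng.integers(18, 80, n),
    "party_id": rng.integers(1, 8, n),    # 7-point party identification scale
    "knowledge": rng.uniform(0, 1, n),    # share of knowledge items correct
    "extremity": rng.uniform(0, 3, n),    # distance from attitude-scale midpoint
})
# Synthetic participation decision: more knowledgeable invitees show up more.
logit_p = -1.5 + 2.0 * df["knowledge"]
df["participated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of showing up on socio-demographics, knowledge, and
# attitude extremity. The education:female interaction picks up bias at the
# second marginals (e.g., less educated women).
model = smf.logit(
    "participated ~ female + education + income + party_id + age"
    " + knowledge + extremity + education:female",
    data=df,
).fit()
print(model.summary())

# Odds ratios as effect sizes summarizing the extent of the bias.
print(np.exp(model.params).round(2))
```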

p-values
When the conservative thing to do is to reject the null hypothesis, think a little less about p-values.

Assuming the initial survey approximates a 'representative' sample of the entire population, and assuming we want to infer how 'representative' the participants are of that population, it makes sense to just report mean differences without p-values.

The survey sample estimates stand in for the entire population. Census numbers for the entire population come with zero or very low standard errors, so comparisons against them are almost always significant.

By comparing to an uncertain estimate of the population, one cannot say whether the participant sample was representative of the entire population. The estimate is unbiased but suffers from the following problem: the more uncertain the population estimate, the less likely one is to reject the null, and hence the more likely one is to conclude that the participant sample is representative. One way to deal with this is to take the 95% confidence band of the sample estimate of the population, calculate the maximum and minimum differences between the participant sample and that band, and report those, as in the sketch below.
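
A minimal sketch of that calculation, assuming a binary variable (say, a college-education indicator); the samples and numbers are synthetic placeholders:

```python
import numpy as np

# Synthetic stand-ins: a 1/0 indicator in the initial survey sample
# (our uncertain estimate of the population) and among the participants.
rng = np.random.default_rng(1)
pop_sample = rng.binomial(1, 0.30, size=800)
participants = rng.binomial(1, 0.38, size=150)

# 95% confidence band for the population share from the initial survey.
p_hat = pop_sample.mean()
se = np.sqrt(p_hat * (1 - p_hat) / len(pop_sample))
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

# Max and min differences between the participant mean and the band.
part_mean = participants.mean()
max_diff = max(abs(part_mean - lo), abs(part_mean - hi))
min_diff = 0.0 if lo <= part_mean <= hi else min(abs(part_mean - lo),
                                                 abs(part_mean - hi))

print(f"population share: {p_hat:.3f} (95% band: [{lo:.3f}, {hi:.3f}])")
print(f"participant share: {part_mean:.3f}")
print(f"difference ranges from {min_diff:.3f} to {max_diff:.3f}")
```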

Name calling
Calling the analysis a 'representativeness' analysis seems misleading on two counts:

  1. While a clear representation question can be answered by some analysis, no such question is answered, or can be answered, by the analysis presented. Moreover, it isn't clear that it relates to some larger, politically meaningful variable. For example, one question that can be posed is whether the participant sample resembles the population at large. To answer such a question, one would want to compare participant sample estimates to census estimates (which have near-zero variance, so t-tests, etc., would be pointless).
  2. In a series of papers in the 1970s, Kruskal and Mosteller (citations at the end) rightly excoriate the use of 'representativeness', which is fuzzy and open to abuse.

Citations
Kruskal, W. and Mosteller, F. 1979. Representative sampling, I: Non-scientific literature. International Statistical Review 47: 13–24.
Kruskal, W. and Mosteller, F. 1979. Representative sampling, II: Scientific literature. International Statistical Review 47: 111–127.
Kruskal, W. and Mosteller, F. 1979. Representative sampling, III: The current statistical literature. International Statistical Review 47: 245–265.