The difference between partisans’ responses on retrospection items is highly variable, ranging from over 40 points to nearly 0. For instance, in 1988 nearly 30% fewer Democrats than Republicans reported that the inflation rate between 1980 and 1988 had declined. (It had.) However, similar proportions of Republicans and Democrats got questions about changes in the size of the budget deficit and defense spending between 1980 and 1988 right. The mean partisan gap across 20 items asked in the NES over 5 years (1988, 1992, 2000, 2004, and 2008) was about 15 points (the median was about 12 points), and the standard deviation was about 13 points. (See the tables.) This much variation suggests that observed bias in partisans’ perceptions depends on a variety of conditioning variables. For one, there is some evidence to suggest that during severe recessions, partisans do not differ much in their assessments of economic conditions (see here). Even when there are partisan gaps, however, they may not be real (see paper (pdf)).
55,000/(365*4) ≈ 37.7 emails a day. That seems a touch low for a Secretary of State.
1. Clinton may have used more than one private server
2. Clinton may have sent emails from other servers to unofficial accounts of other state department employees
Lower bound for missing emails from Clinton:
- Take a small weighted random sample (weighting seniority more) of top state department employees.
- Go through their email accounts on the state dep. server and count # of emails from Clinton to their state dep. addresses.
- Compare it to # of emails to these employees from the Clinton cache.
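The estimation scheme above can be sketched as follows. This is a minimal sketch under stated assumptions: the employee names, seniority weights, and email counts are all hypothetical, and the shortfall between server counts and cache counts is treated as a per-employee lower bound on missing emails.

```python
import random

# Hypothetical data: for each senior State Dept. employee, emails received
# from Clinton at their official address, and emails to them found in the
# Clinton cache. Seniority is an integer sampling weight.
employees = {
    "emp_a": {"seniority": 3, "on_server": 120, "in_cache": 95},
    "emp_b": {"seniority": 2, "on_server": 60,  "in_cache": 60},
    "emp_c": {"seniority": 1, "on_server": 15,  "in_cache": 10},
    "emp_d": {"seniority": 3, "on_server": 200, "in_cache": 150},
}

def missing_email_lower_bound(employees, k, seed=0):
    """Draw a seniority-weighted random sample of k employees (with
    replacement), then, for each distinct sampled employee, count emails
    present on the State Dept. server but absent from the Clinton cache.
    The summed shortfall is a lower bound on missing emails."""
    rng = random.Random(seed)
    names = list(employees)
    weights = [employees[n]["seniority"] for n in names]
    sample = rng.choices(names, weights=weights, k=k)
    return sum(max(employees[n]["on_server"] - employees[n]["in_cache"], 0)
               for n in set(sample))
```

The bound is conservative: emails Clinton sent to addresses outside the sampled accounts, or to employees never sampled, are not counted.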
To propose amendments, go to the GitHub gist.
According to YouGov surveys in Switzerland, the Netherlands, and Canada, and the 2008 ANES in the US, Whites in each of the four countries feel, on average, fairly cold — giving an average thermometer rating of less than 50 on a 0 to 100 scale — toward Muslims and people from Muslim-majority regions (Feelings towards different ethnic, racial, and religious groups). However, in Europe, Whites’ feelings toward Romanians, Poles, and Serbs and Kosovars are scarcely any warmer, and sometimes cooler. Meanwhile, Whites feel relatively warmly toward East Asians.
Note that the x-axis covers a fair bit of time. So even if you imagine a lag of a month or two between when the scandal breaks and when public opinion changes, the interpretation doesn’t change.
The median Democrat referred to in television news is to the left of the House Democratic median, and the median Republican politician referred to is to the left of the House Republican median.
Click here for the aggregate distribution.
And here’s a plot of the top 50 politicians cited in the news. The plot shows a strongly right-skewed distribution with a bias toward executives.
News data: UCLA Television News Archive, which includes closed-caption transcripts of all national, cable, and local (Los Angeles) news from 2006 to early 2013. In all, there are 155,814 transcripts of news shows.
Politician data: Database on Ideology, Money in Politics, and Elections (see Bonica 2012).
Taking out data from local news channels or removing Obama does little to change the pattern in the aggregate distribution.
Deliberation often causes people to change their attitudes. One reason may be that deliberation also causes people’s values to change. Thus, one mediational model is as follows: change in values causes change in attitudes. However, deliberation can also cause people to connect their attitudes more closely to their values without changing those values. This means that, ceteris paribus, the same values yield different attitudes post-treatment. For identification, one may want to devise a treatment that changes respondents’ ability to connect values to attitudes but has no impact on the values themselves. Or a researcher may try to gain insight by measuring people’s ability to connect values to attitudes in current experiments and estimating mediational models. One can also simply try simulation (again subject to the usual concerns): fit the post-treatment model (regressing attitudes on values), plug in pre-deliberation values, and simulate attitudes.
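The simulation strategy can be sketched with simulated data. Everything here is hypothetical — the data-generating process, coefficients, and noise levels are assumptions chosen only to illustrate the mechanics: fit the post-treatment attitude-on-values regression, then apply it to pre-deliberation values, isolating the value-to-attitude "connection" channel from the value-change channel.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical data: values before and after deliberation.
values_pre = rng.normal(0, 1, n)
values_post = values_pre + rng.normal(0.2, 0.3, n)  # values shift a little

# Post-treatment, attitudes track values tightly (a better "connection").
attitudes_post = 0.8 * values_post + rng.normal(0, 0.5, n)

# Post-treatment model: regress attitudes on values.
slope, intercept = np.polyfit(values_post, attitudes_post, 1)

# Simulate attitudes by applying the post-treatment model to
# PRE-deliberation values: any gap between these simulated attitudes and
# observed post-treatment attitudes reflects value change, not connection.
attitudes_sim = intercept + slope * values_pre
```

Comparing `attitudes_sim` to the observed post-treatment attitudes then gives a rough decomposition of the two channels, subject to the usual mediation-analysis caveats the post notes.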
Some recent research suggests that Americans’ policy preferences are highly constrained, with a single dimension able to correctly predict over 80% of the responses (see Jessee 2009, Tausanovitch and Warshaw 2013). Not only that, adding a new (orthogonal) dimension doesn’t improve prediction success by more than a couple of percentage points.
All this flies in the face of conventional wisdom in American Politics, which is roughly antipodal to the new view: most people’s policy preferences are unstructured. In fact, many people don’t have any real preferences on many of the issues (“non-preferences”). The evidence most often cited in support of this view comes from Converse: weak correlations between preferences across measurement waves spanning two years (r ~ .4 to .5), and even lower within-wave cross-issue correlations (r ~ .2).
What explains this double disagreement — over the authenticity of preferences, and over the structuration of preferences?
First, the authenticity of preferences. When reports of preferences change across waves, is that a consequence of attitude change, non-preferences, or measurement error? In response to concerns that long periods between test and retest allowed opinions to genuinely change, researchers tried shorter time periods. Correlations were notably stronger (r ~ .6 to .9; see Brown 1970). But the sheen of these healthy correlations was dulled by concerns that stability was merely an artifact of people remembering and reproducing what they put down last time.
Redemption of correlations over longer time periods came from Achen (1975). While few of the assumptions behind the redemption are correct – notably uncorrelated errors (across individuals, waves, etc.) – for the inferences to be seriously wrong, much has to go wrong. More recently, and, subject to validation, perhaps more convincingly, work by Dean Lacy suggests that once you take out the small number of implausible transitions between waves – those from one end of the scale to the other – cross-wave correlations are fairly healthy. (This is exactly the opposite of the conclusion Converse came to based on a Markov model; he argued that aside from a few consistent responses, the rest of the responses were mostly noise.) Much simpler but informative tests are still missing. For instance, it seems implausible that lots of people who hold well-defined preferences on an issue would struggle to pick even the right side of the scale when surveyed. Tallying the stability of dichotomized preferences would be useful.
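The suggested tally is simple to implement. A minimal sketch on hypothetical data: two waves of 7-point scale placements, dichotomized at the midpoint, with midpoint responses dropped before computing the share of respondents who stay on the same side.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical two-wave panel: 7-point issue scales (1-7, midpoint 4),
# with modest churn (responses drift by up to 2 points) between waves.
wave1 = rng.integers(1, 8, n)
wave2 = np.clip(wave1 + rng.integers(-2, 3, n), 1, 7)

def same_side_share(w1, w2, midpoint=4):
    """Share of respondents who pick the same side of the scale in both
    waves, dropping anyone at the midpoint in either wave."""
    keep = (w1 != midpoint) & (w2 != midpoint)
    return np.mean((w1[keep] > midpoint) == (w2[keep] > midpoint))
```

If people hold well-defined preferences, this share should be high even when exact scale placements wobble between waves.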
Some other purported evidence for the authenticity of preferences has come from measurement error models that rest upon more sizable assumptions. These models assume an underlying trait (or traits) and pool preferences over disparate policy positions (see, for instance, Ansolabehere, Rodden, and Snyder 2006 but also Tausanovitch and Warshaw 2013). How do we know there is an underlying trait? That isn’t clear. Generally, it is perfectly okay to ask whether preferences are correlated, less so to simply assume that preferences are structured by an unobserved underlying mental construct.
With the caveat that dimensions may not reflect mental constructs, we next move to assessing claims about the dimensionality of preferences. The difference between recent results and conventional wisdom about “constraint” may simply be due to an increase in the structuration of preferences over time. However, research suggests that constraint hasn’t increased over time (Baldassari and Gelman 2008). Perhaps more plausibly, dichotomization, which presumably reduces measurement error, is behind some of the difference. There are, of course, less ham-handed ways of reducing measurement error – for instance, using multiple items to measure preferences on a single policy, as psychologists often do. Since it cannot be emphasized enough, the lesson of the past two paragraphs is: keep adjustment for measurement error and measurement of constraint separate.
Analyses suggesting higher constraint may also be an artifact of analysts’ choices. Dimension-reduction techniques are naturally sensitive to the pool of items. If a large majority of the items solicit preferences on economic issues (as in Tausanovitch and Warshaw 2013), the first principal component will naturally pick up preferences on that dimension. Since the majority of the gains would come from correctly predicting the large majority of the items, gains in the percentage correctly predicted would be a poor way to judge whether there is another dimension, say, preferences on cultural issues. Cross-validation across selected large item groups (large enough to overcome idiosyncratic error) would be a useful strategy. And then again, gains in the percentage correctly predicted over the entire population may miss subgroups with very different preference structures – for instance, Blacks and Catholics, who tend to be more socially conservative but economically liberal. Lastly, it is possible that preferences on some current issues (such as those used by Jessee 2009) are more structured (by political conflict) than preferences on some long-standing issues.
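The item-pool sensitivity can be demonstrated directly. A sketch with simulated data (the two latent dimensions, item counts, and noise levels are all assumptions): with 18 economic items and only 2 cultural ones, the first principal component tracks the economic dimension and nearly ignores the cultural one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two independent latent dimensions: economic and cultural preferences.
econ = rng.normal(size=n)
cult = rng.normal(size=n)

# Item pool skewed toward economic issues: 18 economic items, 2 cultural.
items = np.column_stack(
    [econ + rng.normal(0, 0.8, n) for _ in range(18)] +
    [cult + rng.normal(0, 0.8, n) for _ in range(2)]
)
binary = (items > 0).astype(float)  # dichotomized responses

# First principal component of the centered item matrix (via SVD).
X = binary - binary.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]

# The first PC tracks the dominant (economic) dimension, not the
# cultural one, because the item pool is lopsided.
r_econ = np.corrcoef(pc1, econ)[0, 1]
r_cult = np.corrcoef(pc1, cult)[0, 1]
```

A one-dimensional model fit to this pool would predict most responses well while saying almost nothing about cultural preferences, which is the point about percentage-correctly-predicted being a weak test for additional dimensions.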