From Living Instinctively to Living With History

4 Mar

Listening to my maternal grandparents narrate their experience of living with Muslims was confusing. According to them, Hindus and Muslims lived harmoniously and liked each other. Hindus and Muslims wouldn’t eat at each other’s houses, or would use separate utensils, but that had less to do with discrimination and more to do with accommodating each other’s faiths. Even in their recollections of the partition, I couldn’t detect bitterness. They narrated it as an adventure. But to many Hindus (and Muslims) today, it is hard to think of a time when Hindu-Muslim relations did not have a strong undercurrent of historic grievances and suspicion. Today many Hindus have a long litany of grievances: repeated Muslim invasions, destruction of temples, and such.

Naipaul’s India: A Million Mutinies Now may have an answer to the puzzle.* People may go from a time when the “wider world is unknown” because they are “without the means of understanding this world” to a time when they have the means, and the politics that comes with that greater capacity, from living instinctively to living with grievances.

“… The British forces the correspondent William Howard Russell had seen at the siege of Lucknow had been made up principally of Scottish Highlanders and Sikhs. Less than 10 years before, the Sikhs had been defeated by the sepoy army of the British. Now, during the Mutiny, the Sikhs – still living as instinctively as other Indians, still fighting the internal wars of India, with almost no idea of the foreign imperial order they were serving – were on the British side.”

From India: A Million Mutinies Now by V. S. Naipaul

Here’s some color on the sepoy army:

“From Russell’s book I learned that the British name for the Indian sepoy, the soldier of the British East India Company who was now the mutineer, was ‘Pandy’. ‘Why Pandy? Well, because it is a very common name among the sepoys …’ It is in fact a brahmin name from this part of India. Brahmins here formed a substantial part of the Hindu population, and the British army in northern India was to some extent a brahmin army.”

From India: A Million Mutinies Now by V. S. Naipaul

“people who – Pandy or Sikh, porter or camp-following…Hindu merchant – run with high delight to aid the foreigner to overcome their brethren. That idea of ‘brethren’ – an idea so simple to Russell that the word is used by him with clear irony – is very far from the people to whom he applies it. …The Hindus would have no loyalty except to their clan; they would have no higher idea of human association, no general idea of the responsibility of man to his fellow. And because of that missing large idea of human association, the country works blindly on ….

the India that will come into being at the end of the period of British rule will be better educated, more creative and full of possibility than the India of a century before; that it will have a larger idea of human association, and that out of this larger idea, and out of the encompassing humiliation of British rule, there will come to India the ideas of country.”

From India: A Million Mutinies Now by V. S. Naipaul

Elsewhere:

To awaken to history was to cease to live instinctively. It was to begin to see oneself and one’s group the way the outside world saw one; and it was to know a kind of rage. India was now full of this rage. There had been a general awakening. But everyone awakened first to his own group or community; every group thought itself unique in its awakening; and every group sought to separate its rage from the rage of other groups.

From India: A Million Mutinies Now by V. S. Naipaul

* The theory isn’t original to him. Others have pointed to how many Indians didn’t see themselves as part of a larger polity. The point also applies more broadly, to other groups.

Deliberation as Tautology

18 Jun

We take deliberation to be elevated discussion, meaning, at minimum, discussion that is (1) substantive, (2) inclusive, (3) responsive, and (4) open-minded. That is, (1) the participants exchange relevant arguments and information. (2) The arguments and information are wide-ranging in nature and policy implications—not all of one kind, not all on one side. (3) The participants react to each other’s arguments and information. And (4) they seriously (re)consider, in light of the discussion, what their own policy attitudes should be.

Deliberative Distortions?

One way to define deliberation would be: “the extent to which the discussion is substantive, inclusive, responsive, and open-minded.” But here, we state the top end of each as the minimum criterion. So defined, deliberation runs into two issues:

1. Its posited beneficent effects become a near tautology. If the discussion meets that high bar, how could it not refine preferences?

2. The bar for what counts as deliberation is high enough that I doubt most deliberative mini-publics come anywhere close to meeting the ideal.

Lifting the Veil on Some Issues Around The Burka Debate

30 Mar

For the unfamiliar, the BBC guide to Muslim veils.

The somewhat polemical:
Assuming that God has recommended that women wear the burka, and assuming that the burka has no impact on a woman’s ability to communicate or her quality of life, as its supporters have suggested, here’s a suggestion to all men who haven’t been ordered by God to wear a burka and who see no downside to wearing it: why not voluntarily commit to wearing the burka, to show solidarity with the women? No law opposes such a voluntary act. My sense is that even the French would come to support the burka if Muslim men en masse chose to wear it.

More considered:
‘The interior ministry says only 1,900 women wear full veils in France, home to Europe’s biggest Muslim minority’ (BBC). If the problem is interpreted solely in terms of women wearing the veil, then it is much smaller than the dust in its wake.

There are three competing concerns at the heart of the debate: protecting the rights of women who voluntarily want to wear it, protecting the rights of women who are forced to wear it, and protecting (French) ‘culture.’ Setting aside cultural concerns for the moment, let’s focus on the first two claims.

People are incredulous at the claim that women will voluntarily choose to wear something so straightforwardly unpleasant. Even when confronted with a woman who claims to comply voluntarily, they fear coercion, or something akin to brainwashing, at play. There is merit to the thought. However, there is much evidence that women voluntarily subject themselves to many unpleasant things, such as wearing high heels (which I understand are uncomfortable). So it is very likely indeed that some women comply voluntarily.

Assuming there exist both voluntary compliers and women forced to wear the niqab, wouldn’t it be ideal if we could ensure the rights of both? In fact, doesn’t the extant legal framework provide for such a privilege already? Yes and no, mostly no. While it is true that women forced to wear the niqab can petition the police, that is unlikely to happen for a variety of reasons. Going to the police would mean going against the family, which may mean doing something painful and risking financial and physical well-being. Additionally, the laws governing such ‘coercion’ are likely to carry modest penalties and are unlikely to redress the numerous correlated issues, including inadequate financial and educational opportunities. Many of the issues raised here would seem familiar to people working on domestic abuse, and they are; the modern state hasn’t found (or seriously tried to find) a good solution.

Perhaps both camps will agree that wearing a niqab dramatically limits a woman’s career opportunities. Of course, people in one of the camps may be happy that such limits exist, but let’s assume they would be happy if the women had the same opportunities. Part of the problem, then, is the norms of dress in business environments in the West. Entrepreneurs in Saudi Arabia recently launched a television talk show in which both hosts wore the niqab. The entire effect was disturbing. But that isn’t the point. The point is that there may be ways to keep dress codes, which after all seem ‘coercive’ themselves, from reducing career opportunities for women.

Time considerations mean a fuller consideration of the issue will have to wait. One last point: one of the problems cited about the burka is that it poses a security threat. The claim has some merit, given the burka’s long history of being used as a method of escape, including by militant clerics.

The Jury is Out: Correctness of Democratic Decisions

18 Mar

Rousseau saw majority preferences as an expression of the general will (The Social Contract). With Condorcet, the general will was also imbued with the notion of correctness. As contradictory evidence is ample, it is time to remove the false comfort of any epistemic benefit accruing from such aggregation.

The Condorcet Jury Theorem, originally postulated by the Marquis de Condorcet and later formalized by Duncan Black, runs as follows:

If,

  1. The jury has to decide between two options using simple majority as the decision rule
  2. Each juror’s probability of being correct is greater than half (~ competence)
  3. Each juror has an equal probability of being correct (~ homogeneity)
  4. Each juror votes independently (~ independence)

Then,

  1. Any jury of more than one juror is more likely to arrive at the correct answer than any single juror
  2. As n increases, the probability of arriving at the correct answer approaches 1
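
The theorem is easy to check numerically. Here is a minimal sketch (the function name and parameter values are mine, for illustration) that estimates the probability that a simple majority of n independent jurors, each correct with probability p, picks the right option:

```python
import random

def majority_correct_rate(n_jurors, p_correct, trials=10_000):
    """Estimate the probability that a simple majority of independent,
    equally competent jurors picks the correct option."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_jurors))
        if correct_votes > n_jurors / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101, 1001):  # odd sizes avoid ties
    print(n, round(majority_correct_rate(n, 0.51), 3))
# With p = 0.51, the majority's accuracy climbs from ~0.51 toward 1 as n grows;
# with p < 0.5, the same math drives it toward 0 instead.
```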

There have been multiple attempts at generalizing Condorcet, mostly by showing that violations of one or more of the assumptions don’t automatically doom the possibility of achieving an epistemically superior outcome.

One way to summarize the theorem is that math works to the extent the assumptions hold. And the assumptions often do not hold.

Applying Condorcet’s Jury theorem to Electoral Democracy

To apply CJT to democracy, we must treat the citizenry as a jury and the decision task in front of it as choosing the “right” party or candidate.

The word jury is saddled with associations with courts in the American context, and it is important to disambiguate how the citizenry differs from a jury of citizens summoned by a court. The disambiguation will allow us to cover key issues that affect the epistemic utility of any “aggregation” of human beings.

In the court system, a jury is randomly (~ within certain guidelines) selected from the community. Jurors are generally subject to a battery of voir dire questions meant to assess their independence, lack of conflicts of interest, biases, etc. The jury is sworn to render a “rational” and “impartial” verdict. It is instructed in the applicable law, including evidentiary law. Members of the jury are asked not to learn about the case from any source other than what is presented in court, which is itself subject to reasonably stringent evidentiary guidelines. The jury is also guarded against undue influence, for example, bribes by interested parties. And jurors are made to at least sit through extensive presentations from ‘both sides’ and their rebuttals, and are generally asked to deliberate over the evidence, among what is generally a ‘diverse’ pool, before reaching a verdict.

On the other hand, citizens who come out to vote are a self-selected sample (roughly half of the total body), highly and admittedly ‘non-independent’ in how they look at the evidence, generally sworn to ‘parties,’ unconstrained by law in what evidence they look at and how they look at it, generally extensively manipulated by interested ‘parties,’ rarely informed about the ‘basis’ for the decision, rarely arriving at a decision after learning about arguments from ‘both sides,’ and rarely ever deliberating.

The comparison provides a rough template for the argument against positive comparisons between the epistemic competence of juries and that of the citizenry. However, Condorcet’s argument is a bit different—though many of the above lessons apply—and hinges on the enormous n in a democracy. The only other assumption one then needs is that each juror has a better than 1/2 chance of getting it right, or some variation thereof. The key claims against Condorcet come from two sources: 1) theorization about the sources and extent of violations of the assumptions, for example, independence and competence, and 2) inapplicability, owing to the mismatch between the theorem’s setup and electoral democracy. The various contentions emerging from the two sources are covered below in no particular order.

Rational Voting, Sincere Voting

While it is one of the weaker cases against applying Condorcet – mostly because the counterargument imagines a ‘rational’ voter – the argument deserves some attention because of its salience in the political science literature. One of the axioms of political science, since Downs, has been that information acquisition is costly. Hence it follows that as the decision-making body becomes larger, and as the chance of being the ‘pivotal voter’ goes down, the incentives to shirk (free-ride) increase.

Austen-Smith and Banks (1996, APSR), among others, have shown that ‘sincere voting’, voting for the best choice based on one’s information signal, is not equilibrium behavior, as a rational voter votes based not only on the signal but also on the chance of being pivotal. Feddersen and Pesendorfer (1998, APSR), taking the claim (the perils of strategic voting) to its logical extreme and applying it to the ‘unanimity rule’ (not majority rule, though similar, less stark contentions apply, which they note), show that as jury size increases, the probability of convicting the innocent increases.

Extreme Non-independence

Given that p > 1/2 is a ‘reasonably high’ threshold – it requires jurors to perform better than random – problems can arise quickly, especially in circumstances of misinformation.

In the current state, about 90% of voters exhibit high degrees of non-independence, emerging from apathy and partisanship. It follows that a reduction in either one will lead to a higher probability of the citizenry choosing the ‘better choice’ on offer, and arguably to better choices being on offer. Partisanship also means that people have different utilities that they intend to maximize. The other 10% err on the side of manipulation.
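
To see how quickly non-independence erodes the theorem’s guarantee, here is a small sketch (all parameters hypothetical) in which voters follow a common, uninformative cue, a stand-in for partisanship, with some probability, and their own signal otherwise:

```python
import random

def majority_correct_with_herding(n=1001, p=0.51, herd=0.9, trials=5_000):
    """Each voter follows a common fifty-fifty cue with probability `herd`,
    and an independent signal (correct w.p. p) otherwise."""
    wins = 0
    for _ in range(trials):
        cue_correct = random.random() < 0.5   # the shared cue is a coin flip
        votes = 0                             # count of correct votes
        for _ in range(n):
            if random.random() < herd:
                votes += cue_correct
            else:
                votes += random.random() < p
        wins += votes > n / 2
    return wins / trials

print(majority_correct_with_herding(herd=0.0))  # ~0.74: the large-n gain
print(majority_correct_with_herding(herd=0.9))  # ~0.50: herding erases it
```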

What’s on the Menu?

To the extent there are two inferior choices on offer, one can imagine that, in the best case, the polity will choose the slightly better of the two. Condorcet offers no comfort about what kinds of choices are on offer – perhaps the central and pivotal concern of any normative conception of democracy. In fact, the quality of the choices on offer (the ‘correctness of choices’) is likely a function of the probability with which the body politic knows about the ‘optimal correct choice,’ and the probability that it chooses the ‘optimally correct’ choice (which is likely to be collinear with the odds of picking the ‘better choice’).

Policy Choices

Policy choices are an array of infinite counterfactuals. To choose the ‘most correct,’ one would need a population informed enough to disinter the right choice with a higher probability than any wrong choice. Given infinite choices, the bar set for each citizen is very high. The chances of the current citizenry crossing that bar: non-existent.

Three or More Choices

The manipulability of a system offering more than two choices is well documented, filed alternately as Condorcet’s paradox and Arrow’s impossibility theorem. Much work has been done to show that the propensity of cycles in a democracy is not great (for example, Gerry Mackie, Democracy Defended). One contention, however, remains unanswered for the binary-choice version: American democracy often reduces larger sets to two options. One can imagine that the preference order of citizens will depend on unoffered choices. Depending on how multiple choices are reduced to two, one can think of ways ‘cycling’ can work even in the offered binary choices (David Austen-Smith). More succinctly, all binary decisions in democratic politics can be thought of as coming from larger option sets, and hence the threat of cycling is omnipresent.
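
A minimal illustration of the cycling at issue, using the classic three-voter, three-alternative profile:

```python
from itertools import combinations

# Three voters with the classic cycle-inducing preference orders.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    for a, b in ((x, y), (y, x)):
        if majority_prefers(a, b):
            print(f"majority prefers {a} over {b}")
# Prints: A over B, C over A, B over C -- a cycle, so majority rule yields
# no stable winner and the outcome depends on how the agenda pares the set.
```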

Correct Decisions

CJT, a trivial result from probability, when applied to voting with two choices, says just this: an electorate is most likely to arrive at the choice that each of its members is more likely to pick, and the probability of that outcome approaches 1 as n increases.

If we assume that electoral democracy is a competition between interests, then we just get majoritarian opinions, not ‘correct’ answers. That is, there is no common utility function but a different set of utilities for different groups – people look at a common information signal and split based on their group interests. In that case, the ‘correctness’ of the decision really reduces to the ‘winning’ decision.

Median Voter: Condorcet in Reverse

In many ways, applying Condorcet to democracy is applying things in reverse. We know that politicians craft policies that appeal to the ‘median voter’ (as opposed to the median citizen). Politicians work to cobble together a ‘majority’ such that the probability of the majority picking them is the greatest. Significantly, policy preferences that can be sold to a majority carry no claim to correctness of the kind CJT makes. Another important conclusion that can be drawn from the above is that since the options on offer can manipulate the population, the errors are likely not random.

Democratic Errors Don’t Cancel

In The Rational Public, Benjamin Page and Robert Shapiro argue that one of the benefits of aggregation is that the errors cancel out. Errors may cancel if they are random, but if they are heteroskedastic and strongly predicted by sociodemographics, they are likely to have political consequences. For example, such errors very plausibly reduce the likelihood of certain people making demands, or coalescing to make demands, in line with their interests.
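
A small sketch of the point (all numbers hypothetical): zero-mean noise washes out of the aggregate, while noise whose mean differs by group shifts the aggregate.

```python
import random

random.seed(42)
N = 100_000
true_positions = [random.gauss(0.0, 1.0) for _ in range(N)]

# Case 1: random, zero-mean measurement error -- errors cancel in the mean.
random_err = [t + random.gauss(0.0, 2.0) for t in true_positions]

# Case 2: error depends on group membership (a stand-in for, say,
# misinformation concentrated among one sociodemographic group).
group = [i % 2 for i in range(N)]  # hypothetical 50/50 split
biased_err = [t + random.gauss(0.8 if g else 0.0, 2.0)
              for t, g in zip(true_positions, group)]

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(true_positions), 2))  # ~0.0 (true aggregate)
print(round(mean(random_err), 2))      # ~0.0 (random errors cancel)
print(round(mean(biased_err), 2))      # ~0.4 (systematic errors do not)
```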

Formation of Preferences, Aggregation of Preferences

Applying CJT to democracy, we can roughly proxy what preferences will emerge from the available data. Assuming people have a perfect lens on the hazy data, the “probability that the correct alternative will win under majority voting converges to the probability that the body of evidence is not misleading.” (Franz Dietrich and Christian List, ‘A Model of Jury Decisions Where All Jurors Have the Same Evidence’)

Even the probability calculated thence is optimistic—we know that the evidence isn’t the same for all jurors, and that the lens of most jurors is foggy—but it is a good start for thinking about what data are available to the jurors (citizens), how the data are used, and what the consequences are of different distributions of information and ‘analytic lenses.’
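
A sketch of the Dietrich-List result under its key assumption, shared evidence: adding jurors improves how reliably the majority follows the evidence, but accuracy plateaus at the probability that the evidence itself is not misleading (the parameters below are illustrative):

```python
import random

def majority_correct(n_jurors, p_evidence_ok=0.8, p_read=0.9, trials=20_000):
    """All jurors share one body of evidence; each reads it correctly with
    probability p_read. Votes are independent conditional on the evidence,
    but the evidence itself can be misleading."""
    wins = 0
    for _ in range(trials):
        evidence_ok = random.random() < p_evidence_ok
        votes = sum(random.random() < p_read for _ in range(n_jurors))
        majority_follows_evidence = votes > n_jurors / 2
        # The majority is correct if it follows sound evidence, or
        # (rarely) misreads evidence that happens to be misleading.
        if majority_follows_evidence == evidence_ok:
            wins += 1
    return wins / trials

for n in (1, 11, 101):
    print(n, round(majority_correct(n), 3))
# Accuracy rises with n but plateaus near 0.8 -- the probability that the
# shared evidence is not misleading -- instead of approaching 1.
```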

Letting the Experts Speak

If our interest is limited to getting the ‘correct outcome,’ then we ought to do better (in terms of the likelihood of arriving at the correct decision) by polling only people with higher probabilities of getting it right. We will also save on resources. Another version of the idea would be a weighted poll, with weights proportional to the probability of being correct. The optimal strategy is to have weights proportional to log[p(correct)/p(incorrect)] (Nitzan and Paroush, 1982; Shapley and Grofman, 1984).
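
A minimal sketch of that weighted rule, with hypothetical competence levels; each voter’s weight is the log odds of being correct, per the cited results:

```python
import math

# Hypothetical individual competences (probability of being correct).
competences = [0.9, 0.7, 0.6, 0.55, 0.52]

# Nitzan-Paroush / Shapley-Grofman optimal weights: log odds of correctness.
weights = [math.log(p / (1 - p)) for p in competences]

def weighted_decision(votes):
    """votes[i] is +1 or -1; returns the weighted-majority choice."""
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score > 0 else -1

# The 0.9 expert's weight (~2.20) exceeds the other four combined (~1.53),
# so the rule sides with the lone expert against a unanimous rest:
print(weighted_decision([+1, -1, -1, -1, -1]))  # prints 1
```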

It isn’t as much a contention as a prelude to the following conclusion: any serious engagement with epistemic worthiness as a prime motive in governance will probably mean serious adjustments to the shape and nature of democracy and, in all likelihood, the abandonment of mass democracy.

60% is Different from 51%

The key consideration in CJT is choosing the ‘right’ option of the two on offer. Under this system, 51% doesn’t quite differ from 60% or 90%, for all yield the same ‘right choice.’ Politics works differently. Presidents tout their ‘mandates’ and base their policy agendas on them. The House and the Senate have a slew of procedural and legislative rules that buckle under larger numbers. Thinking about the House and the Senate brings new complications, and here’s why – while the election of each member may be justified by CJT, the decisions produced by the elected representatives need another round of aggregation – without some of the large-n benefits of mass democracy. Here again, we may note – as McCarty and Poole have shown – that the ‘jury’ is extremely ‘non-independent,’ prone to systematic biases, etc. In addition, choice is no longer limited to two options – though each choice task can be broken down into a series of Boolean decisions (arriving at the ‘right decision’ in this kind of linear aggregation over a choice spectrum will follow a complex function of p(correct choice) for each binary decision).

Summary

Conjectures about the epistemic utility of electoral democracy are rife with problems when seen through the lens of Condorcet. This isn’t to say that no such benefits exist but that alternate frameworks are needed to understand those benefits.

On Representation in a Democracy

3 Dec

“Representation means the making present of something that is nevertheless not literally present.” (Pitkin, 1967) In a representative democracy, which, minimally construed, involves a mediating assembly for political decision making, representation implies an attempt to find the people in the assembly’s political decision making.

But what do we mean by that? Should we look for people’s values, thoughts, current (or not so current), strong (or also weak) opinions (“phantom,” as Philip Converse puts it, though they may be) in governance? And where exactly should we look? Should it be in policy, or institutional design, or process, or in the race and gender of representatives? And all of this rests upon the idea that things like opinions can be coherently expressed by people (in the aggregate), and are cognizable in institutions, policy outcomes, etc.

Other than these seemingly intractable questions of measurement, we also have substantive questions. Who is represented, and to what degree, and why? And we must struggle with some normative questions that lie adjacent to the empirically posed question above. Who/what should be represented, and to what degree?

Origins of our thinking about representation

There are two intersecting facets of how representation is understood, and perhaps how it should be: one highlights the cultural construction of the concern with representation, and the other the historical understanding of representation.

Partly, idealized notions of representation are built against the inequalities manifest in economic processes. The need for political equality/representation is a necessary counterpart to a society that has salient economic inequalities built on the mythology that anything is possible for everyone.

Some of our understanding of representatives and of control by the people, constrained as they are by the social norms of Congress or bargaining between the President and Congress, ought to be shaped by historical and normative conceptions. The historical foundations of the current form of representation in the US can be traced to at least Madison. As is commonly surmised, elite deliberation as a model for representation was developed against the fear of the mob. True, but there was much positive thought guiding Madison’s idea of a modern democracy and of representation. It wasn’t just that unconstrained mass democracy is unsuitable, or the larger logistics-based argument that mass democracies are untenable; Madison’s claim was that the desired effect of (elite) political representation is “to refine and enlarge the public views, by passing them through the medium of a chosen body of citizens, whose wisdom may best discern the true interest of their country.” Of course, to what degree he succeeded in that ideal is open to conjecture, if not open derision.

We also get an understanding of what is to be represented partly from the prominent instruments that key institutions give people to express themselves or control their representatives. If representation implies the extent to which a political leader acts in accordance with the wants, needs, and demands of the public, then we ought to look into how the public can express its needs, and how those are funneled into the political process. Take, for example, the vote. We know for a fact that the vote itself is a poor instrument for expressing multifaceted preferences: a vote is binary, or at best trichotomous. So typically the role of a citizen was conceived to be relatively minimal, at least on a per capita basis.

But by constraining ourselves to a discussion about voting, arguably the single most potent symbol of democracy, we fail to fully understand the ability and opportunity provided by democratic governance systems.

More simply, not all representation is via representatives. Democratic governance systems provide multiple ways to shape the public’s agenda, to shape public opinion on the agenda, and to shape how that opinion is fed into governance. They provide multiple modes (lobbying, media, etc.), multiple institutional entry points (courts, legislature, public hearings of executive branches, etc.), and multiple temporal entry points (at the crafting of a law, as its failings are exposed, or in restricting its application, a prerogative of the executive branch, etc.), through communication of dissent and consent, and hence allow for representation in many different ways within the many different institutional frameworks.

Measurement: Who is represented?

One way to assess who is represented is to merely track the economic well-being of various groups over time. Another would be to correlate the opinions/policies of representatives with the opinions of their constituencies. Given that politicians often actively shape opinion, and given the problems with using correlation (as highlighted by Christopher Achen), that measure is largely doomed. In addition, any such measure ought to incorporate people’s agendas (problematic to measure), and their opinions on those agenda items. In other words, we ought to measure two things: whether issues considered important by people/constituents are considered similarly important by their representatives, and the ‘correlation’ in opinion on those issues. In the absence of similar agenda priorities, the latter question would be hard to measure. And certainly, concerns about strategic/manipulative agenda setting by politicians (Jacobs and Shapiro’s book “Politicians Don’t Pander” gives this worry some legs) would be of import here as well.

What should be represented?

The answer to the question is murky. Clearly, multiple things need to be represented. For example, say a policy has a disproportionately negative impact on a small group of people—their concerns perhaps ought to be represented. It is inarguable that the representation structure somehow constrains what is to be represented, depending on how widespread the cognition of a policy’s impact is, and how salient. Part of our answer to what ought to be represented depends on our conception of democracy. If governance is at heart about the allocation of meager resources, and it is certainly at least about that, then is ‘representation’ of one’s interests (hard to define) at the bargaining table, as interest groups (or mobilized segments of society) present their cases, the ideal?

If we minimally understand people’s wants as an interest in better outcomes, and assume that better outcomes emerge from good information, we can perhaps then focus on the representation of (all) information, be it the differential impact of certain policies or some innovative technique.

Role of a representative

Heinz Eulau et al. present two axes for thinking about the role of a representative: 1) who is being represented (district/state)—this needs to be further disinterred—and 2) how—trustee, delegate, politico, or hybrid? These axes are a small but essential kernel of a theory of the representative. Yet it would do us a disservice to think that representatives do one or the other on either dimension. To a very significant degree, institutional mechanisms have evolved to dole out the pork (district) and deal with, say, national issues as well, many a time getting the former accomplished seamlessly as part of the latter. Alternatively phrased, it does us a disservice to think of district and state orientations as polar opposites on a continuum; a broad set of policies accommodates both. Similarly, trusteeship needn’t automatically contradict the role of a delegate. Any theorization of the role of a representative also ought to grapple with the fact that, over a large set of policy issues, the population has minimal (if not phantom) opinions. What, then, are the representative’s role and responsibilities? Is it opinion leadership, or manipulation (again, Jacobs and Shapiro, Politicians Don’t Pander)? The Jacobs and Shapiro version is considerably closer to the dystopic version outlined by Pitkin—mass democracy inevitably fades into fascist manipulation. The argument, differently expressed elsewhere, goes like this: representation in a democracy is best understood not in terms of accurate correspondence between pre-existing citizen preferences and subsequent government decisions but rather as a constructive (if working ideally) process that shapes the very preferences and perspectives that are represented.


Hanna Pitkin. The Concept of Representation. (1967)
Heinz Eulau et al. The Role of the Representative: Some Empirical Observations on the Theory of Edmund Burke. (1959)
Christopher Achen. Measuring Representation. (1978)

Stability and Democracy

16 Oct

How does one democratically govern a heterogeneous population with an immense plurality of interests, perceived or real? How does one hold back pressures stemming from economic, ethnic, racial, religious, and regional identities? How does one keep centrifugal forces from building up and cleaving the polity? We build an institutional system that rewards only broad coalitions. There is a nice corollary to a system that demands broad coalitions for governance, one that opens up the opportunity for change: as a coalition becomes broader and more unwieldy, the opportunity beckons for the smaller party(ies) to expand their base by appealing to underserved segments of that coalition, and perhaps win enough over to get a chance to govern.

Then, if it was the threat of factions that led to the institutional design of American democracy, we have succeeded almost entirely. The American political system has become a stable duopoly, with factions, even troublesome ones like the 1968 McCarthy supporters, now residing largely within the parties, mostly quietly.

But to discuss the success of institutional systems that reward broad coalitions in the American context alone is to not fully discuss them at all. While it is true that in the American context the first-past-the-post electoral system (if indeed the kind of electoral system predicts the number of parties) has produced a largely stable two-party system (with occasional bouts of third parties, the latest being Ross Perot in 1992, and the longest lasting being the ‘left-wing’ parties of the Teddy Roosevelt era), the system has had much less success in India, which boasts thirty-plus parties, each ploughing its own furrow.

So clearly, there are limits to what institutional design can achieve. A closer inspection may reveal that some of the fault lines are visible even in the US. One may argue that the term broad coalition is a misnomer, especially in the American context, where a significant number don’t vote, and where you can win an election by appealing to the median evangelist or the median racist, in the Republican Party’s case. Similarly, one must ask why significant third parties like the Socialist Party came to be important players, given the logic of wasted votes. But overall, the system has worked well.

Party on

Democracy is perhaps best understood as the Schumpeterian ideal of a mass public choosing from competing elites. Parties emerge as natural coalitional vehicles in a democracy, allowing elites to stand on ideas, and not as elites. They provide a way for the more ambitious members of the public to gain power, in exchange for co-option, partial indoctrination, and work. And furthermore, they ensure that only people who swear by the dogma rise to the top. But reality impedes, more so now, when media have made it possible for politicians to come to the fore with only limited help from the party machinery.

Stable Coalitions

If factionalized political systems amplify every segment’s sane and insane demands, political systems that demand ‘broad coalitions’ are, by design, tethered to broad dysfunctions within a society.

At the heart of it, there is nothing seemingly ‘stable’ or even vaguely comprehensible about the ‘broad coalition’ that the Republican Party commands – it is a coalition of the rich, and the poor, the fiscal conservatives, and the taxation-averse (sometimes both), the social conservatives who elect Larry Craig, the libertarians who want government to legislate marriage (and more), etc. The subtext of this coalition, its glue, is of course race.

To keep broad coalitions from heeding their worst instincts, one needs an informed, civic-minded, and liberal-minded citizenry. Failing that, while a democracy with a relatively free press may prevent famines, it may not always prevent slavery or foreign occupation, if a broad coalition supports it.

Elections Matter

4 May

Politics begets cynicism, especially during the campaigning season, when each politician tries to outdo the other in spouting disingenuous and sometimes patently false statements. Cynicism, in turn, becomes the aegis with which we defend our apathy. It’s all the same! Why bother when nothing changes? So, are our peregrinations into indifference well-founded? I don’t think so. Things change—like they did over the past eight years under Mr. Bush, during which at least $4 trillion was added to the debt to pay for tax cuts for the rich and the Iraq War. If you think Bush is unique, think again. Consider the policy achievements of Kevin Rudd and Zapatero.

Kevin Rudd was elected Prime Minister about five months ago. His first official act on taking office was to sign the Kyoto Protocol. A few days later, Rudd de facto scrapped the Pacific Solution, the Howard-era policy that sent all asylum seekers arriving by boat to remote islands for ‘assessment.’ In February, Rudd offered a short but unambiguously worded apology, on behalf of the government and the Australian parliament, for the shameful treatment of the Aborigines. (See also the BBC News article on Kevin Rudd’s first 100 days, and a white paper on his first 100 days (pdf).)

Zapatero’s achievements as the head of the Spanish government may have been slower in coming than Rudd’s whirlwind pace, but they have been no less momentous. In his four years at the helm, he “legalized gay marriage, brought in fast-track divorces and laws to promote gender equality and tackle domestic violence. He also introduced an amnesty for undocumented workers.” (BBC) He has introduced “targeted measures to raise the female employment rate (which is still comparatively low in Spain)” and established the legal right to paternity leave. Under Zapatero’s capable finance minister, Pedro Solbes, Spain declared a budget surplus for the third consecutive year, topping 2 percent of gross domestic product for 2007. (Policy Network)

Epistemic Gains in Prediction Markets

9 Mar

Since at least James Surowiecki’s The Wisdom of Crowds, a multitude of scholars in the field of epistemic democracy have taken to theorizing about the epistemic utility of tools like “prediction markets,” and even the “Wikipedia” model. Cass Sunstein, a Professor of Law at the University of Chicago, has been particularly effective in advocating the idea through a stylized analysis that cherry-picks successfully working corporate prediction markets and ignores problems like the current morass of InTrade. Below I analyze the conditions under which prediction markets can deliver their theorized epistemic gains, and test their robustness to violations of optimal conditions. I start, however, by analyzing a comparison that the political scientist Josiah Ober makes between ostracism and prediction markets, and in doing so lay out some of the essential features of markets.

Josiah Ober and Learning from Athens

Ober has been a keen exponent of the idea that ancient Athens had institutions that ably aggregated information from citizens and fostered “considered” judgments. In his Boston Review article, he strangely argues that the decision to build hundreds of warships, prodded by the Oracle (!) and by what is now known to be deliberate misinformation by Themistocles, was ultimately the ‘right’ decision by the assembly: to build warships and not, say, distribute the windfall from the silver mines to the average citizen. There are two problems here. One is epistemological: the reliance on oracles for signs. The other is the use of manipulative information. And it is impossible to say whether Persia attacked partly because it felt inklings of a threat from the huge armada of ships that Athens had built.

Elsewhere, Ober has compared the first step of ostracism proceedings – the Demos taking a vote to determine whether to hold an ostracism or not – to prediction markets. He argues that the vote on whether to hold an ostracism aggregated individual-level information, or predictions, about whether there was “someone” whose presence was pernicious enough to merit ostracism. There are three pitfalls to such comparisons, and I will deal with them individually. Firstly, ostracism didn’t provide people with direct private economic incentives to reveal private information or seek “correct” information, something economists believe is essential (it is also borne out in experiments). To counter this argument, Dr. Ober argues that the manifest threat of making a “wrong” decision was large enough to impel citizens to gather the best information. There are two problems with this argument. First, penalties for making a wrong decision fall on a continuum and are rarely a choice between prosperity and annihilation (certainly the case with Themistocles and Persia). Secondly, even in the presence of imminent threats to groups (something not quite true in this case, as the threat is defined vaguely as a “wrong decision,” which is a little different from a less-than-most-informed decision), the “collective action problem” prevails – albeit in an attenuated form.

In ancient Athens, the decision to hold an ostracism was made through a vote. The vote is a poor aggregator of private information, for with each ballot you get only a yes or a no. This impoverished information sharing also exerts enormous pressure on the distribution of “right” information among the population, for a small minority of “right” voters can easily be silenced by a misinformed majority. The only way a voting system can reliably aggregate information (if the choice is binary) is if each ballot on average has more than a 50% chance of being correct (Condorcet’s insight). Markets, on the other hand, allow information to be expressed much more precisely, through price. (We will come to the violations of this tenet in markets later.)

Unlike voting, markets deter the sharing of information (and of misinformation, except strategically), although the price does send cues (information) to the market. (Of course, strategic players fudge investments so as to monetize their information maximally.) Suffice it to say, however, that voting systems are more prone to aggregating disinformation than market systems, where the incentives for gaining the “right information” increase in tandem with people investing on “wrong information.”

Markets, Betting Markets

I will deal with some other issues including assumptions about the distribution of private information later in the article. Let me briefly stop here to provide an overview of markets and betting markets in particular.

Markets, when working optimally, are institutions that aggregate all hidden and manifest information and preferences and express them in a one-dimensional, optimally defined parameter: price. Since all individual preferences are single-peaked with reference to price, markets are always single-peaked, avoiding aggregation issues and Condorcet’s paradox. Markets aggregate not only information about demand and supply but also the utility afforded by the commodity to each individual consumer, and such aggregation optimizes “allocative efficiency.” And apparently all this is done magically – in Adam Smith’s coinage, at the beckoning of the famous “invisible hand.”

Prediction markets, also known under guises like “information markets” and “idea futures,” tie economic gains to the fulfillment of some prediction. The premise is that the possibility of economic gain will prompt people to reveal hidden information – or, more precisely, to bet optimally without revealing information. Prediction markets are quite different from regular markets, for trading is centralized against a bookmaker that decides the odds after aggregating bets. This type of architecture puts significantly more constraints on the market than, say, the architecture of a share market, which is essentially decentralized. I will come to the nature of the constraints later, but suffice it to say that it avoids some of the “variances” and “excesses” and “excess variances” of the decentralized system – the kinds which made Robert J. Shiller turn to behavioral economics from playing with math and monkey-wrench models.

Expanding on the nature of prediction markets: “A prediction market is a market for a contract that yields payments based on the outcome of a partially uncertain future event, such as an election. A contract pays $100 only if candidate X wins the election, and $0 otherwise. When the market price of an X contract is $60, the prediction market believes that candidate X has a 60% chance of winning the election.” (Prediction Markets in Theory and Practice, 2005 draft, Justin Wolfers and Eric Zitzewitz) A more robust description is perhaps necessary to explain how bookies come to know these odds. In much of sports betting, bookies commence betting by arriving at a consensus that reflects the expert opinions of a small group of professional forecasters. “If new information on the relative strengths of opposing teams (e.g., a player injury) is announced during that week, the bookie may adjust the spread, particularly if the volume of behavior favors one of the teams. In addition, since the identity of the bettors is known, bookies may also change the spread if professional gamblers place bets disproportionately on one team. To make these adjustments, the bookie moves the spread against the team attracting most of the bets to shift the flow of bets toward its opponent. Shortly before game time, the bookie stops taking bets at the ‘closing’ point spread. Like securities prices at the end of trading, closing spreads are assumed to reflect an up-to-date aggregation of the information and, perhaps, biases of the market participants.” (Golec and Tamarkin, Degree of inefficiency in the football betting market, 1991, Journal of Financial Economics: 30) There are other ways through which a similar arrangement can be executed. For example, software now continually adjusts the odds depending on bets. The danger is that the system can be quickly shorted if it relies solely on anonymous betting data. I will come back to this later. One additional point to finish the description: given the nature of the commodities or assets traded, we can only get results on questions that have binary answers, and not, say, discovery questions, unless those can be split into innumerable binary questions.
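
The price-to-probability reading in the Wolfers-Zitzewitz passage, plus the normalization needed when a bookmaker’s two-sided quotes embed a margin, can be made concrete in a few lines (the quotes below are hypothetical):

```python
def implied_probability(price_cents: float, payout_cents: float = 100.0) -> float:
    """Price of a winner-take-all contract, read as the market's probability."""
    return price_cents / payout_cents

# The Wolfers-Zitzewitz example: a $60 contract paying $100 implies 60%.
print(implied_probability(60))  # 0.6

# Bookmaker odds embed a margin (the "vig"), so the raw implied probabilities
# of the two sides sum to more than 1; normalizing recovers an estimate.
p_x, p_not_x = 0.65, 0.45            # hypothetical two-sided quotes
total = p_x + p_not_x
print(p_x / total, p_not_x / total)  # ~0.59, ~0.41
```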

Before we analyze betting market efficiency, I would like to present a short list of previously theorized (and proven) betting market failures, or “instances where the operation of the market delivers outcomes that do not maximize collective welfare.” There are several forms of market failure:

  • Imperfect competition—where there is unequal bargaining power between market participants;
  • Externalities—where the costs of a particular activity are external to the individual or business and imposed on others (e.g. Assassination Markets);
  • Public goods—where there are goods for which property rights cannot be applied; and
  • Imperfect information—where market participants are not equally informed.

*As always, penalties follow some function of the extent of the violation. Most effects are non-linear.

Let’s analyze the epistemic dimension of the market, that is, its capability to deliver information that is somehow better. The supposition behind a prediction market is that people aren’t revealing (or finding) information because they have inadequate incentives to do so. Betting is merely a way to incentivize the discovery process. It is important to note that the mere fact that the model assumes people have private reserves of information (generally amounting to the knowledge that other people aren’t smart) severely limits the role of prediction markets in areas where there is no such knowledge. Certainly, I can’t think of many public policy arenas where there is. (It is also important to keep in mind that most policy decisions have a normative dimension aside from some fully informed preference dimension.) Otherwise, betting markets merely try to aggregate – and don’t do so well – public information cues. Simon Jackman, in a forthcoming paper analyzing betting behavior in Australian political markets, finds that betting markets essentially move following the cues of opinion polling results. There is no information source outside of what is already publicly accessible that people rely on to make their bets. So the idea that prediction markets will somehow deliver better results even where privately held reserves of information are low or zero is ludicrous and easily empirically disproved.

More importantly, betting markets, even sophisticated ones like the sports betting markets, are incurably biased—as has been shown statistically multiple times over: they underestimate home-field advantage, and all too often “go with the winners.” The bias is supported by two intertwining psychological tendencies: “safe betting” and “betting on favorites.” And these biases are found in nearly all betting markets.

Betting markets behave best if there is complete adversarial betting – which is never the case, for most of the price is set by investments by small players following the elite herd. This dynamic was described by Sushil Bikhchandani, David Hirshleifer, and Ivo Welch, in a classic 1992 article, as an “information cascade,” and it can lead people to serious error. Shiller recently wrote about this while explaining how the housing bubble (essentially banks betting on loans) stayed under the radar so long. He quotes the paper at length:

“Mr. Bikhchandani and his co-authors present this example: Suppose that a group of individuals must make an important decision, based on the useful but incomplete information. Each one of them has … information…, but the information is incomplete and noisy and does not always point to the right conclusion.

Let’s update the example…: The individuals in the group must each decide whether real estate is a terrific investment… Suppose that there is a 60 percent probability that any one person’s information will lead to the right decision. …
Each person makes decisions individually, sequentially, and reveals … decisions through actions — in this case, by entering the housing market and bidding up home prices.

Suppose houses are really of low investment value, but the first person to make a decision reaches the wrong conclusion (which happens, as we have assumed, 40 percent of the time). The first person, A, pays a high price for a home, thus signaling to others that houses are a good investment.

The second person, B, has no problem if his data seem to confirm the information provided by A’s willingness to pay a high price. But B faces a quandary if his information seems to contradict A’s judgment. In that case, B would conclude that he has no worthwhile information, and so he must make an arbitrary decision — say, by flipping a coin to decide whether to buy a house.
The result is that even if houses are of low investment value, we may now have two people who make purchasing decisions that reveal their conclusion that houses are a good investment.

As others make purchases at rising prices, more and more people will conclude that these buyers’ information about the market outweighs their own.

Mr. Bikhchandani and his co-authors worked out this rational herding story carefully, and their results show that the probability of the cascade leading to an incorrect assumption is 37 percent. … Thus, we should expect to see cascades driving our thinking from time to time, even when everyone is absolutely rational and calculating.

This theory poses a major challenge to the efficient markets view of the world… The efficient-markets view holds that the market is wiser than any individual: in aggregate, the market will come to the correct decision. But the theory is flawed because it does not recognize that people must rely on the judgments of others. …

It is clear that just such an information cascade helped to create the housing bubble. And it is now possible that a downward cascade will develop — in which rational individuals become excessively pessimistic as they see others bidding down home prices to abnormally low levels.”
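
The stylized example in the quote is simple enough to simulate; the sketch below (function name mine) reproduces the 37 percent figure:

```python
import random

def cascade_is_wrong(p=0.6):
    """Simulate the stylized example from the quote. True/False marks
    whether an agent's signal (and hence action) points the right way."""
    while True:
        a = random.random() < p       # A simply follows his own signal
        b_signal = random.random() < p
        if b_signal == a:             # B's signal confirms A's action
            b = b_signal
        else:                         # contradiction: B flips a coin
            b = random.random() < 0.5
        if a == b:                    # two matching actions start a cascade;
            return not a              # it is wrong if both actions were wrong
        # A and B disagreed: the history is uninformative, so the next
        # pair faces the same situation and the process restarts.

trials = 100_000
print(sum(cascade_is_wrong() for _ in range(trials)) / trials)  # ~0.37
```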

Betting markets, like all other markets, are “sequential,” with each investor trying to parse the tea leaves and motives of prior investors. The impulse to do original research is countervailed by the costs, and by the fear that others know something that they don’t.

The other intersecting psychological factor that complicates markets is completely blind betting. Time and again, even as information and probabilities converge, some bettors hold out for a miracle.

It is also important to keep in mind the tenet that governs prediction market behavior: “Garbage in, garbage out… Intelligence in, intelligence out…” Prediction markets – to the extent that they rely on speculation – are remarkably likely to follow any information that seems to give them a leg up. While misinformation theoretically incentivizes the procurement of good information, this never pans out empirically, for a major investment by another is read as an informational cue more powerful than whatever access you may have. This is an important point, for the competitor has no way of knowing your information; all s/he has access to is the investment that you make on it, and the space for conjecture about the veracity of the competitor’s information is immense. This is a market based on never revealing information, and that diminishes its efficiency considerably.

Betting markets merely rely on the fact that you are less misinformed than others, and that gradient can be built through strategically spreading misinformation (quite common in betting circles) or through some theorized virtuous cycle of increasingly good information.

Betting markets can easily be shorted by someone willing to lose some money. Asymmetry in finances can hobble the incentives of the betting market and the information discovery process.

Lastly, laws against insider trading limit the kind of information bettors have access to, and hence limit the information discovery process severely. Relatedly, information – for it to be monetizable – has to be brought into the system privately, so bettors may try to sabotage the release of public information. Not only that, they have to be strategic in how they send cues to the market so that they earn the most money from their bets; if done en masse or rashly, it will almost certainly short their bets. So not only do betting markets have only one way of expressing information – price/investment – but bettors also go to great lengths to hide that cue, especially if they know how the cues are being aggregated.

In summary, the above list of problems with betting markets underscores the analytical and empirical evidence against the naive, ill-substantiated, unbridled faith in their epistemic prowess.

The Case for Epistemic Controls in a Democracy

14 Feb

The First Amendment mandates that “Congress shall make no law … abridging the freedom of speech.” Over the years, the courts have construed the clause in a way that privileges political speech in its various manifestations while limiting the protections afforded to commercial speech. Hence, different regulatory frameworks have emerged to control commercial speech in a variety of areas. For example, the FDA, through its Division of Drug Marketing, Advertising, and Communications, sets standards for all drug advertisements. It mandates that “all drug advertisements contain (among other things) information in brief summary relating to side effects, contraindications, and effectiveness.” Similarly, any publicly listed company has to comply with a host of disclosure laws about its finances and business practices that dramatically restrict the kinds of claims it can make, at least about its accounts. The principle behind these mandates is that the public good (in the case of the stock market, investor good) is served when we limit or penalize disinformation. A further premise is that such laws are necessary where the supply of information is essentially a monopoly of organizations or people with explicit incentives to lie or strategically misrepresent information.

Arguments for setting epistemic standards for speech in the political arena have been criticized in America on a variety of grounds: that such a law would be unduly restrictive and hard to implement, that free speech is necessary for democracy, that free speech is a “fundamental” “human right” (quite apart from its necessity or utility) – it is part of Article 19 of the UN Declaration of Human Rights – and that free speech promotes the search for truth. More broadly, there are two major defenses of freedom of speech: a deontological conception of the sanctity of freedom of speech (expression, more broadly), and an instrumental defense of its merits – its necessity for preserving democratic values and ideals, etc. It is hard to contest either claim: normative supremacy is hard to argue against in the face of axiomatic judgments about the “fundamental” nature of these “rights,” and instrumental supremacy is hard to argue against given that “democratic values and ideals” are so loosely defined that they can be spun every which way, with an incommensurable value attached to any of their parts. But let’s hold this chain of thought for now.

Black’s Law Dictionary, 5th ed., by Henry Campbell Black (West Publishing Co., St. Paul, Minnesota, 1979), defines fraud as “all multifarious means which human ingenuity can devise, and which are resorted to by one individual to get an advantage over another by false suggestions or suppression of the truth. It includes all surprises, tricks, cunning or dissembling, and any unfair way by which another is cheated.”

It is an unsurprisingly vague definition, cognizant of the artfulness and subtlety with which fraud is perpetrated. The law relies on the skill of the prosecutor (and, in a company’s case, the defense attorney), and on the perspicacity of the judge or jury, to come to a judgment as to whether fraud was perpetrated. One can come up with a more restrictive definition, but it would most likely be counterproductive, for it would channel effort into perpetrating “fraud” that is still technically legal – much like compliance with tax law through the use of tax havens – and, by taking away discretion from judges, make it impossible to award judgments against patent cases of fraud. In the political arena, people are justifiably skeptical of arriving at limiting definitions, and of submitting speech to be analyzed by a body with fairly large discretionary limits on the interpretation of the “letter of the law.” More nefarious motives for such protestations undoubtedly exist – not least, perhaps, worries that such a law would jeopardize politicians’ chances at electoral success. Then there is empirical evidence to suggest that politicians are very skillful at being honest without ever telling the truth. A stylized, narrow-minded honesty that includes cherry-picking from one’s own life and the opposition’s words is currently in vogue, and very hard to guard against. Not to mention the perverse ability to bludgeon the voter with the inconsequential, or to strategically shift the focus of issues. For example, the 2000 campaign was essentially fought on the grounds of “whom would one like to have a beer with?”; or take the “Willie Horton” ad used in the 1988 election, which featured a real-life person and story – though of limited evidential value – to skillfully convey race and crime cues to the disadvantage of the Democratic challenger, Michael Dukakis.

Quite apart from the justifications for "freedom of speech" is the claim that we should let the market dictate roughly the proportion of what is heard, at what volume, and what isn't. The supposition is that the market will somehow tune the volume of speech in proportion to its appeal to people – with no claims made about its epistemic worth. The ancillary argument is that the "marketplace of ideas" will sift through ideas in a way that lets the "best" ideas and opinions rise to the top, and that even if someone buys more airtime it doesn't much matter, for the public decides whether an idea is in its interest and hence dictates its adoption (popularity). It is, understandably, a hopelessly unsupported proposition lacking any merit whatsoever.

The argument that competition alone would be enough to bring the "product" with the "best utility" (economic utility) to the top is doubtful at best and predicated on a host of bizarre, empirically unsupported assumptions. The argument is particularly inapplicable in the domain of "public goods," where people have limited incentives to gather enough information, and in systems where information distribution is asymmetric. Anthony Downs expressed worries about "asymmetric information" in his 1957 opus, An Economic Theory of Democracy. There is indeed a steadily increasing penalty for disinformation and the resulting sub-optimal decisions, but given the low efficacy that people feel (among other things), their interest and motivation in making themselves more informed remain low. Given such conditions, we need – even more strongly than we need a regulatory framework governing information dispersal in private economic choices – a similar mechanism to mitigate some of the most severe problems.
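
Downs' worry can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not Downs' own; the structure – the benefit of a "correct" outcome discounted by the vanishing probability of casting the decisive vote, minus the cost of getting informed – is what matters.

```python
# Illustrative sketch of Downs' "rational ignorance" calculus.
# All numbers are made-up assumptions; only the structure is Downsian.

def expected_net_benefit(benefit, electorate_size, cost_of_information):
    """Expected payoff to a voter from becoming informed before voting.

    benefit: personal value of the 'right' policy winning
    electorate_size: number of voters (pivot probability crudely ~ 1/N)
    cost_of_information: time and effort cost of getting informed
    """
    p_pivotal = 1.0 / electorate_size  # crude stand-in for being decisive
    return benefit * p_pivotal - cost_of_information

# Even a large personal stake is swamped by the odds of being decisive.
print(expected_net_benefit(benefit=100_000,
                           electorate_size=100_000_000,
                           cost_of_information=10))  # ~ -10: ignorance 'pays'
```

On these made-up numbers, becoming informed is a losing bet, which is the sense in which ignorance is "rational" for the individual voter even as it is ruinous in aggregate.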

One of the ways – and one that would be appealing to all political parties and free speech advocates – through which we can mitigate some of the problems in the current information regime would be to mandate the release of government information. Transparency has been found to increase teacher attendance in rural schools and improve the distribution of funds under the rural employment scheme in India, to increase the attendance of legislators in Uganda, etc. FOIA works to a great degree in the US, and it is arguable that the gravest trespasses occur where the reach of the statute is most limited – "national defense." Transparency is effective because it creates opportunities for accountability. It also, however, foments strategic compliance. For example, as I mentioned earlier, we now have fairly truthful ads that systematically misrepresent issues and the positions of the opposition. Graver still is the issue that transparency works only to a limited degree in a country where the people are apathetic and the media absent. For example, while America inarguably has the largest trove of publicly available data, including statistics on economics, labor, etc., it is hardly ever part of the public discourse. The corrective solution flows directly from the way I describe the problem – creating an aware media that works to highlight these facts and put issues and positions in context.

Let me take a moment to argue that, given that mass media are currently the dominant way through which people get information, any proposal for instituting epistemic controls has to cover the media. Similarly, any proposal for epistemic controls has to not only ensure the epistemic superiority of the resulting commentary but also regulate how it is presented so as to be useful to the masses. In other words, the framework has to take into account the state of the masses and how they process information. Given the psycho-cognitive research by Kahneman and others, we know that people regularly ignore base rate information in favor of illustrative anecdotes – the way news is traditionally offered. We also know from the priming literature that repeatedly bringing up an issue – regardless of context – increases its salience in decision making.

Given the difficulty of mounting such controls, perhaps the best epistemic intervention we can provide is an educative one. There is ample evidence that if we improve people's understanding of what constitutes valid and pertinent evidence, and even help inculcate simple skills like numeric literacy, we can have a tangible impact on the way people make decisions and how they respond to appeals.

The Democratic Idea of Knowledge

3 Jul

In their misguided battle against "global warming," the phrase "science is not a democracy" has become a favorite of conservative pundits.

Well, I am co-opting their slogan to initiate discussion about something else—the future of knowledge.

Let us imagine that Google's model of site ranking trumps all others. The Google model relies on the premise that if an information source is reliable, people will "vote" for it with a link: the more links to a site, the higher its ranking.

There are serious issues with this "democratic" model of ranking information. A falsehood linked again and again can acquire the same credence as a fact. The model is just an extension of the corner-shop gossip analogy, with one substantive difference – in a linked online world, the effects are not localized but global. The model is especially troubling given that acerbic, vituperative articles very likely get linked more than dry, measured pieces. To drive home the point, let me sketch out a particularly troublesome outcome – a world where all knowledge is hijacked by zealots of either persuasion.
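
To make the mechanics concrete, here is a minimal sketch of link-as-vote ranking in the spirit of PageRank. The toy graph, damping factor, and page names are my own illustrative assumptions, not Google's actual system; the point is only that a heavily linked falsehood outranks a lightly linked fact.

```python
# A minimal link-as-vote ranker in the spirit of PageRank.
# Toy graph and parameters are illustrative, not Google's actual system.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for t in targets:
                # Each outgoing link passes on a share of the linker's rank.
                new_rank[t] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Three blogs repeat a falsehood; one journal links to a fact.
toy_web = {"blog_a": ["falsehood"], "blog_b": ["falsehood"],
           "blog_c": ["falsehood"], "journal": ["fact"],
           "falsehood": [], "fact": []}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
# "falsehood" outranks "fact" purely on link volume.
```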

The democratic idea of knowledge rests upon the twin assumptions that information choices on the Internet are diverse enough to allow people to choose the source with the most accurate information, and that people who see the source with the "right information" will know it is the "right information" and will be involved enough to "vote" with their links to the site. The paradigm ignores one key thing – the Internet, counterintuitively, doesn't offer many choices. Ahem…

Hegemony is a problem not only with the mass-communication world but also with the Internet model. We are right now emerging from decades of a communication model dominated by mass media, where only a few outlets controlled the majority of the information. From this relatively oligarchical model, we are moving to a "distributed" model. The key problem with this distributed model is, firstly, that it is not really distributed. It is, in fact, narrower – a vast majority of people look for information using just three tools: Google, Yahoo, and MSN. And while these search engines spit out millions of results for our queries, studies show that most people never get beyond the first page. In this media market, the information hierarchy is in fact even more entrenched.

The other key issue at the heart of the problem of determining the veracity of information on the Internet is the Internet's relative anonymity. Pedigree is an important part of establishing the veracity of a piece of information, and the Internet's relative anonymity strips much of that pedigree away.

So, in all, the Internet poses unique challenges for the existence and acceptance of "real" knowledge.

Need for Epistemic Standards for Evidence and Arguments in Governance

2 Jul

Claim:
Explicit standards for evidence and argument are critical in a competitive system where competing groups have palpable incentives to withhold information, monger stilted information, use irrelevant information, or use any tactic to win.

Argument:

Different branches of the US government use different epistemic standards.

The US judicial system uses the adversarial system, in which each of the parties presents its case to a neutral party (judge or jury). Each side is supposed to furnish evidence in support of its argument, and an 'impartial' judge decides which evidence is better in terms of its applicability and strength.

The adversarial system is a competitive system that relies on the sparring parties to furnish evidence. Like any competitive system, it gives the sparring parties incentives to withhold information from each other and to misrepresent information. The system relies on the 'other' party – and sometimes on the neutral party – to excavate any such violations. There are also formal procedures that limit the kinds of evidence that can be presented (though some are rooted in alternate theories), procedures for sharing corroborative evidence, and rules about the kinds of arguments that can be made.

The adversarial judicial process inarguably uses the strictest standards of evidence of any branch of government.

While the legislative process is largely a ‘competitive’ system, it has no formal epistemic standards limiting the kind of evidence or arguments that can be presented. The strength of the evidence presented, its applicability, etc. are either ‘judged’ by ‘citizens’ (substantially mediated by media) or by members of the other competing party.

The problem with the legislative branch is not only that it is a competitive system, but that it is a corrupt, special-interest-driven competitive system. The system provides little incentive for members to judge the evidence impartially, with the nation's best interests in mind.

There are no epistemic standards that hold back the executive branch except for some loose constraints that tie those standards to the marketability of a particular policy decision.

Congress also uses the ‘Inquisitorial system’ when it conducts ‘Congressional Hearings’ to ‘investigate’ a particular issue. Of course, due to partisanship pressures, the inquisitorial system often uncomfortably borders on ‘inquisition’.

Solution:

One way to correct the problem would be to create governance structures that explicitly involve independent bodies that judge the strength and applicability of evidence presented.

Opining About the Perils of Opinions: The Opinion Poll Model of Policy Making

29 Jun

Public opinion is central to the democratic political process, and it has never been more important than today, when opinion poll numbers are constantly cited in the media to buttress policy choices. It hence behooves us first to understand opinions and the process of opinion change, and then to think critically about whether the causal mechanisms driving 'opinion change' are commensurate with the expressed ideals of 'democracy.'

What is the value of an ill-considered opinion from a person with limited knowledge of the facts? Close to none, one would expect. But apparently, it is worth much more in a policy debate on the Hill, if sound bites by politicians quoting poll numbers to buttress the validity of their issue positions are anything to go by.

Courtesy of significant advances in sampling methodology, communication technology, and computational technology, one can now conduct a nationwide opinion poll (relatively) cheaply and quickly. Every major media company, from the New York Times to Fox News, now publishes stories about the 'findings' from the polls with unrelenting frequency and drops these numbers casually on nearly every policy issue, let alone questions like, 'What should Paris Hilton eat for breakfast?'

Given the important role that media have in 'framing' the issue (Iyengar), and the fecundity of the polls, news media now often cite figures from opinion polls as part of a story on an issue and ask politicians to defend their policy choices (in six seconds or less) given the poll numbers. Correspondingly, politicians increasingly cite poll numbers on issues as corroboratory evidence for or against a policy direction.

Where do opinions come from?

In a culture that values 'individual expression' above everything else, it isn't surprising that people offer opinions on issues they know little to nothing about. Funnily, and as has been extensively documented in the political science literature, people not only offer opinions about things they know nothing about, they also offer opinions about non-existent (phantom) issues (Lippmann, 1993, and others). Krosnick et al. have posited a more benevolent interpretation of 'phantom opinions,' arguing that these opinions originate from a 'violation of communication norms.' Even if Krosnick is right, there is wide agreement within the field that the general public which makes it to the voting booth and gleefully casts its vote (a behavior strongly based on overall opinion) is deeply ignorant about most issues.

Leaving aside 'phantom opinions,' let us try to understand where opinions come from. "Every opinion is a marriage of information and values – information to generate a mental picture of what is at stake and values to make a judgment about it" (Zaller, 1991). It is important to notice how Zaller uses the term 'information,' which he describes later in the paper as whatever political information a person consumes via media or other ways. By limiting himself to political information, Zaller mistakenly assumes that political opinion making sits in an isolated bunker in people's minds, affected only by relevant political information. Neuman, in his book Common Knowledge, has persuasively argued to the contrary. Leaving Neuman's objections aside, it is easy to surmise that 'information' generally has little to do with the facts of the case. Secondly, we have yet to tackle how much of an opinion is driven by 'values' and how much by 'information,' but it seems intuitive that the mix would vary depending on a variety of factors, ranging from the issue at hand (an opinion on a value issue like abortion would inarguably have a higher percentage of 'value' than one on economic policy) to need for cognition (people with a higher need for cognition would use more 'information'), cognitive ability, amount of information, etc.
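
To fix ideas, here is a toy formalization of that mix – my own illustrative sketch, not Zaller's model – treating an opinion as a weighted blend of an informational assessment and a value judgment, with the weight shifting by the nature of the issue.

```python
# A toy rendering of "opinion = marriage of information and values."
# The blending rule and all numbers are illustrative assumptions, not Zaller's.

def opinion(information: float, values: float, value_weight: float) -> float:
    """Blend a factual assessment and a value judgment, each on [-1, 1].

    value_weight: share of the opinion driven by values (0 to 1);
    plausibly higher for a value issue like abortion, lower for, say, tariffs.
    """
    return value_weight * values + (1 - value_weight) * information

# Same person, same 'information'; the nature of the issue shifts the mix.
print(opinion(information=-0.2, values=0.9, value_weight=0.8))  # value issue: ~0.68
print(opinion(information=-0.2, values=0.9, value_weight=0.3))  # economic issue: ~0.13
```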

Normative Questions

Since the publication of The American Voter (Campbell et al.) and Converse's later explications (1964, 1970), which described the average American voter as apathetic and largely ignorant about major issues, political theorists have grappled with the threat that an uninformed voter poses to claims of the normative superiority of democracy. If democracy was to be claimed as a 'normatively superior system' in itself, without resorting to claims about its superiority as an instrumental good that provided 'better governance,' it was important for political theorists to argue that voters voted their interests – a claim no longer possible in light of evidence that pointed to widespread ignorance. The more severe threat, which theorists are rightly concerned about, is whether the democratic system can continue to deliver its benefits if voters cease to vote 'their interests,' given that a lot of the benefits of the system are predicated on that assumption. While the conjecture is open to empirical analysis, we can theoretically analyze the value of an 'opinion' in an ideal democratic model.

The value of an expressed opinion in a democracy is directly proportional to its ability to tap into a voter's 'real interests' – best understood as the interests the voter would endorse under 'full knowledge.'

Ideal Opinions, Opinion Aggregation

The value of an opinion cannot be pried apart from the system in which it is used. The composition of the 'ideal opinion' would vary according to the system. Since we are talking about a democracy, let's analyze its composition here.

The value of an opinion in a discussion lies in whether it reveals a hitherto unknown piece of information. In a poll where you are asked to furnish your 'considered preferences,' that benefit is lost to some degree. One can argue that there is indeed some knowledge hidden in the choice, but since a poll weighs considered choices equally with ill-considered ones, and because we don't know what led to the final choice, it is impossible to argue whether democratic majorities do bring forth collective knowledge. The only condition under which the scenario would hold is when a majority of the voters vote for the 'right' preference.
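
A small simulation makes that condition vivid. This is the logic of Condorcet's jury theorem – a framing the post doesn't invoke by name, so take it as my gloss – with illustrative competence numbers: majorities aggregate knowledge only when the average voter is more likely than not to pick the 'right' option.

```python
# Majority voting as knowledge aggregation (Condorcet-jury-theorem logic).
# Electorate size, trial count, and competence levels are illustrative.
import random

def majority_correct_rate(competence, voters=1001, trials=2000):
    """Fraction of simulated elections in which the majority picks the
    'right' option, with each voter independently correct with
    probability `competence`."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < competence for _ in range(voters))
        wins += correct_votes > voters / 2
    return wins / trials

random.seed(0)
for p in (0.45, 0.51, 0.60):
    print(p, majority_correct_rate(p))
# Below 0.5 the majority is almost always wrong; above 0.5 it is usually
# right, and overwhelmingly so as individual competence grows.
```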

Let us now assume 'full information' and treat the 'value system' as the only variable. If there is a 'common,' universally accepted value system, then there is little value in soliciting opinions from everybody. It is when you start thinking about 'averaging' across value systems – a tenuous concept at best – that you need to think about soliciting opinions from others. Polling the populace on a fixed choice of politicians is an impoverished way to go about tapping into the 'people's will.'

Perhaps the way to tackle the problem is by breaking it down into a two-step problem – information maximization and 'averaging' over values. I believe we have the scientific community to address the former; I do not have an answer to the latter, but perhaps informed deliberation about consequences – which involves a 'public exchange of reasons' – would be one way to approximate it.

Notes on Partisanship

25 Jun

Manipulating the Median Voter Theorem

It is commonly touted that elites are far more partisan than the rank and file. One would have thought that, in accordance with the median voter theorem – a simple majority-voting model for a single-dimension issue space proposed by Duncan Black and later popularized by Anthony Downs – the elites would be under pressure to have public ideological profiles that appeal to the median voter.
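
As a toy illustration of the pressure the theorem predicts, consider the sketch below. The voter positions are made-up points on a one-dimensional left-right line, and the two-candidate, closest-candidate voting rule is the textbook simplification, not a claim about real elections.

```python
# The median voter theorem's convergence pressure, in miniature.
# Voter positions are illustrative points on a one-dimensional issue space.
import statistics

voters = [-0.9, -0.6, -0.4, -0.1, 0.0, 0.2, 0.3, 0.5, 0.8]
median = statistics.median(voters)  # 0.0 here

def vote_share(candidate, rival):
    """Each voter backs the closer of two candidates; exact ties split."""
    share = 0.0
    for v in voters:
        if abs(v - candidate) < abs(v - rival):
            share += 1
        elif abs(v - candidate) == abs(v - rival):
            share += 0.5
    return share / len(voters)

# Wherever the rival stands, the candidate at the median never loses.
for rival in (-0.8, -0.2, 0.4, 0.9):
    print(rival, vote_share(median, rival))  # always >= 0.5
```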

This seemingly 'irrational' behavior of the elites can be explained in a variety of ways: the average voter – which includes only the people who do vote – is more partisan than the average eligible voter; the average 'voter' chooses a candidate based on vague personality and party cues rather than on specific issue positions (of which they are largely unaware); voters' issue positions are incestuously linked to the positions outlined by the candidates they 'like'; and elites gerrymander the multi-dimensional issue space so that the salient issue(s) on which the average voter votes are ones on which the elites hold positions similar to the 'median voter's.'

Party and Partisanship

While the overall impact of parties has waned over the years, the party 'line' exercises ever more control over candidates' professed positions. In this world of continuous media coverage, there is increasing pressure to present a consistent, party-approved stance. At the other end, a strong self-selection process, precedent, and certainly fear of how each 'off-message' comment would be interpreted in the media are driving an assembly line in which, generally, only candidates who profess abiding faith in party ideology succeed in the primaries.

There is certainly an increasing gap – and a deliberately cultivated one – between the message, the voting record, and the candidate's opinion. The partisanship is held together by 'partisan money' and custom-ordered research, produced by think tanks to justify and corroborate whatever policy initiative they are asked to.

Media and Partisanship

Horse Race format of covering policy

The other aspect of the media's impact on partisanship has been driven by how they cover political issues – be it immigration or Iraq. The much-decried horse-race coverage, once a preserve of election coverage, has now entered the policy domain. A large number of newspaper articles give an insider view of politicking and the impact of a policy decision on the party rather than on, say, the nation. While covering a news story, journalists now go from politician to politician seeking quotes, which they then use to provide worthless hack analysis in the words of politicians. Nowhere do journalists stop and question the policy stances independently, aside from what the 'other side' chose to point out. By doing this, they do two things – they fail, first of all, to provide substantive, useful information to their readers, and secondly, by weaving in partisan cues, they give readers automatic pointers to devalue certain information.

Partisan Identities: Using anger and satire

The rise of humorous "fake" news shows satirizing politics over the past decade – most prominently "The Daily Show" with Jon Stewart – has been widely seen as an unmitigated positive by a lot of self-identified 'liberals.' What 'liberals,' cozy in the success of a liberal comedy show, fail to realize is the pernicious aspect of satire – it delegitimizes opposing viewpoints without proper analysis. It is only a matter of time before right-wing 'news' channels come up with their own liberal-baiting satire shows.

The other prominent way to delegitimize opposing opinion is through self-righteous anger. This is, of course, most prominently done by right-wing pundits like Bill O’Reilly and Rush Limbaugh.

While Bill O'Reilly's "No Spin Zone" is a stylized partisan lynching of liberals, Stewart's satire is the vicious, intelligent kind that ridicules the 'idiots.' The shows use every rhetorical (and editing) trick not only to defeat the opposing party but to do so in the most viciously incendiary manner that entertains the partisan viewers.

Both anger and satire are explicit identity-building and reaffirmation rituals. What we see when straw-man 'guests' get grilled on these shows is identity reaffirmation for the viewers – these people in the opposition really are immoral, corrupt idiots.

Perhaps of much more concern is the rise of entire partisan news channels. While there wasn't much 'news' on the 'news channels' to begin with, and the 'news' coverage continues to cede territory to celebrity coverage, whatever shriveled carcass was left is now being preyed upon by explicitly partisan coverage. There are no longer undisputed facts – there are now Republican facts and Democratic facts. And of course, both bear little resemblance to actual facts.

Democratic Pandering

9 Jun

“Mr. Bush said Putin’s recent harsh comments toward the West suggests he may be trying to build support for his party in advance of next year’s elections, and the president saw that as positive. He said, quote, “When public opinion influences leadership, it is an indication that there is involvement of the people.” (Fox Transcript)

The argument that Mr. Bush is making here is that when leaders deliberately pander to fear and engage in warmongering, it is a signal that the country is democratic. Alternatively: deliberate, unethical manipulation of public opinion to garner votes is a positive.

Democracy: Whither Epistemic Validity?

15 Mar

It doesn't take long for a person to realize that the current democratic model is deeply flawed. The continued failure of about thirty percent of Americans to realize that Iraq did not have weapons of mass destruction speaks volumes about the limitations of the current information stream and the democratic system based on it. As our democracy stands now, it works – or more accurately, doesn't work – in the following way: it takes three years of continuous coverage that the war is going catastrophically for about 70% of citizens to finally realize that it is indeed going badly. In other words, the current democratic model not only has a substantial time lag in information dispersal (and, hopefully, action) but is also a model that doesn't respond to slowly building problems, like a gradual increase in poverty. It is a 'frog in the hot water' model, oblivious to the gradual rise in temperature. And while we respond to pointless scandals and excel at slaying imaginary ghosts, we can build little momentum towards solving some of the most exigent problems in an optimal way. I argue that the current state of democracy has a lot to do with its modern origins, which were based on that period's exigencies and the then-prevailing wisdom (Adam Smith).

Accounts of the modern origins of democracy, which typically begin with the democratic US, point to a system formed in response to elite and colonial excesses. The chief worry at the time was to prevent the exercise of power by a small minority with no vested stake in the welfare of the masses. Hence, appropriately, the system of democracy that was formed was tailored towards distributing power to common citizens and, in turn, maximizing the legitimacy of the decisions made. Critically, since the British excelled at monopoly, the 'founding fathers' (themselves rich) strove to institute capitalist attitudes towards trade, private ownership, and business.

Modern democracy was never geared towards coming up with the 'best' decision or maximizing some other utility function. To analyze democracy's claims to making the 'best' decisions, one has to make a number of leaps, including that every citizen is aware of his self-interest and the larger public's interests; that each citizen forcefully hawks his or her ideas in the marketplace of ideas; and that the best information and best arguments will win in this marketplace and form the basis for legislation. In other words, the claims to the normative superiority of democracy seem to come from a reasonably well-functioning market of ideas – a market driven not by the most saleable or seductive ideas but by the 'best' ideas (which, it hopes, would sell the most). This, in turn, seems like a particularly botched hypothesis in a market with pervasive ignorance, as Converse et al. have shown.

The concept of an idea marketplace deserves further attention, for that is where all the possible benefits of democracy are actually supposed to accrue from. The fact is that while a lot of theoretical energy in the field of democratic theory has been tailored towards justifying the moral superiority of democracy over other systems – an ailment that I believe can be traced to Cold War days – there has been little focus on critiquing the fundamentals of democracy. If we look at the period just before the Cold War, there was a lot of intellectual energy invested in analyzing whether having a capitalist economic system puts the functioning of the marketplace of ideas at risk. There is little doubt in my mind that if profiteering is the guiding principle of information distribution, let alone of the entire society, good information – a requisite for the marketplace of ideas and for citizenship – will not flow unpolluted. The idea that the market, left alone, can decide and assess an accurate value for each piece of information and deliver it to the citizen at the appropriate time in an appropriate manner is ludicrous at best. It comes as no surprise to me that the economic market has increasingly usurped the democratic marketplace of ideas. A prime exemplar of the usurpation is the proclamation commonly attributed to the head of General Motors: "What's good for General Motors is good for America."

There are two points one can glean from the above discussion. One is that there is little doubt that the current democratic system is fatally flawed, and that its flaws primarily stem from a stilted realization of the marketplace of ideas. The other is that if we indeed want to continue with some form of governance that takes public opinion into account, we must strive to make the public more informed about issues. To the extent that people can be made more informed by instituting reforms in the media, we must do so. Alternatively, we can try to come up with better decision-making models that give citizens better incentives to be informed and lawmakers better incentives to aggregate choices with less pressure from lobbyists. The deliberative polling model – which takes a random, representative sample of the populace and lets it deliberate about issues – does just that, but it fails to fix the malaise that afflicts the wider body politic. It is likely that a combination of the above two methods presents us with the best chance of succeeding as a democracy.