Idealog: Opening Interfaces to Websites and Software

20 Mar

Imagine the following scenario: You go to NYTimes.com and are offered a choice among a variety of interfaces – not just skins or font adjustments – built for a variety of purposes by whoever wants to put in the effort. You get to pick the interface that is right for you, one that carries the stories you like, presented in the fashion you prefer. Wouldn’t that be something?

Currently, we are made to choose between weak customization options and building some ad hoc interface with RSS readers. What is missing is a selection of interfaces, open-sourced or internally produced, that caters to the diverse needs and wants of the public.

We need to separate data from the interface. Applying this to the example at hand: the NY Times is in the data business, not the interface business. So, like Google Maps, it can make its data available, with some stipulations (including monetization requirements), and let others take charge of thinking creatively about how that data can be presented. If this seems too revolutionary, more middle-of-the-road options exist. The NY Times could let people build interfaces that are then made available on the New York Times site. For monetization, it could reserve areas of screen real estate, or charge users for ad-free interfaces.
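
To make the data-first model concrete, here is a minimal sketch assuming a hypothetical JSON endpoint; the URL, field names, and key parameter are all invented for illustration, not a real NY Times API. The point is only that once the data layer is exposed, a bare, ad-free headline list is a few lines of client code, and richer interfaces can compete on equal footing:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: assumes the publisher exposes raw stories as JSON.
# URL, field names, and key parameter are illustrative, not a real NYT API.
FEED_URL = "https://api.example-news.com/v1/stories?section=world&key=YOUR_KEY"

def fetch_stories(url=FEED_URL):
    """Pull the raw story data; what interface sits on top is up to the client."""
    with urlopen(url) as resp:
        return json.load(resp)["stories"]

def headline_only_interface(stories):
    """One of many possible interfaces: a bare, ad-free headline list."""
    for s in sorted(stories, key=lambda s: s.get("published", ""), reverse=True):
        print(f"{s['published'][:10]}  {s['headline']}")

if __name__ == "__main__":
    headline_only_interface(fetch_stories())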

This trick can be replicated across websites, and easily extended to software. For example, MS Excel could have a variety of interfaces – all easily searchable, downloadable, and deployable – that cater to the specific needs of, say, chemical engineers, microbiologists, or programmers. The logic remains the same: Microsoft needn’t be in the interface business – or, more limitedly, needn’t control it completely and inefficiently (it does allow tedious customization) – but can be a platform on which people build and share innovative ways to exploit the underlying engine.

An adjacent, broader, and more useful idea is a rich interface-development toolkit that provides access to processed open data.

Idealog: Internet Panel + Media Monitoring

4 Jan

Media scholars have long complained about the lack of good measures of media use. Survey self-reports are notoriously unreliable, especially for news, where there is significant over-reporting, and without good measures, research lags. The same is true for most research in marketing.

Until recently, the state-of-the-art aggregate media use measures were Nielsen ratings, which put a ‘meter’ in a few households or asked people to keep a diary of what they watched. In short, the aggregate measures were pretty bad as well. Digital media, which allow for effortless tracking, and the rise of Internet polling, however, provide for the first time an opportunity to create ‘panels’ of respondents for whom we have near-perfect measures of media use. The proposal is quite simple: create a hybrid of Nielsen on steroids and YouGov/Polimetrix- or Knowledge Networks-style recruiting of individuals.

Logistics: Give people free cable and Internet (~80/month) in return for two hours of their time per month and monitoring of their media consumption. Pay people who already have cable (~100/month) for installing a device and software. Recording channel information is enough for TV, but the Internet equivalent of a channel – the domain – clearly isn’t, as people self-select within websites. So we need to monitor only the channel for TV, but more than the domain for the Internet.

While the number of devices on which people browse the Internet and watch TV has multiplied, there generally remains only one ‘pipe’ per house. We can install a monitoring device at the central hub for cable, automatically install software for anyone who connects to the Internet router, or do passive monitoring on the router itself. Monitoring can also be done through applications on mobile devices.
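
As a toy illustration of what such passive monitoring might record, here is a sketch that tallies requests per domain from invented access-log lines; the log format is assumed, not taken from any real router:

```python
import re
from collections import Counter

# Toy tally of the kind of passive log the post envisions: count requests per
# domain from access-log lines. The log format here is invented; for the web,
# a real deployment would need richer, within-site detail than the domain.

log_lines = [
    "2007-01-04T19:02:11 GET http://news.example.com/world/story123",
    "2007-01-04T19:05:40 GET http://news.example.com/sports/story456",
    "2007-01-04T19:09:02 GET http://video.example.org/clips/789",
]

domains = Counter(
    re.search(r"https?://([^/\s]+)", line).group(1) for line in log_lines
)
print(domains.most_common())  # [('news.example.com', 2), ('video.example.org', 1)]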

Monetizability: Consumer companies (say, Kellogg’s or Ford), communication researchers, and political hacks (e.g., how many watched campaign ads?) will all pay for it. The crucial (if modest) innovation is the possibility of surveying people on a broad range of topics, in addition to getting great media use measures.

Addressing privacy concerns:

  1. Limit recording to certain channels/websites – for example, ones on which customers advertise. This changing list can be made subject to approval by the individual.
  2. Provide a web interface where people can review/suppress the data before it is sent out. Of course, reconfirm that all data are anonymous, to deter such censoring.

Ensuring privacy may lead to some data censoring, and we can try to prorate the data we get in a couple of ways –

  • Survey people on media use
  • Use Television Rating Points (TRP) by sociodemographics to weight the data (a minimal weighting sketch follows).
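
The weighting step could look something like the following post-stratification sketch; the groups, population shares, and viewing hours below are invented for illustration only:

```python
from collections import Counter

# Minimal post-stratification sketch: weight panel members so the panel's
# sociodemographic mix matches known population shares (e.g., derived from
# TRP breakdowns). All numbers here are toy values.

panel = [
    {"id": 1, "group": "18-34", "hours_tv": 2.0},
    {"id": 2, "group": "18-34", "hours_tv": 3.5},
    {"id": 3, "group": "35+",   "hours_tv": 4.0},
]
population_share = {"18-34": 0.30, "35+": 0.70}  # assumed external benchmark

counts = Counter(p["group"] for p in panel)
# Weight for a group = population share / panel share.
weights = {g: population_share[g] / (counts[g] / len(panel)) for g in counts}

total_weight = sum(weights[p["group"]] for p in panel)
weighted_mean = sum(p["hours_tv"] * weights[p["group"]] for p in panel) / total_weight
print(f"Weighted mean TV hours: {weighted_mean:.2f}")  # ~3.62 with these toy numbers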

Marginal value of ‘Breaking News’

12 Mar

Media organizations spend millions of dollars each year arranging the logistics of providing ‘breaking news’. They send overzealous reporters and camera crews to far-off counties (and countries, though not as often in the US), pay for live satellite uplinks, and pay for numerous other little logistical details that make ‘live news’ possible. They do so primarily to compete with and outdo each other, but when queried they may regale you with the mythical benefits of delivering news of the death of a soldier in Iraq a few minutes early to a chronically bored, apathetic US citizen. The fact is that for a large range of events, breaking news has little or no value whatsoever for a citizen. Breaking news is provided primarily as a way to introduce drama into the newscast, and it is done in a style that exaggerates the importance of the minuscule and the irrelevant.

The more insidious element of breaking news is that repeated stories about marginal events – which most breaking news events are: a small bomb blast in Iraq, a murder in some small town in Michigan – provide little or no information to the citizen consumer about the relative gravity or importance of the event. In doing so, they make the citizen consumer think either that all news (and all issues) is peripheral, or that these minor events are of critical importance. Either way, they do a disservice to society at large.

This doesn’t quite end the laundry list of deleterious effects of breaking news. The focus on breaking news ensures that most attention is given to an issue when the journalists on the ground typically know the least about it. To take this a step further: the ‘sources’ for reporting during the initial few minutes of an event are often ‘official sources’. The breaking news format thereby legitimizes the official version of events, which then gets corrected a week or a month later in the back pages of a newspaper.

While there is little hope that the contagion of ‘breaking news’ will ever stop (and it stands to reason that the web, radio, and television will continue to be afflicted by the malaise), it is possible for people to opt for longer, better-reported articles in good magazines, or to learn about an issue or an event through Wikipedia, as Chaste suggested earlier in his column for this site.

Conversation With Bill Thompson: Fragmented Information

10 Mar

This is the fourth and concluding part of the interview with BBC technology columnist, Mr. Bill Thompson.

part 1, part 2, part 3

This more or less completes two of the major questions that I had. I would now like to move on to digital literacy and the fragmented informational landscape. Google has made facts accessible to people – too accessible, some might say. What Google has done is allow people to pick up little facts, disembodied and stripped of contextual information. It may produce a consumer with a very particularistic trajectory of information and opinions. Do you see that as a possibility, or does the fundamentally interlinked nature of the Internet somehow manage to make information accessible in a more complete way? On a related point, do you see that while we are becoming information-rich, we are simultaneously becoming knowledge-poor?

That is such a big question. In fact, I share your concerns. I think there is a real danger – it’s not even just that there is a sort of surfeit of facts and a lack of knowledge; it’s that the range of facts available to us becomes defined by what is accessible through Google. And as we know, Google, or any other search engine, only indexes a small portion of the sum of human knowledge, of the sum of what is available. And this effect becomes self-reinforcing: somebody researching something searches on Google, finds some information, reproduces that information and links to its source, and it therefore becomes even more dominant, more likely to be the thing people find the next time they search. As a result, alternative points of view, more obscure references, the more complex stuff which is harder to simplify and express, drop down the Google ranking and essentially become invisible.

There is much to be said for hard research that takes time, that is careful, that uncovers this sort of deeper information and makes it available to other people. We see this in the world of non-fiction publishing – particularly, I think, with history: every year or two we see a radical revisionist biography of some major historical figure based on a close reading of the archives, or on access to information which was previously unavailable. So all the biographies of Einstein are having to be rewritten at the moment because his letters from the 1950s have just become available, and they give us a very different view of the man, and particularly of his politics. Now, if our view of Einstein were defined by what Google finds out about Einstein, we would know remarkably little. So we need scholars, we need the people who will always delve a little more deeply, and there is a danger that in the Google world it becomes harder to do that, and fewer people will even have access to the products of their [careful researchers’] work, because what they write will not itself make it high up the ranking, will not have a sufficient ‘page rank’.

So I actually do think Google and the model of information access which it presents us is one that should be challenged and it should only ever be one part of the system. It is a bit like Wikipedia. I teach a journalism class and I say to my students that Wikipedia may be a good place to start your research but it must never be the place to finish it. Similarly, with Google, anybody who only uses the Google search engine knows too little about the world.

You bring up an important point. Search engine design, and other web usage patterns are increasingly channeling users to a small set of sites with a particular set of knowledge and viewpoints. But hasn’t that always been the case? An epidemiological study of how knowledge has traditionally spread in the world would probably show that at any one time only a small amount of knowledge is available to most people while most other knowledge withers into oblivion. So has Google really fundamentally changed the dynamics?

You are trying to do that to me again and I won’t let you.

This is not a fundamental shift in what it means to be human. None of this is a fundamental shift in what it means to be human. Things may be faster, we may have more access, or whatever, but we have always had these problems and we have always found solutions to them. And I am not a sort of millennialist about this; I don’t think this is the end of civilization. I think we face short-term issues, and we have historically found a way around them and will again. Google’s current dominance is a blip, in a sense – it will go, I don’t know how. OK, here’s a good way in which Google’s dominance could go. At the moment we have worries in the world about H5N1 avian flu mutating into a form which infects humans. Let’s just suppose that this happens, and that somebody somewhere writes an obscure academic paper which describes, basically, how to cure it and how to prevent infection in your household. Well, all the people who rely on Google won’t find this paper and will die, and all the people who go to their library and look up the paper version will live, and therefore the Google world will be over. How about that? Something, perhaps not quite on that scale, will happen which will force us to question our dependence on Google, and that would be a good thing. We shouldn’t ever depend on anyone like that.

You know Mr. Thompson, even libraries have sort of shifted. They are increasingly interested in providing Internet access.

Yeah, it is, and it is search rather than structure. And you know, the fact is that search tools make it easy to be lazy, and we are a lazy species, and therefore we will be lazy and carry on being lazy until something bad happens because of our laziness, at which point we will mend our ways.

That’s why I brought up the question of fragmented knowledge earlier. One of my close friends is blind, and he generally has to read through a book to reach the information that he wants. He tends to have a much fuller idea of context, and the kind of corroboration he presents is very different from the casual, scattered, anecdotal argumentation that others present. Of course, part of that is a function of his being a conscientious arguer, but certainly part of it stems from his not having as many shortcuts to knowledge, and so actually having a fuller contextual understanding of the topic at hand. The fact is that most users can now parachute in and out of information, and Google has made that easier.

I don’t think we see what’s really going on. There is a lot more information, and there is a lot more to cope with, and this superficial skimming is a very effective strategy. Skim reading is something we know how to do, something we teach our children how to do and value in ourselves and indeed in them, and skim surfing is just as valuable. You know, I monitor thirty to forty blogs, news sites, and things like that, and when I am doing that, I don’t look too closely at things. That doesn’t mean that I don’t have the ability or the facility to do something a lot deeper and a lot more involved.

I have a fifteen-year-old daughter. She is doing her GCSE exams this year. And I have watched over the last 18 months or so how she has developed her ability to focus, her research skills, her reading around a subject; she is surrounded by a pile of books, and she has stopped using the computer as the way to find things quickly because she now needs to know stuff in depth. So I suspect that, observing children from the outside, we see them in a certain way because we only see part of what they do, and we have to look in more detail. It is too easy to get the wrong idea, and actually I am a lot more hopeful about this, having seen it with my daughter, and I think I will start to see it with my son, who is fourteen at the moment. Again, I see his application to the things he cares about and the way he searches. He is a big fan of Oblivion, the Xbox game, and his engagement and the depth of his understanding are immense. So we shouldn’t let the fact that we look at some domain of activity where they are purely superficial blind us to the other areas where it is not superficial at all, where they have developed exactly those skills we would want them to have.

——–

Bill Thompson’s blog

Conversation With Bill Thompson: Copyright Law

8 Mar

This is part III of a four-part interview with Mr. Bill Thompson, noted technology columnist with the BBC.

part 1, part 2

“Copyright is not a Lockean natural right but is a limited right granted to authors in order to further the public interest. This principle is explicitly expressed in the U.S. Constitution, which grants the power to create a system of copyright to Congress in order to further the public interest in ‘promoting progress in science and the useful arts.’” (Miller and Feigenbaum, Yale) The UK’s copyright law dates back to the Statute of Anne of 1709, which is titled “An Act for the Encouragement of Learning, by vesting the Copies of Printed Books in the Authors or purchasers of such Copies, during the Times therein mentioned.” Both seem to see copyright as something tailored towards the public good. The modern understanding of it has disintegrated into a sort of “right to make as much money as one can”. Am I correct in saying that? Please elaborate your views on the subject.

Copyright started out as an attempt to restrict the ability of publishers of books to control absolutely what they did under contract law, to establish limits on the period in which a work of fiction, or indeed any written work, could be exploited by one group of people, and to ensure that after a certain amount of time it became part of the public domain and served the public good. So copyright has always been about taking away the absolute right that the creator of a work of art, fiction, literature, or non-fiction would otherwise have, so that everyone can benefit: take away the absolute right and give in return a monopoly over certain forms of exploitation, during which period the creator is expected to make enough money, or gain enough benefit, to be encouraged to carry on creating.

So the idea is that it is a balance – give creators enough so that they can create more, and encourage them to do so because it is good, but make sure that the products of their creative output fall into the public domain so they can be used by everyone for the wider good, on the grounds that you can never know in advance who will make the best use of someone else’s creative output, and therefore it should be available. So, the fact that in the early years of the last century a cartoonist in the United States called Walt Disney drew a mouse based on other people’s ideas is great, and Disney and his family have had a lot of time to exploit the value in the mouse, but there are other people now who could do a better job with it, and they should be allowed to get their hands on the mouse and do cool stuff with it. That’s the idea, and that is the principle being broken by large corporations who see an economic advantage to themselves in extending the term of copyright and in limiting the freedoms that other people have, because they don’t care about the public good; they care about their own good. And legislatures, particularly in the United States but also elsewhere, have been bought off, corruptly or not, and have not been true to the original principle, which is that in the end it should all go into the public domain so that anybody who wants to can make use of it and exploit it in creative ways that we cannot yet imagine. In a sense it’s an expression of humility – it’s saying that we cannot know for sure who will be able to do the best with a work, and therefore it is in the interest of everybody that it should be available to everybody. That was the breakthrough – the insight – of copyright law 300 years ago. We are coming up on the 300th anniversary of the Statute of Anne, the first codified copyright law, and I think we should have a big party for it.

The point is most eloquently made not by Larry Lessig, who is good, but by Richard Stallman of the Free Software Foundation, and his point is just that copyright is broken, that it needs to be rebalanced, and that we need a new and different approach to copyright; in a sense it is the one area of law where we actually do need to start again. I am always an advocate of trying to make old laws work with new technologies. I think that we should be very cautious about making new laws, because looking back historically it does look like today’s politicians are more stupid and more corrupt than those of older days, and are therefore less likely to make good laws – that just seems to be the case. Correct me if I am wrong. And therefore we should avoid giving them the ability to screw things up. But with copyright, we are forced to. So we have to engage with the political system, we have to make sure that the people who have political power understand the issues, and we have to force them to do the right thing. In other areas – for example, libel law and all sorts of other aspects of what we do online – the existing legal framework has proven remarkably robust. There have been problems over jurisdiction and problems over enforcement, but the laws themselves have applied pretty well in the networked world, and we haven’t needed that many new laws, and that is a good thing. Copyright is the one area where we clearly do.

Copyright, if minimally construed, is the right to produce copies. This particular understanding is fabulously unsuited for the Internet era where technology companies like Google have a business model based on making daily copies of content and making it searchable. Book publishers, along with some other content producers, have cried foul. It seems to me that they don’t understand the Internet model, which in a way has changed the whole dynamic of ‘copying’.

I don’t think it has changed the whole dynamic so much as it has exposed another reading of the word copy, made it the dominant reading, and so undermined part of the bargain. The parliamentary draughtsmen, the people who wrote those laws, were perfectly right to use the word as they did; it is just that we have promoted one particular facet of copying. The fact that we use the word copy to refer to the version that is made in the course of viewing a webpage in a browser – the version that is held in display memory and all those sorts of things – means we could have avoided a lot of this fuss by redefining what the word copy means thirty or fifty years ago, or by just not using the word. It wouldn’t actually have helped the larger issue, though, because the real problem with copyright is not that too many incidental acts on our computer systems and networks are in principle in breach of copyright; it’s the fact that the existence of the network makes it possible to breach copyright deliberately, almost maliciously.

As we talk, I am waiting for Episode 13 of Series 3 of Battlestar Galactica to download onto my PC via BitTorrent from the United States so that I can watch it. OK! Now that is a complete infringement of copyright.
[I reply jokingly – so I am going to the MPAA.] Feel free; I would welcome their letter. I will delete it once I have watched it, and I will buy the DVD once it comes out. But Sky here hasn’t started showing it four months after it aired on the Science Fiction channel. Well, I am not going to wait four months to watch something that is already available. I mean, that’s just foolish. It exposes holes in copyright law. It also exposes holes in the economic strategy of the multinational corporations who run the broadcast industry in the UK and the US, because they just don’t understand the market or what people are doing. There are times when you have to stretch the system to demonstrate the absurdity of the old model, and that’s what I see myself as doing.

The US and EU copyright regimes differ in some marked ways. Similarly, Australian copyright law differs in its term of protection, which is shorter than that of the US. Post-Internet, we really do need a common international framework for copyright.

But we do. We have that. We have the World Trade Organization, we have WIPO – the World Intellectual Property Organization – and we have the Berne Convention signatories. There is an international framework for copyright. It’s as broken as everything else. We need a new Berne; we need to go back to Switzerland and renegotiate what copyright means at a global level. The framework is there, but it’s been caught out by technology.

Databases are given legal protection in the EU via its database directive, while similar privileges haven’t been granted in the US. What do you make of this effort to extend copyright to databases?

That’s just a European absurdity which we will realize was a mistake and eventually change. You have a database copyright in the European Union and in some other countries, though not in the United States, and it is clearly a mistake. There is growing awareness that something needs to be done about it, because it’s not necessary to offer such protection. The idea that you get automatic protection for taking other people’s data and structuring it in a certain way has limited economic flexibility and damaged competitiveness.

There is always a problem, you see, that as new technologies emerge they suggest new rights to go with them, and this was a case where [we drafted something into] law before wiser counsels could prevail.

The Gowers report recently received a fair bit of attention. The report, I believe, had a wonderful recommendation for handling patent applications: it talked about putting patent applications online and having an open commenting period. You in fact wrote about the report in a recent column. Can you talk a little more about it?

The Gowers report was commissioned by the Chancellor of the Exchequer, Gordon Brown, who is a senior government minister, basically second only to Tony Blair; indeed, Gordon Brown hopes to be Prime Minister within the next few months. Because of the way British politics works, he can probably manage that without ever getting elected: he would just become party leader, and therefore automatically Prime Minister, because the Labour Party is the dominant party in the government.

Brown commissioned a man called Andrew Gowers, who had at that point just been fired as editor of the Financial Times, to carry out this report. Andrew is a nice man, but many of us doubted his ability to resist the copyright lobby, to resist the pressure to write something that would make industry happy. He surprised us all, though, partly thanks to the excellent team of people he had working for him at the Treasury. He came up with a report that wasn’t radical but was sensible – and what we do best in British politics is sensible, because people can get behind sensible. He said some things which were well argued, didn’t give in to the vested interests, and didn’t give the music industry what they wanted.

Unfortunately, the Gowers Report is just that – a report, a series of recommendations which then goes into the government machine and has to be acted on. It doesn’t do anything itself. We have a political issue here, which is that when Gordon Brown as Chancellor of the Exchequer commissioned the report, he believed that by the time it was published he would be Prime Minister; he believed that by then Tony Blair would have gone, and that he would be in a position to take this report and say: I commissioned this report when I was Chancellor and it is absolutely fantastic; now I am Prime Minister and I am going to make it happen. Unfortunately, Tony Blair has refused to go, and so Gordon Brown has received the report as Chancellor and has no real power to deliver on it. And so the question is, when Gordon does become Prime Minister – will it be his priority? Probably not. Will the world have changed? Probably. Will he have been leaned on so effectively by the very wealthy music and movie industries that he will actually dilute some of its recommendations? Well, tragically, probably yes. So the timing is all wrong. The opportunity that Gowers presented was for Gordon Brown to say: this is great, let’s just do it. Now we are going to have to wait – eight months – and [in that time] things will have changed, and there will be a lot else for Gordon Brown to do. So those of us who think that the recommendations are good are trying to keep the pressure on, keep track of what is happening, have the right conversations, and make sure that when Gordon does become Prime Minister – because it looks fairly likely that he will – he is reminded of it at the right time, in the right way, so that it can then turn into real change.

The other thing to remember is that a lot of the changes that are proposed, a lot of the recommendations, are actually international recommendations. There are things that will have to happen at a European level or at a global level, and so to some extent it is a call for British ministers, British representatives, British commissioners in Europe, and British delegates at WIPO to behave in a different way, and it will take some time before we know whether that has been successful. The report advocates engagement at a global level. That then needs to happen.

Conversation With Bill Thompson: The Political Economy of the Internet

6 Mar

This is part 2 of the interview with Bill Thompson, technology columnist with the BBC. part 1

When I look at the Internet, there is this wonderful sense of volunteerism. It is incredible to see the kind of things that have come out of recent technology, like the open source movement and Wikipedia. Even Internet companies seem to have adopted socially nurturing missions of sorts. How did these norms of volunteerism get created? Has technology merely enabled these norms? Or are we witnessing something entirely new here?

If you look at commons-based peer production, as Yochai Benkler calls it, what motivates people is exactly the same question as what motivates altruism. Because what we have with contributions to open source projects like Linux, or positive contributions to Wikipedia, is what would seem on the surface to be pure altruistic behavior. So we can ask the same questions. What do people get in return? And do they have to get something in return?

Pekka Himanen, in The Hacker Ethic, I think, nailed what people get in return – the social value you get from it, the sense of self-worth, the rewards that you are looking for; all of that makes perfect sense to me. I don’t think we need to ask any more questions about that. You get stuff back from contributing to the Linux kernel or putting something up on SourceForge. The stuff you get back is the same sort of stuff you get back from being a good, active citizen. It is the same stuff you get back from, say, recycling your trash.

Then there is the question of whether something new is emerging from what’s happening online – because it allows for distributed participation, and because the product of the online activity is, certainly in the case of open source, a tool which can then itself be used elsewhere, or, in the case of Wikipedia, a new approach to collating knowledge. Whether something completely new or radical is coming out of there remains to be seen. I am quite skeptical about that. I am quite skeptical of brand-new emergent properties of network behavior, because we remain the same physical and psychological human beings. I am not one of those people who believes that the singularity is coming, that we are about to transcend the limitations of the corporeal body, and that some magical breakthrough in humanity is going to happen thanks to the Internet and new biomedical procedures. I don’t think we are on the verge of that change.

I think that the Internet as a collaborative environment might emphasize what it is to work together and change what it means to be a good citizen, but it doesn’t fundamentally alter the debate.

But the kind of interactions that we see today wouldn’t have happened if it were not for the Internet. For example, the fact that I am talking to you today is, I believe, sufficiently radical.

But has it changed anything fundamentally? OK, it has allowed us to find each other, but in 13th-century medieval Europe there was a very rich and complicated network of traveling scholars, who would travel from university to university, or monastery to monastery, to share each other’s ideas, and they would exchange texts. It was on a smaller scale, it was much slower, and it operated at a lower level, but was it fundamentally different from what we are doing in the blogosphere or with communications like this? Just because there is more of it doesn’t mean it is automatically different.

Let me move on to a related but different topic. I imagine that the techniques developed around this distributed model can be applied in a variety of different places. For example, lessons from the open source movement can be applied to how we do research. Can lessons of the Internet be applied elsewhere? Certainly, alternative forms of decision making are emerging within companies. Is the Internet creating entirely new decision models and economies?

That’s quite a big question. There’s a sort of boring answer to it, which is just that more and more organizations, and more and more areas of human activity, are reaching the third stage in their adoption of information and communication technologies. The first stage is where you just computerize your existing practices. The second stage is where you tinker with things and perhaps redefine certain structures. But the third stage is where you think: OK, these technologies are here, so let’s design our organizational processes, structures, and functions around the affordances of the technology – which is a very hard thing to do, but something which more and more places are doing. So just as in the 1830s and 1840s organizations built themselves around the capabilities of steam systems and technologies, and in the 1920s they built themselves around the new availability of the telephone, so now, in the West certainly, it is reasonable to assume that the network is there, and that the things it makes possible it will continue to make possible. So you start to build structures, workflows, and practices, businesses and indeed whole sectors of the economy, around what the net does. In that sense, it is changing lots of things. As I said, I think that’s a boring insight. That’s what happens! We develop new technologies and we come to rely on them. It’s happened for the past five thousand years. So while the technology may be new, it’s the same pattern. Joseph Schumpeter got it right in the 1930s talking about waves of ‘creative destruction’, and everybody in the media is now talking about that, but fundamentally there is nothing different going on.

There is a more interesting aspect to that. Are some of the outputs of the more technological areas – the open source movement and things like that – creating wholly new possibilities for human creative and economic expression? They might be. I don’t think we know yet; I think it’s too early to tell. We have seen the basis of the Western economy, and hence of the global economy, move online (become digital) over the past twenty years. As Marx would put it, the economic base has shifted. We are seeing the superstructures move now to reflect that. The idea of economic determinism is not right at every point in history, but certainly the world we live in now is a post-capitalist world. We still use the word Capitalism to describe it, but in fact the economy works in a slightly different way, and we are going to need a new word for it. In that world – with a new economic base – we will find new ways of being. And we will start to see the impact in art and culture, and in forms of religious expression. You know, we haven’t yet seen a technologically based religion, and it is about time we saw something emerge where the core precepts rely on the technology.

Are we really post-Capitalist, as you put it? I would still argue that Capitalism trumps all. The usage patterns of websites, etc., still largely reflect the ‘old economy’. More importantly, I would argue that the promise of the Information Age has long been swallowed by the quicksand of Capital.

When I say post-Capitalist, I don’t mean it’s not capitalist. If you look at the move from the feudal economy to Capitalism, the accumulation of capital became important. It still remains very important; it is still what drives things. The rich get more, the powerful remain more powerful, and indeed those who have good creative ideas get appropriated by the system. We are seeing it happen already with the online video world, where now, if you create a cool 30-second video, your goal is to monetize that asset: basically, you put it on YouTube and try to advertise it – you become part of the system – and this continues to happen. Just in parenthesis, the idea is that we are post-Capitalist not in the sense that we are replacing Capitalism, but that it’s a different form of Capitalism – it is Uber-Capitalism, it is Networked Capitalism. We need a new word for what we can do now. It doesn’t mean that those with capital don’t dominate, because they do, and they will continue to for some time, I imagine.

In that sense, the idea that the network has had some sort of democratizing influence is misguided. It hasn’t. It has enabled much greater participation. It may well make it possible for more people to benefit from their creativity in a modest way, but I don’t think it will do anything to challenge the fundamental split between the owners of capital – those who invest their money, and that counts as their work – and the wage slaves, the proletariat, those who have to do stuff every day in order to carry on and earn enough money to live. I don’t think it will change that at all.

Your comments are spot on. They show an astute understanding of the political economy of the net, especially at a time when one constantly hears of the wondrous power of the Internet to revolutionize everything from Democracy to the Economy.

Yeah. The network is a product of an advanced Capitalist economy, largely driven by the economic and political interests of the United States, although that balance is starting to shift. We see what is happening – India and China, in particular, are starting to have some influence on the evolution of the network, not very strong at the moment but growing. But again, India and China are trying to find their own ways of being industrial capitalist economies. They are not really trying to find their way to being something completely different.

The digital economy, as you pointed out, still largely reflects the ‘real’ world underneath it. Things will change and are changing in some crucial fundamental ways but the virtual world is anchored to the real world. One facet of that real world is the acute gender imbalance in the IT industry. What are your thoughts on the issue?

There have been massive advances, particularly in Europe and the United States, [which] are, I think, two [places] in which over the past 100 years we have accepted, and indeed come to believe, that differences [in treatment] between men and women, which existed in many other societies, were just wrong. The differences which are currently enforced on billions of women around the world by their religions should be overcome. This has been a historic era. There is no real difference [between the genders]; the gender differential is unjust; social justice requires equality. But gender equality is a very recent idea, a very recent innovation, and one of the last places where it has made an impact is the education system: fifty years ago the education system would push men towards science and technology and women towards art and domestic skills. I think we are just living through the consequences of that in the sort of adults we have today, in the people of my age now. When I was in school, the girls would be guided away from the sciences, and as a result technology and engineering were to a large extent male preserves; we are still correcting that historical injustice.

Now, what’s interesting, though, is that whilst we see that difference among those who build and create the machines, at the engineering level, we see it less and less at the user level. The demographics of Internet use, computer use, laptop use, mobile phone use, and all those sorts of things, certainly within the West, now reflect the general population. Over the last ten years I have watched Internet use equalize, certainly here in the UK, between men and women, and indeed the research that has been done on how computers are used in the household makes it very clear that the computer has become another household device, as likely to be used or controlled by the women or girls in the house as by the boys. So I think at the user level, where the technology pushes through into our daily life, that distinction isn’t there anymore. It’s at the programmer level that we see fewer women programmers and fewer women web designers. There are still a lot of them out there – friends of mine, male and female, who are equally good and astute and capable at coding and developing and all those things – but we still see fewer. And I think it’s just a general societal imbalance that has yet to be corrected.

Conversation With Bill Thompson: The Future

5 Mar

While technology has become an important part of our social, economic, and political life, most analysis of technology remains woefully inadequate, limited to singing paeans to Apple and Google and to occasional rote articles about security and privacy issues. It is to this news market, full of haberdasher opining, that Mr. Bill Thompson brings his considerable intellect and analytical skills every week in his technology column for the BBC.

For those unfamiliar with his articles, Mr. Bill Thompson is a respected technology guru and a distinguished commentator on technology and copyright issues for the BBC. Mr. Thompson’s calm, moderated erudition about technology comes from his extensive experience in the IT industry in varying capacities – and from a childhood without computers. “I was born in 1960. So I grew up before there were computers around. Indeed, I never touched one at school.” It was not until his third year at Cambridge University, when he was running experiments in Psychology, that he first touched a computer. He says that in many ways those first experiences formed his mindset about computers. And that view – computers are there to perform a useful function – has stayed with him for over 25 years.

Mr. Thompson went on to get a Master’s-level diploma in Computer Science from Cambridge University in 1983. After graduating, he joined a small computer firm, then left to join Acorn Computers Limited, creators of the successful BBC Micro, as a database consultant. He left that enterprise because “they wanted to promote me” and became a courseware developer with Instruction Set. After a stint with PIPEX, he found himself running the Guardian’s New Media division a decade or so ago, when the Internet was still in its infancy. After a few years managing the Guardian’s online site, Mr. Thompson left to pursue writing and commenting full time. It is in writing and providing astute analysis on technology-related issues that Mr. Thompson finds himself today.

I interviewed Mr. Thompson via Skype about a month ago. The interview covered a wide range of issues. Given that diversity, I have chosen to present an edited transcript of the interview rather than an essay-style thematic story. Here is the transcript, edited for both style and content.

The technology opinion marketplace seems to be split between technology evangelists and Luddites. Your writing, on the other hand, manifests a broad range of experience; it reflects moderated enthusiasm about what computers can do. I find it an astute and yet optimistic account.

I am fundamentally optimistic about the possibilities of this technology that we have invented to both make the world a better place and to help us recover from some of the mistakes of the past and make better decisions as a species, not just as a society, in the future. It informs my writing. It informs as well the things that I am interested in and the areas that I want to explore.

Our relationship with machines was once fraught with incomprehension and fear. Machines epitomized the large mechanized state and its dominance over the natural world. There was a spate of movies somewhere in the 70s in which refrigerators and microwaves rose up to attack us. Over the past decade or so, our relationship has transformed to such a degree that not only do we rely on fairly sophisticated machines for our daily chores, but we also look to machines as a way to achieve utopian ideals. Fred Turner, professor of Communication at Stanford, in “From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism,” traces this rise of digital utopianism to the American counterculture. How do you think the relationship evolved?

The way you phrase the question leads me to think that perhaps it was the exaggerated claims of the Artificial Intelligence community that led people to worry that computers would reach the point at which they would take over. And the complete failure of AI to deliver on any of its promises has led us to a more phlegmatic and accepting attitude, which is that these are just machines; we don’t know how to make them clever enough to threaten us, and therefore we can just get on with using them.

We know that Skynet is not going to launch nuclear weapons at us, Terminator-style, and so we can focus on the fact that the essential humanity of the Terminator itself, certainly in the second and third movies, is a source of redemption. We can actually feel positive about the machines instead of negative about them.

When you have a computer around that crashes constantly, that is infected with viruses and malware, that doesn’t do what it is supposed to do, and so on, you are not afraid of it. You are irritated by it. And you treat it as you would a recalcitrant child that you might love and care for, and that has some value, but that is certainly not going to threaten you. And then we can use the machines. That actually allows us to focus on what you call the utopian or altruistic aspects. It allows us to see machines in a much broader context, one which recognizes the human agency behind them.

The dystopian stories rely on machines getting out of control, but in fact we live in a world in which the machines are being used negatively by people – by governments, by corporations, and by individuals. The failure of AI allows us to accept that – to reject the systems they have built without rejecting the machines themselves.

And for those who actually believe that information and communication technologies are quite positive, it allows us to focus on what could be done for good, instead of just dismissing all of the technology as bad. It allows us to take a much more complex and nuanced point of view.

You make an excellent point. I see where you are coming from.

In a sense, it is where I am coming from, which is: I am a liberal humanist atheist. I believe we make this world and we have the potential to make it better, and the technologies we invent should be part of that process.

Just as I am politically socialist – I believe in equality of opportunity, social justice, and all those things – [similarly] I have a humanist approach to technology, which is that what we have made, we can make ‘do good’ for us.

Google News: Positives, Negatives, and the Rest

16 Nov

Google News is the sixth most visited news site, according to Alexa Web Traffic Rankings. Given its popularity, it deserves closer attention.

What is Google News? Google News is a news aggregation service that scours around ten thousand news sources, categorizes the articles, and ranks them. What sets Google News apart is that it is not monetized: it doesn’t feature ads, nor does it have deals with publishers. The other distinguishing feature is that it is run by software engineers rather than journalists.

Criticisms

1. Copyright: Some argue that the service infringes on copyrights.

2. Lost Revenue: Some argue that the service causes news sources to lose revenue.

3. Popular is not the same as important or diverse: Google News highlights popular stories and sources. In doing so, it likely exacerbates the already large gap between popular news stories and viewpoints and the rest. This criticism doesn’t ring true. Google News merely mimics the information (news) and economic topography of the real world, which encompasses the economic underpinnings of the virtual world: better-funded sites tend to be more popular, and firms more successful in the real world may have better-produced sites which, in turn, attract more traffic. It does, however, raise the question of whether Google can do better than merely mimic the topography of the world. There are, of course, multiple problems with any such venture, especially for Google, whose search algorithm is built around measuring the popularity and authority of sites. The key problem is that news is not immune to being anything more than a popularity contest shepherded by ratings-driven (a euphemism for financially interested) news media. A look at the New York Times homepage, with its extensive selection of lifestyle articles, gives one an idea of the depth of the problem. So if Google were to venture out and produce a list of stories sorted by relevance to, say, policy – not that any such thing can easily be done – there is a good chance that the average user would find the articles irrelevant. Of course, a user-determined topical selection of stories would probably be very useful. And while numerous social scientists have issued caveats against that approach, arguing that it may lead to further atomization and a decline in sociotropism, I believe their appeals are disingenuous, given that specialized interest in narrowly defined topics and interest in global news can flower together.

4. Transparency: Google News is not particularly transparent in the way it functions. But given the often abstruse and economically constrained processes that determine the content of newspapers, I don’t see why the Google News process is any less transparent. I believe the objection primarily stems from people’s discomfort with automated processes determining the order and selection of news items. That the processes are automated doesn’t mean they aren’t adaptive systems built on criteria commonly used by editors across newsrooms. More importantly, Google News works off the editorial decisions made by organizations across the board, for it includes details like the placement and section of an article within a news site as pointers to its relative importance. At this point, we may also want to deal with the question of accountability as it pertains to the veracity of news items. Given that Google News provides a variety of news sources, it automatically provides users with a way to check for inconsistencies within and between articles. In addition, Google News relies on the fact that, in this day and age, some blogger will post an erratum to a “Google News source” site – of which there are over ten thousand – and that erratum may in turn be featured within Google News.

Positives

Google News gives people the ability to mine a gargantuan number of news sources, to come up with a list of news stories on the same “topic” (or event), and to search for a particular topic quickly. One can envision that both the user looking for a diversity of news sources and the user looking for quick information on a particular topic could be interested in other related information on that topic. More substantively, Google News may want to collate information from its web, video, and image searches, along with links to key organizations mentioned in the articles, and put them right next to the link to the story. For example, the BBC offers a link to India’s country profile next to a story on India. Another way Google News can add value for its users is by leveraging the statistics it compiles on when and where news stories were published – stories published in the last 24 or 48 hours, etc. I would love to see a feature called the “state of news” that shows statistical trends in which news items are getting coverage, patterns of coverage, and so on (an endeavor similar to Google Trends).
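
As a minimal sketch of what such a “state of news” tally might involve – with invented sample records, since Google’s internal data and pipeline are unknown – consider:

```python
from collections import defaultdict
from datetime import date

# A toy "state of news" tally: given (date, topic) records from an aggregator,
# track how coverage of each topic trends over time. The records are invented.

articles = [
    (date(2006, 11, 14), "elections"),
    (date(2006, 11, 14), "elections"),
    (date(2006, 11, 15), "elections"),
    (date(2006, 11, 15), "markets"),
]

trend = defaultdict(lambda: defaultdict(int))
for day, topic in articles:
    trend[topic][day] += 1

for topic, days in sorted(trend.items()):
    series = ", ".join(f"{d}: {n}" for d, n in sorted(days.items()))
    print(f"{topic} -> {series}")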

Diversity of News Stories

What do we mean by diversity, and what kind of diversity would users find most useful? Diversity can mean diverse locations (publishers or datelines), viewpoints (for or against an issue), depth (a quick summary or a large tome), medium (video, text, or audio), or type of news (reporting versus analysis). Of course, Google can circumvent all of these concerns by setting up parallel mechanisms for all the measures it deems important. For example, a map/Google News “mashup” could prove useful in highlighting where news is currently coming from. Going back to ensuring diversity: conceptual diversity is possibly the hardest to implement. There can be a multitude of angles to a story – not just binary for-and-against positions – and facets can quickly become unruly, indefensible, and unusable. For example, if it splits news stories by news source (liberal or conservative, say, people will argue over whether the right categorizations were chosen, or even over the labeling – social conservatives versus fiscal conservatives) or by organizations cited (there is a good chance that an article using statistics from the Heritage Foundation leans conservative, but that is hardly a rule). Still, I feel these measures can prove helpful at least in mining for a diversity of articles on the same topic. One of the challenges of categorization is to come up with “natural” categories – categorization that is “intuitive” for people. Given the conceptual diversity and the related abstruseness, Google may want to refrain from offering these as clickable categories to users, though it may still want to use the categorization technique to display “diverse” stories. Similarly, more complex statistical measures can prove useful in subcategorization – for example, a statistical reference to the most common phrases or keywords, or even Amazon-like statistics on the relative difficulty of reading. Google News may also simply want to list the organizations cited in a news article and leave the decision of categorization to users.
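
A crude sketch of the keyword-statistics idea – with a placeholder stopword list and an invented sample story – might look like this:

```python
import re
from collections import Counter

# A crude version of the keyword statistics mentioned above: surface the most
# common content words in a story so stories on one topic can be subgrouped.
# The stopword list and sample text are minimal placeholders.

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "for", "is",
             "that", "from", "are", "with"}

def top_keywords(text, n=5):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return counts.most_common(n)

story = ("The report from the Heritage Foundation argues that tax cuts "
         "spur growth, citing tax data from three states.")
print(top_keywords(story))  # e.g., [('tax', 2), ('report', 1), ...]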

Beyond Non-Profit
Google News’ current “philanthropic” model (people may argue otherwise, viewing it as a publicity stunt) is fundamentally flawed, for it may restrict the money Google News needs to innovate and grow. Hence, it is important that it explore possible monetization opportunities. There are two possible ways to monetize Google News: developing a portal (like Yahoo!) and developing tools or services that it can charge for. While Google is already forging ahead with its portal model, it has yet to make appreciable progress in offering widely incorporable tools for its Google News service. There is a strong probability that news organizations would be interested in buying a product that displays “related news items” next to news articles. This is something Technorati already does for blogs, but there is ample room both for additional players and for improving the quality of the content. It would also be interesting to see a product that displays Google News results along with Google image, blog, and video search results.
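
For flavor, here is the shape a “related news items” lookup might take – a bag-of-words cosine-similarity sketch; a real product would rely on much richer signals, and everything here is illustrative:

```python
import math
import re
from collections import Counter

# Sketch of a "related news items" lookup: score candidate stories against a
# source article by cosine similarity over word counts. Purely illustrative.

def vectorize(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def related(article, candidates, k=3):
    src = vectorize(article)
    return sorted(candidates, key=lambda c: cosine(src, vectorize(c)),
                  reverse=True)[:k]
```

Calling related(article_text, [candidate1, candidate2, ...]) would return the candidates closest to the source article by word overlap.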

Comments Please! The Future Of Blog Comments

11 Nov

Oftentimes, the comments sections of blogging sites suffer from a multiplicity of problems: they are overrun by spam or by repeated entries of the same or similar points, they continue endlessly, and they are generally overcrowded with grammatical and spelling mistakes. Comments sections that were once seen as an unmitigated good are now seen as irrelevant at best and a substantial distraction at worst. Here, I discuss a few ways we can re-engineer commenting systems to mitigate some of the problems in the extant models, and possibly add value to them.

Comments are generally displayed in chronological or reverse-chronological order, which implies, firstly, that the comments are not arranged in any particular order of relevance and, secondly, that users just need to repost their comments to position them in the most favorable spot – the top or the bottom of the comment heap.

One way to “fix” this problem is a user-based rating system for comments. A variety of sites have implemented this feature with varying levels of success. The downside of a rating system is that people don’t have to explain their vote for, or against, a comment, which occasionally leads to rating “spam”. The BBC circumvents this problem on its news forums by allowing users to browse comments either in chronological order or in order of readers’ recommendations.
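
A minimal sketch of that dual ordering, with invented comment records, is below; the two sorts are the entire trick:

```python
# Minimal sketch of the dual ordering described above: readers can browse
# comments chronologically or by recommendation count. Records are invented.

comments = [
    {"text": "First!", "posted": 1, "recommends": 0},
    {"text": "Detailed correction of paragraph 3.", "posted": 2, "recommends": 14},
    {"text": "A useful further-reading link.", "posted": 3, "recommends": 6},
]

by_time = sorted(comments, key=lambda c: c["posted"])
by_recommendation = sorted(comments, key=lambda c: c["recommends"], reverse=True)

for c in by_recommendation:
    print(f"({c['recommends']} recommends) {c['text']}")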

Another way to make comments more useful is to create message-board-like commenting systems that separate comments into mini-sections or “topics”. One can envision topics like “factual problems in XYZ” or “reader-suggested additional resources and links” under which users can file their comments. This kind of system can help in two ways: by collating wisdom (analysis and information) around specific topical issues raised within the article, and by making it easier for users to navigate to the topic, or informational blurb, of their choice. Alternatively, the system can be implemented by allowing users to tag portions of the article in place – much like a bibliographic system that adds a hyperlink from relevant portions of the story to the comments.
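A sketch of the topic-filing variant, with illustrative topic names and toy comments of my own choosing:

```python
from collections import defaultdict

def group_by_topic(comments):
    """comments: iterable of (topic, text) pairs chosen by commenters."""
    sections = defaultdict(list)
    for topic, text in comments:
        sections[topic].append(text)
    return dict(sections)

sections = group_by_topic([
    ("factual problems in XYZ", "The figure in paragraph two looks off."),
    ("additional resources and links", "See the author's earlier piece."),
    ("factual problems in XYZ", "The quote is misattributed."),
])
```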

The above two approaches deal with ordering comments but do nothing to address the problem of short, irrelevant, repetitive comments, which are often posted by the same user under one or more aliases. One way to address this is to set a minimum word limit for comments, encouraging users to put in a more considered response. Obviously, there is a danger of angering users, leading them to pad out a longer, equally pointless comment or to give up entirely; on average, though, I believe it will improve the quality of comments. We may also want to develop algorithms that disallow repeated postings of the same comment by a user.
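A sketch of both gates together; the 30-word floor and the punctuation-insensitive normalization are arbitrary illustrative choices:

```python
import re

MIN_WORDS = 30  # arbitrary floor; tune per site

def normalize(text):
    """Lowercase and strip punctuation so trivial edits don't evade the check."""
    return " ".join(re.findall(r"[a-z0-9']+", text.lower()))

def accept_comment(text, previous_comments_by_user):
    if len(text.split()) < MIN_WORDS:
        return False, "comment is below the minimum length"
    if normalize(text) in {normalize(c) for c in previous_comments_by_user}:
        return False, "duplicate of an earlier comment by this user"
    return True, "accepted"
```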

The best way to realize the value of comments is to ask somebody – preferably the author of the article – to write a follow-up piece that incorporates relevant comments. Ideally, the author will use this opportunity to acknowledge factual errors and analyze points raised in the comments. This follow-up piece will hopefully solicit more comments, and the process will repeat, helping to take the discussion and analysis forward.

Another way to go about incorporating comments is to use a wiki-like system of comments to create a “counter article” or critique for each article. In fact, it would be wonderful to see a communally edited opinion piece that grows in stature as multiple views get presented, qualified, and edited. Wikipedia does implement something like this in the realm of information but to bring it to the realm of opinions would be interesting.

One key limitation of most current commenting systems on news and blog sites is that they only allow users to post textual responses. As blog and news publishing increasingly leverages the multimedia capabilities of the web, commenting systems will need to allow users to post responses in any medium. This will again present a challenge in categorizing and analyzing comments, but I am sure novel methods, beyond tagging and rating, will eventually be developed to help.
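One way a media-agnostic comment record might look; the media-type list and field names are assumptions for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class MediaType(Enum):
    TEXT = "text"
    IMAGE = "image"
    AUDIO = "audio"
    VIDEO = "video"

@dataclass
class MediaComment:
    author: str
    media_type: MediaType
    payload_url: str      # where the media file is hosted
    transcript: str = ""  # optional text stand-in for search, tagging, rating
```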

The few ideas that I have mentioned above are meant to be seen as a beginning to the discussion on this topic and yes, comments would be really appreciated!

From Satellites to Streets

11 Jul

There has been tremendous growth in satellite-guided navigation systems and in secondary applications relying on GIS, like finding shops near where you are. However, it remains opaque to me why we are using satellites to beam this information when we can embed RFID or similar chips on road signs for pennies. Road signs need to move from the era of ‘dumb’ painted boards to the era of electronic tags, where signs beam out information on a set frequency (or answer when queried) that a variety of devices may tune in to.
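For illustration, here is a sketch of what a sign’s broadcast payload might look like: a compact, fixed-format record that a cheap receiver can decode without any satellite fix. The field layout and sign-type codes are invented for the example.

```python
import struct

# Invented sign-type codes for the example.
SIGN_SPEED_LIMIT = 1
SIGN_STOP = 2

# Big-endian, fixed-width layout:
#   I  32-bit sign id
#   B  sign-type code
#   ff latitude, longitude
#   H  type-specific value (e.g., speed limit in km/h)
BEACON_FORMAT = ">IBffH"

def encode_beacon(sign_id, sign_type, lat, lon, value=0):
    return struct.pack(BEACON_FORMAT, sign_id, sign_type, lat, lon, value)

def decode_beacon(payload):
    return struct.unpack(BEACON_FORMAT, payload)

msg = encode_beacon(1042, SIGN_SPEED_LIMIT, 40.7128, -74.0060, value=50)
assert decode_beacon(msg)[0] == 1042
```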

Indeed, it would be wonderful to have “rich” signs, connected to the Internet, where people can leave comments, much like on message boards or blogs. This would remove the need for expensive satellite-signal reception boxes and the cost of maintaining satellites. The concept is not limited to road signs: it can include anything and everything, from shops to homes to chips informing a car where the curbs are so that it stays in its lane.

The possibilities are endless. And we must start now.

End of Information Hierarchy

11 Nov

Today, people have a variety of ways to explore a collection via the Internet, as opposed to the carefully orchestrated explorations of a curated exhibition in a brick-and-mortar museum.

A curator comes up with a story, along with other contextual information about the exhibit, and arranges the exhibition so that a person exploring it has only a few chosen entry points and a few ways of moving through the collection. Some of these impediments are put in deliberately, while others are a result of hosting an exhibition in the real world, where the design of the building and similar constraints still matter.

Cut to the online world, and the user is untethered from most of these curated contrivances. This may partly reflect the fact that people haven’t yet figured out how best to present a virtual museum, but that is not the point I want to get into. The result of the untethered experience is that these cultural objects are seen in a twice-removed setting – e.g., a pot taken from an archaeological site, then photographed and put on the Internet. What is the result of all this? It is hard to give an objective accounting, but one can see that some of the “meaning” is lost in an artifact’s journey from the ground to the Internet.

What happens when information that was once tethered to a context or a story is made available virtually free of context over, say, Google? Is storing information in hierarchical networks or associations obsolete? How do you maintain the integrity of information when context-free snippets are freely available?

Say, for example, that once upon a time people learned about history via a scholar who carefully chose the specific issues to cover. Today, a teen gets his or her history by searching the web, often encountering a lot of miscellaneous information. I would argue that a person can come away from such a scattered exploration with a bunch of miscellaneous trivia and no real understanding of the major issue at hand. The key idea here is that for the transmission of “knowledge”, the integrity of information is of prime value.