From the cult of the amateur to the triumph of miscellanea, a revolution in how we interact with information is in its early stages. I don’t use the term “revolution” lightly. You could be forgiven for thinking it hyperbolic, because traditional publishers have been slow on the uptake. If you still rely on traditional publishers for your information, you will not have seen much of a revolution. Sure, you can read the newspaper on the internet nowadays, and leave instant feedback on what you’re reading. But “revolution” is a bit strong, surely? Well, indeed. But revolutions do not always happen overnight.
Let’s go back to Stephen Fry on Room 101. He wants to consign critics to history. And it’s happening; or at least, a shift in how criticism occurs is part of the revolution. Under the traditional model, what is good, what is important, and what is true are largely dictated from on high. We have some experience of bottom-up review — bestseller charts, opinion polls, television ratings figures, and so on — but it is usually secondary to the power of the critics. It is not my intention here to argue over the merits and pitfalls of the invisible hand of bottom-up criticism, except with regard to how it might be applied to academic publishing and the peer-review system. Because this revolution is happening, whether the publishers like it or not. And just as I believe the invisible hand of economics needs the occasional slap to keep it from making rude gestures, the invisible hand of criticism needs some discipline and manners if it is to be of any use to us. And thus concludes my tortured metaphor for today.
The Existing Model
Peer-review is an important mechanism for ensuring that published research is not faulty, fraudulent, fruity, fallacious or trivial (damn my thesaurus!). None of us have time to read papers that are of no consequence, and even senior academics have the expertise to judge the quality of research in only a narrow field. So, the job of peer-review is to filter the work submitted for publication. Here is roughly how it works. A journal sends the paper to four or five academics and asks them questions like:
- is the question that researchers sought to answer an important one?
- did they pick the right methods for answering that question?
- have they found any answers?
- are their answers adequately supported by the empirical data?
The reviewers are then asked whether or not the work should be published, and if so, whether any changes should be made first. We, the readers, can then judge the research: we know that it is somewhat trustworthy, by virtue of the fact that it has survived to publication; and we can judge its importance as roughly equal to that of the journal it is published in. This is certainly a clever solution to the problem of determining what research should be published. But it’s tailored to a world of paper publication, and we’d be making a mistake if we assumed it to be the best solution in all situations.
Indeed, the entire research publication industry remains tailored to the dead-tree model — even those publishers that only publish online. Manuscripts, for example, must be right the first time, because once published, you can’t change them. Journals are not aimed at individuals, but at libraries, and therefore publishers court institutions, rather than readers — the most pernicious result of this is that research lies behind expensive paywalls, because readers are worth very little, as long as the institutions are loyal.
When I set out thinking about this post, I was certainly not in favour of the abolition of the existing method of peer-review. Certainly, to begin with, the new mechanisms of bottom-up review will merely provide a useful new layer of metadata with which to determine whether the published research is faulty, fraudulent, fruity or trivial, and not replace the old methods entirely. What will these new mechanisms be? Well, humans are notoriously bad at predicting the future, but I’ll have a go.
The obvious start is the addition of comments to research articles. But we’ve established that comments are not revolutions; they’re merely letters made easy. We can see more sophisticated mechanisms by looking at websites that are aware of the revolution. The big one is Google. When all of the information is cross-linked in a web, Google can determine which piece of information is most relevant to your search by adding up how many other pages link to it. The photo-sharing site flickr does something similar, calculating how “interesting” a photograph is by looking at how many people have looked at, commented upon, and saved each photo to a “favourites” gallery. There are many ways we could implement something similar for papers. We could, perhaps, award “points” to a paper — say, 1 point when a website links to it, 5 when somebody comments on it or saves it to a bookmark site (like del.icio.us, or Nature’s new citation manager), and 10, 20, 50 and 100 points respectively when it is cited in blogs, magazines, other papers, and textbooks. Perhaps the job of traditional reviewers will evolve to include setting a “starting interestingness”; the papers can then have a built-in “interestingness decay”, while the awarding of “points” will keep interestingness up. Papers could even have points deducted, for example, in response to new research overturning their conclusions. It may be useful to separate “interestingness” from “trustworthiness”, with a different mechanism for judging the latter.
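To make the idea concrete, here is a minimal sketch of such a scheme. The point values come from the text above; the half-life, the exponential decay formula, and all the names are my own invented assumptions, not a worked-out proposal.

```python
# Hypothetical point values, taken from the scheme suggested above.
POINTS = {
    "web_link": 1,
    "comment": 5,
    "bookmark": 5,
    "blog_citation": 10,
    "magazine_citation": 20,
    "paper_citation": 50,
    "textbook_citation": 100,
}

DAY = 86400.0           # seconds per day
HALF_LIFE_DAYS = 180.0  # assumed: a score halves every six months

def interestingness(starting_score, published, events, now):
    """Decayed starting score (set by traditional reviewers) plus
    decayed points for each (event_type, timestamp) pair.
    All timestamps are in seconds."""
    def decay(score, age_seconds):
        return score * 0.5 ** (age_seconds / DAY / HALF_LIFE_DAYS)

    total = decay(starting_score, now - published)
    for kind, when in events:
        total += decay(POINTS[kind], now - when)
    return total
```

A fresh paper with a starting score of 10 and one citation in another paper would score 60; left alone for six months with no new events, the starting score alone would have decayed to 5. Deductions (for overturned conclusions) could be modelled as events with negative points.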
Now, I must go off on a tangent here, because while thinking about interestingness, a potentially wonderful side-effect came to me. “Interestingness” on flickr is an invisible hand that picks the best photos of the day, which are then displayed in a gallery. “Interestingness” is therefore a positive feedback mechanism, bringing more people to your work. At present, the goal of researchers is to publish, but no more. Being read hardly matters, so long as you are published. Imagine the attitude to paywalls in a world where being read and talked about was important, and of consequence. And people would be falling over themselves to engage with the public on scientific matters, and to follow up questions left in the comments box.
I’m sure there are an abundance of additional mechanisms for judging research out there, and many different mechanisms can be implemented. The amount of storage space you can buy for a given amount of money is doubling at least once a year, so the amount of metadata you can publish is no longer limited by space. Other developments I’d find interesting include the publishing of traditional peer-reviewers’ comments, and holding virtual “journal clubs”, with the authors present; I’m sure you can think of many more. I’ll give you one more mechanism though, this one already in its infancy. The job of reviewers is to determine what is important and accurate. The job of critics is subtly different: they judge what is worth your time. We have a bottom-up mechanism for this job, too. Amazon tells you what other people who have read your favourite book enjoyed. Last.FM suggests music you might like, by looking at the playlists of people with a similar taste in music. Other sites have additional layers, such as tags/keywords, with which to organise this data. “Social bookmarking” sites are already starting to do this job, and Nature recently launched a science-specific one, Connotea.
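The Amazon/Last.FM style of recommendation can be sketched very simply: count which other papers are saved by the readers who saved the one you are looking at. The bookmark data and names below are invented for illustration; real systems use far more sophisticated collaborative filtering.

```python
from collections import Counter

# Toy bookmark data: reader -> set of saved papers (all invented).
bookmarks = {
    "alice": {"paper_A", "paper_B", "paper_C"},
    "bob":   {"paper_A", "paper_B"},
    "carol": {"paper_B", "paper_D"},
}

def also_bookmarked(paper, bookmarks):
    """Rank other papers by how many readers of `paper` also saved them."""
    counts = Counter()
    for saved in bookmarks.values():
        if paper in saved:
            for other in saved - {paper}:
                counts[other] += 1
    return [p for p, _ in counts.most_common()]
```

With the toy data above, a reader of `paper_A` would be pointed first at `paper_B` (saved by both of `paper_A`’s readers), then at `paper_C`.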
It’s not just the mechanisms of review that can change. Publishing on paper is a big operation, especially when it requires distributing to hundreds of libraries; you just can’t do it all by yourself. Publishing online is easy, and the distribution does itself. These days you can even get the copy-editing to do itself! And online documents do not have to be static: you can publish a “first edition”, and update it later in response to reviewers’ comments, or later results. This suggestion might worry you — perhaps it sounds too much like Wikipedia, or the behaviour of crackpots who self-publish the results of their studies into time travel, or the mathematical proof of God — but it should not worry you, because we have time to prepare for it. We have time to put into place standards for the electronic publication of papers — an obvious condition to include would be that archives of every revision of the paper should be maintained, and passages that have changed from earlier revisions should be marked as such. I’m sure you can come up with many more sensible guidelines.
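The “mark changed passages” condition is easy to prototype. Here is a toy sketch using Python’s standard `difflib`; the `[REVISED: ...]` marker format is my own invention, and a real standard would need to handle deletions, formatting, and much finer granularity.

```python
import difflib

def mark_changes(previous, current):
    """Return the current revision with passages that differ from the
    previous revision wrapped in [REVISED: ...] markers (word-level)."""
    old_words, new_words = previous.split(), current.split()
    out = []
    matcher = difflib.SequenceMatcher(None, old_words, new_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(new_words[j1:j2])
        elif j2 > j1:  # 'replace' or 'insert': new text appears here
            out.append("[REVISED: " + " ".join(new_words[j1:j2]) + "]")
        # 'delete': old text removed; a real standard would mark this too
    return " ".join(out)
```

So `mark_changes("the mice lost weight", "the mice gained weight")` yields `"the mice [REVISED: gained] weight"` — the archive keeps every revision, and readers see at a glance what moved.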
These are relatively tame suggestions, some of which are now inevitable, and others almost certain to happen in some form, and I can conceive of far more radical changes. Indeed, these changes could be a slippery slope that sees publishers themselves become entirely obsolete. Perhaps we will see a system where research is self-published (or published by the universities and/or funding agencies), and peer-review either emerges bottom-up, or traditional blind peer-review is organised (by the university, funding agency, a central “authority”, or someone else altogether). Indeed, I embarked upon this expedition with no expectation of coming to the conclusion that traditional top-down peer-review could one day be dispensed with altogether, but I am no longer so sure. Certainly, if that day were to come, its demise would be fiercely opposed. But as we know, the major revolutions in science occur not because people are convinced by the evidence, but because the young replace the old.
“I used Google this morning, and it failed to find anything of use,” you say. “The most ‘interesting’ photos on flickr are the ones that use nasty photoshop gimmicks.” “Last.FM gave me Busted because I listened to The Killers, and Amazon seems to think that I’d enjoy ‘The Student’s Guide to Passing the UK Clinical Aptitude Test’ because I bought a book about genetics by Matt Ridley!” Yes, yes, the bottom-up approach to reviewing information is not perfect. But, since I believe a lot of it is inevitable, that’s a great reason for discussing it first, and making sure we get it right. I’m optimistic that the problems that affect other bottom-up review systems will not be so much of a problem with academic research: generally the people doing the reviewing are going to be more intelligent and mature, for a start. But let’s discuss the potential problems, and we can try to come up with solutions.
If we get to a stage where academics’ careers are judged more on the “interestingness” of their work, rather than the quantity of their publications, people are going to game the system. They will use the academic equivalent of flickr’s photoshop gimmicks, or Google’s linkspam. Additionally, on flickr, photographers build up reputations and make lots of friends who follow their work. They might then have an off day and produce some absolute rubbish, but they will still be more “interesting” than somebody who has not had time to build up a following. Similarly, there are several examples of Nobel prize winners who have had crazy days, and nobody will let Newton forget about his alchemy, or let Alfred Russel Wallace forget about his spiritualism. Also, if careers and reputations are dependent on “interestingness”, and academics are actively popularising their work, can we rely on them not to sensationalise or in some way dumb down?
Another potential problem with opening up review to everybody is that people are assholes, especially when anonymised and distant on the internet. If, for example, we decided to include as part of our bottom-up review process the option to rate a paper on a scale of 1 to 5 for faults, fraud, fruitiness, fallacies or triviality, it is guaranteed that some asshole would come along and give everyone bad reviews. They might have been turned down for a job by the author, they might have been laughed at for their bizarre pseudoscientific theory, they might be outraged because the author carelessly made an adaptationist hypothesis without mentioning the alternative neutralist hypothesis. Some people don’t even need a reason for being an asshole. A potential solution to this is to review the reviewers: if somebody behaves childishly they get a bad rating, and their ability to influence “interestingness” is diminished. Sometimes people don’t mean to be assholes, but come across as such because on the internet nobody can hear the intonations in your speech, or see the tongue in the cheek. Others come across as assholes because they haven’t thought things through properly: writing a letter takes time, and during that time you can think about what you’re saying. If you can’t post it until the next morning, you have time to realise the point that you’ve missed, and retract the letter. Online comments are faster than the speed of thought.
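Reviewing the reviewers can be sketched as a simple weighted average: each rating counts in proportion to the rater’s own reputation, so habitual trolls lose their influence. The reputation scale, the default weight for unknown raters, and the formula are all invented assumptions for illustration.

```python
def weighted_rating(ratings, reputation):
    """ratings: {reviewer: score on a 1-5 scale};
    reputation: {reviewer: weight in [0.0, 1.0]}.
    Unknown reviewers get an assumed default weight of 0.5."""
    total_weight = sum(reputation.get(r, 0.5) for r in ratings)
    if total_weight == 0:
        return None  # only zero-reputation raters; no usable signal
    return sum(score * reputation.get(r, 0.5)
               for r, score in ratings.items()) / total_weight
```

With two reputable reviewers giving 5s and a zero-reputation troll giving a 1, the paper still scores 5; the naive unweighted mean would have dragged it down to 3.67. Reviewers’ reputations would themselves be updated by the same kind of community feedback.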
So far we’ve mentioned just two of the jobs that traditional publishers do: peer review and distribution. They do another vital job: editing (at least, some do). An awful lot of academics just can’t write. If you think the average paper is bad when you read the final product, you should see it before the editors and copyeditors have applied themselves. Were we to lose the traditional publishers in favour of self/institution-publishing, who would take over this job? There are several candidates: universities could, for example, spend all that money they saved on the library’s journals budget on full-time departmental copyeditors. I could mention the invisible hand as a candidate, but I hear you let out a small cry of pain as you think once again of Wikipedia, and anyway, Wikipedia is no Keats or Yeats.
The traditional model of publishing has the advantage, and disadvantage, of being decentralised. The revolution of publishing may make some things more centralised, and others less so. Some of the proposals above would only be of limited use without a centralised authority, while others would be impossible. I’m not sure that having centralised authorities is a particular problem — it works for DNA and protein data, for example. We do not necessarily need a central repository of papers, or even of all metadata (though we already have a few central repositories of selective metadata, PubMed being the obvious example, and either these will expand with new features, or new sites will develop to give us those features), but to take full advantage of the new technology we do need to agree upon rules, protocols, “interestingness” algorithms (if we choose that route), and so on.
The revolution in publishing has begun, and it is a good thing. It is too late to stop the revolution, but it is not too late to have it on our own terms. We can sit back and let it happen anarchically, and risk being left with sub-optimal systems that nobody is willing to change, or we can plan ahead, and think about how we can make the best use of new technology. If it is true that we are on a slippery slope, at the bottom of which traditional publishers might have ceased to exist, how might they deal with it? They can embrace it and work hard to make sure that when we get to the bottom of the slope they still have a role to play. Or they can fight it, and be left standing at the top, while the rest of us go ahead and remove their reasons for existence.