Timothy Gowers, in TLS, "The end of an error?", opens by discussing three cases:
- In 1998 an article appeared in the Lancet (on the link between the MMR vaccine and autism); it was found to be deeply flawed but was not retracted until 2010.
- Three articles were posted to arXiv proving an important piece of mathematics. The articles were difficult and a bit sketchy, but others went to work and cleaned things up.
- A couple of months ago a different article was posted to arXiv. It was clear and complete, and within days a serious flaw was found and the original claim was retracted.
The first case involved formal peer review, whereas the second two involved review by peers but not the standard prepublication process typical of the formal literature. The first case was a disaster while the second two were successes. In light of such cases, should we reconsider the role of formal peer review in academic publication? Has the Internet rendered it obsolete?
Gowers notes that prepublication on arXiv is how mathematicians now disseminate ideas and establish priority: "Of course, different disciplines have different needs and very different publishing cultures." Peer review may well remain useful in some disciplines, but no longer for mathematics. It is defended with respect to three functions:
- Reliability: "The first is that it is supposed to ensure reliability: if you read something in the peer-reviewed literature, you can have some confidence that it is correct."
- Weed out the chaff: "... a vast amount of academic literature is being produced all the time, most of it not deserving of our attention, and the peer-review system saves us time by selecting the most important articles. It also enables us to make quick judgements about the work of other academics."
- Feedback to the author: "If you submit a serious paper to a serious journal, then whether or not it is accepted, it has at least been read, and if you are lucky you receive valuable advice about how to improve it."
Gowers then goes on to discuss these functions, mostly in terms of the (scientific) disciplines he's familiar with. Reliability is important in mathematics because researchers build on the work of others in a fairly 'tight' way. This is not so much the case in literary criticism, which is as close to a home discipline as I've got. He notes:
In some disciplines, the formal peer-review system appears to have failed on a huge scale. This is particularly true of articles about scientific experiments where the conclusions are statistical in nature.
Nor, for that matter:
...does formal peer review seem to manage very well to stop wrong ideas from spreading outside academia. Climate change deniers are not put off by their lack of representation in respectable academic journals. Drugs policy bears little relation to the harm that drugs actually cause. An economics paper that supported austerity-based policies influenced several governments before it was discovered to contain some very basic mistakes that invalidated it.
As for weeding out the chaff:
Again, the way things already are in mathematics suggests that this would not be as serious a problem as it at first looks. Far more papers are posted to the arXiv than I have time to check through, even if I restrict myself to a few areas of particular interest. But I use a recently created website called arxivist.com, which puts each day’s preprints in what it judges to be their order of interest to me.
As for saving time, yes, we certainly can "make quick judgements of publication lists by looking at the names of journals," but this "encourages the measurement culture that has infected academia, with all its well-known adverse consequences." As for the feedback function, that
is much more obviously valuable, especially in some subjects. Remarkably, we have arrived at a system where academics feel a moral obligation to perform the thankless task of reviewing the work of other academics, anonymously and unpaid. This undoubtedly makes the literature better than it would otherwise have been, and ensures at least one reader for each paper.
While other feedback mechanisms are possible, "it is less clear how they could become widely adopted." Thus one could add a comment function to preprint servers, but "only a small minority of preprints are actually worth commenting on" and
there is no moral pressure to do so. Throwing away the current system risks throwing away all the social capital associated with it and leaving us impoverished as a result.
That is a strong argument against an abrupt change to a new system, but it is not an argument against a gradual one. And it is not unrealistic to hope for gradual change.