Peer review is commonly accepted as an essential part of scientific publication. But the ways peer review is put into practice vary across journals and disciplines. What is the best method of peer review? Is it truly a value-adding process? What are the ethical concerns? And how can new technology be used to improve traditional models? The recent announcement of a new journal sponsored by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust generated a bit of discussion about the issues in the scientific publishing process it is designed to address: arbitrary editorial decisions, slow and unhelpful peer review, and so on. Left unanswered, however, is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with? What value does the existence of these journals add? In this post, I will argue that largely cutting journals out of scientific publishing would be an unconditionally good thing, and that the only thing keeping this from happening is the absence of a “killer app”.
The publishing process as it stands currently
As most readers here are aware, the path to publishing a scientific paper has two major obstacles: first, the editor of a journal has to decide that a paper is potentially “interesting” enough for publication in their journal; second, if it passes that threshold, it is sent out for “peer review” by two to four people chosen by the editor. The reviewers make a recommendation about whether or not the journal should publish the paper: if they all like it, chances are it will be accepted (potentially after additional experiments); if one of them hates it, chances are it will be rejected; if the reviews are mixed, the editor makes a judgment call. In total, this process involves a handful of people at most, and takes anywhere from a few months to a year (of course, if the paper is rejected, you generally start all over again). The problems with this system have been pointed out ad nauseam; the most succinct statement of the issues I’ve seen is in a nice commentary by a British Medical Journal editor. To summarize, peer review is costly (in terms of time and money), random (the correlation in perceived “publishability” of a paper between two groups of reviewers is little better than zero), ineffective at detecting errors, biased towards established groups and against originality, and sometimes abused (in that reviewers can steal ideas from papers they review or block the publication of competitors).
We can do better
So why do we stick with this system, despite its many flaws? These days, the cost of self-publishing a paper online is essentially zero (for example, on a preprint server like arXiv or Nature Precedings), so going through a peer-reviewed journal simply to get a paper published seems a bit silly. However, journals do perform a service of sorts: they filter papers, ostensibly based on interest to a community, and they also ostensibly ensure that the details of an analysis are correct and clear enough to be replicated (though in practice, I doubt these filters work as intended).
So let’s take this goal (filtering papers based on quality, interest to a community, and reproducibility) as the legitimate service provided by peer-reviewed journals. When phrased like this, it’s simply absurd that our way of achieving this goal involves a handful of unaccountable, often anonymous, reviewers and editors, and takes so much time and money. Surely the best judge of the interest of a paper to the community is, well, the community itself. Ditto for the best judge of the quality and reproducibility of a paper. So let’s imagine a different system. What features would this system have?
1. Immediate publication without peer review. This is simply a feature taken from preprint servers like arXiv, and addresses the issues of speed and cost of publication.
2. One-click recommendation of papers. Now we need a way to filter the papers from step 1. Imagine a feed of new papers (like the feed on reddit or Facebook); one simple and relatively effective filter is to allow individuals to express that they like a paper with a single click (again, as on reddit, Facebook, or Google+). It seems stupid, but it takes little effort and is extremely effective in some situations.
3. Connection to a social network. Of course, in some cases I don’t really care if a lot of people like a paper; instead, what I want to know is: do people I trust like the paper? If I’m trying to find the best recent papers on copy number variation, I don’t necessarily care if a thousand people like a paper, but I would probably take a second look at a paper recommended by (my GNZ colleague and copy number variation expert) Don Conrad.
4. Effective search based on the collective opinion on a paper. Many times, I’m searching for the best papers in a field somewhat outside my own. One of the most useful features in Google Scholar in this regard is that it immediately tells you how many citations a paper has received; in general, this is highly correlated with the community opinion of a paper. This breaks down for new papers, all of which have zero citations. Often, I’d like to be able to search the relatively recent literature and sort based on the criteria in steps 2 and 3.
You can imagine additional sorts of features that would be useful in a system like this (comments, voting on the comments themselves, encouragement of reproducible research via Sweave or some other mechanism), but the aspects above are probably essential; a rough sketch of how they might fit together follows below.
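To make features 2 through 4 concrete, here is a minimal sketch in Python of how one-click likes, a trust network, and recency could be combined into a single ranking. Everything in it is invented for illustration (the Paper class, the score function, the half_life_days and trust_weight parameters); it is one plausible scoring rule, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from math import exp, log1p

@dataclass
class Paper:
    title: str
    posted: datetime
    liked_by: set = field(default_factory=set)  # IDs of users who clicked "like"

def score(paper, trusted, now, half_life_days=30.0, trust_weight=5.0):
    """Combine raw likes (feature 2), likes from trusted colleagues
    (feature 3), and recency (feature 4) into one sortable number."""
    likes = len(paper.liked_by)
    trusted_likes = len(paper.liked_by & trusted)  # set intersection
    age_days = (now - paper.posted).days
    decay = exp(-age_days / half_life_days)  # older papers fade from the feed
    # log1p damps runaway popularity; a like from a trusted reader counts far more
    return (log1p(likes) + trust_weight * trusted_likes) * decay

def ranked_feed(papers, trusted, now):
    """Sort a stream of papers for one reader, best first."""
    return sorted(papers, key=lambda p: score(p, trusted, now), reverse=True)

# One like from a trusted expert can outrank an older paper with
# hundreds of anonymous likes:
now = datetime(2011, 7, 1)
papers = [
    Paper("New CNV survey", datetime(2011, 6, 20), {"dconrad", "u42"}),
    Paper("Older popular paper", datetime(2011, 1, 5), {f"u{i}" for i in range(500)}),
]
for p in ranked_feed(papers, trusted={"dconrad"}, now=now):
    print(p.title)
```

Under this scoring rule, a single like from a trusted colleague outweighs hundreds of anonymous likes on an older paper, which is exactly the behavior point 3 asks for; the particular weights are arbitrary and would be something to tune in any real system.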
Does a system like this perfectly address all of the issues with peer review mentioned above? No; my guess is that this sort of system would also be somewhat biased towards established research groups, just as peer review is. But on every other count, this sort of system seems superior.
How do we get there?
Many of the above ideas, of course, are not new (see, e.g., discussions by Cameron Neylon). And much of academia has bought into the “peer-reviewed journal” system for the evaluation of individuals: that is, when evaluating an individual researcher for a grant or tenure, the quality of the journal in which a piece of work is published is often used as a proxy for the quality of the work itself. It’s easy to see how this system became entrenched, but it is obviously not compatible with the publishing model outlined above (nor is it an ideal system in its own right, unless you’re someone with a knack for getting crappy papers published in Nature). So what’s the way out?
One thing to note is that many of these ideas about community ranking could be incorporated into the “standard” publishing route. Indeed, some aspects of these ideas (comments, rating of articles) have been implemented by the innovative PLoS journals, but have been greeted with deafening yawns from the research community. Why is this? Certainly, it’s partly because these systems are non-trivial to use (you have to log in to some new system every time you want to rate a paper), but most importantly, there’s no sense of community: I’ll never see your comment on a paper, however relevant it is, unless I come across it by chance, and there’s no mechanism to tell me when a paper is being read and liked by many people I trust. The current implementations of these ideas simply don’t perform the filtering role they were meant to take over from journals; if I see that a PLoS One paper is highly rated, this doesn’t help me at all, because I’ve already found the paper!
This situation has created the perfect niche for a killer app: one that solves all of these issues and will actually be used. I’m not sure exactly what this will look like, but it will likely tap into already existing online identities and social networks (Google+? Facebook?), will require approximately no learning curve, and will be of genuine utility (i.e., it will deliver the good PLoS One papers to me, rather than waiting for me to find them). Once a system like this exists, and it can be shown that it’s possible to judge people (for grants, tenure, etc.) in this system based on the impact of their work, rather than the prestige of the journal in which the work is published, it’s an easy additional step to eliminate the journals altogether.
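Continuing the sketch from above, and again purely as an illustration with an invented weekly_digest function, the “delivery” half of such an app might look like this: rather than waiting for me to search, it periodically pushes the top-scoring recent papers as seen through my trust network.

```python
def weekly_digest(papers, trusted, now, top_k=5, window_days=7):
    """Push-style delivery: pick the best recent papers for one reader,
    reusing score() and ranked_feed() from the sketch above. How the list
    reaches the reader (email, an activity stream) is left open."""
    recent = [p for p in papers if (now - p.posted).days <= window_days]
    return ranked_feed(recent, trusted, now)[:top_k]
```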
Conclusions
Before the internet, peer-reviewed journals and researchers had a happy symbiosis: scientists had no way of distributing their best results to a wide audience on their own, and journals performed that service while making a bit of profit. Now, that symbiosis has turned into parasitism: peer-reviewed journals actively prevent the best scientific results from being disseminated, siphoning off time and money that would be better spent doing other things. The funny thing is, somehow we have been convinced that this parasite is doing us a favor, and that we can’t survive any other way. It’s not, and we can.