A new tool that selects peer reviewers by algorithm could make the peer review process more reliable.
The more peer reviews on a paper from top scientists, the stronger the peer review signal.
Technology has changed some aspects of science tremendously. We have deep space telescopes that produce terabytes of data every day. We have tools to create synthetic living cells. However, the system of peer review – the mechanism by which scientific results are vetted – hasn’t materially changed since the 1600s when journal publishing was invented.
The reliability of the peer review process recently came under the spotlight when the journal Science submitted a deliberately flawed biology paper to a number of open access journals. The fake paper made it through peer review at the majority of those journals.
Pharmaceutical companies have also drawn attention to the fact that the majority of peer-reviewed papers in the biomedical space are not reproducible. Amgen reported that it could not reproduce 89% of the peer-reviewed studies it examined; Bayer’s estimate stands at 64%. The Economist has also discussed studies that question the reliability of the peer review process.
Part of the issue is that in the traditional peer review process, only two or three scientists peer review a paper. The system places too much weight on what a small number of scientists think. There are more than 50,000 people working in a field like breast cancer. What two or three people think is too small a sample: the signal it produces is too noisy to be statistically meaningful. We need to expand peer review so that there are vastly more scientists peer reviewing papers, sharing their thoughts and comments on each other’s papers.
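To put a rough number on that intuition, here is a minimal sketch in Python (not from the article; the reviewer scores below are purely hypothetical) showing how the uncertainty around a paper’s average review score shrinks as the number of reviewers grows.

```python
import math
import statistics

def review_signal(scores):
    """Mean review score and its standard error for a set of reviewer scores (1-10 scale)."""
    mean = statistics.mean(scores)
    # Standard error of the mean: sample standard deviation / sqrt(n)
    sem = statistics.stdev(scores) / math.sqrt(len(scores)) if len(scores) > 1 else float("inf")
    return mean, sem

# Hypothetical scores for the same paper: three reviewers vs. thirty reviewers
few = [7, 4, 8]
many = [7, 4, 8, 6, 5, 7, 8, 6, 7, 5, 6, 7, 8, 4, 6,
        7, 5, 6, 7, 8, 6, 5, 7, 6, 8, 7, 5, 6, 7, 6]

for label, scores in [("3 reviewers", few), ("30 reviewers", many)]:
    mean, sem = review_signal(scores)
    print(f"{label}: mean = {mean:.2f}, standard error = {sem:.2f}")
```

In this toy example the standard error drops from roughly 1.2 with three reviewers to about 0.2 with thirty, which is the stronger signal the article is arguing for.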
In the past, it hasn’t been possible for journals to solicit peer reviews from a large number of people, because the process by which journal editors find peer reviewers for a paper is manual and labour intensive. That may change with large communities of scientists emerging online: Academia.edu has 5 million academics on its platform; ResearchGate lists over 3 million and Mendeley has 2.6 million.
If we combine these large communities with technology currently available, we can algorithmically determine the top scientists in a given field, and weight their peer reviews accordingly. The goal is to build a system that incentivises scientists to share their peer reviews and thoughts on the papers they are reading. Science is a conversation, and progress happens through the rapid discovery of errors, and quickly learning from those errors and moving on.
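The post doesn’t say how that weighting would work under the hood, but a minimal sketch of one plausible approach, a reputation-weighted mean of review scores, might look like the following (the Review structure, reputation values and reviewer names are illustrative assumptions, not Academia.edu’s or Plasmyd’s actual algorithm).

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    score: float       # the reviewer's rating of the paper, say on a 1-10 scale
    reputation: float  # field-specific reputation weight, e.g. derived from citations

def weighted_review_score(reviews):
    """Aggregate reviews as a reputation-weighted mean, so top scientists count for more."""
    total_weight = sum(r.reputation for r in reviews)
    if total_weight == 0:
        raise ValueError("at least one review must carry a positive reputation weight")
    return sum(r.score * r.reputation for r in reviews) / total_weight

# Hypothetical reviews of a single paper
reviews = [
    Review("senior_oncologist", score=8.0, reputation=9.5),
    Review("postdoc",           score=5.0, reputation=3.0),
    Review("grad_student",      score=9.0, reputation=1.5),
]

print(f"Reputation-weighted score: {weighted_review_score(reviews):.2f}")
```

The design choice here is simply that a review from a highly cited specialist moves the aggregate score more than one from a newcomer; any real system would also have to decide where the reputation numbers come from and how to prevent gaming.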
Cue the acquisition by Academia.edu of Plasmyd, a peer review and discussion platform for papers. The goal is to tackle the next building block in open science: building a better peer review system. The integration will combine Plasmyd’s experience in fostering peer review with Academia.edu’s research community and experience developing new ways of measuring reputation in research.
Humanity faces many problems that require science-based solutions. We need to find cures for Alzheimer’s, Parkinson’s, HIV, malaria and cancer. The planet is on a path to self-destruction with increasing carbon emissions, and we need to develop a carbon-free source of energy that works at scale.
The goal of open science is to accelerate scientific discovery by making the process of science more open. Part of open science is getting every science PDF that has ever been written onto the internet and accessible for free. Another part is building a better peer review process, one that is more open and does a better job at separating the reproducible research from the non-reproducible research.
Someone sure doesn’t understand what the peer review process is. If it’s not reproducible, it can’t be peer reviewed. It seems the writer thinks the peer review process starts and stops with getting an article printed… it doesn’t. Getting printed is the FIRST step in the peer review process. FIRST. The process NEVER ends. All papers are CONSTANTLY being reviewed, over and over and over again. While the journals can certainly improve their reviewing of papers, saying that that’s a failure of the peer review process is absurd and just shows complete ignorance of what the peer review process is.
The more researchers review or comment on papers, the more quality will be contributed to peer review. Peer review is an important mechanism of quality assessment in science; however, it needs to be improved with the tools of the current digital age. The principle of social networks, combined with those aspects of traditional publishing that should survive the print age, could easily be adopted for peer reviewing, as the author pointed out. This is why I initiated ScienceOpen. However, we should take extreme care to use automatic systems only to find qualified reviewers. We all know that active scientists are bothered by receiving more than one e-mail per day, either from traditional journals asking for a review or from the platforms mentioned above. Therefore such a mechanism should be more sophisticated, to protect researchers from an increasing amount of spam. The good news: there are concepts for combining these ideas, and we are working to implement them.