The general public believes that peer-reviewed publications are thoroughly vetted and scrutinized. In reality, nothing could be further from the truth. The review method is outdated and stuck in the eighteenth century. This has led to a completely broken system that has made "peer-reviewed" a meaningless, self-promotional phrase.
This is how a typical review works:
The author submits a manuscript by uploading the files to the journal, and it goes into the editor's queue. The editor is typically an unpaid appointee, usually a scientist or professor. He or she looks at the title and abstract and finds potential reviewers from a database or a Google search. A dozen reviewers may get emails asking them to review the paper. Of those, only one or two may actually send a review by the deadline. Sometimes no one does, and the process has to start over. The reviewers are also unpaid volunteers. However, unlike the editor, whose name appears on the journal, reviewers remain anonymous, so they get no recognition or credit for their work. Apart from altruism, they have little motivation to complete the task, or even to do it right. Besides, the probability that the topic is in an area of much interest to the reviewer is small. The vast majority of reviewers give the manuscript no more than a cursory glance. If the topic is current and noncontroversial, the treatment seems reasonably detailed, and the spelling and grammar appear to be good, chances are it will be recommended for publication. In my experience serving as associate editor of IEEE journals, the reviewers who do the best job are graduate students, post-docs, and assistant professors. The higher up the chain reviewers are, the more likely they are to simply ignore requests for review or provide a sloppy one. This is the current state of peer review.
This is not to say that reviewers are lazy and editors are incompetent. These are people with busy lives and packed schedules. Ask any professor, and the one thing they will all agree on is how much they despise grading term papers. So, why would they want to spend their spare time reading and correcting even more papers? The answer is, they don’t.
What does peer review really mean?
Peer review does not mean the paper was critically examined by experts. It does not mean the paper is free of errors, fraudulent claims, or plagiarism, or that it contains only previously unpublished results. There are plenty of infamous examples of papers that passed peer review and were published, only to be caught later by the readership. But only the most outrageous cases actually get caught. Readers are not obligated, or even inclined, to report anything suspicious to the journal. This means there could be thousands of peer-reviewed journal articles out there that should not be. It is not a comforting thought.
Why is peer-review broken?
A hundred years ago, all journals were printed and mailed out. The publication volume was small, and the community was small. Furthermore, the postal system made getting manuscripts back and forth very difficult. As a result, the editor and a small group of his close colleagues often controlled the entire review process. The assumption was that these gatekeepers were wise men who made the best judgments to protect the integrity of the scientific literature. Flawed as it may be, that was the only way it could be done at the time.
Today the volume of publications has exploded. Each day I get nearly 100 new peer-reviewed publications in my RSS feeds, just within my narrow discipline. There is no way that every single one of them has been carefully scrutinized.
So why do we still do it this way?
Old habits die slowly. This is even more true in a top-heavy discipline like academia. Most editors and senior scientists built their careers in the old system. Making drastic changes is like admitting that they succeeded in a system that was flawed.
Is there a better system?
Yes: crowd-source the peer review process. Think about it. Real peers are not two people hand-picked by the editor. Peers should be everyone who has an expert opinion on the subject. Such a review would have been impossible 25 years ago, but it is frightfully easy today. Unfortunately, even the most technologically advanced organizations, like IEEE, have not taken this first step.
Isn’t crowd-sourcing scientific review fraught with potential pitfalls?
Let me explain how it would work, using an IEEE journal as an example.
The author uploads a manuscript to the journal. The editor (yes, we still need an editor) reviews the manuscript for proper content, language, and formatting. This should also include an automated check for plagiarism and copyright infringement. Such tools are widely used in colleges to screen student papers, but I have never heard of one being used on journal articles. Why? Probably because we assume professionals will not do such things, despite being proven wrong time and time again.
Then the editor places the manuscript in the publication queue. Anyone who is a member of that technical society (or possibly limited to a certain membership grade) is allowed to rate and comment on the manuscript. At any point, the editor can delete comments that are off topic or inappropriate. Once the manuscript earns a sufficient number of positive reviews, the editor can approve it for publication or recommend that it be revised. If there are many negative reviews, the manuscript can be returned to the author. Manuscripts with no reviews are automatically withdrawn from the system after a predefined length of time.
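The queue logic above amounts to a small state machine. As a minimal sketch, here is one way it could be modeled; the class name, thresholds, and time window are hypothetical illustrations, not part of any actual IEEE system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

APPROVE_THRESHOLD = 5             # assumed: positive reviews needed before approval
REJECT_THRESHOLD = 5              # assumed: negative reviews before return to author
STALE_AFTER = timedelta(days=90)  # assumed: auto-withdrawal window for unreviewed papers

@dataclass
class Manuscript:
    title: str
    submitted: datetime
    positive: int = 0
    negative: int = 0

    def add_review(self, is_positive: bool) -> None:
        # Each society member's rating counts as one review.
        if is_positive:
            self.positive += 1
        else:
            self.negative += 1

    def status(self, now: datetime) -> str:
        # Enough positive reviews: editor may approve or ask for revision.
        if self.positive >= APPROVE_THRESHOLD:
            return "approve"
        # Many negative reviews: return the manuscript to the author.
        if self.negative >= REJECT_THRESHOLD:
            return "return-to-author"
        # No reviews at all within the window: withdraw automatically.
        if self.positive + self.negative == 0 and now - self.submitted > STALE_AFTER:
            return "withdraw"
        return "in-queue"
```

The editor's moderation (deleting off-topic comments, making the final publication call) stays outside the automated logic; the code only tracks the crowd's aggregate signal.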
I have presented this idea to various IEEE executive committees, and it was shot down every single time. These are the reasons they usually give:
- If we let anyone review a manuscript, people might get their friends to give positive reviews:
There is some validity to this argument. But the real problem behind it is the assumption that the majority of our members are unscrupulous. That is a sad reflection of how the leadership of these organizations views its own membership. Who is to say that the editors and their hand-picked reviewers have more integrity than the general membership of the society? Some members may attempt to manipulate the system, but when the community base is large, any such attempt usually corrects itself. That is the beauty of a crowd-sourced process.
- How do we know only the ones with technical expertise will provide reviews?
We don't. We have to rely on members' own judgment of their own expertise. The same is true today with hand-picked reviewers: often, the reviewers' areas of expertise are self-proclaimed. Moreover, the problem we should be concerned about is receiving too few reviews; too many reviews, even from non-experts, is still a good thing. It is also easy to identify a review written by someone who is not an expert on the subject. That is the editor's job, and the editor can still make the final call on which reviews to ignore.
Put the membership in the driver’s seat
Most scientific organizations are struggling with declining membership and poor member participation. Despite deeply discounted dues, students nowadays rarely join these organizations. The common complaint we hear at every executive meeting is how the younger generation has become apathetic and disconnected. But the problem is not the younger generation's attitude; it is the outdated institutional model. The organization is designed to serve its members in a top-down fashion, but what the current generation wants is interactive participation. In the old model, leadership meant running for office and taking on roles such as President, Vice-President, or Treasurer. This is no longer the case. One can have significant, transformational influence without taking on a traditional leadership role. The digital age has made this possible.
Let the members review their own peers. Give them more ownership and a stake in their own future. This is true democracy, and the technology to do it is already here.