
Pitfalls of Peer Review

A recent "sting operation" highlights important questions about the peer review system and how to publish, disseminate, and debate scientific findings.

Published October 03, 2013


Science writer John Bohannon recently went undercover...for science! As Ocorrafoo Cobange, a made-up biologist at the equally fictitious Wassee Institute of Medicine in Asmara, Bohannon wrote a terrible paper about the anti-cancer virtues of a molecule he claimed to have extracted from lichen. "Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless," explains Bohannon in this Science article. Slightly differing versions of the "bait" paper were sent to 304 open access (OA) journals. Just over half of them, 157, accepted the paper, exposing serious flaws in the peer review system.

Balancing quality control with economics—and ethics—isn't straightforward, nor is this a problem uniquely related to OA journals. In this Guardian article, Netherlands Institute for Advanced Study Fellow Curt Rice argues that the practice of charging author fees is at the root of the issue.

"This is a model that invites corruption. Set up a journal, accept some articles, charge a high fee, and publish the article on your website. This corruption is fed, of course, by the fact that researchers feel incredible pressure to publish more and more. It's also fed by a system that uses quantity as a proxy for quality. But it is a mistake to equate open access and author payment. There are traditional journals that require some payment, too, especially in connection with high typesetting costs," he says.

For different perspectives on this issue, subscription-based Nature covers the economics of OA publications and the debate about how to improve peer review. OA arXiv founder Paul Ginsparg considers potential improvements to the peer review system here.

In a blog post, "Are flaws in peer review someone else's problem?", nanoscientist Philip Moriarty invokes the genius of Douglas Adams to call attention to a related kink in the self-correcting mechanisms of scientific research: What happens when something gets through the process that turns out to have been wrong?

The idea is that it will be caught and rectified by subsequent experiments that yield different results, but there are some "buts." Moriarty, via his colleague Mathias Brust, informally estimates that about 80% of scientists find potential flaws in papers that don't immediately affect their work an insufficient reason to engage in disputes (the "Someone Else's Problem" invisibility field, see the above Douglas Adams link). Another 10% eschew "unfriendliness" between scientists. "After all, you never know who referees your next paper." Such reluctance to rock the proverbial boat could leave the next researcher building on shaky (or worse) preceding work, which may become canonical simply because it was published in a prestigious journal and never challenged, thanks to an entrenched culture of hoped-to-be-reciprocated politeness.

Furthermore, it can be logistically onerous and disincentivizing to replicate an experiment with which you take issue. Neuropsychology professor Dorothy Bishop (aka @deevybee) illustrates, "The expectation is that anyone who has doubts, such as me, should be responsible for checking the veracity of the findings... Indeed, I could try to get a research grant to do a further study. [This] might take a year or so to do, and would distract me from my other research. Given that I have reservations about the likelihood of a positive result [and, by extension, being able to publish], this is not an attractive option."

One fairly recent alternative is post-publication peer review—basically, non-anonymously discussing (or criticizing) a published paper on a blog. It's a controversial venue for debate, partly because it's so counter to the norm of deferring to journals as the medium and safeguard of scientific record. It also rubs some people the wrong way. If someone has to go through a burdensome process to publish the fruits of his or her labor, why should someone else be able to publish criticism immediately and with no vetting or regulation? But Dr. Bishop asserts that online forums allow "for new research to be rapidly discussed and debated in a way that would be quite impossible via traditional journal publishing." This can serve to more efficiently catch and cull errors. "In addition," Bishop adds, "it brings the debate to the attention of a much wider readership."

There's a fine line on the internet, however, between debate and vitriol (to be clear, Dr. Bishop wasn't engaged in the latter), and crossing it can also undermine good science, as well as science education. A recent study found that a rude tone in online comments responding to an article adversely affects how readers feel about the scientific content of the article, even when the readers are familiar with the subject and when the science is sound. This issue recently inspired Popular Science to do away with its comments section. Explaining the decision, PopSci online content director Suzanne LaBarre writes,

"If you carry out those results to their logical end—commenters shape public opinion; public opinion shapes public policy; public policy shapes how and whether and what research gets funded—you start to see why we feel compelled to hit the 'off' switch. A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics...The cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science."

Presumably, post-publication peer review would maintain a professional tone. But might the sight of scientists questioning each other's conclusions, even politely, also undermine public trust in science? As previously discussed on this blog, it's important to teach the process of science, as opposed to just facts. Marie-Claire Shanahan, Research Chair in Science Education and Public Engagement at the University of Calgary, Alberta, Canada, writes,

"The effects of 'right answer' science teaching [are] clear in the way students responded to disagreements among researchers...They wanted to know what the truth really was, and they became suspicious of the various scientists [with conflicting conclusions] for not knowing how to study the issue properly or for going in with biased preconceptions... Students need much more exposure to real inconclusive and controversial science."

There isn't one clear solution that addresses all of these issues, but increasing awareness is an important step. Encouraging replication studies (also see this article by Ed Yong) and reconsidering the "publish or perish" culture of academia are also important.

The subjects of quality control, questionable publication patterns, and science's ability to be self-correcting overall are discussed in this podcast, featuring excerpted coverage of our event, Envy: The Cutthroat Side of Science.

Disclaimer: The views and opinions expressed in articles on this blog are those of the author(s) and do not necessarily reflect the views or opinions of the New York Academy of Sciences.