Who is afraid of reviewers’ comments? Or, why anything can be published and anything can be cited
Article first published online: 17 MAR 2010
© 2010 The Authors. Journal Compilation © 2010 Stichting European Society for Clinical Investigation Journal Foundation
European Journal of Clinical Investigation
Volume 40, Issue 4, pages 285–287, April 2010
How to Cite
Ioannidis, J. P. A., Tatsioni, A. and Karassa, F. B. (2010), Who is afraid of reviewers’ comments? Or, why anything can be published and anything can be cited. European Journal of Clinical Investigation, 40: 285–287. doi: 10.1111/j.1365-2362.2010.02272.x
- Issue published online: 17 MAR 2010
- Received 3 February 2010; accepted 3 February 2010
We were surprised recently when an associate editor informed us that a manuscript was a resubmission of a paper that had already been peer-reviewed and rejected by our journal last year. The authors did not supply any rebuttal to the previous reviewers’ comments. Then we received another manuscript that was a copy of a previously peer-reviewed and rejected paper. The two versions were identical, with two exceptions: the title had changed, and the study design had metamorphosed from retrospective to prospective. Again, the cover letter made no allusion to the previous submission. Then we received a paper that, on close scrutiny, we had already rejected twice after peer review. In the first submission the study was comparative; in the second, it had been cut down to a one-arm description; in the third, it had become comparative again. The authors made no allusion to the previous submissions or to the reviewers’ comments.
Obviously, all these covert resubmissions were rejected. The submitting authors of these second shots apparently treated peer review as a lottery: if you try hard enough, anything can be published, even in the same journal that rejected the paper upfront; maybe it will go to different editors and reviewers; no need to address, or even mention, the problems identified by previous reviewers! This rationale leads to a parody of science and peer review. We do not know how extensive this practice is at other journals, but editors should beware.
Such incidents also make us think about wider issues. What about the thousands of papers that are rejected at one journal and then resubmitted elsewhere without any consideration of the previous reviewers’ comments? Rejection is currently the most common outcome of a journal submission. While many papers are rejected summarily based on screening by editors, most rejections receive input from peer reviewers. We have little evidence on how frequently authors take these comments into account when they resubmit elsewhere; however, we suspect that the comments are often ignored. An evaluation of randomized trials submitted to BMJ showed that the published versions of the papers (usually at some other journal) differed minimally from the original BMJ submissions [1,2].
There are three types of peer-reviewer comments (with some overlap and a grey zone between them): those that suggest improvements, those that find the paper to be less than a least publishable unit and those that identify flaws beyond repair. In our experience, papers that are flawed beyond repair or less than least publishable units are common. A perusal of the scientific literature is also convincing that junk is quite prevalent. One wonders whether some authors of junk had been alerted by peer reviewers but simply went ahead and published their papers wherever they found a niche of opportunity and a more negligent peer-review filter. We are not talking here about useful papers rejected from prestigious journals on the basis of priority. We are talking about clearly wrong papers, petty trivia and ultra-thin salami slices whose triviality even their authors (except for incurable megalomaniacs) would privately acknowledge.
Why would scientists publish junk? Apparently, the current system does not penalize its publication; conversely, it rewards productivity. In 1986, Drummond Rennie noted that nothing can deter a paper from ending up in print. Since then, more papers are published each year and more authors flock to the masthead of the average manuscript [5–7]. Nowadays, some authors co-author more than 100 papers annually. Some of these researchers actually published only three or four papers per year until their mid-forties or fifties; then, suddenly, they developed this agonizing writing incontinence. Such unbelievable productivity makes Erdős, the most famously prolific author of the past, seem lazy by comparison.
Fortunately, many academic reward systems have shifted away from mere productivity towards also taking impact into account. Impact is usually appraised by citation metrics. However, even junk gets cited. Two decades ago, only 45% of published papers indexed in the Web of Science received at least one citation within 5 years. This pattern has now changed: 88% of medical papers published in 2002 had been cited by 2007. Almost anything published this year will eventually be cited. While some interpret this trend as evidence of a welcome decentralization and democratization of science, this interpretation is questionable. Centralization of influential research in a few journals is still strong [11,12], and unscrupulous citation practices may partly explain why anything gets cited. Authors do not choose every citation carefully [13–16]. Some cite papers they have never read, especially if the titles sound good enough. Such “random” citations hit papers according to Poisson distributions, much like bombers dropping bombs at random over a town: some blocks are hit more, others less. Random citations are few for any single paper, but they accumulate for hyper-prolific authors. The more town blocks you have at risk, the higher the total number of hits.
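The bombing analogy can be made concrete with a minimal simulation. This is our own illustrative sketch (the function name and the numbers are assumptions, not from the editorial): each random citation lands uniformly on one of an author’s papers, so per-paper hit counts are approximately Poisson, while the author’s total grows with the number of papers exposed.

```python
import random

def scatter_citations(n_papers, n_random_citations, rng):
    """Drop each 'random' citation on a uniformly chosen paper,
    like a bomb falling on a random town block."""
    hits = [0] * n_papers
    for _ in range(n_random_citations):
        hits[rng.randrange(n_papers)] += 1
    return hits

rng = random.Random(0)
# Same hypothetical "bombing rate" (0.5 random citations per paper) for both:
prolific = scatter_citations(100, 50, rng)  # hyper-prolific author, 100 papers
modest = scatter_citations(4, 2, rng)       # modest author, 4 papers

print(sum(prolific), sum(modest))  # prints "50 2": totals scale with papers at risk
```

Individual papers in `prolific` receive few hits each, but the author’s aggregate is 25 times that of the modest author, purely by exposure.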
Self-citations are another consideration. An analysis of papers from Norway (a country with overall high-quality research) showed that 36% of the citations received within a 3-year window are self-citations. Interestingly, self-citations also enhance citations by others, and this promotional effect is more advantageous to low-impact scientists. Some researchers can reach very far. Take, for example, ISIhighlycited.com, which lists the 300 most-cited scientists in each science (among hundreds of thousands): recently, a scientist entered this prestigious list with 90% of his citations being self-citations.
So, here is a recipe for ‘success’: co-author more papers (salami slicing, elimination of quality checks, undeserved co-authorship and acceptance of gifts from ghosts can all help); ignore unfavourable reviewer comments; keep submitting until you find a hole in the peer-review system; self-cite; and expect random citations. While this recipe will probably not earn its practitioners a Nobel Prize, unfortunately it can still open many doors.
Serious researchers usually know how to amend raw productivity and citation metrics, e.g. by penalizing excessive self-citation and standardizing for the citations expected for the average paper in the same field. Serious researchers may also scrutinize for themselves the quality and breadth of each miserable paper, even if not all problems are readily visible and much (including real author contributions) may be hidden under the carpet. However, what happens to the comments of the peer reviewers of rejected papers? Practically nobody hears about them. Peer reviewers are unpaid consultants; they receive no credit for their reviews, they waste their time, and then their comments are discarded, while the papers they showed to be wrong eventually get published and cited and shape the scientific literature. This is very disheartening. Not surprisingly, it is becoming difficult for journals to find committed reviewers. It is hard to predict who will be a good reviewer, but when experienced reviewers decline due to time constraints or other reasons, editors or their secretaries search for whoever else has published in the field. Poisson distributions get to work again: the authors of junk papers now also become peer reviewers, then even editors – and sanctify their own standards.
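The two adjustments mentioned above can be sketched in a toy formula. This is purely our illustration (the function and all numbers are hypothetical, not a standard or author-endorsed metric): discard self-citations, then normalize by the citations an average paper in the same field would be expected to collect.

```python
def adjusted_citation_score(total_citations, self_citations, field_average):
    """Toy metric: external citations divided by the field's expected
    citations per paper (both adjustments described in the text)."""
    external = total_citations - self_citations  # penalize self-citation
    return external / field_average              # standardize by field

# A hypothetical paper with 30 citations, 12 of them self-citations, in a
# field where the average paper collects 6 citations: (30 - 12) / 6 = 3.0
print(adjusted_citation_score(30, 12, 6))  # prints 3.0
```

Under this sketch, a score above 1 means the paper out-performs the field average even after its self-citations are discounted.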
Some journals use more rigorous quality checks than others, but overall peer review is not perfect. Flawed papers may get through. Over a dozen reviewers may toil over a flawed paper that is rejected four times before it overcomes the system and is published; the reviewers may cumulatively spend more time on such a paper than its authors did. At the other end of the spectrum, many excellent papers that need only some improvements are rejected by highly competitive journals based on priority considerations. Again, the peers’ comments are discarded and the Sisyphean review process is reiterated.
One option is to make the comments of peer reviewers publicly available even for rejected papers. Authors would then have to respond to them, even if they went to a different journal. However, such a solution would require wide agreement between journals and authors. No journal can afford this policy alone, as it would make the journal less competitive in attracting submissions if authors realized that rejection comments would follow them perpetually. Some journals encourage submission of the comments from previous rejections; however, do authors willingly share devastating comments with the next unsuspecting editor?
Concurrent submission to multiple journals has been proposed, but not adopted in any major scientific field. Another option is to have drafts deposited at a common public site. Editors from any journal could access them, invite peer reviews and make offers for publication. The peer reviews would be attached to the papers regardless of whether they were accepted, and authors would revise their work until they had an offer from a journal they would like to publish in. The central-deposit concept is already accepted in the physical sciences (e.g. see arXiv). However, public availability of peer review is adopted by few journals, and typically only for accepted papers. Availability of the peer reviews of rejected papers might minimize the endorsement of clearly wrong or poor papers. It might also improve the quality of peer review: even under anonymity, reviewers may become more careful (e.g. peer reviews that only suggest citing the reviewer’s own work would probably disappear). Rejected papers would still remain publicly visible, thus diminishing publication bias. However, the interested reader would know that they had been rejected – or, to be exact, not yet endorsed for branding through journal publication. One would see how the authors of not-yet-endorsed papers had responded to previous comments. Some unendorsed papers may actually be masterpieces seeing beyond their times; others would just be junk.
While we may speculate about major changes that would require broad consensus among authors, publishers and other stakeholders in the future, we owe transparency and honesty to ourselves right now if we want a chance to separate masterpieces from junk. First, at EJCI we understand that occasionally some papers may be rejected unjustifiably. If the authors believe the reviewers were clearly wrong, rebuttals buttressed with solid evidence are welcome. Rebutting authors should avoid sentimentality in defending their work; it will take strong and lucid arguments to change an editorial decision. However, we will not tolerate covert resubmission of rejected papers, and we have instituted more intense checks so that such covert resubmissions will be recognized. Second, we will no longer accept new submissions of Brief Communications: we want to encourage the submission of full-length papers with substantial essence, not slices of research. Finally, we strongly encourage authors to share with us the comments of previous reviewers and their responses to them. We understand that authors may try other competitive journals first, and many excellent papers are rejected simply on the basis of perceived priority or, sometimes, erroneous judgments. A cogent response to previous comments will weigh positively in our evaluation at EJCI and may expedite the time to acceptance.
J.P.A.I. wrote the first draft and A.T. and F.B.K. critically reviewed it.
Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece (J. P. A. Ioannidis); Department of Medicine, Tufts University School of Medicine and Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA, USA (J. P. A. Ioannidis, A. Tatsioni); Department of Epidemiology, Harvard School of Public Health, Boston, MA, USA (J. P. A. Ioannidis); Department of Internal Medicine, University of Ioannina School of Medicine, Ioannina, Greece (A. Tatsioni).
- 1. Comparison of submitted and published reports of randomized trials. Abstracts of the 5th International Congress on Peer Review and Biomedical Publication; 2005 September 18. Available at: http://www.ama-assn.org/public/peer/abstracts.html#random.