Time devoted to research is increasingly precious to us in academia. We chastise ourselves for not being able to keep up with the huge volumes of current literature. If only there were some way that all the latest literature on a particular topic could be packaged together for us, and delivered right to our inbox without us even having to lift a finger! Now, what would we call such an improbable utopia – ah yes, peer review.
That's right, that same peer review process in which the third reviewer dashed your Nobel prize hopes (yet again). Volumes have been written about the inherent problems with peer review and its (sometimes dubious) ability to discriminate high-quality from low-quality science, but it is rare to see the many benefits of peer review given equal praise (Bornmann, 2011). Peer review invitations give you advance insight into the latest ideas in your field, along with an invaluable compilation of current literature. Peer review feedback also improves our publication outputs. Moreover, there has always been a tacit understanding that peer review is better than a post-publication heterarchy of populist ideals, and that all scientists have a vested interest in ensuring its success. The increasing difficulty that journal Editors face in finding peer reviewers suggests, however, that this view might be changing. Researchers are becoming increasingly vociferous about turnaround times and the robustness of the peer review system for their own papers, while at the same time abdicating their reviewer responsibilities in droves. They seem not to see the connection between cause and effect.
One of us (RKD) recently visited a ‘world top 10’ research organisation, and was mildly bemused to find early career researchers debating the use of a spreadsheet formula to calculate whether one of them should take on the latest reviewer invitation he had just received. A discussion about ‘algorithms for paying back reviewer debt’ would have made a great philosophical debate about the pros and cons of ‘bean-counting’ science. Alas, the discussion was actually about the far more prosaic issue of whether he should use a ‘rounding up’ principle or a ‘threshold obligation’ principle to accept versus decline (respectively) the review invitation, given that he only ‘owed’ 0.6 of a review at the time! I recall that sharp moment of clarity that you sometimes get when you look up from the keyboard and realise the world you (thought you) knew had changed forever. Peer review, that irreducible bastion of complex intellectual disquisition, had been reduced to a balance sheet of ‘1’s and ‘0’s; just one more annoying professional obligation amongst the increasingly long list of reports, lectures, meetings and other administrivia. Why should you do ‘more’ than you have to for someone else's benefit, and how much is ‘enough’ anyway (not ‘0.6 of a review’ apparently)? I felt the need to defend peer review from the banal metricisation of obligation.
So, what is this metric-based approach to salving reviewer ‘obligation’? It is the expression of a peculiar emerging philosophy of peer reviewing known as a zero-sum game, in which the researcher incurs a ‘reviewer debt’ when they publish one of their own papers, and thereby feels an obligation to pay back this debt by acting as a peer reviewer for other researchers’ manuscripts. One standard algorithm for zero-sum reviewers is a simple Σk/n formulation (cf. Vines et al., 2010), where k is the total number of peer reviews received and n is the total number of authors on a paper, with each paper's k/n share summed cumulatively over a body of work. For example, if Prof. Z. Erosum produced ten papers in a year with three co-authors on each paper (i.e. four authors including herself), and these papers received two peer reviews each (except for one that received three reviews), then her personal share of the reviewer debt would be (9 × 2/4) + 3/4 = 5.25 peer reviews owed.
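The Σk/n bookkeeping above can be sketched in a few lines of Python. This is purely illustrative: the function name `reviewer_debt` and the data layout are our own choices, not part of any published algorithm.

```python
# Zero-sum reviewer-debt accounting: a minimal sketch of the
# sigma(k/n) bookkeeping described above.

def reviewer_debt(papers):
    """Sum each paper's k/n share of reviewer debt.

    papers: iterable of (reviews_received, num_authors) tuples,
    where num_authors counts ALL authors, including oneself.
    """
    return sum(k / n for k, n in papers)

# Prof. Z. Erosum's ten papers: four authors each (herself plus
# three co-authors); nine papers received two reviews, one received three.
papers = [(2, 4)] * 9 + [(3, 4)]
print(reviewer_debt(papers))  # 5.25
```

Note that the 5.25 figure only falls out if n counts all four authors on each paper; counting only the three co-authors would give 7.0 instead.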
In a pragmatic sense, it might seem that peer review accounting is just a more nuanced version of the traditional rule-of-thumb that many of us used in the past, which was to do at least two reviews for every one paper we published. At the heart of both ideas is the sentiment that there is an obligation owed to the field. Zero-sum reviewing, however, seems to be more about minimising individual responsibility and workload, because the application of Σk/n is very obviously flawed in terms of both denominator inflation and numerator deflation. First, our perception is that many co-authors (the ‘n’ component) on multi-authored works are not likely to be able to take up their share of the reviewer debt for a particular manuscript. This might be because they are students or technical support staff who are not in a position to be offered peer review opportunities, or external collaborators from a different field of research who do not have the expertise to contribute to peer review in the field in which the manuscript is published. If the corresponding author(s) on each manuscript were truly playing a zero-sum game then their accounts would have to be corrected for quite significant denominator inflation.
A second problem is that Σk/n does not take into account the full transaction costs of peer review (the ‘k’ component). At Insect Conservation & Diversity (ICD), as at many journals, every manuscript is vetted by a Managing Editor or Editorial Assistant, followed by a primary scientific evaluation by the Editor-in-Chief (in consultation with one or more Senior Editors). If the manuscript is not rejected without review, a Senior Editor is appointed to handle the review process, and s/he then appoints an Associate Editor to invite reviewers and evaluate the reviewers’ and authors’ responses. This is a time-consuming set of tasks. In most cases at ICD, manuscripts require two to three rounds of evaluation, and in rare cases up to four or even five rounds. Only some of these costs to the system could reasonably be charged to co-authors in the zero-sum game, but Handling Editor debt would certainly be a large component that could be accounted for in the same manner as reviewer debt, if one were so inclined. Given the large number of authors and the relatively few journal editors in most fields, it is a safe bet that most researchers’ zero-sum accounting currently suffers from significant numerator deflation.
What bearing, then, does all this have on peer reviewing at ICD? We believe that a directional trend towards minimisation of professional input by researchers into the peer review system is rapidly leading to a bottleneck in manuscript processing. Just as Vines et al. (2010) showed for Molecular Ecology, the data for ICD also clearly show that the average number of review invitations sent per review received has been increasing steadily through time, from ~1.8 (S.E. ~0.07) in 2008–2011 to ~2.3 (S.E. ~0.13) in 2014–2016. Curiously, Vines et al. (2010) went on to conclude from similar data that there was no crisis in reviewer supply, because the size of the reviewer pool was increasing in proportion to the growing pool of new submissions, and because only 0.6 reviews per co-author would be required to ‘compensate for the review burden of each new article’ (k/n again). We disagree with this general conclusion because Vines et al. (2010) ignore the fact that reviewer willingness to review could be linked to k/n (under the zero-sum reviewer philosophy), which is itself declining through time. Even at a relatively small, recently established journal such as ICD (1642 article submissions from 2008 to 2016, with 3900 different authors), the data clearly show increasing n and declining k due to associated difficulties in obtaining willing reviewers. We believe that these trends are linked. The number of co-authors per paper is increasing through time (Fig. 1a), and for zero-sum researchers this type of hyperauthorship would no doubt lower their perceived obligation to pay back an increasingly tiny share of the reviewer debt. For instance, if we use the Vines et al. (2010) approach, we see that the average number of reviews per co-author (i.e. k/n) has declined linearly at ICD from ~1.2 in 2008 to ~0.6 in 2016.
We cannot definitively say that this drives changing peer review trends at ICD, but certainly the median frequency of reviewers accepting review invitations has dropped noticeably from ~70% in 2008–2013 to ~50% in 2014–2016 (Fig. 1b). Correspondingly, the percentage of manuscripts that now require excessively large numbers of review invitations (i.e. ≥8) to ensure a minimum of two completed peer reviews has increased dramatically through time, from ~6–8% in 2008–2011 to ~15–20% in 2014–2016 (Fig. 1c). This is not due to spurious ‘changes in technology’ as Vines et al. (2010) suggested (such as email spam filters getting better at blocking review requests), because the proportion of ICD reviewers ‘ignoring’ email requests (i.e. neither accepting them nor actively declining them through the online link) has plummeted through time, from a median of ~25% in 2008 to <5% since 2012 (presumably because reviewers are more technologically savvy these days, and one-click ‘declines’ require little effort). Instead, we believe that the data reflect the very real perception amongst journal editors that it is indeed getting harder to find willing reviewers these days.
Figure 1.
Trends in peer-reviewed manuscripts at Insect Conservation and Diversity 2008–2016: (a) trends in number of co-authors per manuscript (note that the y axis is on a log2 scale); (b) trends in proportion of invited reviewers that accept the reviewer invitation; and (c) trends in the number of review invitations that are required to obtain a minimum of two completed reviews per article. Violin plots represent the frequency distribution of response variables within a given year. Boxplots represent the mean and interquartile range of values within years, and whiskers represent 95% confidence limits. Data points are offset for clarity. Dashed lines are for illustrative purposes only. Pre-launch submissions from 2007 are pooled with submissions from the year ICD was launched in 2008. Data in (a) include the full sample of 1642 submissions from February 2007 to August 2016, whereas (b) is calculated after removal of 490 submissions with zero reviewers invited, and (c) is calculated after removal of a further 400 submissions with multiple separate rounds of reviewer selection which could bias the data.
Although much of the increasing difficulty in obtaining peer reviews can be blamed on the changing attitudes of reviewers, journals can still play their part in alleviating a substantial portion of the peer review mountain. One way to relieve the reviewer load within a specific journal is to have a stronger editorial process, in which expert editors reject more manuscripts without review (perhaps because the subject matter is outside the journal remit, the manuscript has obvious design flaws or statistical errors, or it does not make a significant advance in the field; Leather et al., 2014) rather than assigning reviewers to relatively low-quality papers that have little chance of eventually being accepted. This is particularly pertinent given the increasingly pervasive practice of ‘over-shooting’ on the quality of the target journal for first submission. Much has been written about the relative merits of editorial review versus peer review as different approaches to evaluating scientific literature (e.g. Steinhauser et al., 2012). We believe there is strong merit in using both in a complementary approach that balances the ‘gatekeeper’ role of good Editors in rejecting obviously flawed work with the ‘facilitator’ role of good Editors in weighing up a consensus of reviewer criticisms and author responses. Yes, there will always be issues raised about subjectivity, but this has been balanced to some extent at ICD by having a broader team of seven Senior Editors who can make final decisions on manuscripts, rather than just the Editor-in-Chief making all decisions. Ultimately it is important to remember that good Editors are selected for their skills and perceived objectivity in the first place, and that there is a constant ebb and flow in the dynamics of editorial boards, which means that weak Editors are quickly weeded out while good Editors rise to the top (Editors are not shy about giving their opinion if someone is not working out in the role!).
At ICD, we moved to a system of increased editorial review of first submissions in 2016, with 65% (86 out of 133 submissions) rejected without review in the first 8 months of 2016, compared with 34% (81 out of 236) in 2015 (and ~20–30% in preceding years). Initial indications for 2016 are that this has made no difference to overall rejection rates (79% in 2015 and 82% in 2016 so far), which perhaps indicates that the papers now being rejected without review are those that would likely have been rejected after review anyway. We estimate this has ‘saved’ ~78 peer reviews per 100 manuscript submissions, and has enabled Editors to focus more time on facilitating higher quality reviews for the remaining articles. For ICD authors, the data suggest that if your manuscript is not rejected outright without review (a decision made within 2 days, on average), it has a much-improved chance of eventual acceptance following peer review (ca. 50% on current 2016 data). For ICD reviewers, it should also mean that the manuscripts you are invited to review will be of higher average quality.
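As a rough back-of-envelope check, the ~78 figure is consistent with the following reconstruction. Note that the ~2.5 completed-reviews-per-reviewed-manuscript figure is our assumption for illustration (the editorial requirement above is a minimum of two reviews per article, with some manuscripts needing more), not a number stated in the text.

```python
# Hedged reconstruction of the '~78 peer reviews saved per 100
# submissions' estimate, under one plausible set of assumptions.

desk_reject_2016 = 0.65        # 86/133 rejected without review in 2016
desk_reject_2015 = 0.34        # 81/236 rejected without review in 2015
reviews_per_reviewed_ms = 2.5  # ASSUMPTION: average completed reviews
                               # per manuscript that enters review

# Extra manuscripts per 100 that now skip review entirely,
# times the reviews each would otherwise have consumed.
extra_desk_rejects_per_100 = 100 * (desk_reject_2016 - desk_reject_2015)
saved = extra_desk_rejects_per_100 * reviews_per_reviewed_ms
print(round(saved))  # 78
```

Under these assumptions the arithmetic lands on ~78 saved reviews per 100 submissions, matching the estimate above; a different reviews-per-manuscript figure would shift the result proportionally.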
