How to avoid the top ten pitfalls in insect conservation and diversity research and minimise your chances of manuscript rejection

As Editors, we far too often find ourselves having to press the ‘instant reject’ button shortly after receiving and reading a new submission to Insect Conservation and Diversity. In many cases, these manuscripts clearly represent a great deal of work, often conducted under adverse conditions in some far-flung corner of the Earth. Nevertheless, there are frequently obvious flaws in the conceptual approach, design, or execution of these studies that might easily have been overcome with appropriate forethought. We certainly understand that rejection without review causes great disappointment for the author(s), and you might be surprised to learn that it also causes considerable angst for us as Editors. After all, none of us likes to dash the hopes of fellow entomologists. Consequently, we feel compelled to share what modest advice we can offer on how best to avoid ending up on the literary discard pile, and to maximise the chances of successfully navigating the peer-review process.

It is now 6 years since Insect Conservation and Diversity was launched (Leather et al., 2008), and by the end of 2013 we had published ~250 studies and rejected over 1000 manuscripts (an average rejection rate of around 80%). Many of these manuscripts were rejected without review or received a firm rejection following peer review, while a smaller proportion were rejected but with the authors encouraged to resubmit a new version. Under each of these three scenarios there tended to be slightly different reasons for rejection, but we have drawn out some common threads across studies. For comparison, we drew an arbitrary cross-sectional sample of ~10% of all submitted manuscripts (n = 130) covering the first 6 years of journal submissions (Table 1). The subset had similar manuscript fates to the total pool, with a ~70% rejection rate for first decisions on original submissions (i.e. not including resubmissions or revisions). For rejected manuscripts we compiled the key reason(s) contributing to the decision, and ranked these to determine the top 10 most common reasons for rejection (Table 1).

Table 1. Percentage of manuscripts that were rejected for different reasons, following the first decision after initial submission to Insect Conservation and Diversity between 2008 and 2013 (n = 91 rejected manuscripts out of a subset of 130 submitted; 50% of rejected manuscripts were rejected without review, 26% rejected following review, and 24% rejected with the option of resubmission). Note that percent values do not add up to 100% in each column because several factors often contributed to the decision to reject an individual manuscript.
Reasons contributing to rejection | Reject without review (%) | Reject following peer review (%) | Reject and resubmit (%) | All rejected manuscripts (%) | Overall rank

Conceptual design issues
Wrong field | 6.7 | 4.2 | 0.0 | 4.4 | -
Lack of hypothesis test | 26.7 | 4.2 | 18.2 | 18.7 | 3
Specific case study without extrapolation to general principle | 53.3 | 12.5 | 13.6 | 33.0 | 1
Pattern without process | 24.4 | 4.2 | 13.6 | 16.5 | 5
Complex bioindicator for a simple pattern | 2.2 | 8.3 | 0.0 | 3.3 | -

Experimental design issues
Lack of appropriate controls | 0.0 | 8.3 | 0.0 | 2.2 | -
Pseudoreplication | 22.2 | 16.7 | 13.6 | 18.7 | 3
Spatial autocorrelation of treatments | 8.9 | 12.5 | 4.5 | 8.8 | 6

Methodological issues
Methods vary across treatment groups | 4.4 | 4.2 | 4.5 | 4.4 | -
Inappropriate sampling method | 0.0 | 8.3 | 0.0 | 2.2 | -
Inappropriate spatial scale of sampling relative to organism traits | 4.4 | 8.3 | 9.1 | 6.6 | 10

Statistical issues
Lack of taxonomic resolution to (morpho)-species level | 2.2 | 12.5 | 4.5 | 5.5 | -
Inadequate statistical analysis | 2.2 | 8.3 | 18.2 | 7.7 | 7
Low sample sizes | 6.7 | 37.5 | 31.8 | 20.9 | 2
Low abundances per sample unit | 4.4 | 4.2 | 4.5 | 4.4 | -
Variation in sample coverage among treatments | 2.2 | 4.2 | 4.5 | 3.3 | -
Abundance not incidence analysis for social insects | 0.0 | 0.0 | 9.1 | 2.2 | -

Weak inference
Inability to discriminate correlated predictors | 2.2 | 8.3 | 18.2 | 7.7 | 7
Conclusions extrapolate beyond inference supported by data/predictions | 2.2 | 16.7 | 9.1 | 7.7 | 7
Lack of applied conservation significance | 2.2 | 4.2 | 4.5 | 3.3 | -

What was immediately evident from this exercise was that there were few single-factor causes of manuscript rejection. Most often there was a cascading series of problems, stemming from poor study design, that affected multiple aspects of methodology, analysis, and inference. Of the 91 rejected manuscripts in the subset, the clear dividing line between the 50% that were ‘instant rejects’ and the other 50% that were sent out for formal peer review was the weight of conceptual design flaws in their approach and/or experimental design flaws in how the concept was implemented (Table 1). Overwhelmingly, the number one contributor to rejection without review (in 53.3% of instant rejects) was the presentation of a localised case study of conservation or diversity without extrapolation to any generalised principle that might transcend the case-specific example and thus be of broad relevance to the discipline as a whole. In most of these cases, the problems were compounded by a lack of explicit hypothesis testing (26.7% of cases) and the absence of any analysis or discussion of the processes likely to be driving the observed patterns (24.4% of cases). The typical ‘instant reject’ manuscript in this category was a simple description of patterns of diversity or species composition in habitat A versus habitat B, at low versus high elevation, in season 1 versus season 2, and so forth, with no formal statement of hypothesis and no context within the wider scientific literature.

Frequently (in 22.2% of cases), sampling was also patently pseudoreplicated within treatment categories in these studies. Although the problem of pseudoreplication has been recognised for some time, and methods to deal with non-independence of sample units are widely available, the problem is still rife in both ecological and entomological fields (Brower, 2010; Chaves, 2010; Ramage et al., 2012), as well as in other areas of the biological sciences (Lazic, 2011; Drummond & Vowler, 2012; Nikinmaa et al., 2012). In extreme cases of pseudoreplication in our subset of manuscripts, each level of a treatment variable was sampled in a completely separate spatial location, completely confounding treatment effects with spatial autocorrelation of predictors (in 8.9% of cases). In these cases, ‘true’ replication was non-existent (n = 1 per treatment level), even when quite intensive sampling was conducted at each location.
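To make the pseudoreplication point concrete, the following minimal sketch (in Python, using pandas and statsmodels; the site names, habitat labels, and richness values are all invented for illustration) contrasts a naive analysis that treats every trap as an independent replicate of habitat with a mixed-effects model in which traps are nested within sites, so that sites, not traps, carry the replication:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: 4 traps in each of 6 sites, 3 sites per habitat type.
    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "site": np.repeat([f"site{i}" for i in range(6)], 4),
        "habitat": np.repeat(["forest"] * 3 + ["pasture"] * 3, 4),
        "richness": rng.poisson(20, size=24),
    })

    # Wrong: treats all 12 traps per habitat as independent replicates.
    naive = smf.ols("richness ~ habitat", data=df).fit()

    # Better: a random intercept per site absorbs the non-independence of
    # traps within a site, so the habitat effect is tested against
    # site-to-site variation rather than trap-to-trap variation.
    mixed = smf.mixedlm("richness ~ habitat", data=df, groups=df["site"]).fit()
    print(mixed.summary())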

Surprisingly, this subset of ‘instant rejects’ contained relatively few studies rejected because they fell outside the remit of the journal, although we do increasingly handle a large number of such submissions. We should reiterate that we are not an agricultural or forestry journal, nor are we seeking submissions from insect biochemists, physiologists, or molecular biologists, unless there is a clear link to practical conservation or to methods for assessing biodiversity.

Equally surprisingly, the subset of ‘instant rejects’ did not contain manuscripts rejected primarily because of poor English, although we have had occasion in the past to request substantial English-language editing prior to resubmission. We are increasingly placing this onus more firmly on contributors, and acknowledgement of editing by a native English speaker will be sought where necessary before manuscripts can be considered for peer review.

In contrast to the ‘instant reject’ category, we found a different set of reasons for firm rejection of manuscripts following peer review (Table 1). Conceptual design flaws did not feature heavily here, other than a low proportion of studies with too case-specific a focus (12.5% of cases), and studies that purported to be testing the use of ‘bioindicators’ of anthropogenic disturbance but were paradoxically using a very complex and expensive measure of invertebrate composition to ‘indicate’ changes in a variable, such as habitat structure, that would be simpler and cheaper to measure in its own right (8.3% of cases). The lack of major design issues suggests that the instant rejection process was generally successful in weeding out manuscripts that were unlikely to be of conceptual relevance to other researchers in the field. Instead, rejection following peer review was dominated by statistical issues and weak inference, with the overwhelming reason for rejection (37.5% of studies) being low sample sizes (in the sense of low numbers of sample units per treatment category, leading to low statistical power). In diversity or community dissimilarity studies, low abundances per sample unit were only rarely an added complication, but in either case low sample sizes most frequently resulted in low sample coverage (Chao & Jost, 2012), leading to potentially large underestimates of true diversity or an inadequate representation of community composition. In a reasonable proportion of cases (12.5% of rejected studies), these problems were compounded by inadequate taxonomic resolution of the data (although this was usually judged relative to the state of taxonomic knowledge of the taxon or geographic region under consideration).
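For authors wishing to check this before submission, the short sketch below implements the Chao & Jost (2012) sample-coverage estimator from a vector of per-species abundance counts (the example abundances are invented); coverage well below 1 signals that the sample is likely to miss a substantial share of the community:

    import numpy as np

    def sample_coverage(abundances):
        """Chao & Jost (2012) coverage estimate from per-species abundances."""
        x = np.asarray(abundances)
        x = x[x > 0]
        n = x.sum()          # total individuals in the sample
        f1 = np.sum(x == 1)  # singleton species
        f2 = np.sum(x == 2)  # doubleton species
        if f1 == 0:
            return 1.0       # no singletons: coverage is effectively complete
        return 1.0 - (f1 / n) * ((n - 1) * f1 / ((n - 1) * f1 + 2 * f2))

    # A small, singleton-heavy sample has poor estimated coverage:
    print(sample_coverage([1, 1, 1, 1, 2, 3, 10]))  # ~0.80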

Other major statistical issues included moderate-to-severe pseudoreplication (16.7% of cases) and spatial autocorrelation of treatment effects (12.5% of cases) that were not adequately taken into account in the statistical analyses (Dray et al., 2012; Ramage et al., 2012). This led to weak inference in many cases, with 16.7% of studies being rejected as a direct result of conclusions that were inappropriately extrapolated beyond the valid inference that could be supported by the data or model predictions.
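A straightforward pre-submission check is to compute Moran's I on the model residuals. The self-contained sketch below uses inverse-distance weights and invented coordinates and residuals; in practice, formal inference would rest on a permutation test or a dedicated spatial-statistics package rather than the raw statistic alone:

    import numpy as np

    def morans_i(values, coords):
        """Moran's I for values at 2-D coords, inverse-distance weights."""
        z = values - values.mean()
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w = np.zeros_like(d)
        off = d > 0
        w[off] = 1.0 / d[off]  # inverse-distance weights, zero diagonal
        n, s0 = len(values), w.sum()
        return (n / s0) * (z @ w @ z) / (z @ z)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(30, 2))  # hypothetical site coordinates
    residuals = rng.normal(size=30)             # hypothetical model residuals
    # Values near -1/(n-1) suggest no residual spatial structure; strongly
    # positive values suggest unmodelled spatial autocorrelation.
    print(morans_i(residuals, coords))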

Generally, the reasons for rejecting a manuscript while leaving the option open for resubmission of a new version were similar to those for manuscripts rejected outright (such as low sample sizes and pseudoreplication). In these cases, however, the flaws usually involved a failure to explain what the underlying hypotheses might be (rather than the lack of a hypothesis test per se), and inadequate depth and/or rigour of statistical analysis (such as a failure to deal with the spatial structure of the data, or to adequately discriminate collinear predictors) (Table 1). In addition to the general sense from the review process that these flaws could potentially be overcome in a resubmitted version, there were also mitigating factors that weighed in favour of a resubmission option for some studies. These included novelty of the taxon or geographic region being sampled (33% of cases), but more particularly novelty of the conceptual question being addressed (67% of cases), especially if this involved manipulative experiments, long-term temporal studies, or rarely studied taxa such as Diptera other than tephritids or drosophilids. In 40% of the reject-and-resubmit cases, the resubmission was eventually accepted once these problems had been dealt with (although often only after several rounds of further revision).
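On the collinearity point, variance inflation factors (VIFs) offer a quick screen before submission. The sketch below computes them from first principles (the predictor names are hypothetical), with values much above roughly 5-10 flagging predictors that a model cannot readily discriminate:

    import numpy as np

    def vif(X):
        """Variance inflation factor for each column of predictor matrix X."""
        X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept
        out = []
        for j in range(1, X.shape[1]):              # skip the intercept itself
            y, others = X[:, j], np.delete(X, j, axis=1)
            # Regress predictor j on all the others; high R^2 means it is
            # nearly a linear combination of them.
            beta, *_ = np.linalg.lstsq(others, y, rcond=None)
            r2 = 1.0 - (y - others @ beta).var() / y.var()
            out.append(1.0 / (1.0 - r2))
        return out

    rng = np.random.default_rng(1)
    canopy = rng.normal(size=50)
    shade = canopy + rng.normal(scale=0.1, size=50)  # nearly collinear
    print(vif(np.column_stack([canopy, shade])))     # both VIFs very large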

From these data, the top 10 reasons for rejection of manuscripts across all submissions are ranked in Table 1, and they suggest to us a series of guidelines for ensuring the quality of manuscripts prior to submission.

Pre-submission checklist of manuscript attributes:

  1. Have you specified an explicit hypothesis test?
  2. Does the study address a topic that transcends the case-specific setting, and tackle one or more general principles of relevance to the wider discipline?
  3. Is the spatial scale of sampling appropriate to the traits of the organisms under investigation?
  4. Are the sample units truly independent replicates of the treatment variables, or does non-random spatial structuring of the sampling (pseudoreplication) need to be taken into account in the statistical analyses?
  5. Have you accounted for potential spatial autocorrelation of treatment effects, and tested for residual (unexplained) spatial autocorrelation of model residuals?
  6. Are the sample sizes adequate to ensure good statistical power (see the power-analysis sketch below), and is sample coverage high and equivalent across treatment levels?
  7. Have the statistical analyses fully explored all aspects of the data, utilising the best available taxonomic resolution of (morpho)-species identification?
  8. Have the statistical analyses effectively considered, and discriminated among, multiple collinear predictors?
  9. Have the processes driving the observed patterns been identified and discussed?
  10. Are the conclusions valid and supported appropriately within the bounds of the available data or model predictions?

If the answers to all of the above are in the affirmative, then the probability of your manuscript being accepted will be much greater, although acceptance is of course not guaranteed.
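For checklist item 6, an a priori power analysis is a quick sanity check on planned sample sizes. The minimal sketch below (Python, statsmodels) solves for the number of sample units per treatment level needed to detect an assumed standardised effect size of Cohen's d = 0.8, a value chosen purely for illustration:

    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.8,  # assumed standardised difference between treatments
        alpha=0.05,       # significance level
        power=0.8,        # desired statistical power
    )
    print(round(n_per_group))  # ~26 sample units per treatment level

Smaller effect sizes push the required sample sizes up sharply, which is precisely why low sample sizes so often translate into low statistical power.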
