Rewarding reviewers – sense or sensibility? A Wiley study explained

In July 2015, Wiley surveyed over 170,000 researchers in order to explore peer reviewing experience; attitudes towards recognition and reward for reviewers; and training requirements. The survey received 2,982 usable responses (a response rate of 1.7%). Respondents from all markets indicated similar levels of review activity. However, analysis of reviewer and corresponding author data suggests that US researchers in fact bear a disproportionate burden of review, while Chinese authors publish twice as much as they review. Results show that while reviewers choose to review in order to give back to the community, there is more perceived benefit in interacting with the community of a top‐ranking journal than a low‐ranking one. The majority of peer review training received by respondents has come either in the form of journal guidelines or informally as advice from supervisors or colleagues. Seventy‐seven per cent show an interest in receiving further reviewer training. Reviewers strongly believe that reviewing is inadequately acknowledged at present and should carry more weight in their institutions' evaluation process. Respondents value recognition initiatives related to receiving feedback from the journal over monetary rewards and payment in kind. Questions raised include how to evenly expand the reviewer pool, provide training throughout the researcher career arc, and deliver consistent evaluation and recognition for reviewers.


INTRODUCTION
Recent peer review scandals [such as the acceptance of fake papers generated by SCIgen (Seife, 2015) and the retraction of papers in Springer and BioMed Central journals (Retraction Watch, 2015) due to reviews from fake reviewers] make it more important than ever to safeguard the integrity of the science we publish, and the reputation of our peer review standards. To deliver the best peer review experience for our authors, and the best quality peer review, publishers need to continue to evolve the support and services we offer our peer reviewers.
That is why, in July 2015, Wiley surveyed over 170,000 researchers in order to explore peer reviewing experience, attitudes towards recognition and reward for reviewers, and training requirements; 2,982 usable responses (an effective response rate of 1.7%) were received.
Wiley's research into author needs has repeatedly shown that authors' experience of peer review shapes their overall publishing experience. Authors who express the most satisfaction with their publishing experience are those who report a smooth, problem-free review process. Conversely, authors expressing the lowest levels of satisfaction are those who experience a difficult review process and struggle to communicate with the reviewers of their paper. Furthermore, our editors tell us that recruiting reviewers is a major pain point. One managing editor of a number of Wiley journals reports that the conversion rate of reviewer invitations to acceptances has dropped by at least 5% over the past 5 years. Once reviewers have been recruited, editors are keen to find ways to better reward those who they trust, who deliver on time, and who produce high quality work. In summary, the issue is threefold: (1) a need to increase the reviewer pool; (2) a need to ensure reviewers in that pool are well trained, trustworthy, and produce good quality reviews; and (3) a need to find ways to reward reviewers in order to recognise their work and maintain motivation.
A quota system was employed to keep track of how many responses were received by country and subject discipline, ensuring that a balanced data sample, representative of Wiley's publishing community, was gathered. The survey was not closed until sufficient responses were received from each country and subject area. The survey ran from 21 July to 7 August 2015.
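As an illustration of the quota logic described above, the sketch below tallies usable responses per country/discipline segment and only reports the survey as complete once every segment has met its target. The segment names and target counts are hypothetical, not the actual quotas Wiley used.

```python
# Minimal sketch of quota-tracked survey closure. The (country, discipline)
# segments and targets below are illustrative assumptions only.
from collections import Counter

targets = {
    ("US", "Health Sciences"): 150,
    ("China", "Physical Sciences"): 120,
    ("Germany", "Social Sciences"): 80,
}
received: Counter = Counter()

def record_response(country: str, discipline: str) -> None:
    """Tally one usable response against its country/discipline segment."""
    received[(country, discipline)] += 1

def survey_complete() -> bool:
    """The survey closes only when every segment has met its target."""
    return all(received[segment] >= target for segment, target in targets.items())

record_response("US", "Health Sciences")
print(survey_complete())  # False until every quota is filled
```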
• Primary place of work: Nearly 60% of respondents listed university or college as their primary place of work, followed by research institution (19%), and hospital/healthcare institution (10%).
• Age of respondent: The majority of survey respondents (63%) were between 31 and 50 years of age.
Although respondents to the survey are proportionately representative of the regional and subject-based distribution of Wiley's publishing community, the study may be subject to self-selection bias.
Respondents who chose to participate may not represent the entire target population of all reviewers. Because responses may not be representative of the target population, inferential statistical tests (viz. hypothesis testing) were not used in this report, which instead focuses on highlighting major differences and trends in the data. This report does not examine the biases that may be present in the wording, formatting, and ordering of survey questions.

Reviewer motivation
A considerable amount of researcher time is spent reviewing journal articles. Rubriq (2013) calculated that approximately 30.5 million researcher hours were spent on reviewing. Taking the top 12 publishers alone, it is estimated that at least 22 million researcher hours were spent reviewing papers in 2013 (Table 1).
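As a rough illustration of how such aggregate figures are derived, the estimate reduces to a simple multiplication. The inputs below are assumptions chosen only to reproduce the order of magnitude quoted above; they are not figures from the Rubriq report.

```python
# Back-of-envelope estimate of aggregate reviewer hours per year.
# All inputs are illustrative assumptions, not data from Rubriq (2013).
submissions = 2_200_000   # assumed annual submissions across top publishers
reviews_per_paper = 2.0   # assumed completed reviews per submission
hours_per_review = 5.0    # assumed average hours per review

total_hours = submissions * reviews_per_paper * hours_per_review
print(f"Estimated reviewer hours: {total_hours / 1e6:.0f} million")
# 2.2e6 * 2 * 5 = 22 million hours, matching the order of magnitude above
```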

Key points
• US researchers bear a disproportionate burden of peer review.
• Most reviewers would welcome further training support.
• Twenty-two million researcher hours are spent reviewing for the top 12 publishers by output.
• There is a need to increase the reviewer pool, especially in high-growth and emerging markets and among early career researchers.
• Journal rank is important to potential reviewers.
• Industry-wide agreement on core competencies may facilitate the reward and recognition of reviewers.
• Feedback from journals is a vital form of recognition for reviewers.
What are the motivations for spending such a large amount of time reviewing? The results from our survey confirm the findings from other industry reports that reviewers choose to review because it allows them to actively participate in the research community and they feel it is important to reciprocate the peer review that they themselves receive.
However, there are some underlying differences by career stage.
Reviewing as a path to improving one's writing skills, along with reputation building and career progression, is ranked higher by those who have been reviewing for fewer years, and who are therefore more likely to be in their early career (Table 2).
The top factor influencing the decision to accept a specific review invitation is the prestige and reputation of the journal, a finding that holds true across all age groups (Table 3). Both early career researchers and established researchers also consider the prestige and reputation of the journal the most influential factor in the time they spend on a review and in their commitment to meeting review deadlines. The personal relationship and opportunity to network with the editor is ranked second for both groups.
This suggests that while reviewers choose to review in order to give back to the community, there is also more perceived benefit in interacting with the community of a top-ranking journal than a low-ranking one. Clearly, there are some reputation management and career progression factors involved here.

Who is bearing the reviewing burden?
Our survey indicates that 49% of reviewers currently review for five or more journals. Experienced reviewers, those with more than 5 years of reviewing experience, shoulder even more of the burden with 61% reviewing for five or more journals. There is some ambiguity in the responses here because of the phrasing of the question ('How many journals do you currently review for?'). As the word 'currently' was not defined, reviewers may have answered according to either how many journals they are currently completing a review for or the journals on whose reviewer lists they currently sit. Either way, there is sufficient indication that at least half of respondents are undertaking, or could be asked at any time to undertake, a considerable volume of reviews.
In the survey, respondents from emerging and high-growth markets indicate that they are currently reviewing at levels similar to their counterparts in mature markets. However, analysis of reviewer and corresponding author data suggests that the reviewing burden is in fact unevenly distributed (Figures 1 and 2). It has been suggested that a fair 'reviewer commons' is one in which researchers review the same number of papers as they themselves receive review for (Coin, 2015).
As editors frequently invite past authors to be reviewers, it is assumed that as the number of authors grows, so should the number of reviewers. This apparent regional imbalance could be one explanation for the increasing difficulty in finding reviewers, and for the growing burden of articles without a commensurate growth in the reviewer pool.
Anecdotally, some editorial offices observe that Chinese researchers have one of the highest review invitation acceptance rates. As with researchers across the full pool, Chinese researchers rate being an active participant in the research community as the most influential factor in choosing to become a reviewer, with a mean rating of 4.3. Perhaps unsurprisingly, improving their own writing skills (3.8) and gaining professional recognition (3.8) receive higher mean rankings from Chinese researchers than the overall mean across all regions.
[Table fragment (rank order question; the mean indicates the average ranking each item received): feedback provided by the journal post-review 3.6, 3.5, 3.7; reviewer benefits/rewards offered by the journal 3.7, 4.0, 3.8; CME/CPD credit/accreditation awarded for review activity 4.8, 4.9, 4.9; reviewer credit awarded on a third-party website 5.2, 5.3, 5.3.]
When asked what type of training reviewers would find most useful, journal and publisher guidelines continue to be ranked highest overall as useful resources for reviewers. However, there are some differences in the rankings between early and late career respondents.
Early career respondents rate guidance and mentoring as important, while late career respondents rank general ethics guidelines for peer reviewers as more important.

What training needs do reviewers have?
How confident are reviewers in their reviewing skills? On average, reviewers self-assess their skills at 3.7 out of a possible highest rating of 5. Despite expressing relatively high levels of confidence, 77% of reviewers express an interest in receiving further training. As we would expect, demand is particularly strong (89%) among respondents with 5 or fewer years of reviewing experience. However, established career researchers also express an interest in training (75% of those with 6-10 years of reviewing experience and 64% of those with 11-15 years of experience) (Table 6). Notably, it appears to be the fundamentals of reviewing, such as constructing a review report and providing constructive, useful feedback, that consistently elicit the highest interest across all experience levels (Table 7).
Responses show some variation between disciplines. There is higher demand for training in how to review a qualitative research article in the social sciences and humanities, and greater demand for training in performing a statistical review, reviewing a systematic literature review, reviewing data, and handling re-reviews in the health and life sciences.
In addition, health science respondents express specific interest in how to review a clinical paper.
There are also some regional differences. Asian reviewers express much higher demand than their Western counterparts for an introduction to becoming a peer reviewer, working with editors, and reviewing a qualitative research paper.

Recognition for reviewing
Reviewers strongly believe that reviewing is inadequately acknowledged at present and should carry more weight in their institutions' evaluation processes. Moreover, respondents say they would spend more time reviewing if their institution recognised this task (Table 8). Survey respondents were shown a list of reward and recognition initiatives and asked to select the ones that would make them more likely to accept an invitation to review. Individual initiatives were grouped into categories: acknowledgements, accreditations, rewards, performance-based rewards, and feedback. Three of the top six most selected individual initiatives relate to receiving feedback from the journal on the quality of their review, learning about the decision outcome, and seeing other reviewers' comments (Table 9).
After feedback, the second most valued category of reward is acknowledgement, whether in the printed journal, on the journal website, or in a personal note from the editor. Respondents are clearly more interested in receiving feedback and editor/journal 'thank you's' or recognition for their reviewing efforts than cash or in-kind payments, although receiving access to journal content also features highly (Table 9).
In many ways, this is consistent with what we have learned about reviewer motivation. As covered earlier, reviewers are motivated by their desire to actively participate in their research community: they want to know that their contribution has been well received and was worth the precious time they spent.
The four most preferred reward and recognition initiatives hold true across all markets. However, responses from reviewers in high-growth markets indicate that acknowledgements in the journal or on its website are less important than receiving access to papers they have reviewed, or a digital 'Top reviewer' badge that could be displayed on personal and social media websites. Reviewers in mature markets show a higher preference for discounts or waivers on Open Access fees.

FUTURE DEVELOPMENTS AND CONCLUSIONS
Returning to the three primary statements outlined at the beginning of this article, we have the following:

A need to increase the reviewer pool
In order to reduce reviewer workload issues, there is a need to increase the pool by attracting early career researchers and new markets, including reviewers from high-growth and emerging regions. However, the findings with regard to the apparent uneven geographical spread of reviewing burden suggest that there is also a need to make sure that the work is spread out evenly.
There may well be process improvements that publishers could make to better identify and record possible reviewers. Increasing use of customer insight tools, either those developed in-house or by vendors such as DataSalon (http://www.datasalon.com) or Salesforce (http://www.salesforce.com), may offer opportunities to more easily identify potential reviewers.
Formal recognition from research assessment bodies of review activity as a measurable research output [facilitated by a taxonomy of contributor roles, such as CRediT (http://casrai.org/CRediT), and unique identifiers for reviewers, e.g. ORCID (http://orcid.org/)] could help alleviate some of the time allocation issues, as reviewing would be seen as part of researchers' activity rather than an additional commitment.
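To make this concrete, the sketch below shows one hypothetical shape a machine-readable record of review activity might take, combining a reviewer identifier with a contributor role. The field names and values are invented for illustration and do not follow any published CRediT or ORCID schema.

```python
# Hypothetical, illustrative record of one review activity as a citable
# research output; not an actual CRediT or ORCID data format.
review_record = {
    "reviewer_orcid": "0000-0002-1825-0097",  # example iD in the ORCID format
    "contributor_role": "peer review",        # cf. the CRediT role taxonomy
    "journal_issn": "1234-5678",              # placeholder ISSN
    "review_year": 2015,
    "verified_by": "publisher",               # who attests the activity
}
```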
With specific reference to Chinese reviewers, further research is needed to assess the extent to which the imbalance is due to any or all of the following factors: a lack of recognition for reviewing activity, which may be causing fewer Chinese researchers to undertake reviews; a skills deficit, which may mean that Chinese reviewers are less confident at reviewing for international journals; a possible reluctance of international journal editors to use Chinese reviewers (due to either a real or a perceived skills gap); and/or difficulties in identifying potential reviewers from China.
[Table: the types of training that respondents would find beneficial, by number of years reviewing (%) and across all responses (%). Respondents were allowed to select up to three types of training.]
Our findings suggest that journal rank plays an important role in researchers' decisions to accept a review invitation. This could imply a tiered reviewer market, in which researchers are more willing to review for higher impact journals. If this is indeed the case, lower-ranking journals may need to work harder or employ different tactics to attract and motivate reviewers, perhaps in the areas of reviewer mentoring or reward. However, further analysis is required to determine whether lower-impact journals find it more difficult than high-impact journals to recruit the requisite number of reviewers or to encourage reviewers to deliver reviews on time and to the highest standard.
A need to ensure reviewers in that pool are well trained, trustworthy, and produce good quality reviews

While specific training needs vary across regions, subjects, and experience levels, there is evidence from this study and past research (Sense About Science, 2009) showing agreement that better training is needed, both to help bring more reviewers into the pool and to make sure that editors trust and are confident in using those reviewers.
[Table: reward and recognition initiatives that would make respondents more likely to accept invitations to peer review, with votes as a percentage of all responses. Respondents were able to select any number of initiatives.]

A need to find ways to reward reviewers in order to recognise their work and maintain motivation

In many ways, training and recognition/reward issues are different sides of the same coin. The drivers for more effective reward and recognition initiatives are a combination of the need to adequately compensate reviewers for the effort and time they give, a desire to keep reviewers motivated, and a need to reward the best reviewer attributes and behaviours in order to maintain the ongoing quality of peer review standards. This survey suggests that the most valued recognition initiatives are as much about improving editorial workflows, for example by telling reviewers how useful their review was and sharing decision outcomes, as they are about receiving more formal compensation. In this sense, feedback is itself a powerful form of reward. Many journals share decision outcomes and other reviews with their reviewers, but how much more powerful would it be if we could harness this form of evaluation and apply the same consistency in feedback as reviewers are asked to supply in the quality of their comments?

Core competencies – a possible solution
Publishers need a consistent answer to the question: what makes a good reviewer? Studies have tried to quantify the characteristics of a good reviewer (experience, proven review frequency, etc.), but these are primarily based on quantitative factors (Black, van Rooyen, Godlee, Smith, & Evans, 1998). Alternatives, such as deriving attributes from the profiles of current reviewers, could simply serve to reinforce the existing distribution of reviewing effort.
One possible solution is the concept of core competencies as advocated by Moher (2014) and others (Glasziou et al., 2014). If we want to train reviewers effectively, and also measure their performance, perhaps there is a need to establish an industry-wide set of minimum core reviewer competencies. A set of universally agreed reviewer competencies, with some variation at subject level, could provide the basis for both a training framework and ongoing measurement and evaluation of reviewer quality.
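As a sketch of how such a framework could support consistent measurement, the snippet below encodes a hypothetical set of weighted core competencies and scores a single review against them. The competency names, weights, and five-point scale are invented for illustration; they are not an agreed industry standard.

```python
# Illustrative sketch: scoring one review against a hypothetical set of
# weighted core reviewer competencies. Names, weights, and scale are invented.
from dataclasses import dataclass

CORE_COMPETENCIES = {               # hypothetical minimum competency set
    "methodological_assessment": 0.3,
    "constructive_feedback": 0.3,
    "ethics_awareness": 0.2,
    "timeliness": 0.2,
}

@dataclass
class ReviewEvaluation:
    """Editor's per-competency scores for one review, on a 1-5 scale."""
    scores: dict[str, int]          # must cover every core competency

    def overall(self) -> float:
        """Weighted mean across the agreed competencies."""
        return sum(CORE_COMPETENCIES[c] * s for c, s in self.scores.items())

evaluation = ReviewEvaluation(scores={
    "methodological_assessment": 4,
    "constructive_feedback": 5,
    "ethics_awareness": 4,
    "timeliness": 3,
})
print(f"Overall review score: {evaluation.overall():.1f} / 5")
# 0.3*4 + 0.3*5 + 0.2*4 + 0.2*3 = 4.1
```

The same rubric could drive both training (each competency maps to a module) and the consistent reviewer feedback discussed above.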
Earlier in this article, it was asked whether there is a lack of trust in the reviewing ability of researchers in emerging and high-growth markets. A training and recognition mechanism based on core competencies could help alleviate this issue. There is an opportunity for centrally funded reviewer training programmes, delivered by publishers who have the expertise and the content, tailored according to regional needs and designed to deliver learning outcomes based on these competencies. In 2014 alone, over 30 editorials in Wiley journals offered guidance on peer reviewing. These were among the most highly viewed articles of the year, indicating strong researcher interest in more information and guidance on reviewing.
Finally, this same competency framework could also be used to provide consistent and meaningful feedback on review activity to more established reviewers. The recently launched Think. Check. Submit campaign (thinkchecksubmit.org) is a good example of what can be achieved by a coalition across the field of scholarly communications. Further collaboration could greatly facilitate progress on this issue too.