Whatever your views on the impact factor, the ‘impact factor game’ continues to be played and remains an important issue, as the impact factor is often used as a default measure of an article's impact (Bloch & Walter 2001, Bergemann 2006, The PLoS Medicine Editors 2006, Jackson 2010). Some ways the ‘game’ is played are unacceptable and may lead to removal from the relevant ‘league tables’ (Bergemann 2006). These include: deliberately encouraging authors to cite the receiving journal in their manuscripts; selecting manuscripts for publication that contain high levels of citations to the receiving journal; and adding citations to the receiving journal post-acceptance. On the cusp of acceptability is the excessive use of editorials and correspondence that cite the receiving journal (known as self-cites), which is closely monitored by Thomson Reuters (Bergemann 2006).
Editors-in-Chief, on the other hand, work hard to publish manuscripts that are likely to be read and cited, and they do have some idea of what tends to get cited. However, assessing the potential performance of papers is not without its challenges: there is no crystal ball that can identify which articles are likely to receive the most citations in the critical first 2 years, the only ones that count towards the journal's impact factor (Hunt et al. 2011, 2012). Publishers monitor citations carefully, and editorial boards and journal management teams debate and decide which kinds of papers to encourage and which to discourage or discontinue. Some publishers may press the editorial team to improve, or at least maintain, the journal's impact factor and to select articles that will contribute within the 2-year citation window (Hunt et al. 2012). Nevertheless, this is not a precise science, and there are few, if any, formal studies of citations in any particular journal to support a particular strategy.
The impact factor
The impact factor is an arbitrary but internationally accepted measure of journal performance in terms of citations to the ‘mainstream’ papers in the journal – principally original research papers and review papers. The citations contributing to a journal's impact factor (JIF) are those received in one calendar year by articles published in the previous 2 years; therefore, the 2011 impact factor (reported in 2012) is based on citations received in 2011 to citable articles published in 2009 and 2010. According to the 2011 Journal Citation Reports (JCR), the Journal of Advanced Nursing (JAN) 2011 impact factor was 1·477, and this increased to 1·527 in 2012 (announced in June–July 2013).
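The arithmetic behind the 2-year window can be sketched as follows; the citation and article counts used here are invented for illustration and are not JAN's actual figures:

```python
# Minimal sketch of the 2-year journal impact factor (JIF) calculation.
# All numbers below are hypothetical placeholders, not JAN's real data.

def two_year_jif(citations_in_year, citable_items_prev_two_years):
    """JIF for year Y = citations received in Y to items published in
    Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# e.g. a hypothetical 750 citations received in 2011 to 500 citable
# articles published in 2009-2010 would give a JIF of 1.5
print(round(two_year_jif(750, 500), 3))  # 1.5
```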
There is an irony to the impact factor in that, owing to the strict date window over which it is calculated, papers cited in their year of publication do not contribute to the impact factor; such citations are instead reported separately as the ‘immediacy index’ in the JCR. There is still widespread debate about what Thomson Reuters deems a citable article (The PLoS Medicine Editors 2006). For example, editorials and other correspondence do not appear in the denominator of ‘citable articles’ used to calculate the impact factor; however, citations to these items do contribute to the impact factor, since they appear in the numerator. Such citations are called ‘free cites’, and encouraging them is one way to lift the JIF. The current 5-year JIF of JAN is 2·300 (2012 JCR edition), and this is also worth considering as it reflects citation performance over a longer window: citations received in the JCR year to articles published in the previous 5 years. It is not the cumulative total of all citations received up to 2012 for articles published in the previous 5 years.
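The free-cite asymmetry described above can be made concrete with a small sketch; the figures are invented and serve only to show the direction of the effect:

```python
# Hypothetical illustration of 'free cites': citations to editorials and
# correspondence enter the JIF numerator, but those items are excluded
# from the denominator of citable articles.

def jif_with_free_cites(cites_to_citable, free_cites, citable_items):
    # free_cites = citations to non-citable items (editorials, letters);
    # they enlarge the numerator without enlarging the denominator.
    return (cites_to_citable + free_cites) / citable_items

baseline = jif_with_free_cites(750, 0, 500)   # 1.5 without free cites
lifted = jif_with_free_cites(750, 50, 500)    # 1.6 with 50 free cites
print(baseline, lifted)
```

The same number of ‘real’ citations thus yields a higher JIF once citations to non-citable items are added, which is why Thomson Reuters monitors self-citing editorials closely.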
The present study
The present study was designed to ascertain what types of articles contribute to JAN's impact factor. To do this, we identified the 20 articles that received the highest number of citations contributing to JAN's JIF in a typical year. Specifically, we used the Web of Science (WoS) and Scopus databases to extract citations received in 2010 and 2011 by articles published in 2009.
Table 1 lists the number of citable articles and review papers published in 2009 and the number of citations they received in WoS for the years 2009–2012. This shows that review articles, which constituted approximately 14% of the volume published in 2009, received more citations (8·39 cites per item) than research articles (4·67 cites per item) in the first 4 years after publication. Keep in mind that these articles will continue to receive citations, but those later citations are not counted in future impact factors.
Table 1. Annual citations to articles and reviews published in JAN in 2009 (WoS).
Columns: 2009 JAN publications; number of items; citations in the JIF window (2010 + 2011); average cites per item (2009–2012).
Citations from 2009 are used to calculate the ‘immediacy index’ in Journal Citation Reports. Citations from 2010 and 2011 are used in the JIF equation.
Figure 1 shows the typical distribution of citations received by articles in the first 2 years. Citations listed in WoS show that 43 articles (17%) did not receive any citations and fewer than 6% (15/251) of articles received eight or more citations. The figures using Scopus are similar, with 31 articles (12%) not receiving any citations and approximately 12% (32/261) receiving eight or more citations in the 2-year JIF window. The graph reveals some of the problems with interpreting the JIF for a particular article: some articles are never cited, while others are highly cited. The slight differences between the two databases stem from what each counts as a citable article and from differences in the overall scope and number of nursing journals indexed in WoS and Scopus (Johnstone 2007, Ketefian & Freda 2009, De Groote & Raszewski 2012). The problem for editors, then, is to attract the articles likely to receive numerous citations in the short JIF window and to limit the number unlikely to attract any citations.
Table 2 lists the top 20 articles published in 2009 that received the most citations in 2010–2011 from WoS and Scopus. These were classified by JAN (column 6) as review papers (n = 8), original research (n = 7), theoretical papers (n = 3) or research methodology (n = 2). Again, one can see differences between the number of citations listed in WoS and Scopus for some of the articles (columns 2 and 3). One tactic used by some editors to increase their JIF is to prefer articles from authors who publish often in a particular field and are, therefore, likely to self-cite (Bergemann 2006). Table 2, column 4, shows that this is a hit-or-miss strategy, as some highly cited articles have few or no self-cites from any author, while others have many more. In total, the 261 articles listed in Scopus received 1021 citations in 2010–2011, of which 142 (14%) were self-citations from at least one of the authors within the 2-year JIF window.
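The self-citation share quoted above follows directly from the counts in the text:

```python
# Check of the self-citation proportion reported for the 2-year JIF window
# (figures taken from the text: 1021 Scopus citations, 142 self-citations).

total_citations = 1021
self_citations = 142
print(f"{self_citations / total_citations:.0%}")  # 14%
```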
Table 2. Top twenty articles in JAN published in 2009 that received the most citations in 2010–2011 (Web of Science and Scopus).
It is not always apparent what makes an article highly cited (van Driel et al. 2007), and over-reliance on citation counts is not without its limitations. Highly cited papers in JAN were drawn from very different specialties in nursing, and review articles attracted higher citation counts than other material. Reviews are often well regarded by editors because they attract large numbers of citations (The PLoS Medicine Editors 2006), but sometimes even these, and other strongly evidenced papers such as those describing a randomized controlled trial (RCT), miss out on contributing to the JIF.
Although the impact factor is highly regarded in the research community (where it is thought to reflect academic excellence), this is not necessarily the case in the clinical arena. The majority of clinicians are not actively publishing, and widely read journals do not necessarily have high impact factors (Weiss 2007). Although some articles were not cited, this does not mean they have not been read, or used to inform practice, policy or other innovations in health care, or that they are not cited in the grey literature. The impact factor is not necessarily a good measure of overall impact, particularly with the move to open access journals, in which articles are available free of charge and can be accessed easily by such diverse groups as patients, policy makers and non-government organizations (The PLoS Medicine Editors 2006, Watson et al. 2012).
In the research community, impact factors are considered important, and it is therefore likely that a journal's impact factor will influence an author's choice of where to submit their manuscript. Accordingly, editors and their teams must take the impact factor seriously – even if they have reservations themselves – to attract the best papers and to promote the reputation of the journal. As this editorial shows, it is very difficult, if not impossible, to predict which articles will contribute most to a journal's impact factor, and the impact factor game is therefore likely to continue for some time to come.