Editorial: Rating research performance
David R Thompson,
Department of Health Sciences,
University of Leicester,
22-28 Princess Road West,
Leicester LE1 6TP, UK
Whether one likes it or not (and many do not), there has been a rapid shift towards using metrics in judging research quality. Nursing research in the UK has so far resisted this and, although the 2008 UK Research Assessment Exercise, and its predecessors, purposely avoided metrics, such as impact factors and citation indices, in favour of expert review panels, its replacement, the Research Excellence Framework (REF) will, at least, make metrics available to those judging research quality. This will be a major challenge for many disciplines, including nursing.
Increasing emphasis is being placed on impact factors and citation indices. The impact factor refers to the average citation performance of all articles in a journal. It is a good measure of journal performance but it is usually an inaccurate reflection of the citability of any individual paper the journal contains. Citation indices may reflect criticisms of papers, for example, and can be manipulated by heavy self-citation. Counting total papers could mask many mediocre ones, whereas counting only the highest-ranked papers may fail to recognise a large and consistent body of work. Given these limitations, impact factors are a very poor means of rating individual researchers, as they essentially rate researchers by proxy: by the journals they publish in rather than by their own efforts. It is possible, for example, to publish in a high impact factor journal without ever being cited oneself, or without contributing to the impact factor rating of the journals one publishes in. As an adjunct, total citations can be used to rate individuals but, even if self-citation is avoided or accounted for, ‘citation clubs’ are purported to exist in some disciplines whereby people agree, reciprocally, to cite one another. There is also the phenomenon of poor or erroneous papers being cited precisely because they are exemplars of poor work, which tells us little about the quality of the person being cited. The h-index, on the other hand, refers to the performance of an individual researcher. It is assuming prominence as a measure; indeed, some national research funding organisations now require applicants to stipulate their h-index and base funding decisions on it. The h-index can also be used to rate journals, alongside other metrics, as explained in a previous editorial (Jackson et al. 2009).
This measure, proposed by Hirsch (2005), a physicist, rates a researcher’s performance based on their career publications, as measured by the lifetime number of citations each article receives. The h-index depends on both the number of publications (quantity) and the number of citations (quality) those publications receive. Thus, a researcher has index h if h of their N papers have at least h citations each, and the other (N − h) papers have no more than h citations each. In other words, the h-index is the highest number of papers an author has that have each received at least that number of citations. For example, someone with an h-index of 10 has written 10 papers that have each received at least 10 citations. This metric is useful because it discounts both the disproportionate weight of a few very highly cited papers and the drag of papers that have not yet been cited.
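The definition above amounts to a short computation: rank a researcher’s papers from most to least cited and find the last rank at which the citation count still meets the rank. A minimal sketch in Python (the function name and the sample citation counts are ours, purely illustrative):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each.

    `citations` is a list of lifetime citation counts, one per paper.
    """
    # Rank papers from most to least cited; h is the last rank at which
    # the paper's citation count still meets or exceeds its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher whose five papers have been cited 10, 8, 5, 4 and 3 times
# has an h-index of 4: four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how the fifth paper, with only 3 citations, cannot raise the index to 5: quantity alone is not enough.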
The h-index is based on the lifetime citations received by a researcher’s articles, whereas the impact factor is based on only one year’s worth of citations. You can work out your own or someone else’s h-index by visiting Scopus (http://www.scopus.com) or the Thomson ISI Web of Science (http://www.isiknowledge.com). However, the h-index depends on which journals a database such as Web of Science covers, and on the time span indexed. Moreover, the calculation excludes books and articles in non-covered journals. For a more generous assessment of the h-index, authors can turn to Publish or Perish (http://www.harzing.com/pop.htm), which includes citations of a wider range of published outputs, including books and conference papers.
Like total papers published, total citations received and citations per paper, the h-index has advantages and disadvantages. A major disadvantage of total citations received and citations per paper is that they can be skewed by self-citation. The h-index, on the other hand, is claimed by its inventor to be transparent, unbiased and very hard to manipulate (Hirsch 2005). Further advantages of the h-index are that it takes into account both the productivity and the quality of a researcher’s publications, thereby distinguishing truly influential researchers from those who merely publish many papers and from ‘one-hit wonders’; that it is insensitive to a set of uncited or lowly cited papers (so the impact of a researcher’s high quality output is not diluted); and that it does not penalise a researcher with a short career. Its disadvantages include being insensitive to one or a few very highly cited papers (so an author’s h-index may understate their impact on the field), not being independent of time, and not always being a reliable indicator of high personal achievement. Moreover, the h-index cannot decrease with time and so cannot detect a reduction in output or retirement. Nor does it allow comparison across fields. Thus, the h-index is an addition to other available indicators and, like them, has limitations.
The h-index is increasingly likely to influence, at least among the scientific community, membership of learned societies and funding or tenure decisions. Indeed, Hirsch (2005) has suggested that after 20 years in research, an h-index of 20 is a sign of success, and one of about 12 should be good enough to secure tenure. However, despite its potential use as an assessment tool by university administrators and government bureaucrats, its pitfalls need to be recognised. These include age and sex: the h-index increases with age and, at least in some scientific disciplines, females publish fewer papers than their male counterparts and have significantly lower h-indices (Kelly & Jennions 2007). The g-index, a modified alternative to the h-index, has been proposed by Egghe (2006), a bibliometrician. This gives additional weighting to a researcher’s most cited papers and is arguably more reflective of true impact. The g-index is the highest number of papers, g, that have together received at least g² lifetime citations. It would seem a useful complement to the h-index for a very specific and small group of researchers.
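Egghe’s definition lends itself to the same kind of sketch: accumulate citations over the most-cited papers and find the last rank g at which the running total still reaches g². A minimal Python illustration (function name and citation figures are ours, purely hypothetical):

```python
def g_index(citations):
    """Return the largest g such that the g most-cited papers have,
    between them, at least g**2 lifetime citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites  # running citation total of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# Papers cited [50, 6, 4, 2, 1] times yield g = 5: the five papers share
# 63 citations between them, comfortably at least 5**2 = 25.
print(g_index([50, 6, 4, 2, 1]))  # prints 5
```

The example shows the extra weight given to a single very highly cited paper: the same citation record yields an h-index of only 3, because the fourth paper has just 2 citations.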
Whatever method is used, it is important to recognise the applications and limitations of the h-index and to compare like with like. The ‘publish or perish’ ethos that currently pervades most academic disciplines, including – increasingly – nursing, often drives researchers and dictates where they publish. We would argue that nursing cannot eschew the use of bibliometrics such as impact factors and h-indices. For too long we have pleaded ‘special status’ whenever we consider ourselves unlikely to benefit from a method of assessment; we cannot plead to be social science based, for example, and then look with envy at the benefits accrued by our counterparts in laboratory science and medicine through their effective participation in competitions which give credence to bibliometrics. We do need, however, to be careful that we do the best for our subject, and we cannot simply embrace, wholeheartedly, the benchmarks of other disciplines. As referred to above, Hirsch provided some ‘rules of thumb’ for the measurement of success in a discipline – presumably his own, physics – according to the h-index. We suggest that nursing does likewise for itself by drawing up some benchmarks for the h-index at different levels of academic achievement. It is quite possible for anyone to calculate the h-index of any nursing academic, and we could start with a representative sample of the nursing professoriate worldwide, thereby obtaining a range of h-indices and some modal or median values from which to make a start. We assume that no nursing professor has an h-index of zero – that would be ‘too embarrassing for words’ – and that we probably cannot compete with the ‘big hitters’ in some other disciplines. However, we should get some idea of what nursing academics starting out on their careers should be aiming at. For those who complain that the h-index, like any such index, can be manipulated by self-citation, let us develop mechanisms to take that into account.
One major advantage for incoming nursing academics – one which some of us at the other end of our careers may have missed – is that quality counts for more than quantity; instead of taking every chance to ‘get a paper out’, focusing on the h-index could encourage us to concentrate on the quality of our papers and the underpinning quality of our research and scholarly work.