Editors of many scientific journals look upon the arrival of summer with a fair bit of trepidation, for it is in summer that the impact factor (IF) for journals is released. The term was coined by Eugene Garfield in 1955. The idea matured and resulted in the publication in 1961 of the Science Citation Index (SCI). The aim of the SCI was, and continues to be, to identify journals that are highly cited. That is, the IF is a benchmark for the journal and not for the research work. The calculation of the IF is based on two variables: the numerator is the number of citations in the current year of any article published in the journal in the previous two years, and the denominator is the number of articles published within that 2-year period. According to this formula, CA: A Cancer Journal for Clinicians has the highest IF of all medical journals in 2007 at 69 (see Table 1 for the calculation of the IF); it is followed by the New England Journal of Medicine at 52.6 and the Annual Review of Immunology at 48. Contrary to what might be expected, the most cited journal up to 2006 and the second most cited journal in 2007, the Journal of Biological Chemistry, has an IF of just 5.6. The 2008 report on the IF will be published in June 2009.
| | CA: A Cancer Journal for Clinicians | Hepatology |
| --- | --- | --- |
| Sum total of citations in 2007 of items published in 2005 and 2006 (numerator) | 2692 | 6462 |
| Sum total of items published in 2005 and 2006 (denominator) | 39 | 602 |
| 2007 impact factor* | 2692/39 = 69.03 | 6462/602 = 10.73 |
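The arithmetic in Table 1 reduces to a single division, which can be written as a short function (a minimal sketch; the figures are those given in the table above):

```python
def impact_factor(citations_to_prior_two_years: int, items_in_prior_two_years: int) -> float:
    """2007 IF = (citations in 2007 to items from 2005-2006) / (items published in 2005-2006)."""
    return citations_to_prior_two_years / items_in_prior_two_years

# Figures from Table 1
print(round(impact_factor(2692, 39), 2))   # CA: A Cancer Journal for Clinicians -> 69.03
print(round(impact_factor(6462, 602), 2))  # Hepatology -> 10.73
```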
The IF is used (and misused) by various sources for different purposes. Publishers use the report on the IF to determine the journal's influence in the field and also to review strategies to increase the IF. Librarians evaluate and document the value of their journal purchases on the basis of the IF. Editors use the IF to identify how influential their journal is; publication committees of different societies use the IF to determine how effective an editorial team has been. Several institutions tabulate the IF of journals in which their staff has published to decide the amount of research funding that departments should receive. In addition, publications in journals with a high IF may be important for promotion in academic rank. With the increasing number of journals being published, researchers use the IF to identify journals in which they believe their research is likely to be most recognized. Researchers and institutions thus imply that the publication of any research in a journal with a high IF is recognition that the research is good.
It is important to understand that fewer than 20% of all articles account for >80% of all scientific citations. Over 90% of the IF for Nature is based on less than a quarter of its publications. Even when a paper is published in a high-impact journal, the paper may not be cited. A study of all citations from 1900 to 2005 shows that only 0.02% of all papers published over this period were cited more than 1000 times. Only 2.44% of papers published were cited more than 100 times, and 60% of papers received fewer than 10 citations in their lifetime. Therefore, researchers should come to terms with the fact that, unfortunately, most papers they publish are not likely to be cited often! On the other hand, there are a few papers like the paper by O. H. Lowry on protein measurement with the Folin phenol reagent, which was published in 1951 in the Journal of Biological Chemistry (IF of only 5.6) and has been cited no less than 293,328 times, or the paper by U. K. Laemmli on the cleavage of structural proteins during assembly of the head of bacteriophage T4, which was published in Nature in 1970 and has been cited 192,022 times. Thus, the total number of times that a paper is cited in its lifetime is likely to be a better representation of the quality of the research work carried out than the IF of the journal in which the paper is published. The important distinction here is that the IF relates only to the journal, but the number of times that a particular paper is cited reflects the quality of the research work.
If institutions so desire, they can use the h index to determine the impact of a researcher's publications. For example, an h index of 25 indicates that the researcher has 25 papers that have been cited at least 25 times, and an h index of 50 indicates that 50 of the researcher's papers have been cited at least 50 times. The higher the h index is, the higher the impact is that a researcher has had on the literature. The h index for individual researchers is obtained from the ISI Web of Science, a publication of Thomson Reuters.
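The definition above translates directly into a small computation over a researcher's citation counts. A minimal sketch (the citation counts here are invented for illustration; real values would come from the ISI Web of Science):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the researcher has h papers each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for nine papers
print(h_index([120, 45, 33, 20, 18, 9, 6, 2, 0]))  # -> 6
```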
There are numerous ways by which the IF can be manipulated. One can elevate the factor by increasing the numerator (the number of citations) while decreasing the denominator. The easiest way of decreasing the denominator is to decrease the number of articles published. Nature and Science each publish more than 900 articles per year, and Hepatology now publishes approximately 350 articles per year. CA: A Cancer Journal for Clinicians, the journal with the highest IF, publishes only reviews and, moreover, only approximately 20 per year (Table 1). Reviews tend to be cited more heavily than original articles. A broad review on cancer, for example, is likely to be cited more often than an original article in a narrow field such as toll-like receptors in autoimmune liver disease. CA: A Cancer Journal for Clinicians was cited only 2692 times in 2007 (Hepatology was cited 6462 times), and yet it has the highest IF. Consequently, some experts are of the opinion that citations of review articles should not be counted in the numerator when the IF is calculated. Another way in which the IF can be gamed is through editorial policies that favor manuscripts that frequently cite the journal itself. A recent editorial has taken this to the extreme.1 The particular journal had an IF of only 0.66. As a means of demonstrating what they termed the absurdity of the IF, the editors cited in the editorial every single article published in that journal over the previous 2-year period, thereby increasing the IF for that journal to 1.44 the following year!
Various measures may be taken by editorial teams to increase the IF during their term at the helm. This is somewhat akin to television anchors who, aiming at higher ratings, present trendy and glamorous items during their news hour rather than broadcasting important news. If professional societies do use the IF to determine the stature of the journal, they should compare the IF only within the field. This is because as the field narrows, the number of citations of the journals in the field decreases. It is appropriate that Nature be compared only with Science, that the New England Journal of Medicine be compared with the Journal of the American Medical Association, that Gastroenterology be compared with Gut, and that Hepatology be compared with the Journal of Hepatology.
The 2-year period used to determine the IF also may not be suitable for journals publishing clinical research. Quality clinical research has an impact over many years. Clearly, frequent citations of a paper within 2 years indicate the immediacy of the research. A high-impact clinical paper may not be cited immediately, but if it is cited for many years, it has a high impact, regardless of the IF of the journal in which it was published. Although the selection of the 2-year period has a somewhat arbitrary basis, when journals are studied within specific disciplines, the rankings based on the 1-, 7-, or 15-year IF do not differ significantly.
There are probably better ways to judge the impact of a journal. The PageRank, named after a formula devised by Google cofounder Larry Page, is another method used to determine the quality of a journal. Unlike the IF, which treats all citations equally, the PageRank gives a higher weight to citations in journals of high caliber such as Nature and Science. That is, if the IF is a reflection of the popularity of a journal, the PageRank indicates the prestige of the journal. For example, the Journal of Biological Chemistry ranks only about 200th among the 5709 journals listed in the SCI according to its IF. On the other hand, it ranks first by its PageRank; that is, it tends to be cited in more prestigious journals. We can extend the concept further by taking into account both the IF and PageRank using the y factor. Journals that score highly on the basis of their y factor are highly ranked by their IF or PageRank or, more likely, both. According to the PageRank, Nature ranks second after the Journal of Biological Chemistry, with Science a more distant third. The Proceedings of the National Academy of Sciences and the New England Journal of Medicine rank lower. With the y factor, Nature and Science are ranked very closely at the top, just ahead of the New England Journal of Medicine. The PageRank and the y factor for Hepatology and related journals are not readily available.
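The weighting idea behind a PageRank-style journal ranking can be sketched with a power iteration over a toy citation matrix (the journals and citation counts below are invented for illustration; real rankings are computed over the full SCI citation graph, and the damping constant is an assumption borrowed from the web version of the algorithm):

```python
import numpy as np

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Power iteration on a citation matrix.
    adj[i, j] = number of citations from journal j to journal i,
    so a citation from a highly ranked journal carries more weight."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # guard against journals that cite nothing
    m = adj / col_sums             # column-stochastic transition matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (m @ rank)
    return rank

# Toy example: three journals; journal 2 receives most of its citations
# from journal 0, which is itself heavily cited.
citations = np.array([[0.0, 5.0, 1.0],
                      [3.0, 0.0, 1.0],
                      [8.0, 2.0, 0.0]])
print(pagerank(citations).round(3))
```

Unlike the raw citation count in the IF numerator, the scores here depend on *who* is doing the citing, which is exactly the distinction between popularity and prestige drawn above.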
Most of the articles in the field of liver diseases are published in the journals listed in Table 2. Gastroenterology has the highest IF (11.7); it is followed by Hepatology (10.7) and Gut (10). However, if one examines only the highly cited articles in the field of liver diseases between 1993 and 2009, a different picture emerges: the majority of these high-impact articles were, in fact, published in Hepatology (Table 3). Again, it is important to note that review articles were cited disproportionately more often than original articles.
| Journal | Impact Factor |
| --- | --- |
| Gastroenterology | 11.7 |
| Hepatology | 10.7 |
| Gut | 10 |
| Journal of Hepatology | 6.6 |
| American Journal of Gastroenterology | 6.1 |
| Clinical Gastroenterology and Hepatology | 5.5 |
| Scandinavian Journal of Gastroenterology | 1.8 |
| Digestive Diseases and Sciences | 1.3 |
| Journal | Number of Citations | | |
| --- | --- | --- | --- |
| American Journal of Gastroenterology | – | 1 (1) | 2 |
| Gastroenterology | – | 2 (1) | 14 (2) |
| Hepatology | 1 (1) | 4 (1) | 39 (10) |
| Journal of Hepatology | 1 (1) | 1 (1) | 2 (2) |
How then should these indices affect editorial policies? We believe they should not affect editorial policies at all. An editorial policy aimed solely at increasing the IF is not in the best interest of either science or the journal. The purpose of a journal is to publish all the research that is fit to be published. Accepting more review articles or fewer original manuscripts does a disservice to the journal and to the field. Acceptance of manuscripts should not be based on how likely the paper is to be cited frequently in the future. After all, there are no data to suggest that editors are particularly reliable when it comes to predicting the future. We at Hepatology endeavor to publish work that is done well: the conclusions are supported by the data, and the contribution is important to the field. Thus, any paper that furthers the understanding of a disease process, results in a change in clinical management, or tests an intriguing hypothesis is deserving of publication. We conclude that the IF should be used only to identify the relative influence of journals within a narrow field. The impact of a researcher's publications can be gauged by the h index. Most important of all, the quality of a paper is to be judged by the reader and recognized in the number of times that it is cited, not by the IF of the journal in which it is published.