Much rides on the performance of a scientific journal. Professional societies track where their journals rank to gauge quality and relevance and to project future subscription rates. Scientists, university faculty, and other professionals follow journal rankings because that information is often used when they are evaluated for promotion or retention. Funding entities consider where applicants publish, and how those journals rank, when choosing among competitive proposals.
A movement is currently underway, however, to abandon journal impact factors in favor of more accurate ways to assess science. The movement took shape this past December at the annual meeting of the American Society for Cell Biology, where a group of scientists and journal editors convened and developed a document titled "The San Francisco Declaration on Research Assessment" (http://am.ascb.org/dora/, accessed 20 May 2013). The authors of this declaration acknowledged the need to evaluate the quality and impact of all scientific outputs, including but not limited to publications. They also recognized that whatever metric was used, it should be accurate and applied correctly.
Traditionally, journal rankings are based on impact factors calculated through Thomson Reuters' Web of Knowledge, and Thomson Reuters has largely dominated the market with its journal impact factor. It is interesting to note, however, that the impact factor was never intended to be a measure of scientific quality; its primary purpose was to help librarians determine which journals to purchase. In fact, differences among journals in citation policies, types of manuscripts published (e.g., primary research vs. review articles), or even the specific field of study add considerable noise to the impact factor. Some have even suggested that professional societies "game the system" by manipulating impact factors through editorial policy. To address these concerns, the Declaration provided 18 recommendations. Among them are these 2:
- Do not use journal-based metrics to index the quality of individual research articles in making hiring, promotion, or funding decisions. Make assessments based on scientific content rather than publication metrics.
- Publishers should reduce emphasis on the journal impact factor as the sole metric used to promote a journal and instead present a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor [http://www.eigenfactor.org/, accessed 17 May 2013], SCImago [http://www.scimagojr.com/, accessed 17 May 2013], h-index).
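For readers unfamiliar with how the standard 2-year impact factor is computed, the definition is straightforward (the year 2013 and the numbers below are hypothetical, chosen only for illustration):

```latex
\mathrm{IF}_{2013} \;=\;
\frac{\text{citations received in 2013 by items published in 2011--2012}}
     {\text{citable items published in 2011--2012}}
```

A journal whose 200 citable items from 2011–2012 drew 300 citations in 2013 would thus have an impact factor of 1.5. The formula also makes clear why editorial mix matters: review articles typically attract more citations than primary research, so publishing more of them raises the numerator without changing what the journal contributes scientifically.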
A couple of weeks ago, Dr. Lenny Brennan, Editor-in-Chief of the Wildlife Society Bulletin, sent me a copy of journal ratings provided by Google Scholar, which rated the Journal of Wildlife Management second among the 20 journals listed in zoology. Even though I knew nothing about how the scores were derived or whether they were legitimate, I was pleased. In contrast, the Journal's current impact factor calculated through Thomson Reuters' Web of Knowledge was 1.522, which ranked 44th of 146 zoology journals and 85th of 134 ecology journals. At face value, these rankings seem average; I think our science and our journals are far better than average. I also queried rankings for the Journal in SCImago and EigenFactor. In both cases, the Journal ranked relatively high, especially in zoology (Table 1).
What does this mean for the Journal of Wildlife Management? Each of these ranking systems uses a slightly different algorithm. I will refrain from comparing and contrasting the algorithms here, but I will close with 2 final points. First, the Journal of Wildlife Management appears to score consistently higher in zoology than in the broader field of ecology, which makes sense given our focus on wildlife. Second, regardless of which metric you consider, our rankings range from excellent to very respectable. I would expect no less given the quality, relevance, and impact of the science conducted by those who publish in this journal.