JASIS, the Journal of the American Society for Information Science, changed its name to JASIST, the Journal of the American Society for Information Science and Technology, beginning in 2001 with volume 52. This was the second time the journal changed its name. It started out as American Documentation in 1950 and changed its name to the Journal of the American Society for Information Science in 1970, starting with volume 21.
In this article we will provide a short bibliometric characterization of the first 10 JASIST volumes – volumes 52 to 61 for the years 2001 to 2010. This characterization includes the list of most highly cited articles published in JASIST as well as citation counts that will be compared to “readership counts” retrieved from Mendeley, an online reference manager (www.mendeley.com).
Bibliometric analyses of JASIS have been conducted before, with the main emphasis on analyzing different characteristics of authors. In an article published in 1999, Lipetz studied JASIS authorship during the first five decades of JASIS (and American Documentation) by selecting one volume from each decade. His paper appeared in a special issue of JASIS marking the 50th anniversary of the journal. Another paper, studying the characteristics of JASIS authors between 1970 and 1996, was published by Al-Ghamdi et al. in 1998. Different trends in the first 50 volumes of JASIS were analyzed by Koehler et al. in a study published in 2000 that included article characteristics such as the number and type of references and the length of the paper and title, in addition to author characteristics. More recently, Chua and Yang studied author, co-authorship and keyword distributions for two 10-year periods of JASIST publications.
He and Spink analyzed the geographic distribution of JASIST and Journal of Documentation authors during a 50-year period, between 1950 and 1999, while Wormell studied the geographical distribution of both authors and readers (based on subscriptions) in the mid-1990s; JASIS was among the journals analyzed.
Only a few studies have emphasized citations. Nisonger analyzed the position of JASIS in various LIS journal rankings in 1999 and found that citation data was one of the most frequently employed criteria for ranking journals in the field. Earlier, Harter and Hooten carried out a study of nine volumes of JASIS that also included citation data. In a study published in 1999, Cronin and Shaw analyzed citation rates and uncitedness in four LIS journals, including JASIST, while in a recent work Sin studied the effects of international co-authorship on citation counts in six LIS journals.
The aim of the current study is to analyze the citations received by JASIST articles published between 2001 and 2010. It is well known that citation counts depend on the citation database used for data collection, even when all the data are collected at the same time [11, 12]; thus in this study we collected data from three major citation databases: Thomson Reuters' Web of Science (WoS), Elsevier's Scopus and Google Scholar (GS).
Citations reflect only one aspect of the use of scholarly articles. Not all the articles we read appear in the reference lists of the works we publish, even though they might be influential. This is especially true of readers who are not writers, such as students, librarians, information professionals and others interested in information science. It is therefore of interest to explore the readership of scientific articles. In the past, such data were gathered through library usage studies (for example [13, 14]), but today this exploration can be carried out by analyzing download statistics or by consulting reference managers [16, 17, 18]. In this study we collected readership counts from the reference manager Mendeley and compared them with citation counts retrieved from WoS, Scopus and GS.
Mendeley readership counts are just one example of a set of alternative metrics that can be derived from the web and from Web 2.0 applications. Other examples include citations or mentions of peer-reviewed journal articles on Twitter or in blogs [21, 22]. In addition, mentions on CiteULike, Facebook, Delicious and Wikipedia, as well as views and downloads on Slideshare, can be tracked through the total-impact website (http://total-impact.org). Other tools that allow easy production of altmetric measures include ReaderMeter (http://readermeter.org). Publishers are also interested in alternative measures; for example, PLoS reports readership counts from Mendeley and CiteULike for all the articles it publishes, in addition to the download and view counts that some other publishers report as well. One of the reasons for the growing interest in alternative metrics is that they can be calculated almost immediately after publication, providing early feedback on interest in a specific article, whereas citations in peer-reviewed publications take much longer to accumulate. Eysenbach has shown that the number of early tweets may predict whether an article will be highly cited later on. Correlation with citations is interesting, but the value of alternative metrics is that they provide information on “impact” in different senses that complement citations. As noted above, reading an article and thinking highly of it does not necessarily mean that the reader will actually cite it in a journal paper.