References

  • Althouse, B.M., West, J.D., Bergstrom, C.T., & Bergstrom, T. (2009). Differences in impact factor across fields and over time. Journal of the American Society for Information Science and Technology, 60(1), 27–34.
  • Beirlant, J., Glänzel, W., Carbonez, A., & Leemans, H. (2007). Scoring research output using statistical quantile plotting. Journal of Informetrics, 1(3), 185–192.
  • Bensman, S.J. (1996). The structure of the library market for scientific journals: The case of chemistry. Library Resources & Technical Services, 40, 145–170.
  • Bensman, S.J. (2007). Garfield and the impact factor. Annual Review of Information Science and Technology, 41(1), 93–155.
  • Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45, 199–245.
  • Bornmann, L., & Daniel, H.-D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64(1), 45–80.
  • Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5(1), 228–230.
  • Bornmann, L., Mutz, R., Neuhaus, C., & Daniel, H.-D. (2008). Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics (ESEP), 8(1), 93–102.
  • Boyack, K.W., & Klavans, R. (2011). Multiple dimensions of journal specificity: Why journals can't be assigned to disciplines. In E. Noyons, P. Ngulube, & J. Leta (Eds.), The 13th Conference of the International Society for Scientometrics and Informetrics (Vol. I, pp. 123–133). Durban, South Africa: ISSI, Leiden University and the University of Zululand.
  • Cameron, A.C., & Trivedi, P.K. (1998). Regression analysis of count data. Cambridge, UK: Cambridge University Press.
  • Carpenter, M.P., & Narin, F. (1973). Clustering of scientific journals. Journal of the American Society for Information Science, 24, 425–436.
  • Cohen, J. (1977). Statistical power analysis for the behavioral sciences. Orlando, FL: Academic Press.
  • de Andrés, A. (2011). Evaluating research using impact and Hirsch factors. Europhysics News, 42(2), 29–31.
  • Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.
  • Garfield, E. (1979). Is citation analysis a legitimate evaluation tool? Scientometrics, 1(4), 359–375.
  • Gingras, Y., & Larivière, V. (2011). There are neither “king” nor “crown” in scientometrics: Comments on a supposed “alternative” method of normalization. Journal of Informetrics, 5(1), 226–227.
  • Glänzel, W. (2010). On reliability and robustness of scientometrics indicators based on stochastic models. An evidence-based opinion paper. Journal of Informetrics, 4(3), 313–319.
  • Glänzel, W., & Schubert, A. (2003). A new classification scheme of science fields and subfields designed for scientometric evaluation purposes. Scientometrics, 56(3), 357–367.
  • Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics, 87(2), 415–424.
  • Leydesdorff, L. (2008). Caveats for the use of citation indicators in research and journal evaluation. Journal of the American Society for Information Science and Technology, 59(2), 278–287.
  • Leydesdorff, L. (2009). How are new citation-based journal indicators adding to the bibliometric toolbox? Journal of the American Society for Information Science and Technology, 60(7), 1327–1336.
  • Leydesdorff, L. (2012). Alternatives to the journal impact factor: I3 and the top-10% (or top-25%?) of the most-highly cited papers. Scientometrics, 92(2), 355–365.
  • Leydesdorff, L. (in press-a). An evaluation of impacts in “nanoscience & nanotechnology”: Steps towards standards for citation analysis. Scientometrics.
  • Leydesdorff, L. (in press-b). Accounting for the uncertainty in the evaluation of percentile ranks. Journal of the American Society for Information Science and Technology. Retrieved from: http://arxiv.org/ftp/arxiv/papers/1204/1204.1894.pdf
  • Leydesdorff, L., & Bornmann, L. (2011a). How fractional counting affects the impact factor: Normalization in terms of differences in citation potentials among fields of science. Journal of the American Society for Information Science and Technology, 62(2), 217–229.
  • Leydesdorff, L., & Bornmann, L. (2011b). Integrated impact indicators (I3) compared with impact factors (IFs): An alternative design with policy implications. Journal of the American Society for Information Science and Technology, 62(11), 2133–2146.
  • Leydesdorff, L., & Bornmann, L. (2012). Percentile ranks and the Integrated Impact Indicator (I3). Journal of the American Society for Information Science and Technology, 63(9), 1901–1902.
  • Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62(7), 1370–1381.
  • Leydesdorff, L., & Opthof, T. (2010a). Normalization at the field level: Fractional counting of citations. Journal of Informetrics, 4(4), 644–646.
  • Leydesdorff, L., & Opthof, T. (2010b). Scopus' Source Normalized Impact per Paper (SNIP) versus the Journal Impact Factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2396.
  • Leydesdorff, L., & Shin, J.C. (2011). How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines. Journal of the American Society for Information Science and Technology, 62(6), 1146–1155.
  • Lundberg, J., Fransson, A., Brommels, M., Skår, J., & Lundkvist, I. (2006). Is it better or just the same? Article identification strategies impact bibliometric assessments. Scientometrics, 66(1), 183–197.
  • Martyn, J., & Gilchrist, A. (1968). An evaluation of British scientific journals. London: Aslib.
  • Moed, H.F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
  • Moed, H.F., & Van Leeuwen, T.N. (1996). Impact factors can mislead. Nature, 381(6579), 186.
  • National Science Board. (2012). Science and engineering indicators. Washington DC: National Science Foundation. Retrieved from: http://www.nsf.gov/statistics/seind12/
  • Price, D.J. de Solla. (1965). Networks of scientific papers. Science, 149(3683), 510–515.
  • Price, D.J. de Solla. (1970). Citation measures of hard science, soft science, technology, and nonscience. In C.E. Nelson & D.K. Pollock (Eds.), Communication among scientists and engineers (pp. 3–22). Lexington, MA: Heath.
  • Pudovkin, A.I., & Garfield, E. (2002). Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53(13), 1113–1119.
  • Pudovkin, A.I., & Garfield, E. (2009). Percentile rank and author superiority indexes for evaluating individual journal articles and the author's overall citation performance. CollNet Journal of Scientometrics and Information Management, 3(2), 3–10.
  • Rabe-Hesketh, S., & Skrondal, A. (2008). Multilevel and longitudinal modeling using Stata. College Station, TX: Stata Press.
  • Radicchi, F., & Castellano, C. (2012). Testing the fairness of citation indicators for comparison across scientific domains: The case of fractional citation counts. Journal of Informetrics, 6(1), 121–130.
  • Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences, 105(45), 17268–17272.
  • Rafols, I., & Leydesdorff, L. (2009). Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects. Journal of the American Society for Information Science and Technology, 60(9), 1823–1835.
  • Rafols, I., Leydesdorff, L., O'Hare, A., Nightingale, P., & Stirling, A. (2012). How journal rankings can suppress interdisciplinary research: A comparison between innovation studies and business & management. Research Policy, 41(7), 1262–1282.
  • Rafols, I., Porter, A., & Leydesdorff, L. (2010). Science overlay maps: A new tool for research policy and library management. Journal of the American Society for Information Science and Technology, 61(9), 1871–1887.
  • Rousseau, R. (2006). Timelines in citation research. Journal of the American Society for Information Science and Technology, 57(10), 1404–1405.
  • Rousseau, R. (2012). Basic properties of both percentile rank scores and the I3 indicator. Journal of the American Society for Information Science and Technology, 63(2), 416–420.
  • Schreiber, M. (in press-a). Inconsistencies of recently proposed citation impact indicators and how to avoid them. Journal of the American Society for Information Science and Technology, preprint available at arXiv:1202.3861.
  • Schreiber, M. (in press-b). Uncertainties and ambiguities in percentiles and how to avoid them. Journal of the American Society for Information Science and Technology.
  • Seglen, P.O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638.
  • Seglen, P.O. (1997). Why the impact factor of journals should not be used for evaluating research. British Medical Journal, 314, 498–502.
  • Sher, I.H., & Garfield, E. (1965). New tools for improving and evaluating the effectiveness of research. Paper presented at the second conference on Research Program Effectiveness, July 27–29, Washington, DC.
  • Sirtes, D. (2012). Finding the Easter eggs hidden by oneself: Why Radicchi and Castellano's (2012) fairness test for citation indicators is not fair. Journal of Informetrics, 6(3), 448–450.
  • Small, H., & Sweeney, E. (1985). Clustering the Science Citation Index using co-citations I. A comparison of methods. Scientometrics, 7(3–6), 391–409.
  • Vanclay, J.K. (2012). Impact factor: Outdated artefact or stepping-stone to journal certification? Scientometrics, 92(2), 211–238.
  • Waltman, L., & Van Eck, N.J. (2010). A general source normalized approach to bibliometric research performance assessment. Paper presented at the 11th Conference on Science & Technology Indicators (STI), Leiden, September 8–11.
  • Waltman, L., Van Eck, N.J., & Van Raan, A.F.J. (2012). Universality of citation distributions revisited. Journal of the American Society for Information Science and Technology, 63(1), 72–77.
  • White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48, 817–830.
  • Zhou, P., & Leydesdorff, L. (2011). Fractional counting of citations in research evaluation: A cross- and interdisciplinary assessment of the Tsinghua University in Beijing. Journal of Informetrics, 5(3), 360–368.
  • Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: The audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.