Editorial: Assessing Academic Researchers




On a recent trip to China and India, I had the opportunity to talk with many young researchers at various universities about the expectations that they must meet in order to succeed professionally. Many of them thought that the road to success was measured in various forms of “scientometric” data, such as h-index values and the number of publication citations. I do think that scientometric data have their uses, but I am appalled by their overuse and abuse. These discussions encouraged me to share the criteria for earning tenure at Stanford University’s Chemistry Department, where I was the department chair for six years. I am not necessarily advocating that anyone adopt our ways, which reflect the current American tenure system, but I do think that a careful study of our criteria might be helpful. I am aware of the problems embedded in the American tenure system. I am also very much mindful of the arrogance of foreigners when they are not sensitive to another country’s culture and ways. Still, let me dare to offer some advice.


In the American university system, under the current tenure setup, we hire researchers as assistant professors and then decide within seven years whether or not we want them to remain permanently in the department. It is always a difficult decision, because the faculty members to whom we give tenure determine the quality, reputation, and atmosphere of our department. Beginning faculty members work hard to achieve tenure, and the consequences of misjudging the promise of a beginning faculty member are severe, not only for the faculty member but also for the department. Consequently, much emphasis is placed on judging a researcher’s worth, and this task requires that great attention be given to the evaluation process so that it is fair, transparent, and consistent with the standards that the department sets. Every effort is made to avoid decisions based simply on friendship or favoritism on the part of the most established members of the department.

At Stanford University’s Chemistry Department, I tell beginning faculty members that there are three criteria for achieving tenure. New hires need to be:

  1. Outstanding departmental citizens. Our department is small, so we need everyone to work cohesively together for the common good of our group.
  2. Good teachers. Yes, we would love every faculty member to be a great teacher, but we only ask that all faculty members become good teachers, because anyone who aspires to achieve that status can do so. Teaching is a critical component of our service to a teaching and research institution, and we owe it to students to take our instruction to the highest level possible.
  3. Exemplary researchers. This last criterion makes sense because Stanford University is primarily a research university, but it is the most difficult to assess and presents the greatest challenge.

How do we judge someone’s worth as a researcher? Of course, all tenured faculty members in the department have a vote on this, but the process also goes through many other layers of university inspection and consideration. For this reason, it is important to define this last criterion as best we can. The worthiness of a faculty member is not judged solely by the members of the department but, more importantly, by the contents of the 10 to 15 letters of recommendation that we collect from experts outside the department, both nationally and internationally. We ask these experts whether the candidate’s research has changed the community’s view of chemistry in a positive way.

We do not look into how much funding the candidate has brought to the university in the form of grants. We do not count the number of published papers; we also do not rank publications according to authorship order. We do not use some elaborate algorithm that weighs publications in journals according to the impact factor of the journal. We seldom discuss h-index metrics, which aim to measure the impact of a researcher’s publications. We simply ask outside experts, as well as our tenured faculty members, whether a candidate has significantly changed how we understand chemistry.

All of this is quite different from what I heard during my recent trips abroad. It seemed to me that, in assessing a researcher’s value, too much emphasis was placed on the number of publications churned out by a researcher instead of on the quality and originality of the work. Just as an IQ score does not capture the creativity and originality of a person’s work, the h-index is not a full measure of a researcher’s worth. Some rough correlations do exist, but in judging researchers early in their careers, the h-index seems to be a poor measure. It is a trailing, rather than a leading, indicator of professional success.
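For readers unfamiliar with the metric, a brief sketch may help show why the h-index trails rather than leads: a researcher has index h if h of his or her papers have each been cited at least h times, so the index can only grow as citations slowly accrue. The short Python example below is my own illustration, not part of this editorial, and uses made-up citation counts purely to demonstrate the calculation.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this paper still supports an index of `rank`
        else:
            break         # the remaining papers are cited too few times
    return h

# Made-up citation counts for a hypothetical early-career researcher:
# even highly original papers have had little time to be cited, so the
# index stays small and rises only years later as citations accumulate.
print(h_index([25, 8, 5, 4, 1, 0, 0]))  # prints 4
```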

It cannot be denied that knowledge of the number of citations of a publication has value and serves as a first measure of how well known the work is and how much impact this specific publication has had. Not being highly cited, however, does not mean that someone’s work will never have value. Examples exist where the citation numbers do not immediately indicate the value of the work. I want to bring to your attention one such instance, and I purposely choose something outside of chemistry to make this point. Consider the publication S. Weinberg, Phys. Rev. Lett. 1967, 19, 1264–1266, entitled “A Model of Leptons”. By no stretch of the imagination is Physical Review Letters considered an obscure journal. In this paper, Steven Weinberg (who was a visiting professor at MIT) showed that the weak nuclear force and the much stronger electromagnetic force could be unified through the interchange of subatomic particles, in spite of the huge difference in their strengths. This work laid the basis for what is called the Standard Model of particle physics, for which Weinberg shared the Nobel Prize in Physics in 1979. In 1967 and 1968, there were no citations to this publication; in 1969 and 1970, there was one citation each year; and in 1971, the number jumped to four, one of the citations being a self-citation, that is, a reference by Weinberg to his earlier work. At present, this article has been cited 5 224 times, according to the Thomson Reuters Web of Knowledge. It is easy to find other cases where there has been a slow induction period because some idea or measurement lies outside of what is popularly accepted at the time of publication.

Other institutions may need to use different measures, such as the size of the research group or the number of papers published, which are simpler to explain to university administrators who have little understanding of the field. However, we believe that our criteria truly help us appoint the best faculty members for our department at Stanford University. We also think that our criteria closely reflect the procedures by which various prizes are awarded in our field and how individuals are elected to membership in the different science academies in our country.

I do not want to leave you with the impression that our procedures are perfect. We have inadvertently tenured a few people who later showed less enthusiasm for research and teaching than we had anticipated. Nevertheless, I think this procedure is the best method for us. Our criteria are not for everyone to follow, but I do believe that they have helped us achieve true excellence and distinction in research.