In this issue, Leide et al. report a study of first-year history undergraduates that tested the use and effectiveness of “essay type” as a task-focused query-formulation device. The authors note that domain novices in C.C. Kuhlthau's (1993) Stage 3 (the exploration stage of researching an assignment) often do not know their information need; this causes them to return to Stage 2 (the topic-selection stage) when selecting keywords to formulate a query to an Information Retrieval (IR) system. The authors hypothesize that, rather than returning to an earlier stage, searchers should be moving forward toward a goal state: the performance of the task for which they are seeking the information. For domain-novice undergraduates seeking information for a course essay, the authors define the task as selecting a high-impact essay structure that will put the students' learning on display for the course instructor. The authors randomly assigned 78 history undergraduates to an intervention group and a control group. The dependent variable was essay quality, based on an evaluation of each student's essay by a research team member and the marks given to the essay by the course instructor. The authors conclude that the evidence is inconclusive as to whether conscious or formal consideration of essay type can serve as the basis of a task-focused query-formulation device for IR.
Lim reports on a study that examined the present status and influence of systems offices by exploring the power differences among five principal functional units, based on strategic contingencies theory. A mail questionnaire was sent to the principal functional unit heads of each of 95 university libraries belonging to the Association of Research Libraries in the United States. A total of 484 questionnaires were sent, and 235 were returned. The three major findings of this study were that (1) systems offices had more perceived power than all units but public services; (2) systems offices had higher levels on contingency variables than did most of the other units; and (3) criticality was a factor affecting perceived power between systems offices and most of the other units. The findings imply that strategic contingencies theory may be partially applicable to library settings.
Bar-Ilan et al. investigate the similarities and differences between rankings of search results by users and by search engines. Sixty-seven students took part in a 3-week-long experiment, during which they were asked to identify and rank the top 10 documents from the set of URLs that were retrieved by three major search engines (Google, MSN Search, and Yahoo!) for 12 selected queries. The URLs and accompanying snippets were displayed in random order, without disclosing which search engine(s) retrieved any specific URL for the query. The authors computed the similarity of the users' and search engines' rankings using four nonparametric correlation measures in [0,1] that complemented each other. The findings show that the similarities between the users' choices and the rankings of the search engines are low. The authors also examined the effects of the presentation order of the results and of the thinking styles of the participants. Presentation order influences the rankings, but overall the results indicate that there is no “average” user; even when users have the same basic knowledge of a topic, they evaluate information in their own context, which is influenced by cognitive, affective, and physical factors.
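The article itself specifies which four measures were used; purely as an illustration of the general idea, one standard way to map a rank correlation into [0,1] is to rescale Kendall's tau. A minimal sketch (hypothetical function name, assuming two complete rankings of the same URLs with no ties):

```python
from itertools import combinations

def kendall_similarity(rank_a, rank_b):
    """Similarity in [0, 1] between two rankings of the same items,
    based on Kendall's tau: 1.0 = identical order, 0.0 = reversed."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    pairs = list(combinations(rank_a, 2))
    # A pair is concordant when both rankings order it the same way.
    concordant = sum(
        1 for x, y in pairs
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) > 0
    )
    tau = (2 * concordant - len(pairs)) / len(pairs)
    return (tau + 1) / 2
```

With no ties, this rescaled value equals the fraction of concordant pairs, which makes the [0,1] interpretation direct.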
Hert et al. focus on how to understand and model metadata requirements to support the work of end users of an integrative statistical knowledge network (SKN). The authors report on a series of user studies that provide an understanding of metadata elements necessary for a variety of user-oriented tasks, related business rules associated with the use of these elements, and their relationship to other perspectives on metadata model development. The authors conclude that this work demonstrates the importance of the user perspective in this type of design activity and provides a set of strategies by which the results of user studies can be systematically utilized to support that design.
Zhao and Strotmann present evidence that, in some research fields, research published in journals and reported on the Web may collectively represent different evolutionary stages of the field, with journals lagging a few years behind the Web on average, and therefore that a “two-tier” scholarly communication system may be evolving. The authors compared the intellectual structures of the XML research field, a subfield of computer science, as revealed by three author co-citation analyses (ACA) covering two time periods: from the field's beginnings in 1996 through 2001, and from 2001 through 2006. For the first time period, the authors analyzed research articles both from journals as indexed by the Science Citation Index (SCI) and from the Web as indexed by CiteSeer; a third ACA used SCI data for the second time period. The authors found that most of the trends visible when comparing SCI-based ACA results across the two time periods were already apparent in the CiteSeer-based ACA results for the first time period. The authors conclude that, in such fields, ACA using articles published on the Web as a data source can outperform traditional ACA using articles published in journals, and that it is therefore important to use multiple data sources in citation analysis studies of scholarly communication.
Leydesdorff examines betweenness centrality as an indicator of the interdisciplinarity of scientific journals. Social network analysis provides a set of centrality measures such as degree, betweenness, and closeness centrality. These measures are first analyzed for the entire set of 7,379 journals included in the Journal Citation Reports of the Science Citation Index and the Social Sciences Citation Index 2004, and then also in relation to local citation environments that can be considered as proxies of specialties and disciplines. Betweenness centrality is shown to be an indicator of the interdisciplinarity of journals, but only in local citation environments and after normalization; otherwise, the influence of degree centrality (size) overshadows the betweenness-centrality measure. The indicator is applied to a variety of citation environments, including policy-relevant ones such as biotechnology and nanotechnology. The values of the indicator remain sensitive to the delineations of the set because of the indicator's local character. Maps showing interdisciplinarity of journals in terms of betweenness centrality can be drawn using information about journal citation environments.
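For a sense of what the measure captures: the betweenness centrality of a node counts, for every pair of other nodes, the fraction of shortest paths between them that pass through that node, so a journal that bridges otherwise separate citation clusters scores high. A minimal brute-force sketch for small, unweighted, undirected graphs (hypothetical helper, not the method or software used in the article):

```python
from collections import deque

def betweenness(adj):
    """Unnormalized betweenness centrality for an unweighted,
    undirected graph given as {node: set(neighbors)}."""
    def bfs(s):
        # Shortest-path distances and shortest-path counts from s.
        dist, sigma, q = {s: 0}, {s: 1}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    sigma[w] = 0
                    q.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
        return dist, sigma

    nodes = list(adj)
    info = {s: bfs(s) for s in nodes}
    cb = {v: 0.0 for v in nodes}
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            d_s, sig_s = info[s]
            d_t, sig_t = info[t]
            if t not in d_s:
                continue  # s and t are disconnected
            for v in nodes:
                if v in (s, t) or v not in d_s or v not in d_t:
                    continue
                # v lies on a shortest s-t path iff distances add up.
                if d_s[v] + d_t[v] == d_s[t]:
                    cb[v] += sig_s[v] * sig_t[v] / sig_s[t]
    return cb
```

On a path a–b–c the middle node scores 1.0 (it sits on the single shortest a–c path); in a star, the hub collects one unit per leaf pair. This also illustrates the article's point that raw betweenness scales with the size and shape of the chosen environment, hence the need for normalization.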
Bornmann and Daniel describe the recently proposed (Hirsch, 2005) h index for quantifying the research output of individual scientists. The authors argue that the claim that the h index captures, in a single number, a good representation of a scientist's lifetime scientific achievement, together with the ease of calculating the index from common literature databases, creates a danger of improper use of the index. The authors describe the advantages and disadvantages of the h index and summarize the studies on the convergent validity of the index. They also introduce corrections and complements as well as single-number alternatives to the h index.
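The "simple calculation" is indeed simple: a scientist has index h if h of their papers have at least h citations each and the rest have no more than h each. A minimal sketch (hypothetical function name, assuming a plain list of per-paper citation counts):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # paper at this rank still clears the threshold
        else:
            break
    return h
```

That this reduces a whole citation distribution to one threshold crossing is precisely what the authors flag: two very different careers can share an h value, which motivates the corrections and single-number alternatives they survey.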