A taxonomy of functional units for information use of scholarly journal articles

Authors


Abstract

Today's readers of scholarly literature want to read more in less time. With this in mind, this study applies the idea of the functional unit to the use of digital documents. A functional unit is the smallest information unit with a distinct function within the Introduction, Methods, Results and Discussion components of scholarly journal articles. Through a review and analysis of the literature and validation through user surveys, this study identifies a set of common functional units and examines how they are related to different tasks requiring use of information in journal articles and how they are related to each other for a particular information use task. The findings, presented in the form of a taxonomy, suggest a close relationship between functional units and information use tasks, and furthermore among a set of functional units for a particular information use task. This taxonomy can be used in the design of an electronic journal reading system to support effective and efficient information use.

INTRODUCTION

Research indicates that although the total time spent reading scholarly articles has increased, the time spent on each item read has declined. For a university science faculty member in the United States, the average number of articles read per year increased from 150 in 1977 to 280 in 2005, while the average time spent per article read decreased from 48 minutes in 1977 to 31 minutes in 2005 (Tenopir et al., 2009a, 2009b). This suggests that there is a need for electronic journal systems to support effective and efficient information use within academia.

The concept of genre, referring to the relatively stable and expected form and content for communication within a particular community (Breure, 2001), provides a means of looking at information system design from a document-oriented perspective. Most genre-based information science research is focused at the document level, but some studies (Dillon, 2004; Vaughan & Dillon, 1998) have taken a more analytical approach by studying the genre of components within journal research articles — Introduction, Methods, Results, and Discussion (IMRD). This research seeks to facilitate information use of journal research articles by exploiting functional units. Here, a functional unit is defined as a chunk of information embedded in an individual component of introduction, methods, results, and discussion, which serves a distinct communicative function. The concept of functional units is based on Swales' CARS (Create a Research Space) model (1990) and Sperber and Wilson's Relevance-theoretic Comprehension Procedure (1995).

This paper reports work that seeks to identify and map the functions in the core (IMRD) components of a journal article and their relationships with typical tasks using information in journal articles. By exploiting the mapping between functional units and tasks, we may help users to fulfill a particular task by presenting them with the most relevant text in the article rather than the entire article. “Journal article” in this paper specifically refers to scholarly journal articles which follow the conventional IMRD format for reporting research. This study focuses on the psychology domain because adherence to APA (American Psychological Association) style in this domain has resulted in a relatively mature research article genre. This study addresses the following research questions:

  • (i) What functional units exist within scholarly journal articles in the field of psychology?
  • (ii) How are functional units related to different tasks requiring use of information in journal articles?
  • (iii) How are functional units related to each other for a particular task requiring use of information in journal articles?

THEORETICAL FRAMEWORK

The concept of functional units as used in this study is based on Swales' CARS model, while the idea that functional units are inter-related is based on Sperber and Wilson's Relevance-theoretic Comprehension Procedure.

Swales' CARS Model

“A ‘move’ in genre analysis is a discoursal or rhetorical unit that performs a coherent communicative function in a written or spoken discourse” (Swales, 2004, p. 228). As noted by Swales, the move structure of an individual article component consists of functionally distinct steps. Based on a move analysis of 48 articles in the “hard” sciences, social sciences, and life & health sciences, Swales (1990, p. 141) proposes the CARS (Create a Research Space) model for writing academic introductions: 

Move 1: Establishing a territory
 Step 1: Claiming centrality and/or
 Step 2: Making topic generalization(s) and/or
 Step 3: Reviewing items of previous research
Move 2: Establishing a niche
 Step 1A: Counter-claiming or
 Step 1B: Indicating a gap or
 Step 1C: Question-raising or
 Step 1D: Continuing a tradition
Move 3: Occupying the niche
 Step 1A: Outlining purposes or
 Step 1B: Announcing present research
 Step 2: Announcing principal findings
 Step 3: Indicating RA structure

In this way, the overall meaning of “introduction” is realized through a sequence of moves, each of which is realized through several steps. The boundary between moves is indicated by changes in the type of information communicated. A number of studies of IMRD components within different corpora in various disciplines have been based on Swales' genre model. These include work on the Results (Brett, 1994; Thompson, 1993) and Discussion (Hopkins & Dudley-Evans, 1988; Lewin et al., 2001) components.

Sperber and Wilson's Relevance-theoretic Comprehension Procedure

Relevance Theory, proposed by Sperber and Wilson (1995), addresses everyday speech utterances from the cognitive perspective. Sperber and Wilson differentiate between two principles of relevance. The Cognitive Principle of Relevance states that human cognition tends to maximize relevance in processing information — to gain the greatest cognitive effects with the least processing effort. Applied to cognitive processes in verbal communication, the Communicative Principle of Relevance states that an intentional act of communication conveys the presumption of optimal relevance: that it is at least relevant enough to be worth the addressee's attention and is as relevant as the addresser could have made it given his or her abilities and preferences. Comprehension, then, starts with the recovery of linguistically encoded meaning, and continues with the recovery of the explicit meaning and the implicit meaning. The audience follows a path of least effort and stops at the first interpretation that satisfies his or her expectations of relevance. This is the Relevance-theoretic Comprehension Procedure (Wilson & Sperber, 2004, p. 613):

  • a. Follow a path of least effort in computing cognitive effects: Test interpretive hypotheses (disambiguations, reference resolutions, implicatures, etc.) in order of accessibility.
  • b. Stop when your expectations of relevance are satisfied (or abandoned).

Saracevic (2007) notes that Relevance Theory as proposed by Sperber and Wilson has had more impact on thinking about relevance in information studies than work on relevance from other fields. Some examples of this impact are Harter's (1992) work on psychological relevance and White's (2007) examination of cognitive effects and processing effort in bibliometric retrieval.

PREVIOUS STUDIES

Current genre research in information studies has focused on the genres of digital documents, such as web pages and weblogs. Nevertheless, a few studies have examined article components.

According to Dillon (2004), a component is a part-genre of a journal article. Vaughan & Dillon (1998) recruited expert users to categorize a set of paragraphs according to where they belong in an academic journal article. The experts' verbal protocols were subjected to a “how, why, what” content analysis, which showed that IMRD components have well-established roles: how they are read, why they are read, and what content they should contain. However, the “how, why, what” of reading were identified from users' conceptions of article components rather than from the documents themselves. IMRD components are also discussed briefly in related work that considers the role domain expertise can play in helping users locate information in articles (Dillon, 2000; Dillon & Schaap, 1996).

In work by Bishop and colleagues (Bishop, 1998, 1999; Bishop et al., 2000), components are the logical subdivisions of a journal article, including article titles, author names, external links, abstracts, references, etc. The functions of journal article components can be to support finding relevant documents, assessing document relevance before retrieval, reading articles, creating document surrogates, and reaggregating and integrating content into new documents. They found that readers tend to extract individual components from journal articles and incorporate them into their own writing. This idea was applied by Sandusky and Tenopir (2008) to tables and figures as components. However, the implementations of this idea, Bishop's DeLIVER testbed and Sandusky and Tenopir's ProQuest CSA prototype, focused on extracting logically discrete components from their embedding articles for searching and viewing. The work has also raised the questions of whether individual components can stand alone, and what the minimum necessary information would be (Sandusky & Tenopir, 2008).

Other studies on structured document retrieval, though addressing the importance of document parts in relation to document structure, do not consider genre conventions.

Unger (2002, 2006) was one of the first to bridge genre and relevance theory through his work on linguistic discourse. He suggests that genre information contributes to the comprehension procedure by providing contextual assumptions for the inferential process, thus fine-tuning expectations of relevance. Genre information can generate expectations of relevance which are more or less precise: more precise in terms of what utterances to expect in which sequence; less precise in terms of the expected form and content of the text, or the kind of cognitive effects or level of relevance to be expected. Unger argues that genre can be incorporated into relevance-theoretic comprehension procedure because of its influence on comprehension.

Yus (2007) extends the idea of bridging genre and relevance to weblog templates, with the aim of stabilizing the weblog genre. Relevance Theory differentiates procedural meaning (words encoding the manipulation of conceptual representations) from conceptual meaning (words encoding concepts). Yus considers that the weblog template possesses a procedural quality, since verbal or visual features of weblogs can trigger instant identification of the weblog genre. He suggests that “genre identification is bound to save mental effort and direct the addressee towards particular interpretive paths and lead to specific expectations of weblog information” (p. 124).

Previous studies have not fully investigated the genre of article components and the link between genre and relevance in using digital documents. Research is needed in this area.

OVERVIEW OF THIS STUDY

Derived from Swales' CARS model, a functional unit is the smallest possible unit of information, and related functional units should contain the minimum information necessary in a certain context. Following Sperber and Wilson's Relevance Theory, the comprehension procedure proceeds as follows: expectations of relevance generated by the most relevant functional unit can be extended to other related functional units within the component, which can be further extended to more related functional units beyond the component.

Our approach to studying functional units in the context of information use was, first, to identify common information use tasks of scholarly journal articles from the literature and to identify the functional units within psychology journal articles. We then conducted two surveys to validate these sets of information use tasks and functional units and to validate the relationships between functional units and information use tasks.

TAXONOMY DEVELOPMENT STUDY

Method

Identify information use tasks

Information use tasks of scholarly journal articles were identified from the relevant literature in two areas: the use of scholarly journals and Taylor's information use model.

Notwithstanding variations in the literature (Dillon, 2004; King & Tenopir, 1999; Tenopir, 2003; Tenopir et al., 2009a, 2009b; Wilson, 1994), “keeping up”, “reference” and “learning” are most frequently identified as purposes for using scholarly journal articles for academic work, and thus are taken as three major information tasks. To incorporate information use, the three information tasks “keeping up”, “reference” and “learning” need to be mapped onto an information use model.

Taylor's eight classes of information use (1991) provide a general framework characterizing the ways in which people use information: projective, motivational, personal or political, factual, confirmational, enlightenment, problem understanding, and instrumental. We classified each of the eight classes under “keeping up”, “reference” or “learning”, with classes sharing a category providing a variant focus: projective, motivational, and personal or political fall under “keeping up”; factual and confirmational under “reference”; enlightenment, problem understanding, and instrumental under “learning”. Because Taylor's motivational and personal or political classes arise for personal rather than work-related reasons, they were not considered further here. The remaining six classes, which address work-related information use, were adapted to suit the specific context of scholarly journal article use.

Identify functional units

Since not all models of functional units distinguish between macro (moves) and micro (steps) functions, for the sake of a parallel comparison, we took the smallest units of each move structure as the basis for the identification of functional units.

The functional unit taxonomy for psychology journal articles was developed in the following three steps:

  • (a) We examined existing move structures of the Introduction, Results, and Discussion components from well-acknowledged models: Swales' model (1990) for the Introduction, Brett's model (1994) for the Results, and Hopkins and Dudley-Evans' model (1988) for the Discussion. These models served as prototypes and were complemented with move analyses from other works. Because the Methods component is highly discipline-specific, we identified a set of functional units within Methods directly from a corpus of psychology articles.
  • (b) We refined the functional units from existing move structures based on their descriptions and examples. To ensure that the taxonomy was mutually exclusive, and to keep the number of functional units manageable for implementation in an information system, the existing move structures were reduced and refined as follows: duplicate or very similar units were merged; units that seemed supplementary were integrated into more dominant ones; inapplicable units were discarded.
  • (c) We applied the framework of functional units developed in steps (a) and (b) to the Introduction, Results, and Discussion components of twelve sample articles in order to assess its applicability to the psychology domain. From a graduate psychology course reading list, we selected twelve articles according to the following criteria: original research articles (excluding review articles and theory articles) in scholarly journals (excluding proceedings papers and book chapters), in reverse chronological order of publication year. Twelve articles were judged sufficient to represent the psychology research article genre because the functional units, except those related to the Methods component, came from the literature, and the subsequent validation study gave users an opportunity to suggest additional items. For the IRD components, the functional units identified above were used as a code book for coding the sample articles; for the Methods component, functional units were identified directly from the corpus. A single paragraph was taken as a coding unit and assigned at least one and at most three distinct functional unit values, provided that these functions were equally important for the paragraph. To determine inter-coder reliability, two coders separately coded six of the twelve articles.

Results

Identify information use tasks

Six tasks requiring use of information in journal articles were identified. The six tasks and their descriptions are as follows:

  • Keeping up

    To keep current with articles in the user's area of research

  • Refer to facts

    To consult specific factual information, e.g., data, phenomena

  • Refer to arguments

    To consult arguments, ideas or suggestions supporting a point made by the user

  • Learn about background

    To get to know a new area on which the user is embarking

  • Learn about particular

    To understand a particular problem with its details and associated interpretation, judgment, etc.

  • Learn how to

    To learn how to do something, e.g., operation, procedure

Identify functional units

In total, 52 functional units were identified: 13 in the Introduction component, 10 in Methods, 14 in Results, and 15 in Discussion. The functional units for the IRD components came from the literature; those for the Methods component came from the sample articles. Following the coding of functional units within the twelve articles, we counted the frequency of occurrence within each component. We found that 10 of the 13 functional unit values in Introduction were in use, 9 of 14 in Results, and 14 of 15 in Discussion. In a few articles, some functional unit values were not found in the components where they might have been expected, but did appear in other components. Overall, the initial functional unit framework was able to cover almost all functional units in psychology journal articles. Inter-coder reliability was measured by Cohen's kappa and percentage agreement. Kappa values ranging from .865 to .958, and percentage agreement rates from 88.31% to 96.64%, showed a high level of agreement in identifying these functional units across the IMRD components.
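As a concrete illustration of the reliability measure, the sketch below computes Cohen's kappa for two coders' paragraph labels. It is a minimal illustration, not the study's actual analysis code: it assumes one functional unit label per paragraph (the coding scheme above allowed up to three), and the labels shown are hypothetical.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' category assignments (one label per unit)."""
    n = len(labels_a)
    # observed agreement: proportion of units both coders labelled identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # chance agreement: product of each coder's marginal proportions, summed over categories
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# two coders labelling six paragraphs with hypothetical functional-unit codes
coder1 = ["state findings", "state findings", "summarize results",
          "evaluate hypotheses", "state findings", "summarize results"]
coder2 = ["state findings", "state findings", "summarize results",
          "evaluate hypotheses", "evaluate hypotheses", "summarize results"]
kappa = cohen_kappa(coder1, coder2)  # 0.75 for this toy data
```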

This framework of functional units was identified from the literature, and required validation by users of psychology journal articles. These 52 functional units appear on the validation survey as shown in Table 1.

TAXONOMY VALIDATION STUDY

Method

We conducted two surveys through online questionnaires to validate the findings from the first phase with members of the user population, and to refine the taxonomy. Survey I was conducted to validate the information use tasks and functional units within four components in the case of psychology journal articles. The purpose of Survey II was to validate the relationships between functional units and information use tasks, and the relationships between functional units for a particular task.

Participants

From mid-June to mid-July 2009, we sent email advertisements to the graduate student listservs of the Departments of Psychology at the University of British Columbia and Simon Fraser University. Psychology graduate students were recruited because they were expected to be experienced in using scholarly journals and more accessible as study subjects than faculty members. Each participant was compensated $10 for completing the two online surveys, each of which took approximately 30 minutes. Thirteen people participated in Survey I; nine of them also participated in Survey II.

The thirteen participants, eleven female and two male, included six PhD students, five Masters students, one postdoctoral fellow and one PhD graduate. Eight were in the age range 26–30, three were under 26, one was in the range 31–35, and one was in the range 36–40. Three people had used journal articles for 6 years, three for 7 years, two for 8 years and two for 10 years, and one each for 4, 5 and 18 years. Six participants reported using journal articles daily, three used them 2–3 times a month, two used them 2–3 times a week, one used them once a week, and one used them once a month.

Instruments

On Survey I there were 52 functional units, including 13 functional units in Introduction, 10 in Methods, 14 in Results, and 15 in Discussion. To avoid cognitive overload and inadequate understanding, a one-sentence definition was provided for each functional unit instead of a title. Each definition of a functional unit was listed as a separate item for rating.

First, the participants were asked to indicate how frequently they used journal articles for the six information use tasks listed, by rating on a seven-point Likert scale (1 = Never, 7 = Very Frequently) and also by ranking them (1 = Most Frequently). They were also free to suggest tasks other than those provided. Then, the participants were asked to indicate how frequently they thought each functional unit typically occurred in the Introduction, Methods, Results, and Discussion components of a psychology journal article. They indicated the level of frequency on a five-point Likert scale (Never – Rarely – Occasionally – Very Frequently – Always). They were also free to suggest other functional units they thought frequently occurred but were not in the list.

Based on the responses from Survey I, on Survey II there were 41 functional units, including 11 functional units in Introduction, 10 in Methods, 7 in Results, and 13 in Discussion.

There were six use scenarios: refer to facts, learn about background, refer to arguments, learn about particular, keeping up, learn how to. Given a scenario, the participants were asked to rate the usefulness of functional units within the Introduction, Methods, Results, and Discussion components on a five-point Likert scale (1 = Not Useful at All, 5 = Highly Useful). They also ranked the six most useful functional units within a component by putting 1 next to most important, and so on.

Results

Survey I

For the frequency of using journal articles for six information use tasks, the mean scores of task rating and task ranking showed a high level of consistency: “learn about background” and “refer to facts” came first, followed by “refer to arguments” or “learn about particular”, ending with “keeping up” and “learn how to”.

We placed each set of functional units of the four internal components in three categories according to mean scores, from high to low: 4.0–5.0 (Very Frequently – Always), 3.0–3.9 (Occasionally – Very Frequently), or 2.0–2.9 (Rarely – Occasionally). The three categories differentiated the level of frequency with which these functional units occurred in an individual component. Only two of the functional units had a standard deviation higher than 1.0.
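The banding rule just described can be stated compactly. The function name below is ours, not the paper's, and scores under 2.0 did not occur in the data:

```python
def frequency_band(mean_score):
    """Map a functional unit's mean frequency rating (5-point Likert) to a band."""
    if mean_score >= 4.0:
        return "Very Frequently - Always"        # 4.0-5.0
    if mean_score >= 3.0:
        return "Occasionally - Very Frequently"  # 3.0-3.9
    return "Rarely - Occasionally"               # 2.0-2.9
```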

Functional units scoring 2.0–2.9 were rated as occurring rarely or occasionally, and were dropped from further study. These were “announce principal outcomes” and “outline structure of paper” in the Introduction component, and “predict results” and “relate to prior/next experiments” in the Methods component. A number of functional units occurred rarely or occasionally in the Results component: “restate hypotheses”, “non-validated findings”, “metatext”, “explanation of finding”, “further question(s) raised by finding”, “admit interpretative perplexities”, “evaluate findings”, “call for further research”, and “implications of finding”. Those occurring rarely or occasionally in the Discussion component were “outline parallel or subsequent developments” and “metatext”.

The functional units for the Results component were derived from the literature and thus included commentary statements as well as factual statements. Almost all functional units serving commentary purposes overlapped with those in the Discussion component, such as “explanation of finding”, “further question(s) raised by finding”, “admit interpretative perplexities”, “evaluate findings”, “call for further research”, and “implications of finding”. Thus in Survey II we dropped all the rarely or occasionally occurring functional units for the Results component except the two with no counterparts in Discussion, “restate hypotheses” and “non-validated findings”. We also dropped those rarely or occasionally occurring in the other three components, with one exception: “relate to prior/next experiments” was observed with high frequency in the identification study and was temporarily kept for further examination. Additionally, an item “reliability/validity” suggested by a participant was added to the Methods component in Survey II. The result is the 41 functional units shown in Table 1.

Survey II

Participants had difficulty understanding the task “learn about particular” on Survey II, so this task was dropped from further analysis.

To examine how useful different functional units were for different information use tasks, a multivariate analysis of variance (MANOVA) was conducted on the functional units within the four components for the five tasks. For each component, which functional units differed significantly varied with the information use task. Post-hoc tests were then conducted to identify which pairings of group means were significantly different.

For the Introduction component there were significant differences among the means of different functional units for three tasks: “learn about background”, F(10,88)=3.867, p<.001, “refer to arguments”, F(10,88)=4.997, p<.001, and “keeping up”, F(10,88)=2.587, p=.009. Specifically, for the task “learn about background”, the functional unit “review previous research” rated significantly higher than “provide reason to conduct research”, “summarize methods” and “state value of present research”; and “point out contribution of previous research” rated significantly higher than “summarize methods” and “state value of present research”. For the task “refer to arguments”, the functional unit “indicate a gap in previous research” rated significantly higher than “clarify definition”, “narrow down topic” and “summarize methods”; also “provide reason to conduct research” and “state value of present research” rated significantly higher than “narrow down topic” and “summarize methods”. For the task “keeping up”, although the omnibus test was significant, no pairwise difference between functional units reached significance in the post-hoc tests.

For the Methods component there were also significant differences among the means of different functional units for three tasks: “refer to facts”, F(9,80)=3.657, p=.001, “refer to arguments”, F(9,80)=2.794, p=.007, and “learn how to”, F(9,80)=3.004, p=.004. The functional unit “justify methods” rated significantly lower than “experimental procedures” and “tasks” for the task “refer to facts”, whereas it rated significantly higher than “preview methods” and “participants” for the task “refer to arguments”. For the task “learn how to”, although the omnibus test was significant, no pairwise difference between functional units reached significance in the post-hoc tests.

Table 1. A taxonomy of 41 functional units

There were significant differences among the means of different functional units within the Results component for two tasks: “refer to facts”, F(6,56)=5.126, p<.001, and “learn how to”, F(6,56)=3.462, p=.006. For the task “refer to facts”, the functional unit “state findings” rated significantly higher than “describe analysis conducted”, “non-validated findings” and “restate hypotheses”. For the task “learn how to”, the functional unit “describe analysis conducted” rated significantly higher than “non-validated findings”, “evaluate hypotheses”, “summarize results”, “additional findings” and “restate hypotheses”.

There were two tasks for which the means of different functional units within the Discussion component were significantly different: “refer to facts”, F(12,104)=2.026, p=.029, and “learn about background”, F(12,104)=4.174, p<.001. For the task “learn about background”, the functional units “established knowledge of topic” and “compare results with previous research” rated significantly higher than “generalize results”, “recommend future research”, “indicate limitations of outcome”, “evaluate methodology” and “ward off counterclaim”. For the task “refer to facts”, although the omnibus test was significant, no pairwise difference between functional units reached significance in the post-hoc tests.

A score was calculated for each functional unit based on the frequency with which it was assigned each rank, using the formula Σ(7-n)*freq(n), where n is the rank and freq(n) is the number of times the unit was assigned rank n. Although they completed the ranking, some participants noted in the comment box that they had difficulty ranking the top six functional units, so only the units receiving the highest values were considered. The top-ranked item resulting from participants' rank ordering was not always the top-ranked item based on their Likert-scale scoring; nevertheless, the top ranking scores were used to complement the rating scores in the subsequent analysis.
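The ranking-score formula can be made concrete as follows; the tallies in the example are hypothetical:

```python
def rank_score(rank_counts, max_rank=6):
    """Score a functional unit from its rank tallies: sum of (7 - n) * freq(n).

    rank_counts maps rank n (1 = most important .. 6) to the number of
    participants who assigned that rank to the unit.
    """
    return sum((max_rank + 1 - n) * freq for n, freq in rank_counts.items())

# a unit ranked 1st by three participants, 2nd by two, and 5th by one:
score = rank_score({1: 3, 2: 2, 5: 1})  # (6*3) + (5*2) + (2*1) = 30
```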

Based on their mean rating scores, we placed the functional units in one of three categories: 2.0–2.9, 3.0–3.9, and 4.0–5.0. The three categories represented the degree of usefulness of functional units within the IMRD components (1 = Not Useful at All, 5 = Highly Useful) for each of the five tasks. The functional units were then grouped in terms of how useful they were for a particular task: primary functional units, related functional units, and additional functional units, as determined by their rating and ranking scores in Survey II.

Functional units with the highest rating score across the four components were categorized as “primary functional units”. Those categorized as “related functional units in the same component” were functional units that scored from 4.0 to 5.0 in the same component as the primary functional units. Those in “additional functional units in other components” were the functional units with the highest rating scores in the other three components. Functional units with top ranking scores were added to “additional functional units in other components” if not already present in that category. For example, for the task “learn about background”, the functional unit “review previous research” (5.00) within the Introduction component received the highest rating score across the four components and thus was selected as the primary functional unit. Other functional units that scored from 4.0 to 5.0 in the Introduction component were selected as related functional units: “point out contribution of previous research” (4.78), “indicate a gap in previous research” (4.56), “narrow down topic” (4.11), and “clarify definition” (4.11). The functional units rated highest in the components other than Introduction were selected as additional functional units: “relate to prior/next experiments” (4.00) in Methods, “summarize results” (3.67) in Results, and “established knowledge of topic” (4.89) and “compare results with previous research” (4.89) in Discussion. For this task, the functional unit with the top ranking score that differed from those rated highest across the four components was “justify methods” within Methods, so “justify methods” was added to the category of additional functional units.
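The selection logic just described can be sketched in a few lines. This is our reading of the procedure, with illustrative function names; it uses the ratings reported above for the task “learn about background” and omits the ranking-score supplement:

```python
def categorize(ratings, threshold=4.0):
    """Split one task's mean usefulness ratings into primary / related / additional.

    ratings: {component: {functional unit: mean rating on the 1-5 scale}}.
    """
    # primary: the single highest-rated unit across all four components
    primary_comp, primary_unit = max(
        ((comp, unit) for comp, units in ratings.items() for unit in units),
        key=lambda pair: ratings[pair[0]][pair[1]],
    )
    # related: other units in the primary unit's component rated 4.0-5.0
    related = [u for u, r in ratings[primary_comp].items()
               if r >= threshold and u != primary_unit]
    # additional: the top-rated unit in each of the other components
    additional = [max(units, key=units.get)
                  for comp, units in ratings.items() if comp != primary_comp]
    return primary_unit, related, additional

# ratings reported above for the task "learn about background"
learn_background = {
    "Introduction": {"review previous research": 5.00,
                     "point out contribution of previous research": 4.78,
                     "indicate a gap in previous research": 4.56,
                     "narrow down topic": 4.11,
                     "clarify definition": 4.11},
    "Methods": {"relate to prior/next experiments": 4.00},
    "Results": {"summarize results": 3.67},
    "Discussion": {"established knowledge of topic": 4.89,
                   "compare results with previous research": 4.89},
}
primary, related, additional = categorize(learn_background)
```

Note that ties (such as the two 4.89 ratings in Discussion) are broken arbitrarily here; the paper lists both tied units as additional.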

Table 2 presents the task-related functional units in three categories. It shows that the Introduction component is more relevant to two tasks “learn about background” and “keeping up”, and there are obvious connections between the Methods component and the task “learn how to”, between the Results component and the task “refer to facts”, and between the Discussion component and the task “refer to arguments”.

DISCUSSION

The term taxonomy, which normally implies hierarchical classification, is used here because functional units are viewed at the component level and can be further classified into three sub-categories: primary, related, and additional. A taxonomy of functional units has been developed in which the functional units are classified by their level of relevance to a particular information use task. This taxonomy was developed by identifying and validating the functional units in the IMRD components and their relationships with information use tasks in the case of psychology journal research articles.

First, six common information tasks in using psychology journal articles were identified. Also identified were 41 functional units typically occurring in a psychology journal article, with 11 functional units in Introduction, 10 functional units in Methods, 7 functional units in Results, and 13 functional units in Discussion.

Second, it was found that the usefulness of functional units within the IMRD components varied with information use tasks. For a given component, the degree of relevance is not the same across the five tasks, and within a single component, not all functional units are closely related to a particular task. For example, the Introduction component is more useful for the tasks “learn about background” and “keeping up” than for the other three tasks; within the Introduction component, half of the eleven functional units are useful for each related task, but only three functional units are useful for both tasks, and two of these were not rated as equally useful. Thus a functional unit within a component is more useful for some tasks than for others.

Furthermore, for a particular task, a functional unit is more closely associated with certain functional units than with others, both in the same component and in other components. We can see how functional units are clustered around the task “learn how to”: the inner circle comprises the primary functional units “materials”, “tasks”, and “experimental procedures” within the Methods component; this extends to related functional units in the same component, i.e., “justify methods”, “variables”, “data analysis procedures”, etc., and extends further to related functional units in the other three components, i.e., “summarize methods” in Introduction, “describe analysis conducted” in Results, and “evaluate methodology” in Discussion.
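The concentric structure described above can be represented, purely as an illustration and not as the study's implementation, as a mapping from a task to its three relevance tiers, which a reading interface could then traverse from most to least relevant:

```python
# Hypothetical tiered structure for the task "learn how to",
# built from the functional units named in the text above.
task_units = {
    "learn how to": {
        "primary": {
            "Methods": ["materials", "tasks", "experimental procedures"]},
        "related_same_component": {
            "Methods": ["justify methods", "variables",
                        "data analysis procedures"]},
        "additional_other_components": {
            "Introduction": ["summarize methods"],
            "Results": ["describe analysis conducted"],
            "Discussion": ["evaluate methodology"]},
    }
}

def reading_order(task):
    """Flatten the tiers into a most-to-least-relevant reading sequence."""
    tiers = ["primary", "related_same_component",
             "additional_other_components"]
    return [unit
            for tier in tiers
            for units in task_units[task][tier].values()
            for unit in units]
```

Traversing the tiers in this order lets a reader start with the most relevant units and stop at any point, which matches the progressive use of the taxonomy described below.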

Table 2. Functional unit taxonomy by tasks

The taxonomy of task-related functional units outlines the relevance of functional units for each information use task, with the functional units categorized as primary functional units, related functional units in the same component, and additional functional units in other components. There is clearly a relationship between functional units and information use tasks, and among the set of functional units for a particular task. This study indicates that functional units support the information tasks of journal reading in the following ways:

  • a. A functional unit is the smallest information unit. By employing functional units, we can help users to focus on the highly relevant information within an article.
  • b. A functional unit is associated with other functional units in the same and different components for a particular task. By employing the associations between functional units, we can help users to connect pieces of relevant information across the article.
  • c. Functional units are classified into three categories according to how useful they are for a particular task. By employing functional units of varying relevance, we can help users to move from the most relevant to the least relevant information, and stop at the amount of information they desire.

A focus on the smallest information unit can help to narrow down reading, while networking these information units by function, and further by relevance to a particular task, can help to achieve the greatest possible effect with the least possible effort.

The notion of “moves” has long been confined to the pedagogy of academic writing. This study extends the idea of functional units, originating from “moves”, to information use of digital documents. Additionally, this study incorporates the concept of genre into the cognitive processing of relevant information by following a relevance-theoretic comprehension procedure.

Journal usage is discipline-dependent. Even within journal publications, article genre may vary, e.g., theory pieces, review articles, data-based research articles, and shorter communications (Swales, 2004). Although the taxonomy was developed from one genre in a specific domain, the psychology journal research article, the methods and results of this research can be generalized to other genres and disciplines.

This study explores how to make use of the functions of the smallest information unit for the sake of journal reading. Given the pressure of reading more in less time, it is significant in providing the minimum information necessary for locating and comprehending text. This taxonomy of functional units provides guidance for the design of a journal system that facilitates information use by filtering out functional units weakly tied to a particular task while promoting the functional units strongly related to that task. Based on this taxonomy, we have created a prototype journal system. User evaluation of the prototype system is currently in progress.

CONCLUSION

This study explores the functions of types of information within the components (Introduction, Methods, Results, and Discussion) of a journal article. Based on the results from analysis of research articles in psychology, the conclusions related to the original research questions are: (i) a set of 41 functional units typically exists within psychology journal articles; (ii) different functional units are more or less useful for different tasks, and this variation is not consistent with respect to all components of the article; and (iii) for a particular task, a functional unit is more or less closely connected with other functional units in the same component and in other components. The taxonomy of functional units can be used to inform system design with the aim of enhancing use of scholarly journal articles.
