Searching for information is often driven by work tasks that involve information use and require certain types of outcomes beyond finding information. Exploring how search systems can help with work tasks calls for examining the factors that influence work task performance. A 3-stage controlled lab experiment was conducted with 24 participants, each coming 3 times to work on 3 sub-tasks of a general task, couched as either a “parallel” or a “dependent” task type. The full task was to write a report on the general topic, with interim documents produced for each sub-task. Results show that task type and task session did not affect users' task performance, but users' topic familiarity and task experience did. Users' effectiveness in finding useful pages and the time they allocated to writing correlated with task performance, as did their reflected perceptions of task experience. Further, users' topic familiarity could lead to higher writing efficiency, and task experience could lead to higher searching efficiency and effectiveness. These findings help in understanding the factors affecting information use task performance, and have implications for designing search systems that support the accomplishment of information-related work tasks.
Current information retrieval (IR) systems are aimed mainly at helping people find information. However, in everyday life, people searching for information are usually driven by some type of work task at hand that involves information use and requires certain outcomes beyond finding information (Ingwersen & Järvelin, 2005; Li, 2008). Sometimes, users' searching for information and finishing their work tasks proceed in parallel. For example, a person writing an essay in Microsoft WORD may need to search online for related information at the same time, switching back and forth between the WORD document and the web browser. Further, people may not be able to finish a work task at once, and instead need multiple sessions. As Donato, Bonchi, Chi, & Maarek (2010) reported, 25% of the overall query volume on the web corresponds to multi-session tasks. In addition, users' work tasks can be of different types, and as previous studies (e.g., Li, 2008) have shown, task type can affect users' search behaviors and performance.
IR systems, although not designed exclusively to facilitate writing in the essay example above, are likely to affect the user's accomplishment of the essay. It would be beneficial if IR systems could help users finish their information use tasks by tailoring search toward users' work tasks, or by providing support beyond purely searching for information. The literature has seen a fairly extensive amount of research on the performance of IR systems and on users' information search tasks; however, little has been done to examine users' work tasks, especially information use task performance, the factors influencing users' task performance, and how search systems can be designed to support users' information use task accomplishment. The current research therefore aims to address these issues. Specifically, we were interested in answering the following research questions (RQs):
RQ1. Do users finish information use tasks equally well in different types of tasks? If not, how do they differ?
RQ2. Do users finish information use sub-tasks equally well in different task stages? If not, how do they differ?
RQ3. Do users' background factors affect users' information use task performance? If so, what factors, and how?
RQ4. Are there relationships between users' task performance and their perceptions of task experience? If so, what relations?
RQ5. Are users' information search and use behaviors indicative of their information use task performance? If so, what behaviors, and how?
Work Task and Performance
Searching for information is usually driven by some type of work task (Ingwersen & Järvelin, 2005; Li, 2008) in one's work or life (Agosto & Hughes-Hassell, 2005), such as planning a trip (Lin, 2001), or shopping, transportation, research, etc. (Kelly, 2006). These tasks often require a certain outcome, be it a report, an email reply, a paper, a summary, and so on. While evaluation of search performance, i.e., how well people find information, has constantly been a hot topic in the field of IR, much less effort has been spent on examining work task performance, i.e., how well people accomplish their work tasks beyond finding information. As Wilson et al. (2010) noted, search system evaluation at the work task level “is where the biggest challenges remain” (p. 74). The following summarizes a few studies in this line of research.
Wilson, Andre, & schraefel (2008) designed an interface feature, named Backward Highlighting (BH), which allows information searchers to see and use associations in the columns to the left of a selection in a directional column browser like iTunes. In their evaluation study, they asked participants to find and write down facts during the search. One evaluation measure used in the study was the number of facts in the reports, and it was found that the proposed BH feature increased the discovery of facts. Kammerer et al. (2009) designed a social tagging search browser and examined its effectiveness in helping people learn. In their evaluation study, they asked participants to write, besides collecting webpages, a short coherent summary addressing aspects of the question domain. The quality of the summaries was measured on a scale using pre-defined, topic-specific criteria about topic coverage levels. It was found that using their proposed interface produced better summary quality.
While these studies examined writing task performance to compare the effectiveness of different interfaces, they did not examine how task performance relates to user behaviors in searching and writing, to users' background factors, and so on. A better understanding of how work tasks can be supported by IR systems requires more research in these areas.
Task Features and IR
Much attention has been paid to examining the effects of different tasks on information searchers' behaviors and performance. A common basis of this stream of research is to classify user tasks into different types along some task feature(s), for example: fact-finding vs. information gathering (Toms et al., 2007; Kellar, Watters, & Shepherd, 2007), and so on. Li & Belkin (2008) constructed a comprehensive classification scheme that includes a number of dimensions of task features/attributes: task product, complexity, and difficulty, to name a few.
Among the different task features/attributes, the stage of a task has been investigated with regard to the information seeker's affective, emotional, and physical action changes during the information seeking process. Kuhlthau (1991) proposed the ISP (Information Seeking Process) model, comprising six stages: initiation, selection, exploration, formulation, collection, and presentation. The user's feelings, thoughts, and actions vary across the stages. Lin (2001) manipulated the user's task with different sub-tasks to be completed in different search sessions. He proposed a model and designed a system supporting users' multi-episode information seeking. Vakkari (2001) conducted a study with 11 master's students preparing a research proposal and engaging in IR searches three times, at the beginning, middle, and ending points of the course. He found that across task stages, users' search tactics, information found, and relevance judgment criteria all changed. Liu & Belkin (2010) conducted a 3-session lab experiment, asking participants to work on 3 sub-tasks within a general task, searching for information and writing reports on hybrid cars. They found that task stage played a significant role in helping interpret webpages' usefulness from reading time.
User Background Factors and IR
User background factors such as knowledge and previous experience have been investigated in previous research with regard to how they affect information search. Wildemuth (2004) found that low domain knowledge was associated with less efficient selection of concepts to include in the search and with more errors in the reformulation of search tactics. White et al. (2009) found that within their domain of expertise, experts search differently than non-experts in terms of the sites they visit, the query vocabulary they use, their patterns of search behavior, and their search success.
Researchers have also looked at users' topic familiarity, usually measured by self-rating through questionnaires. The behaviors examined often include document features related to reading behaviors, dwell time, the ratio of saved to all viewed documents, etc. Hembrooke et al. (2005) found that experts with high topic familiarity issued longer and more complex queries than novices. They also used elaborations as a reformulation strategy more often, compared with the simple stemming and backtracking modifications used by novices. Kelly & Cool (2002) found that as one's familiarity with a topic increased, reading time tended to decrease, while efficacy, measured by the ratio of the number of saved documents to the total number of viewed documents, increased. Kelly (2006) found that topic familiarity, as a contextual factor, had a significant effect on user behaviors, specifically document display time.
Search experience has also been found to affect users' search behaviors and performance. Hsieh-Yee (1993) found that once users had a certain amount of search experience, subject knowledge affected their search tactics: higher knowledge was associated with less use of the thesaurus for term suggestions, less effort in preparing for the search, etc. Yuan (1997) found that search experience affected several aspects of users' behavior: commands and features used during the search, search speeds, learning approaches, and so on.
A THEORETICAL FRAMEWORK
The literature review shows that task features, users' background, and search behaviors have significant relationships with each other and with other variables such as relevance judgment. In the current research, we were specifically interested in work task performance and the factors that may affect it. We propose a model showing the above-mentioned relations (Figure 1). There are five sets of components in the model:
1) Task performance. As mentioned above, information searchers are often driven by work tasks, which they eventually complete with some level of performance. This is marked as “1” in Figure 1.
2) Task features. These include task type and task session in this research. They are marked as “2” in Figure 1.
3) User background factors. These include knowledge (topic familiarity), task experience, etc. This set of factors is marked as “3” in Figure 1.
4) User perceptions of task experience. These include users' reflections on task difficulty, satisfaction with task accomplishment, etc. They are marked as “4” in Figure 1.
5) User behaviors in the task accomplishment process. These include search-related behaviors, such as the queries users issued to the system, the documents (web pages, etc.) opened, viewed, and/or retained, and the time spent reading documents, as well as other behaviors in the whole task completion process, such as writing an essay or report. They are marked as “5” in Figure 1.
To answer our research questions, we collected data in a 3-session lab experiment, which was designed to examine information system users' behavioral and performance changes along the way of searching for information to solve a work task.
In this experiment, the 3 sessions were treated as 3 stages. The design was a 2×2 factorial with two between-subjects factors (Table 1). One was task type, with two levels: parallel or dependent. The other was search system, with two levels: query suggestion (QS) or non-query suggestion (NQS). The two tasks and the system conditions are described in more detail below.
Table 1. Experimental Design
One aspect of the study as a whole was aimed at exploring whether query terms extracted from useful pages in previous sessions were helpful to users in their current search, and to this end, two versions of the search system were designed. One version (NQS) was the regular IE window; the other (QS) offered query term suggestions based on previous sessions in a frame on the left of the screen, with the regular IE window on the right. From observation of the experiments and the participants' responses, it appeared that users rarely used the query suggestion feature. Therefore, this factor is not considered further in this paper.
Tasks were designed to mimic journalists' assignments, since these could relatively easily be set as realistic tasks in different domains. Among the many dimensions of task type, this study focused on task structure, i.e., the inter-subtask relation, varying it while keeping the other facets in the comprehensive task classification scheme proposed by Li & Belkin (2008) as constant as possible. This makes it reasonable to attribute task differences to the single factor of task structure. Two task types were used in the study: one parallel and one dependent. This is similar to Toms et al. (2007), which classified tasks into parallel and hierarchical types according to the conceptual structures in the tasks. Both tasks in the current study had three sub-tasks, each of which was worked on by the participant during one separate session, for three sessions in total.
The tasks asked the participants to write a three-section feature story on hybrid cars for a newspaper, and to finish and submit each article section at the end of each experiment session. At the end of the 3rd session, they were asked to integrate the 3 sections into one article. In the dependent task, the three sub-tasks were: 1) collect information on which manufacturers have hybrid cars; 2) select three models to focus on in the feature story; and 3) compare the pros and cons of the three models of hybrid cars. In the parallel task, the three sub-tasks were finding information and writing a report on three models of cars from auto manufacturers renowned for good warranties and fair maintenance costs: 1) Honda Civic hybrid; 2) Nissan Altima hybrid; and 3) Toyota Camry hybrid. It was hypothesized that the sub-tasks in the parallel task were independent of one another, whereas in the dependent task at least some notional order would be perceived. For consistency, the order of the sub-tasks in the task descriptions was rotated in both tasks, and users were allowed to perform the sub-tasks in whatever order they preferred.
In each session, participants were allowed up to 40 minutes to search for helpful information and to write and submit their reports. They could search freely on the Web for resources for report writing. For logging purposes, users were allowed to keep only one Internet Explorer (IE) window open and to use the back and forward buttons to move between web pages.
The study recruited 24 undergraduate Journalism/Media Studies students (21 female, 3 male) via email to the student mailing list of the Journalism/Media Studies undergraduate program in the authors' school. Their mean age was 20.4 years. They self-reported an average of 8.4 years of online searching experience, and rated their level of search expertise as slightly above average (M=5.38; 1=novice, 7=expert). Each came 3 times within a 2-week period, based on his/her schedule, was assigned randomly to a task/system condition, and received $30 upon finishing all 3 sessions. Participants were informed at the beginning of the experiment that the top 6 who submitted the most detailed reports would receive an additional $20.
Participants came individually to an information interaction lab to take part in the experiment. Upon arrival in the first session, they completed a consent form and a background questionnaire eliciting their demographic information and search experience. They were then given the general work task to be finished over the whole experiment. A pre-session task questionnaire followed to collect their familiarity with the general task topic, previous experience with the type of task, and expected task difficulty. They were then asked to pick one sub-task to work on in the current session. A pre-session sub-task questionnaire followed to collect their familiarity with the sub-task topic, previous experience with the sub-task, and expected sub-task difficulty. They were then given up to 40 minutes to work on the sub-task: searching for information and writing the report. The logging software Morae (http://www.techsmith.com/morae.asp) was used to record user-system interactions such as mouse and keyboard activities, webpage display, and window display. After report submission, participants went through an evaluation process in which they rated each document they had viewed, in the order of viewing in the actual search process, on a 7-point scale with respect to its usefulness to the overall task. A post-session sub-task questionnaire and a post-session general task questionnaire were then administered to elicit user perceptions of the difficulty of collecting information for the task and sub-task, the degree of success with the submitted reports, and satisfaction with the reports. This ended the first session.
In the 2nd and 3rd sessions, participants went through the same process except for the consent form and background questionnaire, as well as an instruction step on using the query suggestion feature for those assigned the QS version of the system. In the 3rd session, after the post-session general task questionnaire, an exit interview asked them to reflect on their overall knowledge gain (rated on a 7-point scale) and to comment on the whole experiment.
The current research examined a number of variables from different aspects, which can be categorized into four groups, as introduced below:
Group 1 Variables: Task Performance Measurements
This group of variables is related to task performance. As introduced before, participants were encouraged to write detailed reports (with the bonus $20 payment as an incentive). Therefore, the degree of detail is used as the criterion for task performance assessment. Similar to Wilson et al. (2008), which used the number of facts in the reports to assess users' knowledge levels, the current study used the following to assess the detailedness of users' reports:
Number of statements: the number of sentences in each session's report
Number of facts: the number of facts in each session's report
Total number of statements: total number of sentences in the final report
Total number of facts: total number of facts in the final report
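As an illustration of the statement count, a naive automatic approximation could split the report on sentence-ending punctuation. This is only a sketch under an assumed counting rule (the study's actual counting procedure is not detailed here, and counting facts would require manual content coding):

```python
import re

def count_statements(report_text: str) -> int:
    """Naive statement count: split on sentence-ending punctuation.
    Counting *facts* would require manual content coding instead."""
    sentences = [s for s in re.split(r'[.!?]+(?:\s+|$)', report_text)
                 if s.strip()]
    return len(sentences)

report = ("The Toyota Camry hybrid gets good city mileage. "
          "Its battery warranty covers 8 years. "
          "Maintenance costs are moderate.")
print(count_statements(report))  # → 3
```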
Group 2 Variables: User Background Factors
This group of variables reflects users' background in terms of their knowledge base and previous experience. They were elicited through pre-session questionnaires, including:
General task topic familiarity: self-rated degree of familiarity with the general task topic
General task experience: previous experience with the type of assigned task (i.e., searching for information and writing a report)
Sub-task topic familiarity: degree of familiarity with the sub-task topic elicited before each session
Sub-task experience: previous experience with the type of assigned sub-task, elicited before each session
Group 3 Variables: User Perception on Task Experience
This group of variables was users' reflected perceptions on their task experience. They were elicited through post-session questionnaires, including:
General task information gathering difficulty: degree of reflected difficulty with searching for information for the general task
Success of general task accomplishment: degree of success with the general task
Satisfaction with general task report: degree of satisfaction with the general task report elicited after each session
Sub-task information gathering difficulty: degree of reflected difficulty with searching for information for the sub-task
Success of sub-task accomplishment: degree of success with the sub-task accomplishment
Satisfaction with sub-task report: degree of satisfaction with the sub-task report elicited after each session
Group 4 Variables: Searching and Writing Behaviors
This group of variables comprises users' behaviors in finishing their tasks, including both searching for information and writing the reports. These were extracted from the logged data. The following lists the variables on a per-session basis:
Number of queries: the number of queries in a session
Number of total pages: number of all webpages viewed in a session
Number of unique pages: number of unique webpages viewed in a session
Number of useful pages: number of webpages viewed in a session with a usefulness rating score above 4 (somewhat useful)
Ratio of useful pages to all: ratio of useful webpages out of all webpages viewed in a session
Number of unique useful pages: number of unique webpages viewed in a session with a usefulness rating score above 4 (somewhat useful)
Ratio of unique useful pages to all: ratio of unique useful webpages out of all webpages viewed in a session
Task completion time (seconds): total time that users spent on finishing the sub-task. This combines both searching and writing times in a session.
Time on searching: time spent on searching for information in a session
Time on writing: time spent on writing report in a session
Ratio of searching time to all: ratio of time spent on searching out of total task completion time in a session
Ratio of writing time to all: ratio of time spent on writing out of total task completion time in a session
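As a sketch, these per-session measures could be derived from a parsed interaction log as follows. The `SessionLog` structure and the threshold interpretation ("useful" = rating above 4, where 4 is "somewhat useful") are assumptions for illustration, not the study's actual extraction code:

```python
from dataclasses import dataclass

@dataclass
class SessionLog:
    # Hypothetical structure; the study extracted behaviors from Morae logs.
    queries: list          # query strings issued in the session
    page_views: list       # (url, usefulness_rating) per view, in order
    search_seconds: float  # time spent searching
    writing_seconds: float # time spent writing

def session_metrics(log: SessionLog, threshold: int = 4) -> dict:
    """Per-session behavioral variables; 'useful' is read here as a
    usefulness rating above the threshold (4 = somewhat useful)."""
    urls = [u for u, _ in log.page_views]
    useful = [(u, r) for u, r in log.page_views if r > threshold]
    total = log.search_seconds + log.writing_seconds
    return {
        "num_queries": len(log.queries),
        "num_total_pages": len(urls),
        "num_unique_pages": len(set(urls)),
        "num_useful_pages": len(useful),
        "ratio_useful_to_all": len(useful) / len(urls) if urls else 0.0,
        "num_unique_useful_pages": len({u for u, _ in useful}),
        "task_completion_seconds": total,
        "ratio_search_time": log.search_seconds / total if total else 0.0,
        "ratio_writing_time": log.writing_seconds / total if total else 0.0,
    }

log = SessionLog(["hybrid cars", "camry hybrid mpg"],
                 [("a", 6), ("b", 3), ("a", 5), ("c", 7)], 600.0, 900.0)
print(session_metrics(log)["ratio_useful_to_all"])  # → 0.75
```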
As introduced above, task performance could be measured at both the sub-task level and the general task level, although the latter, when measured by counts, is simply a combination of the former. In order to understand task performance comprehensively, we looked at performance at both levels.
General Task Performance across Tasks
We first looked at general task performance in the two types of tasks. As is shown in Table 2, there were no differences between the two tasks in any of the task performance measurements. Users working with the parallel and the dependent tasks finished them equally well as assessed by our measurements.
A Pearson correlation was conducted between general task performance and users' background factors to understand their relationship. As is shown in Table 3, users' general task topic familiarity was positively correlated with the total number of statements in their reports (r(22)=.406, p<.05). The more familiar users were with the task topic, the more statements they had in their final reports. Previous experience with the general task did not show a significant correlation with task performance.
General Task Performance and Task Experience Perception
A Pearson correlation was conducted between general task performance and users' perceptions of their task experience. No significant correlations were found between the task performance measures and users' perceptions of task difficulty in gathering information, success of gathering information, or satisfaction with the general report (Table 4).
Table 4. Correlation between general task performance and task experience perception: total number of statements and total number of facts, against task information gathering difficulty, success of gathering information, and satisfaction with general task report (no significant correlations)
Sub-task Performance across Tasks
As mentioned before, participants' general task reports were simply the combination of their sub-task reports. To better understand the relations between users' task reports and their background and behaviors, we examined users' sub-task performance with regard to differences between task types and task sessions, as well as the relationships between sub-task performance and users' background factors, perceptions of task accomplishment, and information searching/use behaviors.
First reported here is the comparison of users' sub-task performance in the two types of tasks. As is shown in Table 5, no differences were found between the two tasks in either the number of statements or the number of facts.
Table 5. Sub-task performance in the two tasks: number of statements and number of facts, Mean (SD), per task type, compared with Mann-Whitney U tests (e.g., statements in the parallel task: 18.94 (9.07))
Sub-task Performance across Sessions
We also examined users' sub-task performance in the three task sessions. As is shown in Table 6, although descriptively users had more statements and more facts in their reports in later sessions than in earlier ones, no statistically significant differences were found between the three sessions in any of the performance measurements.
Table 6. Sub-task performance in different sessions: number of statements and number of facts, Mean (SD), per session, compared with Kruskal-Wallis H tests
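The group comparisons in Tables 5 and 6 rely on nonparametric rank tests: Mann-Whitney U for the two task types, and Kruskal-Wallis H for the three sessions. As an illustration with made-up counts (not the study's data), both statistics can be computed in plain Python:

```python
def _ranks(values):
    """Ranks (1-based), averaging ties, in the original order of `values`."""
    svals = sorted(values)
    rank_of = {}
    i = 0
    while i < len(svals):
        j = i
        while j < len(svals) and svals[j] == svals[i]:
            j += 1
        rank_of[svals[i]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    return [rank_of[v] for v in values]

def mann_whitney_u(xs, ys):
    """U statistic for two independent samples (compare to a U table or
    normal approximation for a p-value)."""
    combined = _ranks(list(xs) + list(ys))
    r1 = sum(combined[:len(xs)])
    return r1 - len(xs) * (len(xs) + 1) / 2

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H across k independent groups (no tie correction;
    compare to a chi-square distribution with k-1 df)."""
    all_vals = [v for g in groups for v in g]
    n = len(all_vals)
    r = _ranks(all_vals)
    h, start = 0.0, 0
    for g in groups:
        rsum = sum(r[start:start + len(g)])
        h += rsum ** 2 / len(g)
        start += len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical per-report statement counts for illustration only:
parallel  = [12, 18, 25, 14, 30, 22]
dependent = [15, 20, 19, 28, 11, 24]
print(mann_whitney_u(parallel, dependent))                        # → 19.0
print(round(kruskal_wallis_h([12, 15, 14], [18, 20, 22],
                             [25, 19, 30]), 2))                   # → 5.96
```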
Correlation between Sub-task Performance and Background Factors
We also looked at the Pearson correlations between sub-task performance and users' background factors and their perceptions of task accomplishment. As described above, sub-task performance did not differ between task types, nor did it vary across stages. Therefore, the following analyses combined all task sessions together.
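The session-level correlations that follow are Pearson's r over 72 observations (24 participants × 3 sessions, hence r(70)). A minimal implementation of the statistic, run here on hypothetical data, is:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical self-ratings (1-7) and statement counts, for illustration:
familiarity = [2, 3, 5, 4, 6, 7]
statements  = [10, 14, 22, 17, 25, 28]
print(round(pearson_r(familiarity, statements), 3))  # → 0.997
```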
As can be seen in Table 7, the number of statements in sub-task reports was positively correlated with sub-task topic familiarity (r(70)=.249, p<.05) and sub-task experience (r(70)=.264, p<.05). The more familiar users were with the sub-task topics, and the more experience they had with the sub-tasks, the more statements they had in their session reports. No significant correlation was found between the number of facts in sub-task reports and users' background factors.
Table 7. Correlation between sub-task performance and background factors
Correlation between Sub-task Performance and User Perception on Task Experience
A Pearson correlation was conducted to examine the relationship between sub-task performance and users' perceptions of task experience. Results (Table 8) show that the number of statements in sub-task reports did not correlate significantly with any of the examined user perception variables.
The number of facts, in contrast, was found to correlate positively with the success of sub-task accomplishment (r(70)=.283, p<.05) and satisfaction with sub-task reports (r(70)=.252, p<.05). The more successful users considered their sub-task accomplishment, the more facts they had in their session reports. Likewise, the more satisfied users were with their sub-task reports, the more facts those reports contained.
Table 8. Correlation between sub-task performance and user perception on sub-task experience
Correlation between Sub-task Performance and User Behaviors
As mentioned above, users' general task reports were a combination of their sub-task reports. However, it would not make as much sense to combine users' behaviors across three sessions conducted at different times. Therefore, we did not examine users' final reports against their behavior overall. Nevertheless, it is certainly meaningful to examine the correlations between sub-task performance and users' information search and report writing behaviors in each session, and the following reports these results.
As can be seen in Table 9, the number of statements in users' sub-task reports was positively correlated with the ratio of useful pages to all pages viewed (r(70)=.323, p<.01), and with the ratio of unique useful pages to all pages viewed (r(70)=.267, p<.05). That is, the higher the ratio of useful pages and unique useful pages among all pages viewed, the more statements users had in their reports.
The number of facts in the reports was found to be positively correlated with time on writing (r(70)=.395, p=.001), positively correlated with the ratio of writing time to all time spent on the sub-task (r(70)=.297, p=.001), and negatively correlated with the ratio of searching time to all time spent on the sub-task (r(70)= −.256, p<.05). These results indicated that the more time users spent on writing, the more facts they had in their reports. Also, the higher the ratio of their writing time to total time, the more facts they had; conversely, the lower the ratio of their searching time to total time, the more facts they had in their reports.
Table 9. Correlation between Task Performance and User Behaviors
To further understand the relationship between the number of facts and the time spent on searching and writing, we also examined the Pearson correlations between searching/writing time and other behaviors. As is shown in Table 10, users' time spent on searching was positively correlated with the number of queries (r(70)=.469, p<.001), the number of total pages viewed (r(70)=.487, p<.001), the number of unique pages viewed (r(70)=.506, p<.001), the number of useful pages viewed (r(70)=.401, p<.001), and the number of unique useful pages viewed (r(70)=.396, p=.001). The ratio of searching time to total time in finishing the sub-task showed the same pattern: it was positively correlated with the number of queries (r(70)=.513, p<.001), the number of total pages viewed (r(70)=.399, p=.001), the number of unique pages viewed (r(70)=.444, p<.001), the number of useful pages viewed (r(70)=.349, p<.001), and the number of unique useful pages viewed (r(70)=.325, p<.01). In short, those who spent more time on searching, and those with a higher ratio of searching time, tended to issue more queries and view more pages, unique pages, useful pages, and unique useful pages.
Users' time spent on writing was found to be negatively correlated with the number of queries (r(70)= −.282, p<.05). The ratio of writing time to sub-task completion time was negatively correlated with the number of queries (r(70)= −.393, p=.001), the number of total pages viewed (r(70)= −.294, p<.05), the number of unique pages viewed (r(70)= −.334, p<.001), the number of useful pages (r(70)= −.289, p<.05), and the number of unique useful pages (r(70)= −.248, p<.05). Those who spent more time on writing tended to issue fewer queries. Those with a higher ratio of writing time to total time tended to issue fewer queries and view fewer pages, unique pages, useful pages, and unique useful pages.
Table 10. Correlation between Time and Other Behaviors
It is also helpful to explore the relationship between users' behaviors and their background factors. The following presents the results of a Pearson correlation analysis. As is shown in Table 11, sub-task experience was positively correlated with two of the examined behaviors: time spent on writing (r(70)=.234, p<.05) and the ratio of unique useful pages to all (r(70)=.259, p<.05). Sub-task topic familiarity did not show any significant correlation with the search/writing behaviors.
Table 11. Correlation between behaviors and participants' background factors
Our results demonstrated that users' information use task performance, assessed in this study by the number of statements and the number of facts in the reports, did not vary between the two task types, parallel and dependent. This was true for both the general task and the sub-tasks. Although task type has been found in previous studies to affect users' search behaviors and search performance (e.g., Kellar et al., 2007; Li, 2008; Liu & Belkin, 2010; Liu et al., 2010), the current research did not find that it affected users' task outcome, i.e., the reports written.
Our results also demonstrated that users' sub-task performance did not vary across task sessions. Again, although the literature has indicated that task session (or stage) affects users' information search, including behaviors, process, and usefulness judgments (e.g., Lin, 2001; Liu & Belkin, 2010), this does not necessarily hold for the task outcome.
User Background Factors and Task Performance
Two user background factors examined in the current study were found to correlate with users' task performance as assessed by the number of statements in the reports. One is users' knowledge, i.e., familiarity with the task topics. Specifically, for general task performance, general task topic familiarity was positively correlated with the total number of statements in the final reports; for sub-task performance, users' sub-task topic familiarity was positively correlated with the number of statements in the sub-task reports. The other background factor is users' previous experience with the sub-tasks, which was found to correlate positively with the number of statements in the reports. These findings indicate that users' topic knowledge and task expertise may affect their task performance.
Task Performance and User Perceptions
Our results showed that users' sub-task performance was correlated with their reflected perceptions of the task experience. Specifically, the number of facts in sub-task reports was positively correlated with users' perceptions of their success in finding information and their satisfaction with their sub-task reports. This indicates that users' immediate reflected perceptions of the sub-task experience, elicited right after each session, were rather indicative of their actual performance.
Meanwhile, users' general task performance was not correlated with any of the perception measures examined in the study. Since users finished the general task in three sessions spread over a period of about two weeks, their perceptions of the experience with the whole task may not have accurately reflected their actual performance.
Task Performance and Behaviors
Our results demonstrated that users' task performance was correlated with certain search and writing behaviors. In other words, given a limited amount of time, users made time-allocation trade-offs between searching and writing. Specifically, the number of statements in sub-task reports was positively correlated with the ratio of useful pages to all pages viewed and the ratio of unique useful pages to all pages viewed. Finding useful pages effectively, i.e., encountering more useful pages among all pages viewed, appeared to help users produce more statements in their reports.
On the other hand, the number of facts in users' sub-task reports was found to correlate positively with time spent on writing and the ratio of writing time to total task completion time, while it was negatively correlated with the ratio of search time to total task completion time. This seems to indicate that allocating more time to writing (and, at the same time, less time to searching) helped users include more facts in their reports. While this is not to say that searching for information is unimportant, it implies that reserving enough time to use the found information in the task product, i.e., writing the report, may be a good strategy for accomplishing information use tasks.
A further look at the relationships between searching/writing time and other behaviors can help us better understand users' searching, writing, and the above-mentioned task accomplishment strategy. The results showed that the more time users spent on or allocated to searching, the more queries they issued and the more pages, useful pages, unique pages, and unique useful pages they viewed. These findings are reasonable and consistent with those in the literature on information searching performance; for example, in tasks with longer completion times, users also issued more queries and visited more pages (Liu et al., 2010). When information use is not considered, longer searching time leads to better task performance, i.e., search performance, which is usually measured by the number of useful pages, objects, or pieces of information found.
On the other hand, it is reasonable that time spent on writing, and especially the ratio of writing time to total task completion time, was negatively correlated with the numbers of queries, all pages viewed, useful pages viewed, unique pages viewed, and unique useful pages viewed, which were mostly the variables correlated with searching time. Finding useful documents/information is one thing; using them to generate the task product is another. Information use task performance cannot simply be inferred from the performance of the information search phase; there is also a writing phase, which may affect the task outcome.
User Background and Search/Writing Behaviors
Our results showed that users' familiarity with sub-task topics was positively correlated with sub-task performance, as was time spent on writing; however, there was no correlation between users' topic familiarity and their writing time. Users with higher levels of topic knowledge did not seem to need more writing time to generate better task outcomes; their topic familiarity probably led to higher writing efficiency on the assigned topics.
In contrast, users' previous task experience was positively correlated with time spent on writing and the ratio of unique useful pages to all pages viewed. This seems to indicate that those with more task experience did not need more time for searching, yet were still able to find relatively more useful pages; they probably had higher searching efficiency, as well as higher searching effectiveness.
Implications for IR System Design and Future Studies
Our findings have implications for IR system design. As mentioned above, IR systems intended to improve information task accomplishment may want to support information use in addition to information search. Our results also show that users' background factors can affect their information use task performance; IR systems aimed at supporting whole-task accomplishment should take this into consideration, and this finding also provides theoretical support for the design of personalization systems. Future studies will explore how systems can be designed to support users' information use task accomplishment for users with different background levels.
In this paper, we examined users' information use task performance and its relationships with task type, task session, users' background factors, perceptions of the task experience, and searching/writing behaviors. Our results showed that task type and task session did not affect users' task performance, measured by the number of statements and the number of facts in the reports. Users' background factors, specifically topic familiarity and task experience, correlated with task performance. Users' reflected perceptions of their task experience in each session also correlated with their session-based performance. Regarding behaviors, users' effectiveness in finding useful pages and the time they allocated to writing correlated with task performance. In addition, the results indicated that users' topic familiarity could lead to higher writing efficiency, and previous task experience could lead to higher searching efficiency and effectiveness. These findings further our understanding of factors affecting users' information task performance, and they have implications for IR system design in supporting information task accomplishment beyond searching for information.
Our thanks to IMLS for sponsoring the research experiment under grant number LG#06-07-0105-05.