Providing support for multi-session web tasks

Abstract

In two previous studies, we explored how users perform multi-session web tasks (tasks that require more than one web session to complete). Using the results of these studies, we proposed three guidelines to help developers design browser support for these types of tasks. In this paper, we describe three prototypes that we designed and developed using these guidelines and the results of a preliminary evaluation of the prototypes in the field. We found that while participants rated the simplest prototype as the easiest to use, they judged the two prototypes with additional functionality to be more useful, and they preferred the prototypes whose added features helped them manage their tasks between sessions.

INTRODUCTION

People use the Web to perform complex tasks that require more than one web session to complete. We are interested in finding ways to support users engaged in multi-session tasks, which we define as goal-based tasks that require more than one web session to complete and have a definable endpoint, such as a specific date, an event, or the point at which the task is abandoned. For example, when a person wants to purchase a new computer, they may use the resources on the Web over several sessions to research prices, read reviews, and make comparisons before making a final purchase decision.

A main difficulty for users performing multi-session tasks is managing the transition between sessions. Previous research on multi-session tasks [13,14] has shown that people use particular browser tools, such as bookmarks and copying and pasting information, to help manage web resources between web sessions, although many users are not satisfied with the current browser tools. In previous work [13,14], we proposed three main guidelines for the design and development of browser tools that support multi-session tasks.

We used these guidelines to develop three prototypes with different levels of assistance for users performing multi-session tasks. Each prototype included a logging feature that allowed us to record the participants' browser interactions with each prototype. In addition, participants provided their opinions on the different features of the prototypes. We evaluated all the prototypes in a preliminary field experiment over several days and present the results of this study in this paper.

In this paper, we first present the related literature. We describe each of the three prototypes, followed by the methodology of the study. We then present the results of the study, including data from browser logs and participants' questionnaires.

RELATED WORK

Web Tasks and Web Sessions

Previous research has tried to understand and/or categorize the different types of tasks that people do on the Web [1,5,7,8,12,17,18]. Sellen et al. [17] observed six web task categories, which were further studied by Kellar et al. [12]. The six main web tasks were fact finding (e.g., looking for specific information), information gathering (e.g., researching a paper), browsing (e.g., entertainment or passing time), transactions (e.g., online banking), communications (e.g., emailing), and maintenance (e.g., updating a web page). In our studies on multi-session tasks [13,14], we identified eight multi-session task types: School Work, General Topic Search, Research, Travel/Tourism, Projects, Action-based, Shopping, and Status-checking. As well, multi-session tasks often had sub-tasks that were easily classified according to the task types previously defined by Sellen et al. [17] and Kellar et al. [12].

Revisitation is frequently conducted in multi-session tasks and is relevant to our work. In 1997, Tauscher and Greenberg [18] analyzed web usage logs for revisitation patterns. They found that 58% of pages visited by users had been previously accessed. Just a few years later, in 2001, Cockburn et al. [8,9] reported that 81% of web page visits were to previously seen pages, and speculated that the increase was because users had one or two pages they visited more frequently than others and because users' expectations of accessing more information were rising. Both of these studies used logs to examine revisitation patterns; neither examined how revisitation patterns may change depending on the task. As well, this increase in revisitation may also be due to people monitoring information on sites. Kellar et al. [11] explored web monitoring and found that monitoring of information exists across all task types [12] and involves revisiting web pages to view new or updated information (e.g., reading the news or checking Facebook). Recently, Adar et al. [2] examined the reasons that people revisit information and found different revisitation patterns that could have an impact on the design of browser tools and web pages themselves.

In our previous studies (a diary study and a field study), we examined non-routine multi-session web tasks. While we did not measure the pages revisited between sessions for these types of tasks, we found that participants used certain browser tools more frequently for multi-session tasks than for other web tasks [13,14]. In the field study, we logged all user interactions with the browser for a month and found that multi-session tasks accounted for about 30% of all the participants' web activities. We also found some interesting trends. In particular, participants depended on particular tools to help them re-start their task in a new session, which could include revisiting previously seen pages. For example, while we would expect a tool's use during multi-session tasks to be about 30% if usage were proportional to activity, bookmarking actions accounted for 46%, and 'find in a page' for 61%, during multi-session tasks. Similar to other research, many of our participants noted the same frustrations with the traditional revisitation tools of bookmarks and history lists.

Current Web Browser Revisitation Tools

There are web browser tools designed for revisiting web pages during a single session, including the back button, the forward button, and search. As well, people may use tabbed browsing to help organize tasks within a session. However, for tasks that continue over several sessions, there is limited support available in common browsers. The two most familiar tools for revisiting pages over multiple sessions are history lists and bookmarks, although both have issues. For example, it can be difficult to refind the desired pages within these tools due to web page naming conventions and cluttered bookmark lists [3,6,8,9,12,14,18]. As well, neither was designed to help users return to task-specific web pages. People may use other strategies [3,4,5,15], such as using search engines to relocate pages or printing web pages.

New tools to help users revisit and organize their information over several sessions have been explored, although they are not currently available in common browsers. In 2002, Sellen et al. [17] recommended a ‘webscrapbook’ that saves web pages, pieces of text, graphics, and search results for flexible management of information. Jhaveri and Räihä [10] and Aula et al. [3] introduced Session Highlights, which allowed users to compare page content and to create and save collections. Landmarks [15] combine revisiting a web page with refinding information on the page itself: a landmark is set like a bookmark, but when re-opened it returns the page at the exact location of the marked text, highlighted for easy recognition. Recently, Morris et al. [16] created SearchBar, which lets users view a history of their search topics (including queries and results) to help them resume a search task or refind information in later sessions.

Recommendations for Multi-session Browser Tools

The results of our previous studies [13,14], including the post-session interviews, provided a better understanding of how users use current browser tools to perform multi-session tasks. From these results we developed the following guidelines for the design of tools for multi-session tasks. First, any tool should maintain a list of active multi-session tasks. Second, the tool should have a reminder feature to keep users on task within each session (for multi-tasking). Third, the tool should help users manage their multi-session tasks between sessions.

PROTOTYPES

Using these guidelines, we designed three prototype tools. All the prototypes were built into a customized version of Firefox, and each included a logging feature that recorded the participants' browser interactions while they used the prototypes. Each prototype built on the previous one by adding more complex features, while still trying to maintain a balance between ease of use and complexity. In addition, the prototypes were designed to provide easy access to the multi-session tools while minimizing changes to the browser's appearance. The first prototype was the simplest and provided limited functionality for the three guidelines. Prototype 2 and Prototype 3 added extra support for multi-tasking within a session as well as for managing the task between sessions (with Prototype 3 adding further features beyond Prototype 2).

For all the prototypes, an interactive toolbar (Figure 1(a)) was added to the top of the browser. The toolbar included access to the multi-session prototype features (e.g., the “Start New Task” button) and allowed users to view the browser interaction logs that we captured during the study (“View Log”), delete any URL they did not want the researchers to view (“Manage Log URLs”), submit their logs (“Submit Log”), and access a help function (“Help”).
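To make the logging concrete, the sketch below shows one plausible shape for a logged interaction record and for the “Manage Log URLs” behaviour. It is an illustrative TypeScript sketch under assumed names (LogEntry, recordInteraction, removeUrlFromLog); it is not the prototypes' actual implementation.

```typescript
// Hypothetical shape of one logged browser interaction (all names are illustrative).
interface LogEntry {
  timestamp: string;   // ISO time at which the action occurred
  action: string;      // e.g. "open_tab", "follow_link", "use_bookmark"
  url?: string;        // page involved, removed if the participant withholds it
  taskName?: string;   // active multi-session task, if any
}

// In-memory log; the prototypes persisted this and submitted it via "Submit Log".
const interactionLog: LogEntry[] = [];

function recordInteraction(action: string, url?: string, taskName?: string): void {
  interactionLog.push({ timestamp: new Date().toISOString(), action, url, taskName });
}

// "Manage Log URLs": strip any URL the participant does not want researchers to see,
// while keeping the rest of the record for analysis.
function removeUrlFromLog(url: string): void {
  for (const entry of interactionLog) {
    if (entry.url === url) delete entry.url;
  }
}
```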

Figure 1(a).

The Interactive Toolbar for all Prototypes. It contains the functionality of the tool, as well as the log of the participants' interactions with the browser for the study.

Figure 1(b).

Working on a Multi-session Task. The active task name is large and highlighted yellow on the toolbar for easy visibility.

Prototype 1

Prototype 1 was the simplest of the prototypes. It helped users keep track of their current multi-session tasks and reminded them when they were working on a multi-session task during a session. To start a new multi-session task, the user pressed the “Start New Task” button and gave the task a name (Figure 2(a)). The task name was then highlighted in large letters on the toolbar (Figure 1(b)) and remained visible on the toolbar regardless of the number of tabs or separate windows open. When the user finished working on the active multi-session task for that session, they pressed the “Stop Task” button; the highlighted name was removed and the toolbar returned to its normal state.

Figure 2(a).

Starting a New Multi-session Task

Figure 2(b).

Resuming a Multi-session Task

Between sessions, the user resumed a task by selecting it from their list of active multi-session tasks. When a task was resumed (Figure 2(b)), its name appeared on the toolbar in large letters, highlighted in yellow. The highlighted name served as a reminder that they were working on a task, and the drop-down list served as a reminder of all the multi-session tasks they were working on between sessions. When the user completed a task, they pressed the “Finish Task” button and the task name was removed from the list of active tasks.
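As a summary of Prototype 1's behaviour, the following minimal sketch models the task list and the start/stop/resume/finish lifecycle described above. It is illustrative TypeScript only; the type and function names are our assumptions, not the prototype's code.

```typescript
// Illustrative model of Prototype 1's task list and lifecycle.
interface MultiSessionTask {
  name: string;
  active: boolean;            // still shown in the drop-down list of current tasks
}

const tasks: MultiSessionTask[] = [];
let currentTask: MultiSessionTask | null = null;   // highlighted in yellow on the toolbar

// "Start New Task": name the task and make it the active one for this session.
function startNewTask(name: string): void {
  const task: MultiSessionTask = { name, active: true };
  tasks.push(task);
  currentTask = task;
}

// "Stop Task": the task stays in the list but is no longer highlighted on the toolbar.
function stopTask(): void {
  currentTask = null;
}

// Resuming a task from the drop-down list in a later session.
function resumeTask(name: string): void {
  currentTask = tasks.find(t => t.name === name && t.active) ?? null;
}

// "Finish Task": remove the task from the list of active tasks.
function finishTask(name: string): void {
  const task = tasks.find(t => t.name === name);
  if (task) task.active = false;
  if (currentTask !== null && currentTask.name === name) currentTask = null;
}
```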

Prototype 2

Prototype 2 had the same features as Prototype 1. In addition, Prototype 2 let users indicate which web pages were included in the current task (using the include/exclude toggle buttons on the toolbar, as seen in Figure 3). If a tab or window was included in the task, the task name was highlighted on the toolbar as with Prototype 1. If a tab or window was excluded from the task, the task name, while still visible on the toolbar, was greyed out. This feature was added as a result of the field study, where we found that people often worked on more than one task at a time in their browser (e.g., working on a multi-session task, reading emails, reading blogs, etc.). The included tabs were grouped together and highlighted yellow like the task name (Figure 3). The grouping and common colour of the ‘included’ tabs made it easy for the user to distinguish which tabs belonged to the active multi-session task while multi-tasking. When a user selected a task to resume from the drop-down list, that task name appeared on the toolbar highlighted in yellow and all the saved pages (the ‘included’ web pages from the previous session) were reopened in the browser, grouped and coloured yellow.
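The include/exclude behaviour and the reopening of included pages can be summarized with the short sketch below. It is a simplified, assumed model (TrackedTab, savedPagesByTask, and the function names are ours), not the prototype's implementation, and it ignores the visual grouping and colouring.

```typescript
// Sketch of Prototype 2's include/exclude toggle and per-task saved pages.
interface TrackedTab {
  url: string;
  included: boolean;   // included tabs are grouped and highlighted yellow in the browser
}

const openTabs: TrackedTab[] = [];
const savedPagesByTask = new Map<string, string[]>();   // task name -> URLs to reopen next session

// Toolbar toggle button: flip a tab in or out of the active task.
function toggleInclude(tab: TrackedTab): void {
  tab.included = !tab.included;
}

// On "Stop Task", remember the included pages so they can be reopened later.
function saveIncludedPages(taskName: string): void {
  savedPagesByTask.set(taskName, openTabs.filter(t => t.included).map(t => t.url));
}

// On resume, these URLs are reopened, grouped, and coloured yellow.
function pagesToReopen(taskName: string): string[] {
  return savedPagesByTask.get(taskName) ?? [];
}
```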

Figure 3.

Include/exclude Toggle buttons and groups of included web pages

Prototype 3

Prototype 3 contained all the features of the first two prototypes with four additional features. First, Prototype 3 had a timeline feature that kept track of when the task ‘should’ be finished. Second, the user could edit the list of pages to save for later sessions when they stopped working on a task. Third, when the saved pages re-opened, they were automatically scrolled to the exact location in the page where they were left off. Last, Prototype 3 automatically engaged the user in the management of saved web pages when the task was finished.

When starting a new task, the participant gave the task a name and an estimated date of completion (Figure 4). The estimated completion date helped users track timelines associated with tasks. On the given date, if the task was not completed, a dialog window popped up to ask the user whether the task was still active. If it was, the user could extend the completion date; otherwise, they indicated that the task was no longer active and the system removed the task name from the task list.
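The deadline reminder can be thought of as a simple check run when the browser starts, as in the hedged sketch below. The dialog is represented by a callback, and the one-week extension is an arbitrary example; in the prototype the user chose the new date themselves. All names here are assumptions.

```typescript
// Sketch of Prototype 3's completion-date reminder.
interface TimedTask {
  name: string;
  dueDate: Date;     // estimated completion date entered when the task was created
  active: boolean;
}

// askStillActive stands in for the dialog window shown to the user.
function checkDeadlines(tasks: TimedTask[], today: Date, askStillActive: (t: TimedTask) => boolean): void {
  for (const task of tasks) {
    if (task.active && today.getTime() >= task.dueDate.getTime()) {
      if (askStillActive(task)) {
        // User chose to keep the task: extend the date (here by one week, as an example).
        task.dueDate = new Date(task.dueDate.getTime() + 7 * 24 * 60 * 60 * 1000);
      } else {
        // Task is no longer active: remove it from the task list.
        task.active = false;
      }
    }
  }
}
```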

Figure 4.

To start a task, the user enters a name and an estimated date of completion.

Like Prototype 2, there was an include/exclude toggle button on the toolbar, and included pages were grouped and coloured yellow. When the user finished working on the active multi-session task for that session, they pressed the “Stop Task” button and a Save Web Pages dialog window (Figure 5) appeared with a list of all the web pages that were currently open in the browser. The ‘included’ pages were checked by default; the user could save this default selection, or unselect checked pages and/or check unselected pages.

Figure 5.

The Save Web Pages dialog window appears when the user stops the task. It lists the open web pages with ‘included’ pages already checked; users can select or unselect any pages that they want saved for later sessions.

Like the first two prototypes, users accessed their current multi-session tasks through a drop-down box. When they resumed a task from the drop down list, the task name appeared on the toolbar and all of the saved pages (from the Save Web Pages dialogue window) were reopened in the browser grouped, coloured yellow, and scrolled to the last position viewed.
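The landmark behaviour amounts to saving a page's scroll offset along with its URL and restoring that offset after the page is reopened. The sketch below uses the standard DOM scroll properties; the record shape and function names are our assumptions, and persistence across sessions is omitted.

```typescript
// Sketch of the landmark feature: save and restore the last scroll position of a page.
interface SavedPage {
  url: string;
  scrollX: number;
  scrollY: number;
}

// Called when a page is saved at the end of a session.
function capturePosition(url: string): SavedPage {
  return { url, scrollX: window.scrollX, scrollY: window.scrollY };
}

// Called after a saved page has been reopened in a new session.
function restorePosition(page: SavedPage): void {
  window.scrollTo(page.scrollX, page.scrollY);
}
```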

In our previous studies, participants indicated that they would like to be able to organize their multi-session tasks to help them resume working on the task at a later time. While some participants used bookmarks, many (even those who use bookmarks) indicated problems with bookmarks (such as having to delete bookmarks when they were finished). Prototype 2 and Prototype 3 helped users by saving web pages between sessions. While Firefox has a session-saving feature, it saves all the tabs in the window, not just the tabs that are of interest to the user. Prototype 2 saved only the ‘included’ tabs for a later session, and Prototype 3 gave users more control over which pages to save. As well, Prototype 3 marked the place on the page where the user last was, reducing the need for users to refind their last position by scrolling or using the find-within-page function.

Prototype 3 had an additional feature that helped the user to manage their saved web pages when the multi-session task was finished. They could choose to either delete all the saved pages or create a bookmark folder that included all the saved pages. Several field study and diary study participants [13,14] indicated that they would like to save their task information when finished for potential future similar tasks. This feature provided this option.
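Conceptually, finishing a task in Prototype 3 either discards the saved pages or turns them into a bookmark folder named after the task. A minimal sketch of that choice is shown below; the names (BookmarkFolder, finishAndArchive) are illustrative assumptions rather than the prototype's API.

```typescript
// Sketch of Prototype 3's end-of-task management of saved pages.
interface BookmarkFolder {
  title: string;
  urls: string[];
}

function finishAndArchive(
  taskName: string,
  savedPages: string[],
  keepAsBookmarks: boolean
): BookmarkFolder | null {
  if (keepAsBookmarks) {
    // The task name becomes the folder name, which one participant noted was convenient.
    return { title: taskName, urls: [...savedPages] };
  }
  return null;   // Otherwise the saved pages are simply discarded.
}
```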

METHODOLOGY

To study the effectiveness of these prototypes, we recruited 12 students from Dalhousie University. Participants had to be Firefox users, use the Web daily, and not be apprehensive about using new tools. Most research on user behaviour on the Web has been conducted with experienced users, either from the academic community and/or industry [7,11,12,16,17]. Participants were compensated $25 for the study.

Participants ranged in age from 19 to 34 years old, with an average age of 25. There were eight female and four male participants. The majority of students were from the Faculty of Computer Science (8 of the 12), two students were from Management, and two were Arts students. There were nine graduate students (6 Masters, 3 PhD) and three undergraduate students. All the participants said that they visit the Web “Several Times a Day (5 or more times)”. Nine participants spent at least 16 hours a week on the Web, and the others said that they use the Web at least 6 hours a week.

The Study Process

Participants took part in a ten to fourteen day study where they evaluated the three prototypes. Participants were required to download our customized version of Firefox on their own laptop and could then assess the prototypes at the time and location of their choosing.

We had an initial meeting with each participant in the Computer Science building. At that time, participants filled in a pre-study background questionnaire that was used during both the diary and field studies. We also provided an explanation of multi-session tasks and demonstrated how to download and use the customized version of the browser and the online questionnaires. Participants used Prototype 1, followed by Prototype 2, followed by Prototype 3. We did not counterbalance the order of the prototypes since each prototype contained the previous prototype's functionality plus additional features, and we were testing feature sets within each prototype. By having participants use each prototype in order, we ensured that the participants evaluated a new set of features and not just ones they had already seen and used.

To ensure that the participants did not view the other prototypes and their features before they had finished using and evaluating the current prototype, we provided each participant an envelope with sealed paper instructions (for the prototype download, how to use the prototype, and the task to complete with that prototype) for each prototype. Online versions of these same instructions were also available to the participants once they completed an online questionnaire about the prototype they had just used.

Participants were asked to evaluate each prototype using a multi-session task that we provided (see “The Tasks” for a description of the tasks). Every participant used the same task for each prototype. They were asked to work on the assigned task at least three different times (over three web sessions) for at least two days and to spend a minimum of five to ten minutes per session working on the task. The time to do the task was a guideline only. Many of the participants took longer than the two days to meet the ‘three sessions’. We also asked the participants to perform these tasks within their normal browsing behaviour (e.g., while also checking their emails, reading the news, going onto Facebook, etc.). Participants had to perform the assigned task with each of the prototypes but could also try the prototypes with their own multi-session tasks if they wished (five of the participants did). Participants did not need to finish the entire task to evaluate the prototype and move onto the next one, but they did need to work on the task for the minimum set time and minimum number of sessions.

For each prototype, we provided a practice task to expose the participants to the prototype's features. The practice task did not need to continue over several sessions; rather, it was intended to show how to use the prototype's features.

We also provided the participants with questions related to the given multi-session task for them to consider while performing the task. These questions focused the participant on specifics of the task and ensured that participants actually performed the task. After finishing evaluating a prototype, participants filled in an online questionnaire that asked their opinions on the different prototype features (e.g., how the tool helped them, how often they would use it, and what improvements they would like to see). Participants submitted their logs after they finished evaluating each prototype. We met with the participants at the end of the study in the Computer Science building, where they filled in a post-study questionnaire that compared the different prototypes for multi-session tasks.

The Tasks

We designed three tasks based loosely on tasks recorded by participants in our two previous studies [13,14]. We piloted the tasks to ensure that they were similar in terms of the ease of finding information, the availability of information sources, the ease of starting and continuing the task, and the time that each task took to finish. We also ensured that the three tasks were similar in: (1) the number of sub-tasks for each task, (2) the static and dynamic web pages that could be accessed, and (3) the possible simple and complex comparisons between different elements in the task.

In Task 1 participants planned a graduation dance. They had to find a banquet room in a local downtown hotel that could hold about 150 people for the dance. The hotel also needed accommodations that were at most $150 a night and they needed to find a DJ for at most $400.

In Task 2 participants were told that they were moving to Toronto and that they needed to find employment in the downtown area in the financial field. They also needed to find a place to live for a maximum of $1000/month and there had to be a gym nearby their residence.

In Task 3, participants had to decide on a laptop to purchase. It had to be under $1200 with a range of features. They needed to compare brands, prices, and consult reviews.

RESULTS

Total Time Spent on the Tasks

The total time spent on the study for all of the prototypes and their respective tasks (summing the time spent by each participant) was over 27 hours (8:51:42 using Prototype 1, 9:56:39 using Prototype 2, and 10:24:31 using Prototype 3). For Task 1, participants spent on average 44 minutes and 19 seconds over 2.92 sessions. The average time spent on Task 2 per participant was 49 minutes and 43 seconds over 3.5 sessions. On Task 3, participants spent on average 52 minutes and 3 seconds over 3.25 sessions.

Browser Interactions Recorded for the Study Tasks

We used the participants' logs to examine the different browser interactions (e.g., open a new window, follow a new link, use bookmarks, etc.) for each of the three prototypes (Table 1). There were 4366 total interactions recorded for all the prototypes while participants worked on the given tasks. Prototype 1 had the largest count of interactions (37.33% of all interactions), while Prototype 2 and Prototype 3 were almost equal (31.77% and 30.90% of all interactions respectively).

Table 1. Total Browser Interaction Counts

Differences in usage of existing browser tools

Using these logs, we were able to see if there were differences in the usage of browser tools between prototypes. We grouped these interactions into four themes: window and tab interactions, revisitation, navigation and copy/paste, and task switching interactions (see Table 2).

Table 2. Differences and similarities of browser Interactions between Prototypes (counts and % of totals)

Window and Tab Interactions

Prototype 1 had the largest number of window operations compared to the other two prototypes (69.58% of the total). There was a large difference in the number of windows that were started by using the auto-complete feature (74.29% for Prototype 1, 14.29% for Prototype 2, and 11.43% for Prototype 3) which could be in part due to the fact that Prototype 2 and Prototype 3 had the saved page feature that automatically re-opened selected web pages between sessions. With Prototype 1, participants had to rely on other methods to revisit web pages (e.g., auto-complete).

Overall, tabs were used more often than windows, indicating that users prefer tabs over windows. Usage of tabs was more prevalent with Prototype 1 (35.76%) and Prototype 2 (38.62%), as seen in Table 2. Perhaps the reduced usage for Prototype 3 (25.62%) reflects that users were able to self-select which web pages to save for a later session. With Prototype 2, web pages were saved when a page was toggled to be included in the task. With Prototype 3, participants could select or deselect pages in the Save Web Pages dialogue window, which gave them the opportunity to deselect some saved web pages, thereby potentially reducing the number of tabs opened at the start of a new session. One participant commented: “I liked that I could choose which tabs to save because while some may be related to the task they're not necessarily good enough to go back to…”.

Revisitation

For the most part, revisitation tools were used most often with Prototype 1. Prototype 1 recorded 70% of history list use (see Table 2). While participants created more bookmarks using Prototype 2 (65.71%) than with either Prototype 1 (34.29%) or Prototype 3 (0%), participants only opened 20% of these with Prototype 2. This could indicate that when the participants used Prototype 2 they were not yet used to the system reopening the saved pages and instead created bookmarks related to the task. Once they realized that the included web pages would re-open in a later session, they did not need to rely on opening the created bookmarks. Participants opened the most bookmarks when using Prototype 1, and the number opened was almost double the number created. Perhaps users did not need to depend on bookmarking pages or history lists as much when web pages were automatically saved between sessions.

Navigation

In terms of navigation interactions, participants followed links more when using Prototype 1. We speculate that they were trying to retrace their steps from a previous session to refind pages. Searching (using Google) was used with all the prototypes, although it was used less with Prototype 2 (27.67%) and Prototype 3 (31.66%) than with Prototype 1 (40.7%). Both Prototype 2 and Prototype 3 had the save-web-page feature, so participants could return to previous information more easily than with Prototype 1. It is interesting to note that, with the exception of copy and paste actions, Prototype 1 had the highest number of interactions using tools commonly associated with revisitation and refinding.

Copy/Paste

Each time a participant performed a cut, copy, or paste in the browser, a dialog box appeared asking where they were copying to or where they had pasted from, as the logger was unable to track actions outside of the browser. Participants could type this additional information into the dialogue box or could ignore the request, which was then logged as “Unspecified”. The majority of information copied from web pages was pasted to a word processor (78.42%, 149 of the total 190 paste actions). The majority of pasting to a web page was copied from another web page (54.55%, 12 of the total 22 pastes).

Task Switching

For Prototype 2 and Prototype 3, the logs indicated each time a web page was included in or excluded from the multi-session task currently in progress. In total, we logged four different actions that indicated a switch between tasks. The actions “Included this tab…” and “Excluded this tab…” were logged each time a participant used the include/exclude button on the toolbar. The actions “Switched into task” and “Switched away from task” were logged each time a participant switched to a previously defined tab/window (included or excluded). The frequency of task switching was almost equal between Prototype 2 and Prototype 3. This was true both for actions related to switching away from the task and for switching back into the task. More importantly, it reinforces that users multi-task within sessions.

Differences in user preference for Prototypes

In the post-study questionnaire, participants were asked to rank all three prototypes (first, second, and third) on five user preference statements: the easiest tool to use, the tool I preferred using, the most helpful tool for multi-session tasks, the most helpful tool to resume a task, and the most helpful tool for switching between tasks (note that one participant only ranked their top choice). Notice that while participants ranked Prototype 1 the easiest to use (see Figure 6), Prototype 1 quickly fell behind the other prototypes when they were asked to rank the prototypes on other usability attributes.

Figure 6.

The ranking responses to which Prototype was the “Easiest to Use”

Easiest Tool to Use

As can be seen in Figure 6, 75% (9/12) of participants ranked Prototype 1 as being the easiest tool to use. This is not surprising, as it was designed to be the simplest prototype with the fewest features. Prototype 2 was ranked as the next easiest to use (75% as second, 9/12), with Prototype 3 most often ranked third (66.67%, 8/12). While Prototype 1 was ranked the easiest to use, based on participants' responses to the other statements the participants liked using Prototype 2 and Prototype 3 more, probably due to the extra features of these two prototypes.

Preferred Using

As seen in Figure 7, participants preferred using Prototype 3 (66.67% ranked it first, 8/12). Prototype 2 was second behind Prototype 3 with 33.33% (4/12) of first rankings and 50% of second rankings. Prototype 1 had the highest number of third rankings (75%, 9/12).

Figure 7.

The ranking responses to which Prototype participants “Preferred Using”

Most Helpful

Prototype 3 was ranked as the most helpful tool for multi-session tasks (83.33%, 10/12) as seen in Figure 8. In second place was Prototype 2 (16.67% ranked it first (2/12), while 75% ranked it second (9/12)). Again, Prototype 1 was considered to be the least helpful with all participants ranking it in third place.

Figure 8.

The ranking responses to which Prototype participants found “Most Helpful”

Helpful to Resume, Keep on Task, Switch between Tasks

When we inquired about the different ways that the prototypes helped the users perform their multi-session tasks, the results were mixed between Prototype 2 and Prototype 3 (Figure 9). Prototype 3 was ranked to be the most helpful to resume a task (75% ranked it first, 9/12). But the scores for the most helpful to keep on task and for switching between tasks were less definite. Fifty-eight percent (7/12) of participants ranked Prototype 3 as being the most helpful for keeping on task, while 41.67% (5/12) thought it was Prototype 2. Forty-two percent (5/12) of the participants ranked Prototype 3 as first for being the most helpful for switching between tasks, while 33.33% (4/12) ranked Prototype 2 as being most helpful in this case.

Figure 9.

The ranking responses to which Prototype participants found “Most Helpful to Resume”, “Keep on Task”, and “Switch between Tasks”

Prototype Features

Participants rated the different prototype features (using a 5-point Likert scale) in the online questionnaires after using each prototype. We examine how well each prototype was rated on five features: the toolbar and task information, tabs and including task-related web pages, reopening saved web pages, landmarked saved web pages, and managing saved web pages at the completion of the task (see Table 3). While we are aware that, with a small sample size, small differences in opinions make a large difference in the percentages, it is still interesting to note the high use of “strongly agree” in the responses.

Toolbar and Task Information

We asked participants to rate the customized toolbar and its multi-session task features (see Table 3). In general, participants felt that the toolbar was easy to use (75% strongly agreed and 17% somewhat agreed). Specifically, 92% of the participants liked having the task name visible on the toolbar (42% strongly agreed and 50% somewhat agreed) and 75% liked that they could name their task (42% strongly agreed and 33% somewhat agreed).

Table 3. Summary of the Participants' Opinions on the Features

Tabs and Including Task-Related Web Pages

The responses for the ‘include/exclude web pages’ feature in Prototype 2 and Prototype 3 were very positive (see Table 3). In general, all the participants found that the ‘included’ web pages were recognizable (92% strongly and 8% somewhat agreed). Specifically, participants liked that the included tabs were grouped (33% strongly and 58% somewhat agreed) and coloured (50% strongly and 33% somewhat agreed). Ninety-two percent also found that it was easy to toggle between the included and excluded web pages (67% strongly and 25% somewhat).

Reopening Saved Web Pages

Reopening saved web pages in future sessions was very popular with the participants (Table 3). All the participants liked that the saved web pages were reopened during a later session. Ninety-two percent of participants liked being able to save groups of web pages, and all the participants agreed that it was easy to restart the task after the reopening of the saved web pages. As well, 83% of participants liked that they could manage their saved web pages at the end of a session with the Save Web Pages form. One participant said: “I liked this tool. It's a combination [of Prototype 2 and Prototype 3]. You could save the pages that you wanted and not others which did help with these tasks”. A few participants, however, mentioned that they found this step to be tedious: “It was more work when asked if should save tabs at the end because I would have excluded the page. It was annoying to be asked twice”.

Landmarked Saved Web Pages

In Prototype 3, the saved web pages were landmarked and reopened at their last location (see Table 3). Participants were, in general, positive about this feature (42% strongly agreed and 42% somewhat agreed).

Managing Pages at Completion of the Task

With Prototype 3, participants could choose to delete the saved web pages from their computer or to save the pages in their bookmark list in a folder. As seen in Table 3, participants were somewhat receptive to this feature (67% somewhat agreed that they liked this). One participant said “Prototype 3 is the best one… it can move my saved pages into bookmarks and since it used the task name as the folder it is convenient too.” but another participant was less positive “I don't like bookmarks and don't use them, that's why I like the save session [saved pages] because I don't need to go to the bookmark list.”

DO THE RESULTS OF THIS STUDY SUPPORT THE GUIDELINES?

We used the guidelines presented in the Related Work section when we designed our prototype tools. After conducting the evaluation of the prototypes, we were interested in learning whether the results supported these guidelines. While participants did have suggestions for improvements or additional functionality, the logged data and their opinions supported the usefulness of the guidelines. Prototype 1 had the minimum level of functionality for the guideline features, with a focus on within-session support. Participants found the additional, more complex features of Prototype 2 and Prototype 3 to be the most useful (namely the saving of relevant web pages and the landmark feature). This may indicate that while users need to revisit information within sessions, what matters more in the multi-session context is helping users re-start their tasks between sessions.

Guideline 1: List current multi-session tasks

The drop-down list of the participants' active multi-session tasks on the toolbar (all prototypes) was used to meet this guideline. No suggestions made by participants in the questionnaires were directly related to this guideline. Participants agreed it was easy to select tasks from the drop-down box (91.6%) and were happy with the toolbar overall (92%).

Guideline 2: Reminder of the current multi-session task

The features for this guideline included having the multi-session task name on the toolbar when the task was active (all prototypes), and having the selected included web pages grouped together and coloured yellow (Prototypes 2 and 3). While there were suggestions for additional features to support this guideline (for example, to track time spent on an active task and remind the user when they had been working away from it, and to be able to change the defaults of the current features), these all reinforced the users' need to recognize and be reminded of their active multi-session tasks within a session. For the most part, participants were very receptive to the features that reminded them of what they were working on (or supposed to be working on). Participants found that the included tabs were easy to recognize, that it was easy to toggle between included and excluded tabs, and that it was easy to recognize which task was active, which reminded them that they were working on a multi-session task while multi-tasking.

Guideline 3: Management of information between sessions

To meet the third guideline the following features were tested: a drop-down list of the active multi-session tasks (all three prototypes), reopening of included web pages from a previous session (Prototype 2 and 3), automatic scrolling to the last location on the page when re-opened (Prototype 3), adding a finish date to the task (Prototype 3), and choosing to save relevant pages of a task as bookmarks at the end of a task (Prototype 3).

All of the suggestions made by participants related to this guideline had to do with providing new functionality to help support multi-session tasks between sessions. For example, some participants said that they would like to know how much time they had before the deadline of a task was reached. This was an extension of the existing feature, which only advised the participants when the task deadline was up: “[it] only points out when the deadline has come but now it's too late do anything.” All the participants liked the current feature of having the ‘included’ pages in a task saved and reopened in a later session, with 84% liking that the pages were reopened and scrolled to their last viewed position in the page. In general, participants agreed it was easy to select and unselect pages to save at the end of a session (92%) and that they liked being able to save groups of pages (92%).

Participants were less enthusiastic about having the system manage their saved pages when the task had finished. This in part may be due to the fact that many participants commented that archiving the saved pages would depend on the task. For example, one participant said “It depends on what type of task [whether to archive information or not]. If it is a shopping or personal interest topic I tend not to save however, if it's for school and for an assignment soon to be due, I will cut/paste into word document (with web use) for later”. Others commented that they do not like or use bookmarks. Perhaps an alternative method of saving pages (e.g., to a file) would be more useful to these participants.

In the future, the functionality and features of these prototypes, as well as the additional suggestions, should be incorporated into a single prototype and tested in the field with participants using it for their own multi-session tasks. This would help us better understand and evaluate how participants would use the prototype features for multi-session tasks.

LIMITATIONS OF THE STUDY

While a laboratory setting could have provided a more controlled environment for the study, we were evaluating prototypes for tasks that take more than one session to complete. We could have asked the participants to come back multiple times to work on the tasks in a laboratory; however, doing so for all three prototypes would have been a major inconvenience, and the participants would have been performing the tasks in an unnatural setting. We wanted the participants to experience all the features of each prototype in order to compare them, while also having the freedom to evaluate the prototypes in their own setting on their own schedule. Another limitation of this study was the lack of counterbalancing (of both prototype order and task order); in addition, the results of the study are based largely on user reflection. While the tasks themselves were contrived, they were based on tasks reported by previous study participants, who were drawn from a similar population. Also, we felt that by having the participants evaluate the prototypes with the same tasks, we could better compare the prototype features for this preliminary exploration.

CONCLUSIONS

In this paper, we presented three prototypes with features developed using guidelines established from two previous studies [13,14]. Participants found the additional browser features helpful for multi-session tasks. Even for the very simple tool (Prototype 1), participants said that they liked having a list of their current tasks available. It was, however, evident from the results (including the participant comments) that the additional features available in the other two prototypes were more helpful and that the participants preferred using them. Participants noted that the first prototype was suited for simple, uncomplicated tasks, while the second and third prototypes provided additional functionality for more complex multi-session tasks.

The suggestions made by the participants for future features or enhancements to the existing features of the prototypes are consistent with the three guidelines. This work shows the importance of developing tools that enhance current browsers to help users perform multi-session tasks, balancing ease of use with more complex features, as seen in both the participants' opinions and the logged data. Other researchers can use these recommendations and the results of this study to help guide the design and development of other multi-session task tool features.

Acknowledgements

Thanks to Nicholas Miller for his assistance in the implementation of the prototypes and to the participants of this study.
