The impact of presentation vs. interaction design on user satisfaction with digital libraries

Abstract

Interface design comprises two aspects, interaction design and presentation design, but few studies have examined the effect of each aspect on user satisfaction with a system. We conducted a comparison study analyzing user satisfaction with system features in three digital libraries: the ACM Digital Library, the IEEE Xplore Digital Library, and the IEEE Computer Society Digital Library. Our results showed that the two interface aspects of a system, presentation design and interaction design, may receive rather different, even opposite (positive or negative) user opinions. A weak interaction design may be compensated for by a good presentation design. We observed that interface presentation is no less important than the interaction aspect. Based on these findings, we suggest that interface design balance the two aspects rather than emphasizing one while ignoring the other.

1. INTRODUCTION

The user interface is important in system design. To end users, “the interface is the system” (Norman, 1986, p. 61). User interface (UI) design has attracted considerable research attention, and nowadays interaction design is most often emphasized (cf. Preece, Rogers, & Sharp, 2007). But UI design is not just interaction design. Heller (2005) notes that interface design includes two aspects: interaction design and presentation design. Interaction design deals with the user's interaction with the system, including the operation of buttons, menu items, system feedback, and so on. Presentation design refers to the static visual appearance of the interface. It is concerned with the ways in which information is presented to users, such as page layout, use of fonts and colors, overall attractiveness and aesthetics, and so forth.

UI design for digital libraries (DLs) follows the same principles and guidelines as for other information systems, and the UI consists of both interaction and presentation aspects. The major functional designs for DLs involve search and browse. Regarding the interaction aspect, search design involves the usability of search buttons, search fields, and so on, while browse design concerns the buttons, links, or menus used for navigating through the collection. In terms of the presentation aspect, search design addresses issues such as the appearance of the search function and the presentation of search result lists, while browse design concerns the organization of the collection for browsing, the presentation of browse result lists, and so on.

Ideally, both aspects of an interface are designed well so that the user's experience with the system is both effective and pleasurable. In reality, however, design problems exist to some degree in the presentation aspect, the interaction aspect, or both, and they affect the system's usability. We were intrigued to examine how each of the two aspects, interaction and presentation, affects user satisfaction with the interface, and what implications could be drawn for better interface design. Specifically, in this paper we were interested in answering the following research question:

RQ: How do the presentation design and the interaction design of digital library interfaces each affect users' satisfaction with the systems?

By investigating the easily ignored presentation aspect of interface design and comparing it with the more emphasized interaction aspect, our findings should help system designers balance their efforts when designing DLs, and perhaps other types of online systems as well.

2. LITERATURE REVIEW

In the broad context of user interface design, form and content, along with the derived issues of aesthetics and usability, and structure and function, are pairs of trade-offs in artifact design, including the design of information systems and their interfaces. This is especially true when invested resources, such as personnel and funds, are limited (Tractinsky, 1997). The former in each pair is usually a somewhat “soft” aspect while the latter is somewhat “hard”. The field of Human-Computer Interaction (HCI) has long emphasized the “hard” aspects, although the “soft” aspects, such as aesthetic, social, and cultural issues, have begun to draw more attention in computer system design (e.g., Tractinsky, 1997; De Angeli, Sutcliffe, & Hartmann, 2006).

Among the softer issues in design, aesthetics is a forerunner that has drawn considerable attention recently. Traditionally, it received only passing interest in the HCI literature, even though researchers (e.g., Green & Jordan, 2002; Norman, 2002) acknowledge that it is basic to web site design. The marginal status of the aesthetic dimension in design has been fading with the many publications in both the popular and academic presses that show awareness of the issue, e.g., Tractinsky (1997), Jordan (1999), Karvonen (2000), and Lavie & Tractinsky (2004), to name a few.

The growing body of work on aesthetics, counterbalancing the emphasis on the harder aspects of system design, makes it worth summarizing its findings. Previous studies have examined the effects of aesthetics on users, as well as its relations with users' perceptions of other system attributes and with the overall interaction experience. Their findings fall into two main categories. The first demonstrates that aesthetics is an important determinant of the user's overall perception of a system. Visual aesthetics of computer interfaces is a strong determinant of user satisfaction and pleasure (Lavie & Tractinsky, 2004). Beauty has even been suggested to be the most important determinant of preference for a web site and thus a primary predictor of overall impressions and preferences (Schenkman & Jönsson, 2000).

The second group of findings shows that aesthetics affects user experience as well as other factors in evaluating a system. Tractinsky (1997) argues that user perception of interface aesthetics is closely related to apparent usability (defined as the pre-use perception of a system's usability), which increases the likelihood that aesthetics considerably affects system acceptability. Tractinsky, Katz, & Ikar (2000) found that a system's aesthetics affected users' post-use perceptions of both aesthetics and usability, whereas actual usability had no such effect. Kurosu & Kashimura (1995) found that users may be strongly affected by the aesthetic aspect of an interface even when they try to evaluate its functional aspects. Similarly, van der Heijden (2003) demonstrated that beauty affected perceptions of other web site qualities, such as usefulness, enjoyment, and ease of use. In another study, Hartmann (2006) found that judgments of aesthetics influenced users' ratings of usability, content, and overall preference; furthermore, and interestingly but reasonably, users tended to discount negative attributes of their favorite interface display conditions.

Besides aesthetics, researchers also advocate considering other factors in assessing web sites, for example, simplicity. Nielsen (1993) argues that simplicity is a key guideline in creating usable systems. Karvonen (2000) suggests that simplicity may serve as a link between usability and aesthetics.

In the domain of Internet business, Kim et al. (2002) conducted a study comparing three aspects of system features: structural firmness, functional convenience, and representational delight. They found that representational delight is more important than structural firmness. In the DL domain, however, few studies have compared the presentation and interaction aspects directly.

With the increasing popularity of the term “interaction design”, a tendency can be seen for the interface design community to emphasize the interaction aspect, making it easy to overlook the presentation aspect, which involves a number of static factors including not only aesthetics but also simplicity, label representativeness, overall attractiveness, and others. While aesthetics is no longer negligible, more research is still needed on these other factors. We speculate that, in order to design the best system interface, the multi-faceted presentation aspect should also be highlighted so as to reach a balance with the interaction aspect. As Norman (2002) puts it, “[g]ood design means that beauty and usability are in balance”, and “all the many factors of design must be in harmony” (p. 42). Research on social behavior has demonstrated that physical attractiveness influences evaluations of others, with attractive individuals receiving more favorable evaluations (Dion, Berscheid, & Walster, 1972). It is of interest to see how the presentation of a DL influences the user's overall perception of the system.

3. METHOD

To address the research question, a controlled lab experiment was conducted to test the usability of three DLs: the ACM (Association for Computing Machinery) Digital Library (hereafter ACM), the IEEE Xplore Digital Library (hereafter Xplore), and the IEEE Computer Society Digital Library (hereafter IEEE CS). The study employed both interactive tasks and heuristic assessment.

3.1 The three DLs

The three DLs, Xplore, ACM, and IEEE CS, were chosen because they all collect similar full-text information on computer science and engineering but vary greatly in their interface features. At first glance at their homepages (Figures 1, 2, 3), one can see that they have very different interface designs. With further interaction, one also finds more differences in the features and functions these systems provide.

Figure 1. The Xplore DL interface

Figure 2. The ACM DL interface

Figure 3. The IEEE CS DL interface

(Note: This study was conducted in 2006. The IEEE CS DL interface has changed since our study.)

3.2 Participants

Thirty-six Rutgers University students were recruited as participants. They came from three groups: 12 undergraduate engineering or computer science (UE) students, 12 graduate engineering or computer science (GE) students, and 12 master of library and information science (MLIS) students. These groups were chosen to mimic the end users (engineers, engineering students, and engineering librarians) of the DLs investigated. Each participant was invited individually to an on-campus usability lab to perform the experiment and received a pro-rated payment.

3.3 Procedure

Upon agreeing to participate, each participant signed the consent form and then completed a background questionnaire. A brief instruction session followed to inform the participant of the tasks to be completed. The participants then performed one search task and one browse task in each of the three DLs in a specified order. To avoid potential learning effects (for example, users performing better in DLs used later in the session), the order of the three DLs was counterbalanced using a Latin square design, a common practice in usability experiments; search and browse orders were also balanced, as illustrated in the sketch below. Before each search or browse task, participants completed a pre-search or pre-browse questionnaire about their familiarity with the task topic. After completing a search or browse task in a DL, they filled out a post-search or post-browse questionnaire about their perception and experience, followed by a heuristic assessment of that DL's search or browse features. After finishing both the search and browse tasks and the heuristic evaluations in a DL, they completed a post-system questionnaire. Finally, after finishing the tasks and evaluations in all three DLs, an exit interview was conducted about their overall experience. The whole experiment lasted approximately two and a half hours. Users' interactions with the systems were recorded by the Morae logging software.
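For concreteness, the following is a minimal sketch (in Python, our illustration rather than the study's actual materials) of how a 3×3 Latin square counterbalances system order: every DL appears once in every position, so order effects average out across participants.

```python
# A minimal sketch, not the authors' actual assignment script.
# Each row of the square is one presentation order; every system
# appears exactly once in every position.
LATIN_SQUARE = [
    ["Xplore", "ACM", "IEEE CS"],
    ["ACM", "IEEE CS", "Xplore"],
    ["IEEE CS", "Xplore", "ACM"],
]

def system_order(participant_id: int) -> list[str]:
    """Rotate participants through the rows of the square."""
    return LATIN_SQUARE[participant_id % len(LATIN_SQUARE)]

for pid in range(6):
    print(pid, system_order(pid))
```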

3.4 Tasks

Both the search and the browse tasks were designed to explore users' interaction process with the systems. The process starts with an information need (the task) and ends with a relevance judgment on the located document(s). The search task required the participants to search, with unlimited time and using any search method, in each of the three DLs for relevant information about “protecting the online repository from fraudulent activity by watermarking”. Users were asked to save the top ten ranked documents from the most satisfactory result list among those obtained across their query attempts. They were then asked to judge the relevance of their saved results to the search topic on a three-point scale: relevant, partially relevant, and not relevant. The browse task required the participants to browse through each of the three DLs to locate a specified source and relevant articles on a specific topic. Participants were asked to make relevance judgments (relevant or not relevant) and save the relevant results on the experiment computer. For IEEE CS and Xplore, users were asked to browse the proceedings of the International Conference on Information Technology: Coding and Computing (ITCC 2004), locate two papers about data streaming, and save their abstracts. For ACM, which does not carry ITCC, the Annual Symposium on Computational Geometry (SCG 2004) was used.

Considering that a task for a usability test of a DL should be neither too simple nor too complex, as Notess, Kouper, and Swan [22] pointed out, these two tasks were selected from a list of candidate tasks provided by the IEEE Xplore library reference services. The candidate tasks were then tested in the three DLs by the researchers of this study to identify those that would lead participants to experience the different aspects of the interaction designs. Although it would be ideal to use multiple tasks for each function (i.e., search or browse), our pilot study indicated that only one search task and one browse task could be finished in about 2.5 hours, which is already quite long for an experiment session.

3.5 Evaluation questionnaires

User perception of the system features was measured at both the overall system level and the specific feature level. It was elicited through user ratings of statements about system features. The rating statements fall roughly into two categories: 1) statements/constructs about interaction with the systems, e.g., “the system responded quickly to my queries”; and 2) statements/constructs about the systems' presentation, e.g., “the search result list was easy to read”. The questionnaires used to collect user perception and satisfaction adopted a 7-point Likert scale. For the majority of the questions and statements, except where specified otherwise, a score of 1 stands for the most negative opinion, 7 for the most positive, and 4 for neutral.
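As a minimal illustration of how such ratings can be aggregated per system (the statement texts follow the examples above; the scores are invented placeholders):

```python
from statistics import mean

# Hypothetical 7-point ratings (1 = most negative, 4 = neutral,
# 7 = most positive) from several participants for one system.
responses = {
    "the system responded quickly to my queries": [5, 6, 4, 7, 5],
    "the search result list was easy to read": [6, 6, 5, 7, 6],
}

for statement, ratings in responses.items():
    print(f"{mean(ratings):.2f}  {statement}")
```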

4. RESULTS

4.1 Participant information

Among the 36 participants recruited, one quit in the middle of the experiment, so the number of valid participants was 35. In general, the participants as a whole were computer literate but occasional or new users of the three digital libraries. Over 94% of them rated themselves above the medium level of computer experience (above 4 on the 7-point computer experience scale), and nearly 50% considered themselves experts in using computers (ratings of 6 or 7 on the 7-point computer expertise scale). About 54% considered themselves very experienced with browsing the Internet. With regard to the three DLs, however, the majority of the participants (23 out of 35, 66%) had no experience with the ACM or Xplore systems, and an even higher percentage, 80% (28 out of 35), had never used the IEEE CS digital library.

4.2 Task performance and user interactions with the three DLs

Besides the interface, a system's performance can be a significant determinant of the user's satisfaction with the system. To gauge its possible impact, we first looked at search and browse performance (these data were also reported in Zhang et al., 2008). As shown in Table 1, for the search function the three DLs had similar performance as measured by precision. For the browse function, IEEE CS showed extremely poor outcomes in terms of the number and correctness of articles located, while ACM and Xplore produced similar browse results, both better than IEEE CS's.
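As a worked illustration of the precision measure, the sketch below computes precision over a saved result list; treating partially relevant documents at half weight is our assumption for illustration, not necessarily the paper's scoring rule.

```python
def precision(judgments: list[str], partial_weight: float = 0.5) -> float:
    """Fraction of saved documents judged relevant.

    `judgments` holds one label per saved document: "relevant",
    "partially relevant", or "not relevant". The half credit for
    partially relevant documents is an illustrative assumption.
    """
    if not judgments:
        return 0.0
    credit = {"relevant": 1.0, "partially relevant": partial_weight}
    return sum(credit.get(j, 0.0) for j in judgments) / len(judgments)

# Example: 10 saved documents -> precision 0.7
saved = ["relevant"] * 6 + ["partially relevant"] * 2 + ["not relevant"] * 2
print(precision(saved))
```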

Table 1. Task performance and user interactions with the three DLs

We also looked at users' interactions with the three DLs, since these may also influence users' satisfaction with the systems (these data were also reported in Zhang et al., 2008). As can be seen from Table 1, when working on their search tasks in Xplore, users issued the most queries and went through the most steps to accomplish their tasks, indicating that, among the three DLs, they spent the most effort in Xplore to complete their search tasks. Moreover, users encountered the most “no results” pages and errors when searching in Xplore. As for the browse function, users spent more time in IEEE CS than in the other two DLs, and they went through fewer steps in ACM than in the other two.

In sum, the above data showed that in terms of search, the three DLs had comparable performance, but users had a harder time interacting with Xplore. As for browse, IEEE CS showed the poorest performance, and users had a harder time in this system as well. With these in mind, we went on to examine the effect of interface design on users' satisfaction with the three DLs, the main focus of this paper. To examine our research question comprehensively, we conducted analyses both at the general system level and for the search and browse functions respectively, as reported in the following sub-sections.

4.3 User's satisfaction with the systems in general

4.3.1 User satisfaction with overall experience

In the post-system questionnaire for each DL, users were asked to assess how satisfied they were with their overall experience in that system, rating their degree of satisfaction on a 7-point scale. As Table 2 shows, IEEE CS received the lowest user satisfaction. While no statistical difference was found between Xplore and ACM, the former had a slightly higher mean score.

4.3.2 Favorite system votes

In the exit interview, participants were asked to name their favorite system among the three based on their overall experience. Three of the 35 participants voted for two systems, which did not meet the requirement that only one favorite be named, so their data were not used. The remaining participants cast 32 votes altogether (Table 2). The descriptive data showed that Xplore received 15 votes, followed by ACM with 14, while IEEE CS obtained only 3. A Chi-square analysis showed that the difference between IEEE CS and the other two systems was significant. This was consistent with the ranking of users' satisfaction with their overall experience in the three DLs.
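The sketch below illustrates this kind of test with SciPy, using the vote counts reported above; it runs a goodness-of-fit test against an even three-way split, an assumed formulation for illustration rather than the authors' exact analysis.

```python
from scipy.stats import chisquare

# Favorite-system votes from Table 2: Xplore, ACM, IEEE CS.
votes = [15, 14, 3]
# Null hypothesis: votes are spread evenly across the three DLs.
expected = [sum(votes) / 3] * 3

stat, p = chisquare(votes, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # IEEE CS's low count drives significance
```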

Table 2. User satisfaction with overall experience and favorite system votes

These data demonstrated that Xplore was the system generally preferred by the users. This was surprising given the findings in Section 4.2: for the search function, a core if not the most important function of a DL, Xplore gave users the hardest time. The inconsistency between Xplore's poor search interaction features and users' overall preference for it indicated that a system's interaction features do not necessarily predict overall satisfaction with its interface. Given that the interface has two aspects, interaction and presentation, it is necessary to scrutinize the interface features in detail for possible explanations of these seemingly contradictory ratings.

4.3.3 User ratings on interface presentation features

User ratings on a number of presentation features of the DLs were collected through the interface evaluation questionnaires. Each participant completed three such questionnaires, one per DL. Table 3 shows the ratings on each statement in the questionnaire; one-way ANOVA was used to test the differences. Most of the interface presentation features showed significant differences in user ratings among the three DLs. For most of these features, Xplore was judged the best and IEEE CS the poorest. Although the examined factors are not an exhaustive list of all presentation features, they support the argument that the excellence of Xplore's presentation aspect counterbalanced the weakness of its interaction aspect and contributed to users' overall preference for Xplore, whereas the poor presentation of IEEE CS hurt its overall satisfaction ratings even though its search interaction features were acceptable.
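For illustration, a one-way ANOVA on a single rating statement can be run as in the sketch below; the rating arrays are invented placeholders, not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical 7-point ratings of one presentation statement,
# one array per DL (placeholder values for illustration only).
xplore = [6, 5, 7, 6, 5, 6]
acm = [5, 5, 6, 4, 5, 5]
ieee_cs = [3, 2, 4, 3, 3, 2]

f_stat, p_value = f_oneway(xplore, acm, ieee_cs)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```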

Table 3. User ratings on interface presentation features

4.4 Search features comparison

4.4.1 Search interaction features

The post-search questionnaire for each DL collected users' perceptions of their experience with that DL's search function (see Table 4). The data in Table 1 showed that participants had the worst search experience in Xplore. Nevertheless, they did not rate Xplore's search interaction features that poorly. Although Table 4 shows that, descriptively, Xplore had the lowest mean scores on the examined features, one-way ANOVA found no statistically significant differences among the three DLs.

4.4.2 Search-related presentation features

Table 4 also reports users' ratings of the presentation features of the systems' search function. Although no significant differences were found among the three DLs on the examined presentation features, the descriptive data showed that Xplore did much better in the presentation aspect than in the interaction aspect. It was the best of the three at presenting enough information in its search result display, and its result list was easier to read than ACM's. ACM's result list was rated the most difficult to read, while IEEE CS was viewed as the worst at providing enough information in its result list.

Although the presentation aspect we examined included only two features, they were the ones most closely related to the search function. It should be noted that users' opinions of the systems' overall presentation, as shown in Table 3, could inevitably have affected their satisfaction with the search function in general. It is therefore necessary to examine users' overall satisfaction with the search function of the three DLs.

4.4.3 Overall satisfaction with search function

Table 4 also includes users' overall satisfaction with the search feature of the three DLs. As can be seen, ACM was descriptively rated the best and IEEE CS the poorest, with Xplore in between. Again, this suggests that the weakness of Xplore's search interaction features could have been compensated by the merit of its presentation features, leading to a middling overall satisfaction rating for its search feature. It also suggests that the relatively good performance of IEEE CS in the interaction aspect did not predict overall satisfaction with its search function, possibly because its interface presentation was poor. The highest rating for ACM seems reasonable considering its excellent interaction features and acceptable presentation features.

Table 4. User ratings on search features and overall satisfaction with the search function

4.5 Browse features comparison

4.5.1 Browse interaction features

As Table 1 shows, IEEE CS produced worse browse outcomes than the other two DLs. We should keep this in mind and be aware of the effect of the browse outcomes on users' subjective opinions of this system. Meanwhile, the other two systems, ACM and Xplore, had equally good browse outcomes, which makes it fair to compare the effects of their interfaces. As for the interaction features, the data in Table 1 show that IEEE CS required the most effort from users to complete the browse task. Between the other two DLs, which had similar browse outcomes, there were also differences in their browse features: users went through more steps in Xplore than in ACM to complete their browse tasks, meaning they spent more effort in the former than in the latter.

Having analyzed the users' actual browse interactions, we compared them with the users' subjective assessments of the systems' interaction features, which were collected through the post-browse questionnaires. Table 5 shows that user ratings on browse interaction features largely matched their actual experience, in that IEEE CS received the worst ratings on all the examined aspects. Between Xplore and ACM, however, there were no significant differences on any of the examined aspects. This was inconsistent with the fact that users actually spent more effort in Xplore than in ACM.

4.5.2 Browse presentation features

Users expressed significant dislike of IEEE CS regarding whether its browse result lists were easy to read, and they thought its result lists were too long. The other two systems showed no significant differences on the examined aspects. It should be noted that the factors listed in Table 5 were the ones most closely related to the browse function. The general interface presentation features listed in Table 3, on which Xplore was the best of the three DLs, could inevitably have affected users' satisfaction with the browse function in general.

4.5.3 Overall satisfaction with browse function

IEEE CS received the lowest ratings for users' overall satisfaction with its browse feature. The mean rating was even below the neutral value of 4, indicating that users held quite negative opinions of IEEE CS's browse feature. This seems reasonable given that IEEE CS did poorly in browse outcomes, in user ratings of its interaction features, and in user ratings of its presentation features. The other two systems merit more detailed discussion, as they did equally well in their browse outcomes. There were no significant differences in users' overall satisfaction with these two systems' browse features. Xplore had relatively poorer browse interaction performance, but quite good presentation features. If we accept that interface presentation counterbalances interaction features, it is not surprising that Xplore's browse function was rated as satisfying as ACM's.

Table 5. User ratings on browse features and overall satisfaction with the browse function

4.6 Summary of results

In summary, we found that users had a significantly unsatisfactory interaction experience when searching in Xplore (Table 1). However, user satisfaction with the system features showed that Xplore had ratings comparable to ACM's: there was no statistical difference between the two, and descriptively, Xplore even received higher ratings and more votes than ACM (Tables 2, 6). The presentation design thus appears to affect users' overall ratings of the systems.

Table 6. 

5. DISCUSSION

5.1 Presentation and interaction

As Nielsen (1993) notes, users take the interface as the system. It therefore seems reasonable to assume that users' self-reported satisfaction with a system or its functions (i.e., search or browse) can be represented, at least generally, by their satisfaction with the system's interface, which comprises the two aspects of interaction and presentation. Our inspection of the three DLs' UIs showed that these two facets can perform differently, even oppositely, in some cases. While we believe both aspects affect users' overall satisfaction with an interface, the effect of the presentation aspect appears to be no less than that of the interaction aspect, especially when users' IR outcomes are comparable across the alternative systems. The weakness of the interaction aspect of an interface can possibly be compensated for by the excellence of its presentation aspect, so that a system can still receive high satisfaction ratings and be users' favorite despite its poor interaction features.

One example providing strong evidence for our findings is the search function of the three DLs. Since the three systems had similar search outcomes, it was fair to compare their interfaces. Xplore was extremely poor in its search interaction aspect but relatively fine in its presentation aspect, and it received a degree of overall user satisfaction equal to the other two systems in the search function comparison. The weight of the presentation aspect in contributing to overall satisfaction thus seems no less, if not more, important than that of the interaction aspect. This is consistent with Kim et al.'s (2002) finding that representational delight is more important than structural firmness.

With regard to the browse function, IEEE CS did poorly in both browse outcomes and at the interface level, in both the interaction and presentation aspects. The extremely low assessment of its overall browse function is unsurprising. ACM proved optimal for the browse function on most measures of users' actual interactions with the systems; however, it did not receive significantly better ratings than Xplore in users' overall satisfaction with the browse function. We think Xplore's excellent interface presentation could have played a significant role here.

Furthermore, our data at the general interface level also confirmed the importance of the presentation aspect. Users' perceptions of IEEE CS were consistent at the interaction, presentation, and overall levels: it was the poorest on all three. Between ACM and Xplore there is more to inspect: although ACM appears better than Xplore in the interaction aspect, Xplore did a better job in the presentation aspect, and it received a higher mean rating of users' satisfaction with their overall experience.

In the interface design area, researchers initially focused more on the softer aspects, i.e., the visual appearance at a static level. Gradually, effort shifted to the interaction level, and the concept of interface design has almost been substituted by that of interaction design, as can be seen simply from the growing popularity of this phrase in the titles of HCI textbooks and other reading materials. Current IR system designers and researchers tend to strive for systems that avoid confusion and errors and that require less user effort and time to accomplish information tasks. Meanwhile, attention to the “softer” presentation level, such as interface organization, information display, and overall attractiveness, is less frequently seen.

Previous work on aesthetics has found that it is an important determinant of the user's overall satisfaction with a system (Lavie & Tractinsky, 2004; Schenkman & Jönsson, 2000). Our findings are consistent with this work in demonstrating the importance of the “soft” aspect in overall interface design: even when a system does poorly in the interaction aspect, a better visual display may counterbalance this and lead to greater user satisfaction in general.

5.2 Presentation and preference

Our data showed that user ratings of Xplore's interaction features were not as bad as users' actual experience with either the search or the browse function would suggest. In their search experience, although users spent more effort and time in Xplore than in the other two systems, and encountered the most frustrating errors in completing their search tasks, they did not rate Xplore's interaction features badly. In their browse experience, users again rated Xplore and ACM as equally good even though they went through significantly more steps to finish their tasks in Xplore than in ACM.

Hartmann's (2006) argument makes good sense of these observations: users tend to neglect or discount the negative attributes of their favorite system styles. Users may have recognized Xplore as their favorite system due to its better presentation and therefore discounted their dislike of Xplore's interaction features, resulting in acceptable ratings of those features. Users' verbal protocols support this interpretation: participants favored Xplore's visual appearance. For example, when asked why she chose Xplore as her favorite system, one participant (LIS2) commented that she simply liked its appearance.

Our findings imply that a system can gain more user satisfaction by improving its presentation aspect. We think this will be especially helpful for systems that can do little to improve their interaction aspect. We therefore strongly suggest that the presentation aspect receive more attention in interface design. This is not to disregard the interaction aspect; rather, our results indicate the need to balance the two aspects of interaction and presentation, instead of swinging the pendulum entirely toward the former, as is currently the case. Simply put, we suggest that interface design go beyond interaction design alone and also attend to the easily overlooked presentation aspect.

By improving their interface presentation, information systems can be made more impressive to their users. Although our data (Tables 3, 4, 5) showed that all three DLs have room to improve, since even the best-rated DL's features received only mediocre scores (generally lower than 6 out of 7), some advice can still be offered based on users' preferences. Generally, clear arrangement of information items, representative labels, and avoidance of confusion can make a system favored by users. For the search function, the result list should be easy to read and the search result snippets should present enough information. For the browse function, document categories should be well organized, and browse result lists should be easy to read and neither too long nor too short.

6. CONCLUSIONS

In this study, we examined the effect of the interface presentation and interaction aspects of three DLs on user satisfaction with the systems. Our results showed that the two interface facets of interaction and presentation may perform differently, even oppositely, in some cases. The weakness of the interaction aspect can possibly be compensated for by an excellent interface presentation. Their counteraction indicates a need to balance these two aspects in interface design, rather than swinging the pendulum entirely toward the interaction aspect, as is currently the case. Having found the importance of interface presentation in shaping the user's overall opinion of a system, we also offered suggestions on how to make a system more impressive to users by improving its presentation aspect.

As is common in the literature, user perception of the systems in our study was measured through self-reported ratings. We also collected think-aloud data containing users' perceptions of the systems' interfaces, but these were mixed with verbal protocols on all other aspects of the experiment; further effort will be made to analyze those data. We acknowledge that the system presentation features examined in this paper were limited and not an exhaustive list. A further study will cover more interface presentation features to examine their effects on users' overall satisfaction with the system interface. We also expect that a model could be developed with parameters measuring the effects of both the presentation and interaction aspects on overall user satisfaction.
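As a speculative illustration of such a model, the sketch below fits an ordinary least squares regression predicting overall satisfaction from mean interaction and presentation ratings; all numbers are invented placeholders, and the model form is our assumption rather than a result of this study.

```python
import numpy as np

# Hypothetical per-participant mean ratings (7-point scale).
interaction = np.array([4.2, 5.1, 3.0, 5.5, 4.8])
presentation = np.array([5.8, 5.0, 2.9, 6.1, 5.2])
overall = np.array([5.5, 5.2, 2.8, 6.0, 5.1])

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones_like(interaction), interaction, presentation])
beta, *_ = np.linalg.lstsq(X, overall, rcond=None)
print("intercept, b_interaction, b_presentation =", np.round(beta, 3))
```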

Acknowledgements

This work was supported by IEEE, Inc. and IMLS grant #LG-06-07-0105-07.
