Library portal images that positively influence their users' perception of the portal

Authors


Abstract

This paper provides an example of how visual information – the images used on academic library Web portals (ALWPs) – transforms users' perceptions of, and preference for, the portal. The information derived from this research may, in turn, transform the manner in which visual information is presented in ALWPs. The study reported here compared the effect that high-image-pertinent (HIP) and low-image-pertinent (LIP) academic library portals have on users' preference for one portal over the other.

One hundred undergraduate students searched for the answers to two ten-question information retrieval exercises using matched-pairs of HIP and LIP academic library portals. The exercises were constructed of questions similar to those asked at an academic library's reference desk.

Data collected and statistically analyzed included the scores from the information retrieval exercises, the time taken to complete the exercises, the mouse-clicks used to complete the exercises, and the users' stated portal preference. The HIP portals outperformed, and were preferred to, the LIP portals on three of the four measures of performance and preference.

Introduction

Studies of Web-based information systems have considered issues of interface design (Hersh, Pentecost & Hickam, 1996; Cohen & Still, 1999; Klein, 2001), functionality and usability (Battleson, Booth & Weintrop, 2001; Cockrell & Jane, 2002; Shropshire, 2003), and aesthetics (Battleson, Booth & Weintrop, 2001). But the issue of Web portal images has remained, for the most part, unexamined.

The two classes of ALWPs used here, HIP and LIP, were arrived at after a close examination of 50 ALWPs randomly drawn from all academic library portals in the USA (found at http://sunsite.berkeley.edu/Libweb/). The 50 portals were matched by their parent institutions' Carnegie Classification. They were also matched by portal type using McDonald's (2004) classification scheme. (See Appendix #1.)

McDonald identified his first category of Web portal as a framework to present “customizable e-resource[s]”; his second category of portal as an “integrated Web service portal,” linked with other campus information management and distribution systems such as BlackBoard and WebCT; and his third portal type as an enterprise or “metasearch system” (2004, 10).

What resulted was a set of three Carnegie/McDonald matched pairs of HIP/LIP ALWPs. (See Table 1 below.)

Table 1. List of Match-Paired Academic Library Portals
Low Pertinence Portals | Carnegie Class | High Pertinence Portals
West Valley College (Type I) | Associate College | Moraine Valley Community College (Type I)
Skidmore College (Type I) | Baccalaureate Lib Arts | Randolph-Macon College (Type I)
Rockhurst University (Type I) | Masters College & U. | Northern Michigan University (Type I)

The Concept of Image Pertinence

This research introduces the concept of “image pertinence” (Hildreth, 2004). Key to this concept is the observation that some library portals contain images that are more pertinent to their textual content than the images found in other portals. This conceptualization is analogous to the definition of the term “pertinent” provided by the Oxford English Dictionary, “Pertaining or relating to the matter in hand…” (1989, 614).

Take, for example, an ALWP's "Welcome" page with textual information about general library resources that also displays an image of what appears to be the main entrance to the library building and an image of library book stacks in the foreground, with a student working with pen and paper at a table in the background. This combination of text and images represents a strong conceptual linkage between the two. (See Figure 1 below for a screen capture example of a HIP portal page.)

Figure 1. Example of a HIP portal page (screen capture not reproduced).

At the opposite extreme is an ALWP with an opening page that contains text about general library information and also displays a header-banner of what appears to be an aerial view of the college, an image of what appears to be the library exterior entrance, an image of what appears to be a group of cheerleaders, and an image of a group of students to "meet." This combination of text and images contains little conceptual linkage. (See Figure 2 below for a screen capture example of a LIP portal page.)

Figure 2. Example of a LIP portal page (screen capture not reproduced).

For the purposes of this research, portals with a preponderance of pertinent images are referred to as HIP portals, and portals whose images do not pertain to their textual content are referred to as LIP portals.

It could be suggested that the concepts of image pertinence and document relevance are one and the same. Image pertinence may very well be one of the “many kinds of relevance” to which Mizzaro (1997, 811) referred.

Document “relevance is subjective: Different judges may express difference relevance judgments.” (Missaro, 1997, 815) It is very likely true that different judges of image pertinence can be expected to judge an image's pertinence differently.

Froehlich wrote about “how dynamic information-seeking behavior is” (1994, 126). It follows that relevance is dynamic as well and is subject to change over time. On the other hand, this author suggests that image pertinence is static and remains the same over time.

The precise relationship between document relevance and image pertinence is not yet known. Further study of image pertinence is required to identify more accurately its similarities to, and differences from, document relevance.

Literature Review

Of particular importance to this research are the following articles and studies: Soergel's (1976) suggestion that the retrieval task performance of an Information Retrieval System (IRS) is the best measurement of the system's performance; Jacobson and Fusani's (1992) recognition of the effect users' knowledge and experience have upon their performance of information retrieval tasks; Smithson's (1994) user-centered interpretive approach to the evaluation of IRS; Spink's (1996) problem-based approach to the study of end-user information seeking; Battleson, Booth and Weintrop's (2001) use of student subjects and real reference desk questions in their study of Web-based IRS performance; and Shropshire's (2003) method of ranking ALWPs by their Carnegie Classification.

Also of importance to this research is McGillis and Toms's (2001) study, which measured IRS success by the subjects' completion of information retrieval tasks, by the time it took the subjects to complete those tasks, by a user satisfaction rating scale, and by software that recorded the subjects' movement about the test site.

Research Design

Undergraduate students were used as subjects in this study. Because the study used human subjects, approvals were obtained from the Institutional Review Boards of Long Island University and the host institution (St. Francis College). Signed LIU consent forms were obtained from all the subjects. A written script was used to present the subjects with exactly the same information and instructions in exactly the same manner.

The students conducted information retrieval exercises (Cockrell & Jane, 2002) consisting of questions similar to those that had been asked and recorded at the reference desk of the host site's academic library. The subjects used one portal of each image-pertinence type to find the answers to the questions in the two information retrieval exercises. The subjects then performed usability and performance evaluations of the portals.

The Hypotheses

Four hypotheses were considered:

  • Hypothesis #1: The length of time taken to complete the information retrieval exercises; an indicator of information retrieval (IR) performance. Less time to complete the exercises, when combined with the other hypotheses listed below, would indicate a better performing portal.

  • Hypothesis #2: The number of mouse-clicks taken to complete the information retrieval exercises; an indicator of IR performance. Fewer mouse-clicks would indicate a better performing portal.

  • Hypothesis #3: The information retrieval exercise scores; an indicator of IR performance. Higher scores would indicate a better performing portal.

  • Hypothesis #4: Users' portal preference; a statement of portal preference from the user.

Selection and Pair-Matching of the Portals Used

A simple random sample of 50 portals was drawn from the list of all 2,174 ALWPs in the USA at the time the study was conducted, provided by the University of California at Berkeley (located at http://sunsite.berkeley.edu/Libweb/). The paired portals were matched by their Carnegie Classification (Shropshire, 2003). The portals were also matched by portal type, as defined by McDonald (2004).

Each of the 50 portals in the sample was reviewed by the author and assigned a score for image pertinence. Each image was rated for pertinence on a scale from "1" (for LIP) to "3" (for HIP).

The total pertinence of the images found in the portal was then divided by the total number of images found on the portal. This resulting number was assigned to the library portal as its image pertinence score. (See Appendix 1 for a list of the 50 portals and their image pertinence scores.)
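A minimal sketch of this scoring procedure follows. The per-image ratings shown are hypothetical; the actual scores assigned to the 50 sampled portals appear in Appendix 1.

```python
def image_pertinence_score(image_ratings):
    """Portal-level image pertinence: the mean of the per-image ratings,
    where each image is rated from 1 (low pertinence) to 3 (high pertinence)."""
    return sum(image_ratings) / len(image_ratings)

# Hypothetical portal with four images rated 3, 3, 2 and 1
print(image_pertinence_score([3, 3, 2, 1]))  # 2.25
```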

The Information Retrieval Exercises

Two sets of information retrieval tasks were required, one for each of the two portals a subject would search. A pool of questions was prepared using a log of questions asked by students at the reference desk of the host college's library during a two-month period in the fall term of 2005. The exercises were modeled after the "browsing and searching" information seeking behavior described by Choo, Detlor and Turnbull (1999, 5). The questions were pre-tested on the portals used in the study by the author (Hersh, Pentecost and Hickam, 1996). These information retrieval exercises were presented to the subjects on paper.

The Questionnaire

A Web-based questionnaire software package (Perseus SurveySolutions 6) was used to create the study's questionnaires, gather the responses and perform preliminary data analysis. The questionnaire included ten demographic questions, ten questions about the search experience and the subjects' portal preference, and a final open-ended fill-in-the-blank question soliciting their comments about "library Web sites."

The questionnaires were pre-tested on four individuals who were representative of the student subjects who were to participate in the study to ensure that the instruments were not confusing, and that the instruments elicited the desired response from the subjects. The information retrieval exercises were similarly pre-tested.

The Sample

The sample size for the student subjects was set at 100, given the following calculations: The target population of this research is users of the library. The student population of the host institution for the study, St. Francis College, is 2000. Pareto's 20/80 rule would indicate that 400 (20% of 2000) of those students are users of the library 80% of the time. With an error rate of 8.5%, a 95% confidence rating would require a sample of 100. (This calculation was performed using the Sample Size and Confidence Interval Calculator found at http://survey.pearsonncs.com/sample-calc.htm.)
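The same figure can be reproduced with Cochran's sample-size formula and a finite population correction. This sketch is an assumption about the calculator's underlying method, offered only to show that the reported numbers are internally consistent.

```python
import math

def required_sample(population, margin=0.085, z=1.96, p=0.5):
    """Sample size at 95% confidence (z = 1.96) for the stated margin of error."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2            # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # finite population correction

print(required_sample(400))  # 100, matching the study's target sample size
```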

A sample of 100 subjects is the optimum size for an exploratory study such as this. Any patterns of user preference for portals should begin to appear within the data created from this sample size. Because this was a preliminary study, and a pilot study had not been previously conducted, there was not an established level of standard deviation to use to apply a more traditional formula for the determination of sample size such as Sproull's (1995).

Announcements were posted on bulletin boards throughout the host site. Faculty were asked to announce the study to their classes. A $100 American Express gift card raffle was offered as an inducement for participation.

The sampling method used for the student subjects was simple convenience. Whoever responded to the call for volunteers was allowed to participate.

The experimental setting was a computer classroom equipped with 12 computers, Internet access and browser software. The participants viewed the questionnaire Web page containing the links to the two ALWPs they were to search, one HIP and one LIP, and the portal evaluation questions about the search experience. Screen capture software (Camtasia Studio V. 3) was used to record search time (see Hypothesis #1 above) and mouse-clicks (see Hypothesis #2 above).

The information retrieval exercises were scored for the number of correct answers that had been located (see Hypothesis #3 above). The subjects were asked to compare the two library portals they had used to search for information and state their preference. (See Hypothesis #4 above.)

Findings

Hypothesis #1: Time to Complete the Information Retrieval Exercises

Analysis of the data indicated that the null hypothesis may be rejected. (See Table 2 below.) There was a significant difference (2-tailed p value at .040) between the HIP portal completion mean times and the LIP portal completion mean times. Students took significantly less time to complete the search exercises using the HIP portals.

Table 2. Hypothesis #1, Comparison of the Means of the Time to Complete the Information Exercises
Time | Levene's Test F | Levene's Test Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
Equal variances assumed | 6.111 | .014 | 2.069 | 161 | .040 | 168.219 | 81.285 | 7.696 | 328.742
Equal variances not assumed | | | 2.077 | 118.476 | .040 | 168.219 | 80.980 | 7.863 | 328.575
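An equivalent comparison could be run in Python with SciPy, mirroring the Levene's test and the two t-test rows of Table 2. The completion times below are illustrative placeholders only; the study's raw per-subject data are not reproduced here.

```python
from scipy import stats

# Illustrative completion times in seconds (placeholders, not the study's data)
hip_times = [610, 545, 720, 480, 655, 590]
lip_times = [760, 690, 845, 705, 930, 780]

f_levene, p_levene = stats.levene(hip_times, lip_times)                    # equality of variances
t_student, p_student = stats.ttest_ind(hip_times, lip_times)               # equal variances assumed
t_welch, p_welch = stats.ttest_ind(hip_times, lip_times, equal_var=False)  # equal variances not assumed

print(f_levene, p_levene, t_student, p_student, t_welch, p_welch)
```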

Hypothesis #2: Mouse-Clicks to Complete the Information Retrieval Exercises

Analysis of the data indicated that the null hypothesis may be rejected. (See Table 3 below.) There was a significant difference between the means (2-tailed p value at .001) of the HIP portal mouse-clicks to task completion and the LIP portal mouse-clicks to task completion. Students used significantly fewer mouse-clicks to complete the search exercises using the HIP portals.

Table 3. Hypothesis #2, Comparison of the Means of the Number of Mouse-clicks (Independent Samples Test)
Clicks | Levene's Test F | Levene's Test Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
Equal variances assumed | 1.108 | .294 | 3.490 | 158 | .001 | 14.200 | 4.068 | 6.165 | 22.235
Equal variances not assumed | | | 3.490 | 157.937 | .001 | 14.200 | 4.068 | 6.165 | 22.235

Hypothesis #3: The Information Retrieval Exercise Scores

The data analysis did not support rejection of the null hypothesis at a significance level of ≤ 0.05. (See Tables 4 and 5 below.) The research hypothesis was not supported. While there was an arithmetic difference between the means (HIP = 7.4885 and LIP = 7.1124), there was no statistically significant difference between the means (2-tailed p value at .244, with equal variances assumed) of the search exercise scores. The two systems of information retrieval performed the information tasks equally well (Soergel, 1976).

Table 4. Hypothesis #3, Comparison of the Means of the Information Search Exercise Scores
Pertinence | N | Mean | Std. Deviation | Std. Error Mean
Low | 89 | 7.1124 | 2.23448 | .23685
High | 87 | 7.4885 | 2.02453 | .21705
Table 5. Hypothesis #3, Comparison of the Means of the Information Search Exercise Scores (Independent Samples Test)
Score | Levene's Test F | Levene's Test Sig. | t | df | Sig. (2-tailed) | Mean Difference | Std. Error Difference | 95% CI Lower | 95% CI Upper
Equal variances assumed | .252 | .616 | 1.170 | 174 | .244 | −.37615 | .32163 | −1.01094 | .25865
Equal variances not assumed | | | 1.171 | 173.011 | .243 | −.37615 | .32127 | −1.01025 | .25796
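The "equal variances not assumed" row of Table 5 can be reproduced directly from the group statistics in Table 4; this small check is offered only to show that the reported figures are internally consistent.

```python
import math

# Group statistics from Table 4
n_low, mean_low, sd_low = 89, 7.1124, 2.23448
n_high, mean_high, sd_high = 87, 7.4885, 2.02453

se_diff = math.sqrt(sd_low ** 2 / n_low + sd_high ** 2 / n_high)  # std. error of the difference
t = (mean_high - mean_low) / se_diff                               # Welch t statistic

# Prints 0.32127 and 1.171, matching Table 5 (the sign of t depends on the
# direction of subtraction; Table 5 reports the mean difference as Low minus High)
print(round(se_diff, 5), round(t, 3))
```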

Hypothesis #4: Users' Portal Preference

Analysis of the data indicated that the null hypothesis may be rejected. (See Tables 6 and 7 below.) When the students were asked which portal they would prefer to use if they were to perform another information search exercise, 58% indicated they would prefer the HIP portal, while 34% indicated they would prefer the LIP portal. The subjects expressed a significant preference for the HIP portals.

Table 6. Hypothesis #4, Users' Portal Preference Frequency Distribution
Preference | Valid Percent | Cumulative Percent
Low Image-Pertinent | 34.1 | 34.1
High Image-Pertinent | 58.2 | 92.3
I'd prefer to use neither | 6.6 | 98.9
I don't know/I'm not sure | 1.1 | 100
Total | 100 |
Table 7. Bar Graph for Hypothesis #4, Users' Portal Preference (bar graph image not reproduced)

The Chi-Square Test results for these data (Significance = 0) indicate that the observed distribution of preferences differed significantly from the distribution that would be expected by chance, consistent with the experimental hypothesis. The subjects expressed a preference for the HIP portals. (See Table 8 below.)

Table 8. Hypothesis #4, Users' Portal Preference Statistics (chi-square statistics image not reproduced)

[0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 22.8.]
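A sketch of the chi-square goodness-of-fit test follows. The counts are reconstructed from the valid percentages in Table 6 together with the minimum expected cell frequency of 22.8 (which implies roughly 91 valid responses); they are an inference for illustration, not values reported directly by the study.

```python
from scipy import stats

# Reconstructed counts for LIP, HIP, "neither" and "don't know" (inferred, not reported directly)
observed = [31, 53, 6, 1]
expected = [sum(observed) / 4] * 4  # uniform distribution expected under the null hypothesis

chi2, p = stats.chisquare(observed, f_exp=expected)
print(chi2, p)  # chi-square is roughly 76, p effectively zero, consistent with the reported significance
```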

Qualitative Data Analysis

The final question of the survey offered an open-ended fill-in-the-blank question: “…from your perspective as a student, do you have any comments or suggestions about library Web sites?”

The analysis method used was matrix-based. The comments were assigned a positive or negative score ("p" or "n").

The topic analysis of the student responses found four major themes. The four themes were:

  • 1) Slow/fast response time
  • 2) Difficult/Easy to use (confusing; scary; readability issues; fewer clicks to search option)
  • 3) Overall design issues (cluttered; reading capabilities; frames; flat structure; consistent; etc.)
  • 4) Content (information access)
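A minimal sketch of this matrix-based coding follows, using invented comments and keyword lists purely for illustration; none of the text or keywords below come from the study's data.

```python
# Illustrative comments, each with a positive ("p") or negative ("n") score
comments = [
    ("The site was really easy to use", "p"),
    ("Too many clicks to reach the search option", "n"),
    ("Pages loaded slowly", "n"),
]

# Hypothetical keyword lists standing in for the four themes identified above
themes = {
    "response time": ["slow", "fast", "loaded"],
    "ease of use": ["easy", "confusing", "clicks"],
    "design": ["cluttered", "frames", "consistent"],
    "content": ["information", "access"],
}

# Build a theme-by-valence matrix counting positive and negative comments per theme
matrix = {theme: {"p": 0, "n": 0} for theme in themes}
for text, score in comments:
    for theme, keywords in themes.items():
        if any(word in text.lower() for word in keywords):
            matrix[theme][score] += 1

print(matrix)
```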

There was no consensus among the students about which of the portals were superior. One individual would give a portal high praise for a specific attribute, while another would criticize the same portal for a different, equally important, attribute.

The analysis of the data revealed that the students had opinions on the usability and design of the portals. An overriding and often repeated theme was "ease of use"; variations of this theme occurred more than 18 times in the students' comments.

Discussion

The analysis of the data found significant support for three of the four experimental hypotheses. HIP portals were superior to LIP portals in the amount of time that it took the students to complete the search exercises, the number of mouse-clicks that it took the students to complete the search exercises and the students' stated portal preference.

The results indicate that the images contained in HIP and LIP portals, when used for student-driven information retrieval in an academic setting, exert an influence upon information retrieval performance and the users' perception of that performance.

It could be suggested that the significant performance differences noted between the HIP and the LIP portals are simply a reflection of the overall superior design of the HIP portals. However, if that were the case, one would expect a significant difference to have been noted in the information retrieval exercise scores as well. But the difference in HIP and LIP portal information retrieval exercise scores was not statistically significant.

The exact root causes of the differences between HIP and LIP portals noted in the statistical analysis of this study's data are yet to be determined. It is expected that those insights will come with additional research to determine more precisely the relationship between the images contained in ALWPs and their usability, both actual and perceived.

Conclusion

Do images exert an effect on users? Based on the results presented here, it appears that they do. What is the exact nature of that impact? More research is needed to determine this.

This research provides the first step in an exploration to determine the impact images have upon the ALWP use experience. This research has demonstrated that HIP portals perform better on 3 of 4 measurements and that the users accurately perceive this superior performance.

The designers of ALWPs should ensure that the images they use on their Web pages are pertinent to the text.

Acknowledgements

The research reported here is part of the author's dissertation. The author would like to thank the members of his committee: Dr. Charles Hildreth (chair), Dr. Chuck Broadbent, Dr. Michael Koenig, Dr. Mark Stover and Dr. Mary Westerman.

Appendix

[Insert Appendix #1 Here]
