Abstract

  1. Top of page
  2. Abstract
  3. Introduction
  4. Related Research
  5. Research Study
  6. Results
  7. Discussion
  8. Conclusion
  9. Appendix A: Sample Phrases from No Results Found Pages
  10. References

When examining the ways in which individuals evaluate the credibility of information sources, it is easy to assume that these judgments are based solely on the quality of the information being presented. The study described here questions this assumption by establishing a link between the quality of the interface (rather than the content) and perceived credibility. Essentially, interface design and the “form” of information (the messenger) can negatively impact the perceived quality of the “content” of information (the message). This study tests the hypothesis that individuals searching a poorly designed digital library will perceive the contents of the collection as less authoritative and credible than a digital library with a superior interface. This focus on interface design illuminates one of the methods by which individuals evaluate new or poorly understood information: by examining the quality of its distribution mechanism. Generally speaking, this research is an indication of how individuals are prone to the carrier effect, allowing features of the messenger (the interface) to affect the perception of the message (the digital library content).


Introduction

Our desire to perfectly match search results with the semantic intention behind a search query often has an unintended consequence: sometimes an empty result set is returned to the user. This consequence emerges as an often unattractive visual: the “no results found” page returned by a search engine (see Appendix A for sample phrases taken from the “no results found” pages of current academic search tools).

By itself, a “no results found” page is not improper (although it may be poorly designed, or have any number of usability or navigational issues); however, the meaning it conveys may not be the appropriate one. “No results found” is a misleading phrase because it masquerades as a definitive answer; in reality, the collection being searched may actually contain content that matches a user's query.

In examining this problem, it helps to narrow our focus a bit; in this case, we will restrict our discussion to digital libraries and their search tools. Digital library retrieval systems face a much different and much more particular subject of focus than their larger siblings, Web search engines. The most obvious difference between the two is collection size, but there are others. In this investigation of a specific type of information retrieval system (i.e., digital library search tools), we hope to provide a narrow enough focus to develop and defend research hypotheses, with the ultimate goal of extrapolating the results back to the larger community of information retrieval systems.

Related Research

Previous studies that have examined empty result sets (see DeFelice, Kastens, Rinaldo, & Weatherley, 2006; Kan & Poo, 2005; Zhuang, Wagle, & Giles, 2005) have been more focused on the collections themselves rather than the users encountering the empty result sets. In order to probe the effects that null result sets have on digital library users, this study addresses two broad questions. First, what are the affective implications of encountering a null result set? Measures of affect are an appropriate indicator of many aspects of the user experience, and bear a relationship to many expectations for a digital library: namely, retention of users, ease-of-use, minimizing extraneous cognitive load, satisfaction, and sense of accomplishment (Dillon, 2001). This proposal is aligned with Dervin's concept of Sense-Making (Dervin & Nilan, 1986; Dervin, 1993; Tidline, 2005) and Nahl's Affective Load (James & Nahl, 1996; Nahl, 2005), in that the active process of triaging information and making sense of it is hugely impacted by the momentary affective state of an individual, which in turn is impacted by the interface that communicates the information.

Second, what impact does the digital library interface have on the interpretation of its contents? An examination of the effect of a digital library's interface on the perception of its contents may have strong repercussions for digital library designers and administrators. The general assumption is that the content and the interface to the content are separate entities. For example, a research paper can exist in two different digital libraries (one with a highly usable interface, the other a disaster of design), but it is assumed that users who find the research paper in question will treat it the same way, regardless of which library they found it in. In other words, the content of the message is self-contained, and not in any way affected by the messenger (i.e., the way in which it arrived in the hands of a reader; in this case, the digital library and its interface). This study seeks to test whether a poorly developed user interface has a strong effect on the way a user sees the content offered through that interface.

Research Study

Participants interacted with a mock digital library via a simple search engine interface (see Figures 2–4); they were under the impression that they were evaluating a new digital library and its overall design and responsiveness. They were given a topic to search for (chosen to be academic in nature, but not one likely to be intimately familiar to participants) and several comprehension questions to answer regarding that topic. The mock digital library contained a small set of documents (15 per topic) pertaining to the topic at hand; their presentation on the search results screen is the main manipulation in this study.

Participants were divided into two groups. The first (control) group received appropriate results on the search results pages, regardless of the terms they used in their search queries. The second (experimental) group always received an empty result set in response to their first search query; subsequent queries alternated between delivering the appropriate results and empty result sets. When search results were shown, all participants received the same set of results (randomly ordered within each result set).
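
The alternating manipulation described above can be sketched as a simple search wrapper. This is an illustrative reconstruction only; the class and method names are ours, not those of the study's actual system.

```python
import random

class MockSearchEngine:
    """Serves a fixed document set, optionally suppressing results to
    simulate the "no results found" manipulation (hypothetical sketch)."""

    def __init__(self, documents, condition="control"):
        self.documents = list(documents)
        self.condition = condition  # "control" or "experimental"
        self.query_count = 0

    def search(self, query):
        self.query_count += 1
        # Experimental group: the first query (and every other query
        # after that) returns an empty result set.
        if self.condition == "experimental" and self.query_count % 2 == 1:
            return []
        # Otherwise, every participant sees the same documents,
        # randomly ordered within each result set.
        results = self.documents[:]
        random.shuffle(results)
        return results
```

Under this sketch, an experimental-condition engine returns an empty list for the first and third queries, and the full (shuffled) document set for the second.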

Measures. Participants were asked to rate their perception of the quality of the search results using a Likert scale on the following items: authentic, factual, inaccurate, scholarly, trustworthy, relevant, important, pertinent, unrelated, and significant.

These descriptors were intended to represent a sense of the authoritativeness (the first five descriptors) and meaningfulness (the latter five) of the articles viewed in the digital library. These two categories, authoritativeness and meaningfulness, broadly encompass the ways in which individuals triage information: meaningfulness is more of a personal judgment (similar to cognitive authority) that is unique to individuals, while authoritativeness is seen as an outside force that lends credence to a particular piece of information. Once again, because of this difference between interior and exterior indicators of validity, it is hypothesized that a user interface problem (here represented by “no results found” pages) is more likely to affect users' sense of authoritativeness than their sense of meaningfulness.
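
Scoring the two constructs amounts to averaging the item ratings within each group. A minimal sketch (the names below are ours, not the study's), assuming the negatively worded items (“inaccurate”, “unrelated”) have already been reverse-scored into “accurate” and “related”:

```python
import numpy as np

# Item groupings as described in the text (hypothetical variable names).
AUTHORITATIVENESS_ITEMS = ["authentic", "factual", "trustworthy",
                           "scholarly", "accurate"]
MEANINGFULNESS_ITEMS = ["relevant", "important", "pertinent",
                        "related", "significant"]

def composite_scores(ratings):
    """Average Likert ratings (e.g., 1-5) within each construct.

    `ratings` maps item name -> numeric rating for one participant.
    """
    return {
        "authoritativeness": np.mean([ratings[i] for i in AUTHORITATIVENESS_ITEMS]),
        "meaningfulness": np.mean([ratings[i] for i in MEANINGFULNESS_ITEMS]),
    }
```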

Results

The main article validity items were subjected to a principal components factor analysis with oblique rotation (due to the amount of correlation between the two factors, r = .31). Two factors, Meaningfulness and Authoritativeness, were extracted; these agreed with the hypothesized structure. The Meaningfulness construct (composed of the following five items: relevant, important, pertinent, related, and significant) reveals a conception of the amount of substance, value, worth, and meaning within the articles in the search results. The Authoritativeness construct (composed of the following five items: authentic, factual, trustworthy, scholarly, and accurate) reveals a conception of the dependability, veracity, and authoritativeness within the articles in the search results. These two factors accounted for 55% of the total variance. Further, these two factors showed satisfactory reliability (Cronbach's α = .84 for Meaningfulness, α = .71 for Authoritativeness) in this sample (N = 91). Table 1 illustrates the factor loadings, and Table 2 reveals correlations between the items.

Table 1. Factor loadings (oblique rotation) of validity items
  Note: only values ⩾ .50 are in bold in the original table.

Validity item    Factor 1    Factor 2
Relevant           .78         .27
Important          .79         .36
Pertinent          .73         .31
Related            .77         .23
Significant        .82         .19
Scholarly          .42         .55
Accurate           .45         .64
Authentic          .09         .65
Factual            .21         .78
Trustworthy        .32         .75
Table 2. Correlations between validity items
  a: p < .05;  b: p < .01

Validity item       1      2      3      4      5      6      7      8      9      10
 1. Relevant        1    .48b   .49b   .55b   .55b   .24a   .30b   .21a   .09    .28b
 2. Important              1    .51b   .45b   .64b   .39b   .38b   .09    .27b   .30b
 3. Pertinent                    1    .54b   .43b   .27b   .23a   .15    .27b   .23a
 4. Related                            1    .48b   .09    .50b   .05    .20a   .15
 5. Significant                               1    .35b   .29b   .09    .05    .23a
 6. Scholarly                                        1    .13    .13    .23a   .53b
 7. Accurate                                               1    .32b   .50b   .31b
 8. Authentic                                                     1    .33b   .30b
 9. Factual                                                              1     .40b
10. Trustworthy                                                                 1
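
The Cronbach's α reliabilities reported for the two factors follow the standard formula α = (k / (k − 1)) · (1 − Σσ²_item / σ²_total). A minimal numpy sketch (not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) array.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                        # number of items in the scale
    item_vars = X.var(axis=0, ddof=1)     # sample variance of each item
    total_var = X.sum(axis=1).var(ddof=1) # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

For perfectly correlated items the formula yields α = 1.0; less consistent items pull the value down.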

The individual factor scores were computed and used as dependent variables in a MANOVA comparing participants in the results and no-results groups. Individuals who received results differed significantly from those who had trouble getting results with respect to the second factor, Authoritativeness (F(1,90) = 13.99, p < .01). Specifically, those who had no trouble getting results considered the articles in the search results to be more trustworthy, authentic, factual, scholarly, and accurate (hence the label “Authoritativeness” for that factor). To further support the hypotheses, the two conceptual factors were then used to compute mean validity scores (one representing meaningfulness, the other authoritativeness), and a one-way ANOVA comparing the two groups was run for each. The results for the Authoritativeness factor were significant (F(1,90) = 12.45, p < .01).
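
For two groups, the one-way ANOVA used here reduces to a ratio of between-group to within-group variance (equivalently, the squared two-sample t statistic). An illustrative numpy implementation, not the study's analysis code:

```python
import numpy as np

def one_way_anova_f(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    grand_mean = np.concatenate([a, b]).mean()
    # Between-group sum of squares: group sizes times squared
    # deviations of group means from the grand mean.
    ss_between = (len(a) * (a.mean() - grand_mean) ** 2
                  + len(b) * (b.mean() - grand_mean) ** 2)
    # Within-group sum of squares: deviations from each group's mean.
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df_between = 1                    # two groups
    df_within = len(a) + len(b) - 2   # e.g., df = 90 for N = 92 scores
    return (ss_between / df_between) / (ss_within / df_within)
```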

Discussion

The results of this study strongly support the hypothesis that individuals' judgments of authoritativeness are affected by frustrating aspects of the interface that conveys the information in question. We have discussed at length how interface design and the “form” of information (or, alternatively, messenger) can negatively impact the perception of the quality of the “content” of information (the message).

How can we explain this apparent interrelationship between the message and the messenger? One framework we can use is Wilson's concept of cognitive authority (Wilson, 1983), discussed more recently in reference to online materials in Rieh's work (Rieh, 2000; 2002; 2005; Rieh & Belkin, 1998). Based on Wilson's dichotomy between first-hand knowledge (e.g., direct experiences of an individual) and second-hand knowledge (e.g., things learned from other people, not based on direct experience), information found in an online environment (and thus implicitly placed there by another individual) is a form of second-hand knowledge. This implicit relationship with the other individual or organization, the source of the second-hand knowledge, speaks to the importance of authorship in an individual's judgment of quality when encountering these information sources, and also lends credence to the idea that interface problems are likely to cause a decrease in perceived authority. In a sense, the presentation of the information (the messenger) is a surrogate for the actual author, so an interface that lacks credibility implies an author that lacks credibility, and hence content (a message) that is questionable. Further, because of the lack of gatekeepers in online environments, an individual's sense of administrative authority is diminished, in effect placing a greater emphasis on the actual presentation of information as a cue of its credibility and authority.

Another framework that provides a helpful perspective is Fogg's concept of computers as persuasive actors (Fogg, 1997; 1998; 2003). The idea that computers are persuasive, and thus potentially authoritative, supports the notion that interface frustrations can cause a loss of faith or trust in the “agent” communicating information, which in turn can lead to an attenuation of the perception of quality of the search results. Because the computer (and by extension the interface) is considered an actor that exhibits qualities that represent believability, any affront to this believability affects the message being conveyed. In other words, in much the same way that we tend to distrust information coming from a questionable source (like a tabloid or a person with signs of mental illness), a computer that exhibits a questionable countenance is likely to be treated in the same way.

Conclusion

“No results found” is a misleading phrase because it masquerades as a definitive answer. The study discussed here was an attempt to understand the effect of null result sets on search behavior and on the perception of contents in digital libraries. In particular, this research supports the hypothesis that interface and design flaws have an effect on the perceived authority (here defined in terms of being authentic, factual, trustworthy, scholarly, and accurate) of the information being communicated by the interface in question. At a high level, this research acts as an indication of how individuals are prone to interpret the content of a message in relation to its messenger, in this case allowing features of the messenger (the interface) to negatively affect the reception of the message (the digital library content).


Appendix A: Sample Phrases from No Results Found Pages

(Note the difference in punctuation, capitalization, and phrase vs. sentence structure.)

  • “No results found”
  • “No results returned for your criteria.”
  • “No results were returned.”
  • “Nothing Found”
  • “Sorry, no documents were found matching search terms.”
  • “There are 0 results”
  • “No Results Found.”
  • “0 articles with title/keywords/abstract containing *”
  • “Your search matched 0 documents.”
  • “There are no products that match your search”
  • “No videos were found to match your query.”
  • “No results were found.”
  • “Sorry, your request returned no records.”
  • “Results: Not Found”
  • “No documents were found for your search.”
  • “No Results matching your search term(s) were found.”

References

  • Amento, B., Terveen, L., & Hill, W. (2000). Does “authority” mean quality? Predicting expert quality ratings of Web documents. Paper presented at the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece.
  • Chesney, T. (2006). An empirical examination of Wikipedia's credibility. First Monday, 11 (11).
  • DeFelice, B., Kastens, K. A., Rinaldo, C., & Weatherley, J. (2006). Insights into collections gaps through examination of null result searches in DLESE. Paper presented at the 6th ACM/IEEE-CS Joint Conference on Digital Libraries. Retrieved November 2, 2007, from http://doi.acm.org/10.1145/1141753.1141823.
  • Dervin, B., & Nilan, M. (1986). Information needs and uses. Annual Review of Information Science and Technology, 21, 19–38.
  • Dervin, B. (1993). Verbing communication: Mandate for disciplinary invention. Journal of Communication, 43 (3).
  • Dillon, A. (2001). Beyond usability: Process, outcome, and affect in HCI. Canadian Journal of Information Science, 26 (4), 57–69.
  • Fogg, B. J. (1997). Charismatic computers: Creating more likable and persuasive interactive technologies by leveraging principles from social psychology. Unpublished dissertation, Stanford University, Stanford, CA.
  • Fogg, B. J. (1998). Persuasive computers: Perspectives and research directions. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 225–232. Los Angeles: ACM Press/Addison-Wesley Publishing Co.
  • Fogg, B. J. (2003). Prominence-Interpretation theory: Explaining how people assess credibility online. Paper presented at the ACM CHI 2003 Conference on Human Factors in Computing Systems.
  • Fritch, J. W., & Cromwell, R. L. (2001). Evaluating Internet resources: Identity, affiliation, and cognitive authority in a networked world. Journal of the American Society for Information Science and Technology, 52, 499–507.
  • James, L., & Nahl, D. (1996). Achieving focus, engagement, and acceptance: Three phases of adapting to Internet use. Electronic Journal of Virtual Culture, 4 (1).
  • Kan, M.-Y., & Poo, D. C. C. (2005). Detecting and supporting known item queries in online public access catalogs. Paper presented at the 5th ACM/IEEE-CS Joint Conference on Digital Libraries.
  • McKenzie, P. J. (2003). Justifying cognitive authority decisions: Discursive strategies of information seekers. The Library Quarterly, 73 (3), 261–288.
  • Nahl, D. (2005). Affective load. In K. E. Fisher, S. Erdelez, & L. E. F. McKechnie (Eds.), Theories of Information Behavior (pp. 39–43). Medford, New Jersey: Information Today, Inc.
  • Rieh, S. Y. (2000). Information quality and cognitive authority in the World Wide Web. Unpublished dissertation, Rutgers University, New Brunswick, NJ.
  • Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the Web. Journal of the American Society for Information Science and Technology, 53, 145–161.
  • Rieh, S. Y. (2005). Cognitive authority. In K. E. Fisher, S. Erdelez, & L. E. F. McKechnie (Eds.), Theories of Information Behavior (pp. 83–87). Medford, New Jersey: Information Today, Inc.
  • Rieh, S. Y., & Belkin, N. J. (1998). Understanding judgment of information quality and cognitive authority in the WWW. Paper presented at the ASIS Annual Meeting, Pittsburgh, PA.
  • Schmalbeck, L., Stuart-Moore, J., & Evans, M. (2006). Adapting peer verification, validation and accreditation processes for digital libraries. Paper presented at the 6th ACM/IEEE-CS Joint Conference on Digital Libraries.
  • Tidline, T. J. (2005). Dervin's Sense-Making. In K. E. Fisher, S. Erdelez, & L. E. F. McKechnie (Eds.), Theories of Information Behavior (pp. 113–117). Medford, New Jersey: Information Today, Inc.
  • Wilson, P. (1983). Second-hand knowledge: An inquiry into cognitive authority. Westport, CT: Greenwood Press.
  • Zhuang, Z., Wagle, R., & Giles, C. L. (2005). What's there and what's not?: Focused crawling for missing documents in digital libraries. Paper presented at the 5th ACM/IEEE-CS Joint Conference on Digital Libraries.