Abstract

Social feedback in the form of audience ratings, community tags, recommendations, and text comments is becoming increasingly commonplace on the Web. Prior research has uncovered a number of Web site features that can impact a site's perceived credibility. However, to date research has not investigated whether social feedback on a Web page can influence the perceived credibility of the information on the page or increase or decrease the likelihood that an individual will subsequently use the information contained within it. This paper describes a study investigating whether one type of social feedback, audience ratings, can influence perceptions of credibility. The results of the study suggest that the type of audience feedback (positive, mixed, or negative) can influence perceptions of credibility, while the size of the audience giving feedback does not. Audience feedback also does not appear to increase the likelihood that the information on a web page will be used.


INTRODUCTION

While the abundance of information available through the Internet has been beneficial in numerous ways, individuals are now increasingly left to determine for themselves whether the information they encounter is trustworthy and credible or not (Drapeau, 2009; Metzger, Flanagin, Eyal, Lemus, & McCann, 2003; Metzger, 2005; Robins & Holmes, 2008). In the digital world, few web sites and online resources, outside of those that duplicate information found in traditional media, tackle credibility issues through rigorous fact-checking and peer review of to-be-published documents. Nonetheless, the appeal of instant information access and near ubiquitous availability encourages individuals to turn to the Internet more often than other information sources.

Information credibility has been conceptualized as trust in a source, as whether or not the information is believable, and as whether the information is true (Corritore, Kracher, & Wiedenbeck, 2003; Fink-Shamit & Bar-Ilan, 2008; Metzger, 2007; Walther, Wang, & Loh, 2004). The most widely used definition equates credibility with believability, that is, whether the individual demonstrates belief in the information (Dutta-Bergman, 2003; Fogg & Tseng, 1999; Hilligoss & Rieh, 2008; Metzger, 2007).

Fogg and Tseng (1999) have outlined four types of credibility that can be associated with a piece of information: (1) presumed credibility, (2) reputed credibility, (3) surface credibility, and (4) experienced credibility. Presumed credibility is the credibility given to information because of the referring source (e.g., a trusted friend recommends a web site to you). Reputed credibility is credibility given to a source via third-party endorsements, or through the credentials of the referring source (e.g., a professor recommends a web site to you). Surface credibility relates to superficial characteristics of the source (e.g., the design of a web site enhances your perception of the trustworthiness and/or expertise of the entity behind the web site), and experienced credibility is contingent upon and enhanced by interactions with the source and the outcome of those interactions (e.g., regularly conducting transactions with Amazon.com).

Experimental research is only now beginning to address the question of how users of the Internet judge the quality of the information sources they find. Thus far, research has shown that users appear to make limited initial judgments about a web site's quality based on aesthetic aspects and intuitive reactions to the site's overall visual design (Lindgaard, Fernandes, Dudek, & Brown, 2006; Metzger, 2005). In addition, assessments about an information source's credibility appear to be influenced by several factors including the user's prior knowledge of the domain of the problem, their prior experience with using the Internet as an information source, and information about the source's author and their perceived expertise in the problem area (Gugerty, Billman, Pirolli, & Elliot, 2007; Holscher & Strube, 2000; Jenkins, Corritore, & Wiedenbeck, 2003).

Several characteristics of the information document itself also appear to influence credibility judgments. Existing findings from the literature on what can contribute to a perception of credibility can be characterized within the traditional source-medium-message-receiver paradigm of communication (McLuhan, 1964; Metzger et al., 2003). Some of the source characteristics that can influence perceptions of credibility include author expertise or credentials, the domain of the web site (e.g., .edu instead of .com), and the search result ranking of the site (Alexander & Tate, 1999; Fink-Shamit & Bar-Ilan, 2008; Hovland & Weiss, 1951). Message characteristics include the writing style used on the web page and its perceived accuracy and objectivity (Fink-Shamit & Bar-Ilan, 2008; Metzger, 2005, 2007). Medium characteristics such as the physical design of the web site and whether or not the content must be paid for have been shown to impact perceived credibility (Fogg et al., 2001; Metzger, 2005). Finally, receiver characteristics such as time constraints and the importance of the information have been shown to change how the credibility process unfolds (Agosto, 2002; Chaiken, 1980).

Since credibility implies a willingness to believe in the information and can inspire trust and subsequent use of that information, it acts as a crucial component of the information search process. In today's complex information environment, trust can help to reduce complexity and uncertainty by acting to facilitate choice as a social decision heuristic (Lee & See, 2004). When faced with a complex set of relevant or partially relevant sources, those deemed most credible, and therefore most trustworthy, will likely receive greater weight within a decision making context (Anderson, 1981).

The rise of collaboratively created Internet content written and edited by the online community, like that found in Wikipedia© and YouTube©, means that users cannot rely solely on authorship judgments or even source judgments as a basis for establishing information credibility (Giles, 2005; Miller, 2005). Instead, users must bring their own knowledge and experience to bear in evaluating information quality or, in the absence of prior knowledge, rely on others' judgments to determine whom to trust.

Unlike more static forms of communication, Web 2.0 technologies allow for easier audience response to Web-based content, and this additional information may significantly affect the credibility determination process (Drapeau, 2009; O'Reilly, 2005). Examples of dynamic Web 2.0 technologies include Web-based applications, social networking web sites, wikis, blogs, and user-generated ratings, comments, and tags on Web articles. An anonymously and publicly edited online work, such as an article in Wikipedia, relies on the “wisdom of the crowds” to produce quality content, but leaves it up to individual users whether or not to trust that particular crowd (Surowiecki, 2005).

Often, the more consensus a group of individuals gives to a particular piece of information, the more it is accepted as trustworthy and correct (Burns, 2008; Mackay, 1841). Sharing of information can build consensus and acceptance of that information as fact, whether it is true or not (Fogg & Tseng, 1999; Wang, Walther, Pingree, & Hawkins, 2008). The rise of online social networking and collaborative editing tools makes it especially easy for the collective Web crowd to share and agree upon information. Rating tools, comment boxes, collaborative linking, and community-edited information resources allow individual users to provide feedback on and edit information available on the Web. Social navigation tools, recommender systems, reputation systems, and rating systems are all forms of social feedback on Web information sources (Dieberger, Dourish, Hook, Resnick, & Wexelblat, 2000; Hitlin & Rainie, 2004; Metzger, 2005; Resnick, Zeckhauser, Friedman, & Kuwabara, 2000; Shardanand & Maes, 1995). Audience ratings of online information are now common on most blogs, social media tools, and major network news sites. The popularity of these tools means that it is becoming increasingly unlikely to encounter a piece of information on the Web without a surrounding social context (Grifantini, 2009; Kennedy et al., 2007).

While in the early days of the Web most users searched for information in isolation, it is now more common for Web users to encounter information in a context similar to being a member of an audience. Not all members of the virtual audience are equally active, however. Many Internet users who encounter information do not give feedback (Grifantini, 2009; Kelly, 2005; Resnick & Varian, 1997; Resnick et al., 2000). Users who are more likely to have rated something online include those who are experienced Internet users, those with high-speed or broadband Internet connections, males, and those who are more educated or younger (Hitlin & Rainie, 2004). It is interesting to note that those who have given feedback on Internet content are more likely to be skeptical of information found online and more likely to engage in in-depth assessments of credibility (Hitlin & Rainie, 2004).

Rapid audience feedback is new to the Internet medium and is likely to influence credibility assessments in a number of ways. In other mediated contexts, past research has revealed that both audio and videotaped audience reactions can affect individual perceptions of a speaker's message (Duck & Baggaley, 1975; Hovland, Janis, & Kelley, 1953; Hovland & Weiss, 1951; Kelley & Woodruff, 1954; Landy, 1972). A similar process may occur in the online information medium (Rafaeli & Noy, 2002). Social consensus can serve as a strong cue for acceptance of a piece of information, and hence acceptance of the credibility of that information (Tormala & Petty, 2004). Audience feedback on information can serve to establish a sense of social consensus for acceptance of information (Chaiken, 1980; Hovland & Weiss, 1951). Social consensus can reduce both the uncertainty related to receiver thoughts generated in response to a message and the perceived risk associated with accepting a particular piece of information as credible (Brinol & Petty, 2009; Festinger, 1954; Kim & Srivastava, 2007).

Social consensus and informational social influence were demonstrated in the well-known Asch experiments, in which confederates gave incorrect answers to a task that asked them to state aloud which line on a piece of paper was longest (Asch, 1951, 1955). Participants in the experiment were likely to agree with the confederates, even when the answer the confederates gave was obviously incorrect (Baron, Vandello, & Brunsman, 1996). In an online environment an effect similar to that found in the Asch studies may occur, with users reacting to an article either positively or negatively simply because the rest of the audience has demonstrated either a positive or negative attitude toward the information. The crowd's opinion may either validate a user's belief about the credibility of information if it is similar or create cognitive dissonance if the crowd's opinion differs. As in more recent versions of the Asch experiment, the extent to which an individual is willing to accept and internalize the crowd's opinion is likely to be moderated by task difficulty, motivation, and incentives for accuracy (Baron, Vandello, & Brunsman, 1996).

While the effects of source, message, and receiver characteristics on the establishment of information credibility have been investigated in past research, audience effects are less well studied, and audience effects on the acceptance of Internet-based information are nearly absent from the literature. Rapid audience feedback on Web documents was not possible, or was very difficult to implement, until approximately 2004. Our understanding of audience effects on credibility assessments and the acceptance of online information is therefore limited.

An understanding of how feedback from a virtual audience impacts the credibility of online information is crucial to our understanding of how information is assessed on the Web. In the current Web environment, users are more likely than not to be given audience feedback on each piece of information they encounter. It is not yet known, however, whether this feedback significantly alters the process users follow when determining the credibility of online information. This study investigates whether audience feedback affects assessments of credibility through the following hypotheses:

Hypothesis 1: Audience feedback will affect credibility appraisals of online information. Documents that have positive ratings from the online community will be viewed as more credible than documents that have either mixed, negative, or no rating from the online community.

Hypothesis 2: The effect of audience opinion will be influenced by the amount of social feedback given on the web page, with feedback from a larger crowd having a larger effect on credibility perceptions than feedback from a smaller crowd.

Hypothesis 3: Audience feedback will influence decisions to use information. Treatments described in documents that have positive feedback will be more likely to be chosen as a treatment of choice than treatments described in documents that have negative, mixed, or no feedback.

MATERIALS

A health topic was selected for this study as the Web has become a popular resource for obtaining disease and treatment information, with 61% of American adults as of June 2009 having searched for health information online (Fox, 2006; Fox & Jones, 2009; Hart, Henwood, & Wyatt, 2004). In addition, research has suggested that social feedback may be especially impactful for information concerning health topics (Hardey, 1999; Lau & Coiera, 2008; Sarasohn-Kahn, 2008; Wang et al., 2008). Lyme disease was chosen as the health topic for the current study due to its relevance to a wide range of age groups and its likely degree of unfamiliarity among study participants.

The experimental web pages were captured from web sites on the first three pages of Google search results for the query “treatments for Lyme disease”. Prior research on online health information seeking has shown that web sites from the first three pages of search results form the majority of a searcher's information set (Eysenbach & Kohler, 2002; Peterson, Aslani, & Williams, 2003). The web pages selected all dealt with alternative or non-standard treatments for Lyme disease, including Cat's Claw, Magnesium, Salt and Vitamin C, Hyperbaric Therapy, Coenzyme Q10, Mild Silver Protein, Miracle Mineral Supplement, and the Marshall Protocol. The web pages were copied directly from the source, and audience feedback was added to each page during the experiment.

Factors known to impact credibility assessments, other than the experimental variables of interest, were controlled during the study either by fixing them to a particular level (e.g., including no external site links on any of the experimental web pages) or through measurement. Factors that could not be constrained or measured were not addressed in this study.

Participants viewed three practice and eight experimental web pages. Each of these pages included an audience rating element positioned just below the article title at the top of the page, the most common location for these types of ratings. Audience feedback was provided in the form of a “thumbs-up, thumbs-down” rating, with a thumbs down indicating a generally negative reaction and a thumbs up indicating a generally positive reaction. Numbers were present alongside the rating icons indicating how many audience members had responded either positively or negatively. In the positive audience reaction condition, 90% of audience members were shown to have indicated a positive reaction and the remaining 10% a negative reaction. In the negative audience reaction condition, 90% of audience members were shown to have indicated a negative reaction and the remaining 10% a positive reaction. In the mixed audience reaction condition, 49.5% of the audience was shown to have indicated a positive reaction and 50.5% a negative reaction. Examples of positive and negative audience reaction icons are given in Figure 1.

Figure 1. Sample positive and negative audience reaction icons

In the experiment, two of the web pages had a negative audience reaction rating, two of the web pages had a positive audience reaction rating, two of the web pages had a mixed audience reaction rating, and two of the web pages had no audience feedback. In the no feedback condition, the “thumbs-up” and “thumbs-down” icons were presented as grayed out, with the value “0” alongside each icon to indicate that no audience members had yet provided feedback on the article.
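
To make the rating display concrete, the following minimal sketch (not part of the experimental software; the function and condition names are illustrative) shows how the thumbs-up and thumbs-down counts displayed on a page follow from the proportions described above:

# Minimal sketch of how displayed rating counts follow from the feedback
# condition; proportions come from the text, names are hypothetical.
def rating_counts(audience_size, condition):
    """Return the (thumbs_up, thumbs_down) counts shown beside the icons."""
    positive_share = {
        "positive": 0.90,   # 90% thumbs up, 10% thumbs down
        "negative": 0.10,   # 10% thumbs up, 90% thumbs down
        "mixed": 0.495,     # 49.5% thumbs up, 50.5% thumbs down
        "none": None,       # icons grayed out, both counts shown as 0
    }[condition]
    if positive_share is None:
        return 0, 0
    thumbs_up = round(audience_size * positive_share)
    return thumbs_up, audience_size - thumbs_up

# Example: a mixed-feedback page with the high-feedback audience size
print(rating_counts(20000, "mixed"))   # -> (9900, 10100)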

The amount of audience feedback was present in two levels: high (20,000 audience members responding) and low (2,000 audience members responding). The proportions indicating positive, mixed, or negative audience reaction were kept the same for both the high and low amount of feedback conditions. Three of the experimental web pages had a low amount of audience feedback and three of the experimental web pages had a high amount of audience feedback. The type and amount of feedback present on each page was randomly assigned to each of the eight web pages for each participant to avoid confounding audience response with web page content.
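
A minimal sketch of this per-participant random assignment is given below. It assumes, beyond what is stated above, that each rated feedback type appeared once at the low and once at the high audience size; the page identifiers and variable names are illustrative:

# Sketch of randomly pairing feedback conditions with page content for one
# participant, so that condition is not confounded with a particular page.
import random

PAGES = ["cats_claw", "magnesium", "salt_vitamin_c", "hyperbaric_therapy",
         "coenzyme_q10", "mild_silver_protein", "miracle_mineral", "marshall_protocol"]

CONDITIONS = [("positive", 2000), ("positive", 20000),
              ("mixed", 2000), ("mixed", 20000),
              ("negative", 2000), ("negative", 20000),
              ("none", 0), ("none", 0)]

def assign_conditions(pages, conditions):
    """Return a {page: (feedback type, audience size)} mapping for one participant."""
    shuffled = list(conditions)
    random.shuffle(shuffled)
    return dict(zip(pages, shuffled))

print(assign_conditions(PAGES, CONDITIONS))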

PROCEDURE

Participants

Participants were recruited through the psychology department of a large southeastern university. A total of 183 participants took part in the study, of whom 132 provided complete data sets; these 132 data sets were used in the final analysis. The incomplete data sets are presumed to be due to technical glitches rather than attrition, since many of them were started but not completed because of technical issues such as difficulties completing the online forms and problems with the experimental web site not advancing automatically at the appropriate time. All participants, including those who were not able to complete the study, received course credit for their participation.

The mean age of participants was 21 years (SD = 5.90). There were 65 females and 67 males in the study sample. Participants' average self-reported Internet experience level was 4.97 (SD = 1.00) on a scale of 1 to 6, indicating that most participants believed themselves to have a high level of Internet experience. Participants had an average of 10.7 years of Web experience (SD = 3.43) and reported using the Web an average of 5 hours per day (SD = 3.02). Taking the mean age of participants into account, the group appears to be composed mainly of members of the Millennial generation who began using the Internet around the age of 11 (Lenhart, Madden, & Hitlin, 2005; Metzger & Flanagin, 2008; Palfrey & Gasser, 2008).

Experimental Procedure

Participants completed the experiment online using an interactive web site (http://sona.ucfwebstudy.com). Participants first completed a consent form and a demographics questionnaire. They then read instructions for the experiment. These instructions indicated that they would be viewing three practice and eight experimental web pages during the experiment. They were instructed that they would view each page for 60 seconds and would then be asked to respond to a questionnaire concerning the web page they had just viewed. Participants clicked a button at the bottom of the page to indicate they understood the experiment directions before beginning.

Participants first viewed three practice pages to familiarize themselves with the experimental procedure. The three practice pages contained descriptions of alternative treatments for chronic fatigue syndrome. The audience rating element was present on all three practice web pages: one page showed a high level of negative feedback, a second showed a low level of positive feedback, and the third showed no feedback. The type and amount of audience feedback presented on the practice web pages was randomized for each participant to avoid confounding audience feedback with web page content. The order in which the three practice web pages were presented to participants was controlled using a Latin Square design. Participants were randomly assigned to one of the presentation orders. All participants viewed each of the practice web pages for 60 seconds (Jansen & Spink, 2003).

After 60 seconds, the web site automatically advanced to a questionnaire to be answered following each web page presentation. The questionnaire asked participants to provide a credibility rating for the web page they had just viewed using the prompt, “How credible (believable) is the information on the web page you just viewed?” Participants responded to the prompt using an online form with a six-point Likert-type scale anchored by “highly credible” and “not at all credible.” Past research has demonstrated that an individual's perception of the credibility of a piece of information can be assessed through a direct question (Wathen & Burkell, 2002). Participants clicked a button at the bottom of the questionnaire to be taken to the next web page to be viewed.

Following the three practice pages, participants were given a short descriptive scenario and asked to imagine that they had recently been diagnosed with Lyme disease. They were told they would have to decide which treatment they would receive based on the information presented on the web pages they were about to view. All participants viewed each of the eight experimental web pages for 60 seconds. Following each web page's presentation they answered the same questionnaire as for the practice web pages. This sequence continued until participants had viewed all eight of the experimental web pages. The order in which the eight web pages were presented to participants was controlled using a Latin Square design with eight potential sequences. Participants were randomly assigned to one of the eight presentation orders.
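
The authors do not report the specific order matrix; the sketch below illustrates, under that caveat, how an 8 x 8 cyclic Latin square can generate eight presentation sequences in which every page appears in every serial position exactly once:

# Illustrative cyclic Latin square for counterbalancing presentation order;
# each row is one presentation sequence of the eight page indices 0..7.
import random

def latin_square(n):
    """Row i presents the n pages rotated by i positions."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

ORDERS = latin_square(8)
participant_order = random.choice(ORDERS)   # random assignment to one sequence
print(participant_order)                    # e.g., [2, 3, 4, 5, 6, 7, 0, 1]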

Following the eight web page presentations, participants were asked to decide which treatment they would choose and recorded this decision using an online form that listed the eight treatments discussed on the web pages. They then completed a post-experiment questionnaire asking them to rate their familiarity with Lyme disease and to report which web page elements they had used when determining the credibility of the pages they viewed during the experiment. Participants were then debriefed on the purpose of the experiment.

RESULTS

Participants' average self-reported familiarity with Lyme disease was 2.25 (SD = 1.20) on a scale of 1 to 6, indicating that most participants believed themselves to be unfamiliar with the disease. Thirty-four participants reported that they were born in or had lived in the northeastern United States, an area of high Lyme disease activity. These participants did not differ significantly from the remaining participants on self-reported familiarity with Lyme disease, t(130) = −0.247, p = .806. Four participants reported that they were born in or had lived in either Wisconsin or Minnesota, another area of high Lyme disease activity. These participants did not differ significantly from the remaining participants on self-reported familiarity with Lyme disease, t(130) = −0.419, p = .676.
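
These comparisons correspond to independent-samples t-tests; an illustrative version is sketched below, assuming a hypothetical per-participant file with a familiarity rating (1 to 6) and a region flag (the file and column names are not from the study):

# Sketch of the reported comparison of Lyme disease familiarity between
# participants who had and had not lived in a high-activity region.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")               # hypothetical data file
northeast = df.loc[df["lived_northeast"] == 1, "familiarity"]
others = df.loc[df["lived_northeast"] == 0, "familiarity"]
t, p = stats.ttest_ind(northeast, others)          # reported: t(130) = -0.247, p = .806
print(t, p)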

The data showed no evidence of kurtosis, skewness, or outliers; therefore no data transformations were performed. All analyses were conducted with alpha set to .05, unless otherwise indicated. Bonferroni corrections were applied to all post-hoc pairwise comparisons. A 2 (Amount) X 4 (Type) within-subjects ANOVA was performed on the data set. Hypothesis 1, that type of audience feedback would impact credibility ratings, was supported, F(3, 378) = 7.33, p < .0001. Participants gave a page with negative feedback an average credibility score of 3.12 (SD = 1.17), a page with no feedback 3.15 (SD = 1.22), a page with mixed feedback 3.50 (SD = 1.18), and a page with positive feedback 3.59 (SD = 1.13). Partial η2 for this effect was .055, indicating that type of feedback, while significant, had a small effect on overall credibility ratings. Post-hoc tests indicated that there were significant differences in credibility ratings between negative and mixed audience feedback (p = .015, d = 0.32) and between negative and positive audience feedback (p < .0001, d = 0.41). Figure 2 shows the mean credibility rating for each type of audience feedback.
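
A simplified sketch of this analysis is shown below. It is not the authors' analysis script; it assumes long-format data with hypothetical column names (participant, feedback_type, credibility; one mean rating per participant per feedback type) and illustrates the repeated-measures ANOVA on feedback type together with Bonferroni-corrected pairwise comparisons:

# Repeated-measures ANOVA on feedback type plus Bonferroni-corrected
# post-hoc paired comparisons; data layout and column names are assumed.
from itertools import combinations
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

data = pd.read_csv("credibility_long.csv")   # hypothetical data file

# Within-subjects effect of feedback type on mean credibility ratings
print(AnovaRM(data, depvar="credibility", subject="participant",
              within=["feedback_type"]).fit())

# Pairwise paired t-tests with a Bonferroni correction
levels = ["negative", "none", "mixed", "positive"]
pairs = list(combinations(levels, 2))
for a, b in pairs:
    x = data[data["feedback_type"] == a].sort_values("participant")["credibility"]
    y = data[data["feedback_type"] == b].sort_values("participant")["credibility"]
    t, p = stats.ttest_rel(x.values, y.values)
    print(a, "vs", b, "corrected p =", min(p * len(pairs), 1.0))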

Figure 2. Average credibility rating by type of audience feedback

A 2 (Amount) X 3 (Type) within-subjects ANOVA was conducted to examine Hypothesis 2, which states that the impact of type of audience feedback would increase as the size of the audience responding increased. Hypothesis 2 was not supported, as there was no interaction between type and amount of audience feedback, F(2, 256) = .158, p = .854. Figure 3 shows the mean credibility rating for each of the combinations of type and amount of audience feedback.

Hypothesis 3, that treatments with positive audience feedback would be more likely to be selected as a treatment choice, was not supported. Individuals were not more likely to choose a treatment whose page had positive feedback than treatments whose pages had no, mixed, or negative feedback, χ2(3, N = 132) = 2.24, p = .524. Treatments whose pages had no audience feedback were chosen by 23% of participants, treatments with negative feedback by 25%, treatments with mixed feedback by 22%, and treatments with positive feedback by 30%.
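
The reported chi-square value can be recovered, approximately, from these percentages. The sketch below reconstructs the choice counts (summing to 132) and tests them against an even split across the four feedback types:

# Chi-square goodness-of-fit test on treatment choices by feedback type;
# counts are reconstructed from the reported percentages of 132 choices.
from scipy.stats import chisquare

observed = [30, 33, 29, 40]              # none, negative, mixed, positive (~23%, 25%, 22%, 30%)
expected = [sum(observed) / 4] * 4       # even preference under the null hypothesis
chi2, p = chisquare(observed, f_exp=expected)
print(chi2, p)                           # approx. chi-square = 2.24, p = .52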

Participants self-reported which elements they used when determining the credibility of the experimental web pages on the post-experiment questionnaire. Participants most frequently reported using elements related to the page's background or affiliation. For example, 72% of participants reported using the web page author's credentials as part of their criteria for determining credibility, 66% reported using the completeness of the web page's content, 56% reported using the domain of the web page, and 56% of participants reported using the web page's affiliation. 30% of participants reported using whether prior viewers liked the web page (i.e., type of audience feedback) in their evaluation and 30% of participants reported using how many prior viewers liked the web page (i.e., audience size) in their evaluation.

Figure 3. Average credibility rating by type of feedback and size of audience

CONCLUSIONS

The results of the study suggest that the type of audience feedback present on a web page, whether overall negative, mixed, or positive, can influence viewer perceptions of the credibility of that page. The size of the audience, however, does not appear to strengthen the influence of audience feedback on perceived credibility. Despite impacting ratings of credibility, audience ratings did not influence subsequent treatment choices in this study.

Several considerations for the design of audience feedback can be gleaned from the results of this study. First, it appears that audience feedback can act as a potential heuristic for web page credibility. To facilitate this impact, web page designers might consider placing audience ratings of information close to the top left of the web page or very close to the heading of the article. These web page locations have been shown to be among the first that are scanned by a viewer after arriving on a web page (Nielsen, 2006). Conversely, a web page designer can decrease the impact of audience feedback by moving it further down the web page so that the article is likely to be read prior to viewing the audience feedback.

Also, for audience feedback to be able to serve as a heuristic for credibility it must be quickly and easily understood by viewers. The thumbs-up, thumbs-down audience ratings used in this study have become one of the most frequently used methods for presenting audience feedback on the Web. Designers must also consider the cultural context of their audience to ensure that audience feedback icons convey the same meaning across cultures. Ease of use should also be paramount. By ensuring that users can rate information with only one click (i.e., without requiring registration) and by immediately updating the feedback on the web page, site owners can ensure that the greatest range of audience opinions is expressed.

Although web page designers often have little influence over the actual ratings that are left on a web page, some additional information may increase or decrease the impact of the audience feedback. Information about who has given feedback produces a “nested credibility” situation, in which the credibility of the responders themselves must also be assessed. Depending on the perceived credibility of the audience members, this additional information may make the impact of audience feedback either stronger or weaker.

There are several important questions raised by this study. It is not yet known whether audience feedback will play a larger or smaller role in different contexts of information use. This study only examined information use in the health domain. In contexts where audience feedback and ratings are both common and expected, such as on commercial web sites, audience feedback may have more or less of an effect. It is also likely that as the accuracy of the information becomes more important to the searcher, the impact of audience feedback will lessen (Chaiken, 1980; Metzger, 2005).

This study also examined only one style of audience feedback. As there are many different types of audience feedback on the Web, including star ratings, text comments, and viewer polls, future studies should compare different types of audience feedback to determine whether they have different effects on perceived credibility.

An individual's likelihood of using audience feedback is likely related to several individual difference variables, potentially including evaluation apprehension, social desirability, and locus of control (Zalesny & Ford, 1990). It is also likely that prior knowledge of the topic covered on a web page will decrease the likelihood of using audience feedback to establish credibility.

One interesting area for future research is the potential impact of tailored social feedback on credibility. This relatively new type of audience feedback involves dynamically inserting ratings from social networking applications into web pages. This type of feedback gives readers information about whether the web page was recommended or viewed by members of their social network and has the potential to greatly increase the impact of audience feedback on credibility (Byerly & Brodie, 2005; Deutsch & Gerard, 1955; Fogg & Tseng, 1999).

This study provides support for the idea that audience ratings on a web page may impact perceived credibility. As audience ratings and feedback become more commonplace on the Web, it is important that we continue to examine the impact that audience feedback may have on web page and web site credibility.

References

  • Agosto, D.E. (2002). Bounded rationality and satisficing in young people's web-based decision making. Journal of the American Society for Information Science and Technology, 53 (1), 16-27.
  • Alexander, J.E., & Tate, M.A. (1999). Web wisdom: How to evaluate and create information quality on the web. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Anderson, N.H. (1981). Foundations of information integration theory. New York: Academic Press.
  • Asch, S.E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177-190). Pittsburgh, PA: Carnegie Press.
  • Asch, S.E. (1955). Opinions and social pressure. Scientific American, 193, 31-35.
  • Baron, R.S., Vandello, J.A., & Brunsman, B. (1996). The forgotten variable in conformity research: Impact of task importance on social influence. Journal of Personality and Social Psychology, 71 (5), 915-927.
  • Brinol, P., & Petty, R.E. (2009). Persuasion: Insights from the self-validation hypothesis. In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 41, pp. 69-118). London: Elsevier.
  • Burns, C. (2008). Deadly decisions: How false knowledge sank the Titanic, blew up the shuttle, and led America into war. Amherst, NY: Prometheus Books.
  • Byerly, G., & Brodie, C.S. (2005, April). Internet (and/or institutional) credibility and the user. Paper Presented at the Internet Credibility and the User Symposium conducted at the University of Washington, Seattle, WA.
  • Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39 (5), 752-766.
  • Corritore, C.L., Kracher, B., & Wiedenbeck, S. (2003). Online trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58, 737-758.
  • Deutsch, M., & Gerard, H.B. (1955). A study of normative and informational social influences upon individual judgment. The Journal of Abnormal and Social Psychology, 51, 629-636.
  • Dieberger, A., Dourish, P., Hook, K., Resnick, P., & Wexelblat, A. (2000, December). Social navigation: Techniques for building more usable systems. Interactions, 7 (6), 36-45.
  • Drapeau, M. (2009). Trust, but verify web 2.0 sources. Federal Computer Week. Retrieved July 19, 2009 from http://www.fcw.com/Articles/2009/07/20/COMMENT-Drapeau-info-sharing-and-rumormill.aspx.
  • Duck, S.W., & Baggaley, J. (1975). Audience reaction and its effect on perceived expertise. Communication Research, 2 (1), 79-85.
  • Dutta-Bergman, M. (2003). Trusted online sources of health information: Differences in demographics, health beliefs, and health-information orientation. Journal of Medical Internet Research, 5 (3), Paper e21.
  • Eysenbach, G., & Kohler, C. (2002). How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. British Medical Journal, 324, 573-577.
  • Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117-140.
  • Fink-Shamit, N., & Bar-Ilan, J. (2008, December). Information quality assessment on the web – an expression of behaviour. Information Research, 13 (4), Paper 357. Retrieved March 20, 2009 from http://informationr.net/ir/13-4/paper357.html.
  • Fogg, B.J., & Tseng, H. (1999). The elements of computer credibility. Proceedings of CHI '99, Pittsburgh, PA, USA, 80-87.
  • Fogg, B.J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., et al. (2001). What makes web sites credible? A report on a large quantitative study. Proceedings of the SIGCHI Annual Conference, 3 (1), 61-68.
  • Fox, S. (2006, October 29). Online health search 2006. Washington, DC: Pew Internet and American Life Project.
  • Fox, S., & Jones, S. (2009, June). The social life of health information: Americans' pursuit of health takes place within a widening network of both online and offline sources. Washington, D.C.: Pew Internet and American Life Project.
  • Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438 (15), 900-901.
  • Grifantini, K. (2009, September 16). Can you trust crowd wisdom? Researchers say online recommendation systems can be distorted by a minority of users. Technology Review. Retrieved September 18, 2009 from http://www.technologyreview.com/web/23477/?a=f.
  • Gugerty, L., Billman, D., Pirolli, P., & Elliott, A. (2007). An exploratory study of the effect of domain knowledge on internet search behavior: The case of diabetes. Proceedings of the Human Factors and Ergonomics Society, 51, 775-779.
  • Hardey, M. (1999). Doctor in the house: The internet as a source of lay health knowledge and the challenge to expertise. Sociology of Health & Illness, 21 (6), 820-835.
  • Hart, A., Henwood, F., & Wyatt, S. (2004). The role of the internet in patient-practitioner relationships: Findings from a qualitative research study. Journal of Medical Internet Research, 6 (3), Paper e36.
  • Hilligoss, B., & Rieh, S.Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing and Management, 44, 1467-1484.
  • Hitlin, P., & Rainie, L. (2004, October). Online rating systems: 33 million American internet users have reviewed or rated something as part of an online rating system. Washington, DC: Pew Internet and American Life Project.
  • Holscher, C., & Strube, G. (2000). Web search behavior of internet experts and newbies. Computer Networks, 33, 337-346.
  • Hovland, C.I., Janis, I.L., & Kelley, H.H. (1953). Communication and persuasion: Psychological studies of opinion change. New Haven, CT: Yale University Press.
  • Hovland, C.I., & Weiss, W. (1951). The influence of source credibility on communication effectiveness. The Public Opinion Quarterly, 15 (4), 635-650.
  • Jansen, B.J., & Spink, A. (2003). An analysis of web documents retrieved and viewed. Proceedings of the 4th International Conference on Internet Computing, Las Vegas, NV, 65-69.
  • Jenkins, C., Corritore, C.L., & Wiedenbeck, S. (2003). Patterns of information seeking on the web: A qualitative study of domain expertise and web expertise. IT & Society, 1 (3), 64-89.
  • Kelly, D. (2005). Implicit feedback: Using behavior to infer relevance. In A. Spink & C. Cole (Eds.), New directions in cognitive information retrieval (pp. 169-186). The Netherlands: Springer.
  • Kelley, H.H., & Woodruff, C.L. (1954). Members' reactions to apparent group approval of a counternorm communication. Journal of Abnormal and Social Psychology, 52, 67-74.
  • Kennedy, G., Dalgarno, B., Gray, K., Judd, T., Waycott, J., Bennett, S., et al. (2007). The net generation are not big users of web 2.0 technologies: Preliminary findings. In ICT: Providing choices for learners and learning. Proceedings ascilite Singapore 2007. Retrieved August 12, 2009 from http://www.ascilite.org.au/conferences/singapore07/procs/kennedy.pdf.
  • Kim, Y.A., & Srivastava, J. (2007, August). Impact of social influence in e-commerce decision making. Paper Presented at the ICEC Conference, Minneapolis, MN: USA.
  • Landy, D. (1972). The effects of an overheard audience's reaction and attractiveness on opinion change. Journal of Experimental Social Psychology, 8 (3), 276-288.
  • Lau, A., & Coiera, E.W. (2008). Impact of web searching and social feedback on consumer decision making: A prospective online experiment. Journal of Medical Internet Research, 10 (1), Paper e2.
  • Lee, J.D., & See, K.A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46 (1), 50-80.
  • Lenhart, A., Madden, M., & Hitlin, P. (2005). Teens and technology: Youth are leading the transition to a fully wired and mobile nation. Washington, DC: Pew Internet and American Life Project.
  • Lindgaard, G., Fernandes, G., Dudek, C., & Brown, J. (2006, March). Attention web designers: You have 50 milliseconds to make a good first impression! Behaviour & Information Technology, 25 (2), 115-126.
  • Mackay, C. (1841). Extraordinary popular delusions and the madness of crowds. London: George G. Harrap & Co.
  • McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw-Hill.
  • Metzger, M.J. (2005, April). Understanding how users make sense of credibility: A review of the state of our knowledge and recommendations for theory, policy, and practice. Paper Presented at the Internet Credibility and the User Symposium at the University of Washington, Seattle, WA.
  • Metzger, M.J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58 (13), 2078-2091.
  • Metzger, M.J., & Flanagin, A.J. (Eds.). (2008). Digital media, youth, and credibility. Cambridge, MA: MIT Press.
  • Metzger, M.J., Flanagin, A.J., Eyal, K., Lemus, D.R., & McCann, R.M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Communication Yearbook, 27, 293-336.
  • Miller, N. (2005, January). Wikipedia and the disappearing “author”. ETC, 62 (1), 37-40.
  • Nielsen, J. (2006). F shaped pattern for reading web content. Retrieved May 12, 2010 from http://www.useit.com/alertbox/reading_pattern.html.
  • O'Reilly, T. (2005, September 30). What is web 2.0? Design patterns and business models for the next generation of software. Retrieved June 20, 2009 from http://oreilly.com/web2/archive/what-is-web-20.html.
  • Palfrey, J., & Gasser, U. (2008). Born digital: Understanding the first generation of digital natives. New York: Basic Books.
  • Peterson, G., Aslani, P., & Williams, K.A. (2003). How do consumers search for and appraise information on medicines on the internet? A qualitative study using focus groups. Journal of Medical Internet Research, 5 (4), Paper e33.
  • Rafaeli, S., & Noy, A. (2002). Online auctions, messaging, communication, and social facilitation: A simulation and experimental evidence. European Journal of Information Systems, 11, 196-207.
  • Resnick, P., & Varian, H.R. (1997, March). Recommender systems. Communications of the ACM, 40 (3), 56-58.
  • Resnick, P., Zeckhauser, R., Friedman, E., & Kuwabara, K. (2000, December). Reputation systems. Communications of the ACM, 43 (12), 45-48.
  • Robins, D., & Holmes, J. (2008). Aesthetics and credibility in web site design. Information Processing and Management, 44, 386-399.
  • Sarasohn-Kahn, J. (2008, April). The wisdom of patients: Health care meets online social media. Report prepared for the California HealthCare Foundation. Retrieved May 15, 2009 from http://www.chcf.org/topics/chronicdisease/index.cfm?itemID=133631.
  • Shardanand, U., & Maes, P. (1995). Social information filtering: Algorithms for automating “word of mouth”. Paper Presented at CHI, Retrieved May 12, 2009 from http://sigchi.org/chi95/proceedings/papers/us_bdy.htm.
  • Surowiecki, J. (2005). The wisdom of crowds. New York: Anchor Books.
  • Tormala, Z.L., & Petty, R.E. (2004). Source credibility and attitude certainty: A metacognitive analysis of resistance to persuasion. Journal of Consumer Psychology, 14 (4), 427-442.
  • Walther, J.B., Wang, Z., & Loh, T. (2004). The effect of top-level domains and advertisements on health web site credibility. Journal of Medical Internet Research, 6 (3), Paper e24.
  • Wang, Z., Walther, J.B., Pingree, S., & Hawkins, R.P. (2008). Health information, credibility, homophily, and influence via the internet: Web sites versus discussion groups. Health Communication, 23, 358-368.
  • Wathen, C.N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the web. Journal of the American Society for Information Science and Technology, 53 (2), 134-144.
  • Zalesny, M.D., & Ford, J.K. (1990). Extending the social information processing perspective: New links to attitudes, behaviors, and perceptions. Organizational Behavior and Human Decision Processes, 47, 205-246.