Trust in social Q&A: The impact of text and photo cues of expertise

Authors


Abstract

Users increasingly rely on social Q&A sites to answer questions that are important to their everyday lives. Social Q&A differs significantly from library-based reference services in that there is no guarantee that the individuals answering questions have relevant expertise or are relying on authoritative sources. This raises an important research question: what factors influence users' trust in social Q&A? This paper seeks to determine the role that text and photo cues of expertise have on users' trust in answers in social Q&A. The results of an experiment with 241 subjects indicate that expertise cues in text lead to significantly higher trust among both experts and non-experts. However, expertise cues in photos increase trust among non-experts but do not increase trust among experts. These results indicate that both the expertise of the user and the expertise cues provided in the answer affect user trust in social Q&A.

INTRODUCTION

Social Q&A sites (also called community Q&A sites or social reference services) provide an alternative to traditional library-based reference services. According to Harper, Raban, Rafaeli, and Konstan (2008), “Community Q&A sites leverage the time and effort of everyday users to answer questions – they represent Web 2.0's answer to more traditional online reference services” (p. 866). In contrast to library-based reference services, including virtual reference services (also called online reference services or chat reference services), there is no guarantee of expertise on the part of the individuals who provide answers to questions. Without evidence of expertise, trust becomes a central issue. As such, this paper sets out to ask: what factors influence users' trust in social Q&A?

Establishing trust in other users is an important part of interaction with user-generated content. Because the volume of available information is so vast, users need ways to determine what information is good quality and which users are providing relevant information. Similarly, individual users and online personas of organizational entities often want to do what they can to build trust with other users so their messages will be accepted and noticed.

In this study, we set out to understand the influence of cues of personal expertise on the trust users place in question answerers. Specifically, we ask: do photo and/or text cues that explicitly indicate that the answerer has personal experience and expertise on the topic increase the trust that users place in the answerer and the answer?

We hypothesized that both photo and text cues would have a positive impact on trust, such that either cue would lead to greater trust than no cue, and both cues would lead to greater trust than just one cue. To test this hypothesis, we ran a controlled user study. Subjects saw a question posted on a message board and four answers. Each answer had two variants. In one, the text indicated that the answerer had personal experience dealing with the question and the other variant provided exactly the same answer but without the indication of personal experience. Subjects saw two answers with these experience cues and two without. In addition, each answer was paired with a photo of the answerer. Two photos had cues that the answerer had personal experience, and two did not. This led to four conditions: answers with no cues, with text cue only, with photo cue only, and with both text and photo cues.

We conducted this experiment with subjects from two communities and found results that reveal a great deal about the different impacts of these expertise cues on trust among subjects with varying degrees of expertise. The text cues increased trust in the answerer among all populations. Photo cues increased trust among subjects with no personal connection to the topic being discussed, but among subjects with a personal connection, there was no increase in trust.

In the rest of this paper, we first present related work on trust and cues of expertise, then describe our experiment and results, and finally discuss the underlying phenomena that shaped our results as well as the broader theoretical and practical implications of our results.

RELATED WORK

Answering questions has previously been the primary domain of the reference librarian, who carried out this work primarily through face-to-face interactions. The growth of the Internet has increased the degree to which such interactions are mediated through virtual reference services, raising new challenges such as the need to refine “techniques for rapport building, compensation for lack of nonverbal cues, strategies for relationship development, evidence of deference and respect, face-saving tactics, greeting and closing rituals” (Radford, 2006, p. 1046). Increasingly, commercial social Q&A sites have sought to replace the expertise of the reference librarian with the wisdom of the crowd. However, given the difficulty involved in assessing the expertise of a question-answerer within a social Q&A site, the increasing reliance on social Q&A sites raises important new challenges. According to Jurczyk & Agichtein (2007), “It is increasingly important to better understand the issues of authority and trust in [social Q&A], which differ drastically from previously studied online communities both in types of interactions that are available to users, and the content of the sites” (p. 919). This paper seeks to understand the factors that influence the development of trust in answers and answerers in social Q&A sites.

Trust is an important part of online interaction, in venues such as e-commerce, online discussions, and all of the social networking and information sharing applications that continue to grow in popularity. Trust helps users determine which information is useful and with whom to interact (Kelton, Fleischmann, & Wallace, 2008).

Many of the cues to a person's trustworthiness are lost in computer-mediated communication. A plain-text answer will include some cues, but with nothing else to confirm them, little trust can be built. In response to this issue, there has been much research on how users develop trust in people in online environments where advice, answers, or product reviews are written.

How trust is established in these environments has been researched since the mid-1990s. E-commerce was a particular area of focus, since trust in these newly established services was lacking.

Photos were one of the first types of media investigated, and it has been shown that they can affect users' trust in websites (Fogg, 2003). However, the results of including photos have been mixed. In one study (Steinbrück, Schaumberg, Duda, & Krüger, 2002), a bank added photos of customer service agents to its e-commerce website, which had a significant positive impact on users' trust. A later study (Riegelsberger, Sasse, & McCarthy, 2003) examined the addition of photographs to twelve e-commerce websites and found that, overall, they had no impact on trust.

Richer media have also been studied in comparison to text and photos. One study (Riegelsberger, Sasse, & McCarthy, 2005) showed that users tend to trust rich-media advice (i.e., video and audio) more than the same advice presented as text alone. This research also studied text plus photos with cues to expertise and found that users were able to identify experts from the text and, while they did not trust them more, preferred advice with photos for its friendliness.

Avatars have also been a prominent topic of study with respect to trust. They were included in the study discussed above (Riegelsberger, Sasse, & McCarthy, 2005) but were not shown to improve users' trust. Compared with phone and face-to-face interactions with people, interactions with avatars online were shown to generate lower trust (Bente, Ruggenberg, & Kramer, 2004).

This work lays a strong foundation for us to build upon. The related work described here looks at how introducing specific types of media can improve trust by re-embedding some social cues. We take this work a step further by looking at explicit cues of expertise. These studies have examined the impact that the presence of media has on trust in general, but have not examined explicit evidence within the media that the answerer has expertise. The research questions that drive this study go beyond social cues to look at expertise cues.

EXPERIMENTAL SETUP

To better understand the factors that influence trust in social Q&A sites, we designed and carried out a controlled user study. We chose this approach to allow us to quantitatively test hypotheses without confounding variables or other noise present in existing data. This design also allowed us to collect data about trust directly from our research subjects rather than inferring their degree of trust through an indirect measure. As we will explain below, we carefully controlled our experimental design, which was tested and refined through a pilot study that allowed us to identify and overcome weaknesses in our original experimental design. The resulting experiment allowed us to robustly test our hypotheses within the selected domain.

Research Questions

To carry out our experiment, we broke our key research question – what factors influence users' trust in social Q&A? – into three specific subquestions:

RQ1. What is the impact of photo cues on the trust subjects have in question answerers?
RQ2. What is the impact of text cues on the trust subjects have in question answerers?
RQ3. What is the combined impact of photo+text cues on the trust subjects have in question answerers?

In response to these questions, we posed three hypotheses:

H1. Photo cues will increase the trust subjects have in question answerers.
H2. Text cues will increase the trust subjects have in question answerers.
H3. The combined impact of photo+text cues will further increase the trust subjects have in question answerers.

As a result, we were interested in all four resulting conditions: no cues, text cues only, photo cues only, and both text and photo cues.

To test our hypotheses, it was important to select a domain where it would be easy to express and perceive expertise cues using both photos and text. As such, we chose pet-related information, since pet ownership can easily be established using both photo and text cues. Subjects viewed a message board interface with a question and several answers. To ensure realism of the experimental conditions, both the question and answers were modeled on a real discussion that took place in Dogster, an online community that focuses on dogs.

Pilot Study

We conducted an initial pilot study to help develop the experiment that we ultimately conducted. The pilot study helped to identify variables in both the photos and text that needed to be tightly controlled.

Subjects in the pilot study viewed four questions. Three of these questions were modeled on relevant discussions from Dogster, while one question was a non-dog-related question. Each question had eight distinct answers, some provided by real Dogster users and others created by the experimenters. Four answers included text cues that the answerer owned a dog and four answers did not. Each answer was paired with a randomly selected photo.

The pool of photos was half male and half female. In each gender group, half of the photos showed a person with a dog and half showed a person alone. The breed and group of the dogs were mixed.

Subjects then rated their trust in the answerer and provided qualitative feedback about their ratings. We discovered several important factors that guided the final experiment.

First, subjects trusted and distrusted answerers with dogs based largely on the breed of the dog they owned. Some subjects did not trust anyone who owned a pit bull; others did not trust anyone who owned a toy dog. Dog-owning subjects commented that they trusted answerers who owned dogs similar to their own.

The answer itself also mattered. While we attempted to keep all answers fairly neutral, the phrasing of advice was important to subjects. This phrasing alone introduced considerable noise into the results, making it difficult to compare cued and non-cued answers, since subjects may have been rating subtleties of the text rather than the presence or absence of cues.

Specifically, the length of the answer appeared to have a significant effect on the degree of trust, a finding that replicated earlier work (Harper, Raban, Rafaeli, & Konstan, 2008).

Thus, in our final experiment we carefully controlled all of the potentially confounding variables that we identified in the pilot study – breed type, differences in the length and content of answers between cued and non-cued text, number of photos, and the type of answers presented. The details of the final experiment are presented below.

Methodology

In our experiment, each subject was shown one message board question. The question was the same for all subjects: “Goldie, a golden retriever, who I have had for about a week now, has been sleeping under my bed. Does anyone know the reason for this?”

The interface for the experiment is shown in Figure 1.

Since breed prejudices were a significant factor in our pilot study, we eliminated this by using only one type of dog in our question and answers – a golden retriever. Because golden retrievers are popular (American Veterinary Medical Association, 2007) and generally well-liked for their friendly personalities (American Kennel Club, n.d.), the choice minimized the impact of negative breed perceptions. It also controlled for any biases by using the same breed throughout the test.

There were four answers to this question, and two versions of each answer: one with a text cue and one without. Each answer provided a different solution or suggestion.

Note that the cue and no-cue variations are almost identical. The cues come from minimal changes indicating “my golden retriever” or a similar possessive instead of “a golden retriever” or other generic statement.

Answer 1-No Cue: Most dogs like to be in a “den like” area when they don't feel well. When a golden retriever puppy is ill, under the bed or in another confined space would feel comfortable.

Answer 1-Cue: Most dogs like to be in a “den like” area when they don't feel well. When my golden retriever puppy was ill, he felt comfortable under the bed or in another confined space.

Answer 2-No Cue: Some golden retrievers love being under the bed as well, especially during the summer. Maybe it helps them stay cool? I think the hardwood floors are cooler than being on a bed.

Answer 2-Cue: My golden retriever loves being under the bed as well, especially during the summer. Maybe it helps them stay cool? I think the hardwood floors are cooler than being on a bed.

Figure 1. The experimental interface.

Answer 3-No Cue: If you've only had your golden retriever for a week, you might think of getting a crate for him to sleep and spend nap time in. He'll get the same sense of security and that way it's easier for you to keep an eye on him.

Answer 3-Cue: If you've only had your golden retriever for a week, you might think of getting a crate for him to sleep and spend nap time in. Mine gets the same sense of security and that way it's easier for me to keep an eye on him.

Answer 4-No Cue: Some golden retrievers sleep under the bed as puppies and only stop when they get so big that they would get stuck under it! I think it's totally normal.

Answer 4-Cue: My golden retriever slept under the bed as a puppy and only stopped when she got so big that she got stuck under it! I think it's totally normal.

Previous work (as well as our pilot study) indicated that answer length was strongly correlated with perceived quality in online communities (Harper, Raban, Rafaeli, & Konstan, 2008). To eliminate this as a factor, the word count is identical between the two variations on each answer.

All four answers appeared for every subject. Two answers were randomly selected to be shown with text cues, and the other two were shown without text cues.

Each of the answers was paired with a photo. We used only four photos, so every subject saw the same four pictures. Two photos showed men and the other two showed women. The photos were carefully selected to appear as similar as possible. For example, both men have open-mouth smiles, and the photos appear to take place in a beach setting. Similarly, both women have closed-mouth smiles, and the photos appear to take place indoors. Further, the individuals in the photos were selected to appear as similar as possible, including factors such as age and ethnicity, since previous studies have found that these factors may impact trust (McAllister, 1995).

Since our question and answers are about golden retrievers, the photos with cues pictured a person with a golden retriever. One male and one female photo had this cue, and one of each gender had no photo cue. Since we had male and female photos for each condition, gender was not a confounding factor in our analysis, but rather was carefully controlled.

The four photos were randomly assigned to the four answers, so each subject saw a random photo with a random answer. The resulting combinations included the experimental conditions of no cues (photos with no dogs and text with no indication that the answerer owned a dog), text cues (text that indicated dog ownership but no dogs in the photo), photo cues (photos of a person with a dog but text with no references to dogs), and photo+text cues (photos of a person with a dog and text with indications of dog ownership).
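The random assignment just described can be sketched in a few lines of code. This is a minimal illustration under our own naming assumptions – `assign_stimuli`, the `dog` flag, and the photo IDs are hypothetical, not the study's actual implementation:

```python
import random

def assign_stimuli(answers, photos, rng=random):
    """Randomly pair answers with photos for one subject.

    answers: list of four answer IDs.
    photos: list of four dicts, each with an 'id' and a 'dog' flag
            (True for the two photo-cue images).
    """
    # Two of the four answers are randomly shown in their text-cue variant.
    cued = set(rng.sample(range(len(answers)), 2))
    shuffled_photos = photos[:]
    rng.shuffle(shuffled_photos)
    pairs = []
    for i, answer in enumerate(answers):
        photo = shuffled_photos[i]
        text_cue = i in cued
        photo_cue = photo["dog"]
        # Each pair falls into one of the four conditions.
        if text_cue and photo_cue:
            condition = "both"
        elif text_cue:
            condition = "text"
        elif photo_cue:
            condition = "photo"
        else:
            condition = "none"
        pairs.append((answer, photo["id"], condition))
    return pairs
```

Under this scheme every subject sees exactly two text cues and exactly two photo cues, but which answers and photos carry them varies from subject to subject.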

Next to each photo-answer pair, subjects were asked to rate how much trust they had in the person who answered the question. They rated trust on a five-point Likert scale, with 1 indicating low trust and 5 indicating high trust. We treat these as interval values since the intermediate values are not labeled and a visual cue presented with the scale indicates equidistance between values (see Figure 1).
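Treating the ratings as interval data permits the per-condition means and 95% confidence intervals reported in the Results. A stdlib-only sketch of that summary follows; the ratings and the function name are illustrative, not study data:

```python
def mean_ci95(ratings):
    """Mean of Likert ratings with a normal-approximation 95% CI."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (n - 1 denominator) and the half-width of the CI.
    var = sum((x - mean) ** 2 for x in ratings) / (n - 1)
    half_width = 1.96 * (var / n) ** 0.5
    return mean, mean - half_width, mean + half_width
```

The half-width of this interval is what appears as the error bars in Figures 2 through 5.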

Subjects were then provided with a box to supply a free-text answer to the question “In completing this study, what factors affected the trust that you had for the different people?” Finally, they were asked what types of pets they own – dog(s), cat(s), bird(s), fish(es), or other.

Subjects

We recruited subjects from two sources. First, we used Mechanical Turk (MTurk) (http://mturk.com), a website on which subjects complete tasks, often for small financial rewards. We paid a nominal $0.03 to each subject from MTurk, and each subject completed the survey only once. From this environment, we had 191 unique subjects.

Second, we recruited subjects from Dogster (http://dogster.com), a social networking website for dogs. This is a community with over 500,000 pet profiles and a passionate and active set of discussion forums.

Table 1. Pet ownership rates among subjects from Mechanical Turk. Note that the values sum to over 100% since people may own several types of pets.

Type of Pet Owned    Percentage of Subjects
Dog                  57.1%
Cat                  35.6%
Bird                 8.4%
Fish                 14.7%
Other                9.4%
None                 20.4%

A message was posted recruiting community members to participate in the study, and these subjects were not paid. We received 51 responses from this user base.

All subjects who were Dogster users were also dog owners. Among the subjects from Mechanical Turk, 79.6% owned some type of pet and 57.1% owned dogs. Table 1 shows the ownership rates for all types of pets.

RESULTS

Our analysis began with data collection on Mechanical Turk, followed by data collection with Dogster users. In this section, we present the results of our experiments along with statistical analysis.

Mechanical Turk Subject Quantitative Results

We had 191 unique subjects complete the experiment on Mechanical Turk (MTurk). There were four possible conditions: no cue, text cue only, photo cue only, or both photo and text cues. Average trust ratings are shown in Table 2 (the MTurk Overall column) as well as in Figure 2, which shows the mean trust values from the four conditions along with the 95% confidence interval values as the error bars. An ANOVA indicated significant differences among the populations (F(3,760) = 8.020, p<0.001). Unpaired two-tailed Student's t-tests indicated that trust was significantly higher in conditions with text cues, photo cues, and both cues than it was for the no cue condition. Thus, based on these results, it appeared that both photo and text cues increased trust.
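The omnibus test used throughout is a standard one-way ANOVA. Its F statistic can be computed directly from the per-condition rating lists; the following stdlib-only sketch uses made-up data and is not a reproduction of the study's analysis:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of rating groups.

    F = (between-group mean square) / (within-group mean square),
    with df = (k - 1, N - k) for k groups and N total ratings.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: spread of ratings around their group mean.
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With the four cue conditions and the 764 ratings contributed by the 191 MTurk subjects (four per subject), the degrees of freedom are (3, 760), consistent with the F(3,760) reported above.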

While this finding already seemed quite compelling, we decided to dig deeper by breaking the data down into dog owners and non-owners. Our rationale was that dog owners had more expertise on this topic than non-owners, so it was important to see whether the effects of expertise cues are influenced by the expertise of the subject. This grouping by dog ownership status led to insights that add further nuance to the experimental results.

Table 2. Average trust ratings for each condition, grouped by subject types (* indicates significance compared to the No Cue condition for p<0.05)
Cue      MTurk Overall   MTurk Non-owners   MTurk Dog Owners   Dogster
None     3.22            3.24               3.20               2.93
Text     3.59*           3.57*              3.60*              3.48*
Photo    3.50*           3.74*              3.32               3.00
Both     3.71*           3.86*              3.59*              3.54*
Figure 2.
Figure 2.

Average trust ratings for each condition assigned by subjects from Mechanical Turk, shown with 95% confidence intervals

Among the 82 MTurk subjects who did not own dogs, we found results that mimicked the results for the population as a whole. Average trust ratings are shown in Table 2 (the MTurk Non-owners column) as well as Figure 3, which shows the mean trust values from the four conditions along with the 95% confidence interval values as the error bars.

An ANOVA showed significant differences among the four conditions (F(3,324) = 6.704, p<0.01), and unpaired t-tests showed that the three cue conditions led to significantly higher trust than the no cue condition (p<0.05).
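The pairwise comparisons are unpaired two-tailed Student's t-tests. The pooled-variance t statistic at their core can be sketched as follows (again stdlib-only, with illustrative data rather than the study's ratings):

```python
def pooled_t(group_a, group_b):
    """Pooled-variance Student's t statistic for two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    ss_a = sum((x - mean_a) ** 2 for x in group_a)
    ss_b = sum((x - mean_b) ** 2 for x in group_b)
    # Pooled variance, with df = n_a + n_b - 2.
    pooled_var = (ss_a + ss_b) / (n_a + n_b - 2)
    return (mean_a - mean_b) / (pooled_var * (1 / n_a + 1 / n_b)) ** 0.5
```

The resulting t is compared against the t distribution with n_a + n_b - 2 degrees of freedom to obtain the two-tailed p-value; for two groups, this statistic squared equals the one-way ANOVA F statistic.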

Notably, while all three cue conditions had significant effects, photo cues appeared to have a larger effect on trust among non-owners than text cues. Thus, both photo and text cues of expertise have an impact on fostering trust in online information among non-experts.

Figure 3.

Average trust ratings for each condition assigned by subjects from Mechanical Turk who do not own dogs, shown with 95% confidence intervals

Figure 4.

Average trust ratings for each condition assigned by subjects from Mechanical Turk who own dogs, shown with 95% confidence intervals

Among the 109 subjects who did own dogs, the results were different from the overall population results. Average trust ratings are shown in Table 2 (the MTurk Dog Owners column) as well as Figure 4, which shows the mean trust values from the four conditions along with the 95% confidence interval values as the error bars. Again, an ANOVA showed significant differences in trust among the conditions (F(3,432) = 3.83, p<0.01). T-tests showed that only the text cue and combined cue conditions performed better than no cues (p<0.01). Thus, only text cues appear to have an impact on the trust of experts; photo cues led to no significant increase in trust. We found this worthy of further investigation and confirmation, and so turned to the Dogster community.

Dogster Subject Quantitative Results

Among the 51 subjects from Dogster who completed the experiment, we found similar results to those of dog owners from Mechanical Turk. Average trust ratings are shown in Table 2 (the Dogster column) as well as Figure 5, which shows the mean trust values from the four conditions along with the 95% confidence interval values as the error bars. An ANOVA indicated significant differences in the trust values among the four conditions (F(3,196) = 3.404, p<0.05). Comparing the groups with t-tests indicated answers with text cues and combined text and photo cues were trusted significantly more than answers with no cues or with photo cues alone. All tests were significant for p<0.05. Thus, as in the case of dog owners from Mechanical Turk, Dogster subjects were influenced by text cues but not by photo cues.

Figure 5.

Average trust ratings for each condition assigned by subjects from Dogster, shown with 95% confidence intervals

Summary of Quantitative Results

Overall, then, photo cues had mixed benefits. The results are very similar for the dog owners from Mechanical Turk and from Dogster, and those results in turn differ sharply from the non-owner results from Mechanical Turk. It thus appears that while text cues have a significant impact on trust regardless of subjects' expertise, photo cues only improve trust among non-experts.

It is also worth noting that the trust values assigned by Dogster users were significantly lower than the trust ratings from MTurk, both in the dog owner and non-owner groups. This result may be due to Dogster users' strong trust in their own community and correspondingly lower trust in answers seen outside that forum, but this is only speculation based on a few qualitative answers. Thus, to explain these and other findings, it is useful to also examine the qualitative findings of the study.

Qualitative Results

In addition to the controlled user study described above, we asked subjects an open-ended question after they completed the experiment. We asked, “In completing this study, what factors affected the trust that you had for the different people?” The rationales provided by the subjects themselves included the factors that were the explicit focus of this study, photo and text cues, as well as other factors not examined in detail in this study.

Several of the subjects commented directly on the photos. Most of these focused on the photo cue of the presence or absence of a dog, and stated that having a dog in the photo enhanced their trust. For example, one subject commented, “The ones that had photos with their dogs were better for me.” Another subject explained, “The people who also had dogs in the pictures did also make them seem a little more trustworthy.” Yet another subject replied, “A dog in the picture definitely helped.” Finally, another subject noted, “The fact that the blond had a pic of her dog made her more believable.” Thus, for these subjects, having a dog in the photo helped to enhance their trust in those answers.

However, some other subjects explicitly mentioned ambivalence towards the presence or absence of a dog in the photo. For example, one subject stated, “I didn't pay much attention to the people– only their answers.” Another subject commented, “I try not to let the pictures of the person leaving the comments effect[sic] my trust for them.” Finally, yet another person noted very succinctly, “The photos of the people with golden retrievers did not increase my trust in their response.” Thus, for these subjects, the presence or absence of a dog in the photo did not affect their trust.

Subjects also discussed other factors in the photos. Specifically, gender, age, smiles, eyes, and being ‘approachable’ all factored into the trust evaluations of some subjects. One of the more interesting responses came from a subject who stated, “Didn't trust the redhead's reply because I doubted it was really her photo.” This echoes previous work indicating that professional photos of people who appeared to be models were considered less trustworthy than photos that appeared to be taken by and of non-professionals (Riegelsberger, Sasse, & McCarthy, 2003). Finally, one heuristic that was likely not very useful in this particular exercise was, “Whether I have met the person.”

Several of the subjects commented directly on the text. Some of these focused on the text cue of mentioning their own dog, and stated that mentioning their own dog enhanced their trust. For example, one subject stated, “Personal stories helped.” Another subject emphasized, “Their ability to speak from experience and not from speculation.” Yet another subject mentioned, “Personal experience with their own pets weighed in as well.” Finally, another subject said, “Answer based on personal experience with pet.” Thus, for these subjects, references to owning a dog in the text helped to enhance their trust in those answers.

However, some other subjects expressed the opposite sentiment, stating that references to a specific dog decreased their trust in an answer. Interestingly, these individuals perceived the difference as an issue of anecdotal evidence versus general facts. For example, one subject stated, “I didn't trust anecdotes about a single dog. I trusted people who gave the most detail about golden retrievers in general.” Another subject mentioned, “A few of the answers were very heartfelt and showed that there was actually care taken to answer the question to help. Others were just people stating facts of their own dog – which isn't really helping the person with the original question.” Finally, another subject explained, “I put more trust in the people who spoke about golden retrievers in general – as opposed to talking about their personal pet.” Thus, in these instances, the intended text cue actually was perceived as having a negative impact on trust.

Subjects also discussed other factors in the text, including tone, grammar, honesty, truth, absence of a question, confidence, logic, completeness, and making sense. One notable response read, “The words they used – cool hardwood floors– den-like– security– comfortable– normal. You can tell they like their dogs.”

Some subjects directly discussed how both cues affected their trust. For example, one subject explained that a factor that influenced trust was, “Whether or not they had dogs in the photo and whether or not they said they owned dogs mattered a lot.” Another subject commented, “If they said they had a golden retriever or if their picture showed one. Both was better.” However, one contrasting perspective was provided by a subject who stated, “I was unaffected by pictures or stated ownership of retrievers.” Thus, different subjects perceived the cues differently.

DISCUSSION

The difference in results for dog owners and non-dog owners is an interesting one. When tied to related research, however, it can be explained by the difference in the subjects' backgrounds.

It has been shown in previous work that the more we empathize with a person, the more we trust them (Levenson & Ruef, 1992). Empathy-inducing communication can be very strong online (Preece, 1999). Communication that leads to empathy in online communities can thus lead to increased trust between users (Preece, 2004).

Earlier research has also shown that empathic accuracy – the ability for one person to accurately understand the feelings of others – has a strong positive effect on trust in online communities (Feng, Lazar, & Preece, 2004).

Previous work has also shown a correlation between trust and similarity in online communities (Ziegler & Golbeck, 2007; Ziegler & Lausen, 2004).

In our study, the two variations of each answer were factually the same. However, the answer variants with cues contain an indication of empathy with the question asker by presenting the information with personal details. Furthermore, when the answerer indicates that he or she owns a dog, it creates a point of similarity with the dog owners reading the response. The no-cue variants of the answer do not lead to that sense of similarity.

Thus, among dog owners, it is consistent with previous work that the more empathetic response would lead to higher trust. The similarity indicated by mutual dog ownership could also lead to increased trust. When the community is made up of non-dog owners however, the personal detail would not lead to a greater sense of empathy or similarity, and thus would not necessarily lead to a greater sense of trust.

For non-dog owners, these points of similarity or empathy are not present. Without those connections, they must look for a more explicit indicator of trust. Since they are also non-experts on the question being asked (because they do not own dogs to have gained that experience), signs of expertise in the answerer can be a basis for trust. Indeed, it has been shown in earlier work that perceived expertise is a factor strongly associated with trust (Moorman, Deshpande, & Zaltman, 1993).

For non-dog owners, signs that the answerer has experience with dogs – either through a text cue or a photo cue – can indicate some expertise and thus lead to increased trust. This will be less of a factor among dog owners who have similar dog owning experience and thus would perceive no special expertise from these cues.

The qualitative data were quite mixed. However, it was interesting to note that for photo cues, the major conflict was between positive impact versus no impact, while for text cues, the major conflict was between positive impact versus negative impact. Given this finding, it is especially remarkable that the text cues were so universally successful in increasing trust. Further, the qualitative data did not divide cleanly along the contrast between dog owners and non-owners. Thus, it appears that there may be differences between how people actually rate trust and how they perceive the cues affecting their ratings, although additional research would be necessary to further examine this point.

IMPLICATIONS FOR DESIGN

We take several broader lessons from this experiment. As we detailed in the review of previous work, the benefits of photos for improving trust have been mixed. We observed this among our own subgroups in this experiment.

Noting where photos did and did not increase trust in our study may provide some insights into where they are beneficial. We found photos increased trust among subjects whose connection to the topic was not strong enough for them to form connections with answerers through similarity or empathy. While in this case the topic of dog behavior was largely irrelevant to the non-dog owners, this same lack of personal connection would be present in many discussion forums. For example, when users find a discussion board and ask a question to find specific information (e.g. asking about a tech support issue or looking for reviews of a product), they often do not become members of the community. In cases like these, any cues indicating expertise – including those found in photos – may be helpful in building trust with users.

However, in discussions organized around a specific topic where frequent users become part of a community, photos seem less likely to impact trust. Text cues, on the other hand, can indicate a person's connection to the topic and can help build trust through an increased sense of empathy and similarity. Thus, new members coming to these communities, or businesses and organizations trying to establish a trusted presence, are likely to be more successful by showing community members signs of commonality with their passions and expressing sentiments that will establish an empathic connection than by relying on photos to do the work for them.

CONCLUSIONS AND FUTURE WORK

In this paper, we ran a controlled user study to test the impact of cues to expertise in photos and text on the perceived trustworthiness of the author of answers in social Q&A sites. We found that text cues increased trust among all groups of our subjects, but the impact of photos was mixed. Among subjects with no personal connection to the topic being discussed, photo cues helped. Among subjects who were part of the community used in our test, photo cues did not help. This finding is notable since it conflicts with results from Riegelsberger, Sasse, and McCarthy (2005), who found that increasing degrees of media richness increase user trust within the domain of advising. However, it aligns with the finding of Toma (2010) from the domain of online dating, as her research found that textual information increased trust while photographs decreased trust. As such, this work reinforces the importance of studying the role of trust in specific application domains, as suggested by Jurczyk and Agichtein (2007).

We conclude from these results that any type of cue may help outsiders build trust, since they lack a personal foundation for doing so. However, for people connected to the community, the cues must indicate similarity and empathy with the user to build trust; something photos alone do not do. Beyond our specific domain, this provides some insights into how users may build trust through their activities. Cues of expertise are important and, in forums where there is no strong community or where users often pass through without long-term engagement, any cue may be helpful. In communities with a stronger sense of community, and where other users feel personally connected to the topic, cues must indicate similarity and empathy to increase the initial amount of trust given to a user.

Future work is necessary to continue this research direction. Specifically, it would be useful to examine the impact of expertise cues in other domains beyond dog ownership. For example, the experimental design described here could easily be replicated with questions and answers from a different domain, such as consumer health information. Further, it would be useful to find ways to get beyond the perception of the text cue as an indication that the answer is anecdotal – perhaps this will be possible within a different domain. Finally, it would also be useful to examine the other factors that shape trust, and how these factors interact with photo and text cues.
