Convergence and synergy: Social Q&A meets virtual reference services



This paper presents a preliminary analysis of online information services that examines the intersection of two areas of research: Virtual Reference Services (VRS) and Social Q&A (SQA). This project embraces the idea of a synergistic collaboration within and between VRS and SQA to provide higher quality content to information seekers and a higher level of sustainability. Preliminary findings are described that emanate from a grant-funded project, Cyber Synergy: Seeking Sustainability through Collaboration between Virtual Reference and Social Q&A Sites, which is supported by the Institute of Museum and Library Services (IMLS). Early stages of the grant project include VRS transcript analysis and interviews with online Q&A end users (students) and experts (university librarians). Analysis reveals that VRS questions address a broad range of subjects, predominantly social sciences and technology. The most frequent question types are Procedural and Ready Reference. Accuracy for the subset of Ready Reference questions was found to be high, with 90 percent of answers correct, although only 75 percent were correct with a citation included. Interviews with information seekers and providers in both VRS and SQA environments revealed important differences in how experts and end users perceive online Q&A services, leaving clues as to how it may be possible to begin bridging these systems.


Digital and on-ground libraries are increasingly adopting a range of virtual reference services (VRS) to meet the needs of their users. In a parallel digital universe, social media sites and applications are joining with mobile technologies to make the flow of information more accessible to larger and more diverse audiences. The existence of these modes of communication, and their fast adoption by information seekers, calls for transformative research that addresses innovative models, services, and tools. New approaches are required that investigate ways in which more sustainable service models can be created by seamlessly enabling collaboration among services within and between knowledge institutions, professionals, and subject experts.

This paper examines the intersection of two related areas of research: VRS and Social Q&A (SQA). Its major goals are 1) to summarize research related to both of these different, yet essentially similar, online information services, 2) to suggest ways that an examination of each service could be useful to the other, and 3) to present preliminary findings from a grant-funded project, Cyber Synergy: Seeking Sustainability through Collaboration between Virtual Reference and Social Q&A Sites. Cyber Synergy is being supported from October 2011 to September 2013 by the Institute of Museum and Library Services (IMLS), Rutgers, The State University of New Jersey, and OCLC (Radford, Connaway, & Shah, 2011–2013). This research explores the idea of a synergistic collaboration within and between VRS and SQA services to provide higher quality content to information seekers and a higher level of sustainability to their respective institutions. VRS, including live chat and Instant Messaging (IM) formats, have become well-established offerings now found on the great majority of library websites. Including email reference, VRS have existed for more than 20 years (Sloan, 2006), and libraries have offered live chat for more than 10 years, with IM reference entering the mainstream in 2008 (Introducing Qwidget, 2008). The ability of libraries to continue offering VRS to users is seriously threatened in this time of deep funding cuts. Concerns regarding the sustainability of VRS have demanded an exploration of how collaborations among libraries and library consortia can optimize use of their information service resources, and how a possible collaboration between VRS and SQA communities could leverage elements of each to generate better service and provide access to subject expertise.


Literature on VRS

A growing body of research examines the quality of VRS, including attention to accuracy, interpersonal dimensions, and the ability to effectively provide instruction. Connaway and Radford (2011), Radford and Mon (2008), and Ross, Nilsen, and Radford (2009) provide overviews of these findings. While numerous works look at usage and effectiveness of various VRS services, most are based on the assumption of a single user/patron and a single source of information. Many professionals in the field are challenging this assumption and are looking for innovative ways to address the needs of information seekers in the rapidly changing information landscape, as well as to provide a more effective and sustainable model for VRS services. For instance, Shachaf (2010) asserts that existing reference models that assume “dyadic interaction” (between two people) are limited. She describes “social reference” as a collaborative endeavor that “may differ significantly from the traditional (and digital) reference encounter by its social and collaborative nature that involves group effort and open participation” (p. 67). She believes SQA sites to be virtual communities within which social reference encounters could occur.

Pomerantz (2006) agrees that VRS must become more collaborative, although he asserts that library reference has always been a collaborative effort between the librarian and the user, and that reference referrals have typified ongoing and common collaboration between librarians. National VRS cooperatives like OCLC's QuestionPoint (QP) have grown from 300 members in 2002 to more than 2,000 today. According to Pomerantz (2006), information seekers want answers, and if VRS does not become a more collaborative effort to provide these answers, users will go elsewhere. Twidale and Nichols (1996) state that: “The use of library resources is often stereotyped as a solitary activity, with hardly any mention in the substantial library science and information retrieval literature of the social aspects of information systems” (p. 177). Levy and Marshall (1994) also note that “support for communication and collaboration is … important for information-seeking activities, and … indeed, support for the former is needed to support the latter” (p. 164). Hansen (2009) asserts that in these difficult budget times there is an imperative to find the right collaboration model.

Given that an effective solution for creating better and more sustainable VRS is collaboration, a natural question is how it could be achieved. Hertzum (2008) asserts that there is a need to establish common ground and information sharing for collaborative work to take place. In the communication field, Gibson and Gibbs (2006) studied virtual teams, collaboration, and innovation and found that there were barriers to collaboration at a distance. These barriers included: geographical dispersion (being physically distant), electronic dependence (reliance on computer-mediated communication, with little or no face-to-face contact), structural dynamism (frequent change in personnel), and national diversity (being from different nations or nationalities). One of these barriers, geographic dispersion, also has been studied with regard to email reference (Portree, Evans, Adams, & Doherty, 2008) and was found to be a factor in increasing the length of time spent on live chat queries (Connaway & Radford, 2011; Radford & Connaway, 2005–2008).

Literature on SQA

SQA services are community-based, and purposefully designed to support people who desire to ask and answer questions, interacting with one another online. People ask questions of the public and expect to receive answers from anyone who knows something related to the questions, allowing everyone to benefit from the collective wisdom of many. In essence, SQA enables people to collaborate by sharing and distributing information among fellow users and by making the entire process and product publicly available. This encourages users to participate in various support activities beyond question and answer, including commenting on questions and answers, rating the quality of answers, and voting on the best answers. Within the past few years, various types of SQA services have been introduced to the public, and researchers have begun to evidence interest in information-seeking behaviors in these contexts. The most popular examples of SQA include Yahoo! Answers and AnswerBag. The advantages of this approach are the low cost (most services are free), quick turnaround due to large community participation, and easy build-up of social capital. On the other hand, there is typically no guarantee regarding the quality of the answers. The asker is simply relying on the wisdom of the crowd.

While services such as Yahoo! Answers offer peer-to-peer question answering, there are also services that facilitate discussion among peers for questioning and answering. Such services allow a group of users to participate in solving a problem posed by a member of that community. This approach is very similar to peer-to-peer Q&A, with the difference that it encourages peers to have a discussion rather than simply trying to answer the original question. WikiAnswers is a good example of this approach. The advantage is that the asker often gets more comprehensive information that includes not only the answer, but also the opinions and views of the participants. In a way, this approach can be seen as a hybrid of Yahoo! Answers and Wikipedia. The disadvantage is that not many questions require discussion-based answering. In addition, with services such as Yahoo! Answers, users can go back and forth between asking questions and leaving answers or comments, thus inducing an implicit discussion.

What a Convergence of SQA and VRS Offers

In library and information science (LIS), Pomerantz (2006) asserts that it is imperative to sustainability that VRS collaborate to ensure that users receive answers, not just referrals or instruction on how to find answers themselves. A seemingly natural choice for VRS is to collaborate with SQA, which has several components similar to VRS, with the major difference being that the answers could be provided by virtually anyone in the world. SQA, according to Shah, Oh, and Oh (2009), consists of three components: a mechanism for users to submit questions in natural language, a venue for users to submit answers to questions, and a community built around this exchange. Despite this short history, SQA has already attracted a great deal of attention from researchers investigating information-seeking behaviors (Kim, Oh, & Oh, 2007), selection of resources (Harper, Raban, Rafaeli, & Konstan, 2008), social annotations (Gazan, 2008), user motivations (Shah, Oh, & Oh, 2008), comparisons with other types of question-answering services (Su, Pavlov, Chow, & Baker, 2007), and a range of other information-related behaviors. Pomerantz (2008a) wrote about the “Slam the Boards” effort, in which librarians voluntarily pick up questions from SQA services, answer them, and declare themselves to be librarians in order to educate the public about quality answers from information professionals. In addition, Enquire, “a 24-hour, live question answering service offered by public librarians across England and Scotland in collaboration with partners in the United States,” is answering up to 1,500 questions per month in Yahoo! Answers UK and Ireland within specific subject areas. Enquire has been rated the best answer 79 percent of the time.
Although these endeavors provide a library presence in SQA sites and have earned a reputation for good answers, they represent a very small number of libraries and are not operating under an economic model or a mechanism to triage questions with the SQA sites.

From the standpoint of user satisfaction – with both the answer and the site – it would benefit SQA sites to have a mechanism to triage questions between sites. The question's topic would be one factor in such a mechanism, but other factors used in evaluating the quality of answers also could be valuable for this purpose. This sort of triage is comparatively simple for a human to perform – and while time-consuming, it is in fact commonly performed by librarians for digital reference services (Pomerantz, 2008a; Pomerantz, 2008b; Pomerantz, Nicholson, Belanger, & Lankes, 2004). The QuestionPoint (QP) reference service, an application for managing a global collaborative of library reference services, also performs this sort of triage automatically, by matching questions to profiles of individual libraries (Kresh, 2000). The level of complexity that such automated triage systems are capable of handling pales in comparison to the complexity that a human triager can manage.

The current model of VRS is often monolithic, in the sense that it works as a black box, with questions going in and answers coming out, without an easy way to extend the existing services or combine them with other sites or services. This model is becoming difficult to support, especially in the current economic environment, which has negatively impacted library funding across the country (“Branch Closings and Budget Cuts Threaten Libraries Nationwide,” 2008; Goldberg, 2010; Henderson & Bosch, 2010), threatening and often implementing “devastating service reductions” (“State Budgets Hammer Public Libraries Nationwide,” 2009, p. 19).

Reference desks (physical or virtual) are sometimes underutilized, or the kinds of information requests coming to them are not appropriate for a given reference service or the available staff (Naylor, Stoffel, & Van Der Laan, 2008). Creating an on-demand collaborative among participating reference services at the same or even different institutions may allow not only for better utilization of library resources, but also for more effective service to users. VRS offer in-depth expert answering to information seekers, whereas SQA services, such as Yahoo! Answers, provide quick answers using crowdsourcing. The former is expected to have better quality, whereas the latter has the advantage of large volume. An interesting avenue to explore is how VRS can collaborate with SQA services to provide more in-depth information when needed. Such collaboration may allow SQA services to offer premium (paid) content to their large user base, and VRS to create a new revenue stream to support their sustainability.

VRS are evolving, with new developments coming at a quickening pace to enhance the user experience, recently allowing access to library services through text-messaging mobile devices and social networking sites (e.g., Cassell, 2010; Cassell & Hiremath, 2009; Cole & Krkoska, 2011). However, an expanded research agenda that generates empirical data is needed to assess the effectiveness of these services and enhancements. Furthermore, the reduction of library budgets increases the need to identify opportunities to share resources and generate revenue through collaboration, and SQA services may provide such an opportunity.

Research Questions

To address these issues and to further the understanding of user behavior in VRS and SQA, the following research questions are being addressed by Cyber Synergy:

  • What is the effectiveness of various VRS and SQA services, quality of content provided, and their relative merits and shortcomings?

  • How does accuracy compare between VRS and SQA sites?

  • What lessons can be learned from SQA sites that could be applied to VRS and vice-versa?

  • How can VRS become more collaborative, within and between libraries, and tap more effectively into librarians' subject expertise?

  • How can we design systems and services within and between VRS and SQA for better quality and sustainability?

The present paper provides some of the early investigations and findings related to these questions.


This project will span two years, and this paper reports initial transcript analysis for VRS queries and results from interviews with online Q&A end users and experts (university librarians). These preliminary results will be used to inform the development of later phases of the grant project, which will include comparisons of transcript analyses of both SQA and VRS sessions; additional interviews with VRS users, VRS providers, and SQA users; and design sessions with experts (both system designers and information professionals), which will result in specifications for a system that will help to address the research question relating to system design.

Method for VRS Query Analysis

A sample of 575 transcripts was drawn from 296,797 QuestionPoint (QP) live chat and Qwidget (QW) sessions from June 2010 to December 2010. Of these, 560 were found to be usable, including 350 live chat sessions from QP and 210 from QW. All transcripts were stripped of identifying information (e.g., name, email address, IP address, etc.) at OCLC and then subjected to several different analyses by teams of coders, including researchers and graduate assistants from OCLC and Rutgers University, supervised by the co-authors.

A team of two coders first classified the subject of the questions from each chat and QW transcript using the Dewey Decimal Classification System's categories. Two additional coders determined the question type (e.g., Ready Reference, Subject Search, Procedural) using criteria and category schemes from Katz (1997), Kaske and Arnold (2002), and Arnold and Kaske (2005). The subset of Ready Reference questions was further analyzed for accuracy, using criteria and category schemes from Arnold and Kaske (2005), Radford and Connaway (2005–2008), and Radford, Connaway, Confer, Sabolsci-Boros, and Kwon (2011). Coders evaluated the accuracy of chat Ready Reference answers by checking responses against authoritative websites and subscription-based databases, as well as any citations/links provided by the librarian/staff member. Intercoder reliability after discussion to resolve differences was above 90 percent for all analyses of VRS transcripts.
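The intercoder reliability figure reported above can be understood as simple percent agreement: the share of items to which both coders assigned the same label. The sketch below is illustrative only; the function and the sample labels are our own and are not drawn from the project's data.

```python
def percent_agreement(labels_a, labels_b):
    """Percent of items on which two coders assigned the same label."""
    if len(labels_a) != len(labels_b):
        raise ValueError("Both coders must label the same set of items")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

# Hypothetical question-type labels for ten transcripts (not project data)
coder_1 = ["Procedural", "Ready Reference", "Procedural", "Holdings",
           "Subject Search", "Ready Reference", "Procedural", "No Question",
           "Ready Reference", "Holdings"]
coder_2 = ["Procedural", "Ready Reference", "Procedural", "Holdings",
           "Subject Search", "Ready Reference", "Subject Search", "No Question",
           "Ready Reference", "Holdings"]

print(percent_agreement(coder_1, coder_2))  # 90.0 (9 of 10 labels match)
```

Note that percent agreement does not correct for chance agreement; measures such as Cohen's kappa are often preferred when category distributions are skewed.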

Method for SQA Analysis

In order to address the research question regarding user and expert perspectives on SQA, a series of preliminary interviews was conducted. Interviews can be designed to allow researchers to collect qualitative data and are appropriate for exploratory phases of research projects in library and information science (LIS) contexts (Connaway & Powell, 2010). Here, structured interviews were used to gather perceptions regarding how users judge the quality of answers, and how experts might view the potential for increased collaboration and the leveraging of subject knowledge that might result from a synergy between SQA and VRS. Ten reference librarians from Rutgers University Libraries were interviewed, representing multifaceted subject areas including the liberal arts, the arts, the sciences, and business disciplines. Interview questions for these academic librarian experts centered on their perceptions of face-to-face and VRS reference, approaches to responding to users' needs, selection of resources, important components of quality answers, and their knowledge and opinions of SQA services.

Additionally, exploratory interviews were conducted with users of online Q&A systems, including 24 undergraduate students from Rutgers University, NJ, and 12 master's students from eight different universities (i.e., Boston University, Johns Hopkins, Drexel University, Philadelphia College of Osteopathic Medicine, Harvard, Georgetown, Notre Dame, and Northwestern). Convenience sampling was used to recruit these students: undergraduates who volunteered to participate in the study were offered either monetary compensation ($10) or extra credit, and master's students were offered monetary compensation ($10). Undergraduates were recruited through flyers posted at Rutgers University; the master's students were recruited using a snowball sampling technique beginning with Rutgers University master's students. Students who volunteered were selected as participants only if they could identify past use of and/or experience with at least one online Q&A platform.

Interview questions for these graduate and undergraduate end users centered on their use of and experiences with physical and digital libraries, their use of and experience with SQA services, and what they look for in question-answering services. All interviews were audio-taped and transcribed, and a theme analysis was performed by the research team following the constant comparison method (Glaser & Strauss, 1967), facilitated through the use of NVivo 9 qualitative analysis software.


VRS Query Analysis — Dewey Decimal Classification

To better understand the types of subjects received by VRS, the 560 transcripts (350 from QP and 210 from QW) were further analyzed for topicality. A total of 429 questions could be coded using the Dewey Decimal Classification (Dewey, 2011). The Dewey system was selected as it is a well-recognized and simple subject typology. An additional 261 questions were identified that could not be assigned Dewey numbers (these included Procedural, No Subject, Unclear, and Miscellaneous questions, i.e., Directional, Inappropriate, and Test). As can be seen in Table 1, the highest percentages were found for questions related to the Dewey categories of Social Sciences (116, 16.81%) and Technology (80, 11.59%).

Table 1. VRS Questions by Dewey Decimal Classification
Dewey Decimal Classification Category | # VRS Questions | % of Total
000 Generalities | 39 | 5.65%
100 Philosophy & Psychology | 18 | 2.61%
200 Religion | 15 | 2.17%
300 Social Sciences | 116 | 16.81%
400 Language | 13 | 1.88%
500 Natural Science & Mathematics | 33 | 4.78%
600 Technology (Applied Sciences) | 80 | 11.59%
700 The Arts | 27 | 3.91%
800 Literature & Rhetoric | 58 | 8.41%
900 Geography & History | 30 | 4.35%
Other (includes Procedural, Directional, No Subject, Holdings, Inappropriate, Test, Unclear) | 261 | 37.83%
Total VRS Questions | 690 | 100%

Type of Question

From the 560 usable transcripts (350 from QP and 210 from QW), coders identified 575 questions, as some transcripts contained more than one query. Each question was sorted into one of the following nine categories: 1) Subject Search, 2) Ready Reference, 3) Procedural, 4) No Question, 5) Holdings, 6) Research, 7) Inappropriate, 8) Directional, and 9) Reader's Advisory. These categories were derived from Arnold and Kaske (2005), Radford and Connaway (2005–2008), and Ross et al. (2009).

The largest number and percentage of questions were determined to be in two categories: Procedural (181, 31 percent) and Ready Reference (179, 31 percent). The next most frequent were Subject Searches (97, 17 percent), Holdings (49, 9 percent), and No Question (25, 4 percent). Lower numbers and percentages were found for Research (19, 3 percent), Directional (15, 3 percent), Reader's Advisory (6, 1 percent), and Inappropriate (4, <1 percent).

Accuracy of Ready Reference Answers

The subset of 179 Ready Reference queries was further examined for accuracy; 11 of these were eliminated because they were referred for follow-up to another library/librarian or involved technical difficulties. The remaining 168 queries were checked for accuracy by a team of two coders. Of these, 151 answers (90 percent) were found to be correct, 7 (4 percent) incorrect, and 10 (6 percent) were coded as “Other.”

Correct answers were further examined to determine which included citations, which did not, and which were instances where no citation was necessary. Correct with Citation accounted for 75 percent (114 of 151 correct answers), Correct without Citation for 14 percent (21 of 151), and Correct, No Citation Needed for 11 percent (16 of 151).
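The percentages above follow directly from the raw counts reported in the text. A minimal Python sketch (variable names are ours) reproduces the arithmetic:

```python
# Counts reported for the 168 Ready Reference queries checked for accuracy
checked = 168
correct, incorrect, other = 151, 7, 10

# Breakdown of the 151 correct answers by citation status
with_citation, without_citation, no_citation_needed = 114, 21, 16

def pct(part, whole):
    """Percentage rounded to the nearest whole number."""
    return round(100 * part / whole)

print(pct(correct, checked))             # 90 percent correct overall
print(pct(with_citation, correct))       # 75 percent correct with citation
print(pct(without_citation, correct))    # 14 percent correct without citation
print(pct(no_citation_needed, correct))  # 11 percent correct, no citation needed
```

Note that the overall accuracy figure (90 percent) is computed over the 168 checked queries, while the citation breakdown is computed over the 151 correct answers only.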


This investigation began by identifying a series of factors that users employ when judging the “goodness” of an answer or, more appropriately, a response. A comprehensive review of the LIS literature, focusing on SQA services and VRS best practices, identified over 200 factors. Scholars argue that many of these factors overlap and can be winnowed down to an exhaustive list of around 10 to 30 factors; however, a comprehensive, standard list has yet to be generated. From this review, it was determined that these factors could be grouped into three high-level categories: quality, relevance, and satisfaction. These categories account both for the qualities inherent in the objective content of the information and for the subjective expectations of the end user, which combine to produce a judgment of information value. These findings extend earlier work on evaluating answers or responses for quality or relevance only (Agichtein, Castillo, Donato, Gionis, & Mishne, 2008; Harper et al., 2008; Kim & Oh, 2009). To determine which factors to classify under the three high-level categories, it was necessary to explore insights garnered from both experts and users regarding the criteria they employ when making value judgments.

These interviews with experts (librarians) and end users (students) constituted an initial attempt to look at both sides of the online Q&A coin, with the aim of comparing SQA and VRS at some level. Interviews revealed that users and experts most often identified topicality and validity as important factors in making value judgments of information. Undergraduate and graduate students value information that is accurate and will often test the veracity of answers provided to them on an SQA site. If they could verify the truth of an answer, or at least had the opportunity to verify it if desired, they showed a propensity toward SQA as compared to VRS. Students who indicated participation in library instruction sessions emphasized continued use of the search strategies they learned to find resources that were more in-depth and on-topic with their information need. This aligns with the experts, who indicated similar value for these two attributes. The noted success of the library instruction sessions suggests that future study should be conducted to further analyze the effectiveness of these sessions in aligning user and expert information behaviors, as well as to examine the efficacy of these services across various platforms.


The above literature review provides a brief overview of the research related to these different, yet essentially similar, models for delivery of online information service to end users. Traditional VRS, like face-to-face reference services, embraces a one-to-one model (i.e., one asker, one expert professional, usually a librarian), whereas SQA promotes a one-to-many model (i.e., one asker to many potential experts, harnessing the knowledge of the crowd). This paper foregrounds several questions, including: Can and should these two models converge or be bridged? What would a combined or synergized model look like? Should VRS embrace the wisdom of the crowd, and if so, how, especially for questions related to highly esoteric or complex subjects? The research findings reported here constitute the initial explorations of a larger two-year project that is investigating these questions and the idea of a possible convergence and synergy between SQA and VRS (Radford, Connaway, & Shah, 2011–2013).

Analysis of preliminary data has found that VRS services successfully address a broad range of subjects, predominantly centered in the social sciences and on technology. Classification of a sample randomly drawn from a large corpus found that the largest numbers of question types are Procedural and Ready Reference. Accuracy for the subset of questions typed as Ready Reference, across the wide range of subjects, was found to be high, with 90 percent correct, although only 75 percent were found to be correct with a citation included. One application of this finding is that it establishes a benchmark for VRS accuracy. VRS librarians/staff were found to provide a high level of accuracy for informational questions on a variety of topics. The level of accuracy (and user satisfaction) for SQA will be a focus of the grant project's later phases, and an analysis of reasons why some SQA questions fail to garner any answers is in progress. A typology of attributes of failed informational questions is also being developed.

The interviews conducted to compare VRS and SQA services provided interesting insights from both the experts' (librarians') and the end users' (students') points of view. Librarians who were interviewed placed equal or higher importance on attaining sustained user satisfaction and meeting service expectations. They also valued teaching search strategies that enable users to develop their searching and information evaluation skills, as opposed to the delivery of e-resources, including URLs for authoritative websites or scholarly articles from databases that will most likely be used only once. Most librarians claimed that teaching these strategies was better conveyed via electronic media than in face-to-face settings (see also Radford & Connaway, 2005–2008, which reported a similar finding in interviews with VRS librarians). Students, on the other hand, asserted the importance of topicality, length, visibility, timeliness, clarity, availability, and verifiability. These judgments were conveyed both via their reported information behavior and by their analyses of pre-identified categories culled from the literature review. Unlike experts, students identified factors comprising relevance, quality, and satisfaction on equal planes, suggesting that these three criteria have equal importance for information seeking within academic and Everyday Life Information Seeking (ELIS) contexts (Savolainen, 1995).

Additional analyses of these data are in progress, as well as analysis of SQA questions by subject, question type, and reasons why some questions fail to obtain any answers, which will allow for further comparison.

Later phases of the project will include in-depth interviews with VRS users and VRS providers, additional interviews with SQA users, and design sessions with experts, which will result in specifications for a system that will help to address the research question relating to system design.

The findings of the Cyber Synergy research will help scholars to: a) explore how various library services, especially VRS, could better employ existing and frequently underutilized services (e.g., Naylor et al., 2008), b) understand how subject expertise of librarians could be better leveraged through virtual collaborations, c) develop guidelines for practice and recommendations for evaluation of VRS and SQA, d) inform systems design, and e) explore ways to connect potential users and information seekers with SQA services, VRS, and social media sites. These investigations could inform system design and adoption of different service models to create a more open, innovative, and sustainable pathway for the future.


Support for the research study, Cyber Synergy: Seeking Sustainability through Collaboration between Virtual Reference and Social Q&A Sites, was provided by the Institute of Museum and Library Services (IMLS), Rutgers, The State University of New Jersey, and OCLC, Inc.

The authors wish to thank the following research assistants who helped with coding and transcript analysis: Erik Choi, Alyssa Darden, Lisa DeLuca, Mary Ennis, Erin Hood, Jaqueline Woolcott, and Vanessa Kitzie.