Evaluating clinical librarian services: a systematic review


Alison Brettle, Research Fellow, School of Nursing and Midwifery, University of Salford, Salford. E-mail: a.brettle@salford.ac.uk


Background:  Previous systematic reviews have indicated limited evidence and poor quality evaluations of clinical librarian (CL) services. Rigorous evaluations should demonstrate the value of CL services, but guidance is needed before this can be achieved.

Objectives:  To undertake a systematic review examining the models of CL services and the quality, methods and perspectives of clinical librarian service evaluations.

Methods:  Systematic review methodology and synthesis of evidence, undertaken collaboratively by a group of 8 librarians to develop research and critical appraisal skills.

Results:  There are four clear models of clinical library service provision. Clinical librarians are effective in saving health professionals time, providing relevant, useful information and high quality services. Clinical librarians have a positive effect on clinical decision making by contributing to better informed decisions, diagnosis and choice of drug or therapy. The quality of CL studies is improving, but more work is needed on reducing bias and providing evidence of specific impacts on patient care. The Critical Incident Technique as part of a mixed method approach appears to offer a useful approach to demonstrating impact.

Conclusions:  This systematic review provides practical guidance regarding the evaluation of CL services. It also provides updated evidence regarding the effectiveness and impact of CL services. The approach used was successful in developing research and critical appraisal skills in a group of librarians.

Key Messages

Implications for Practice

  •  A mixed methods approach is an appropriate means of evaluating clinical librarian services.
  •  Robustness of methods needs to be increased (particularly concerning researcher bias).
  •  CIT is a useful tool to demonstrate specific instances of impact, provided a distinction is made between actual and intended impact.
  •  Data should be collected on the relevance and usefulness of the service provided, as well as on specific impacts on patient care and items relevant to organisational objectives.

Implications for Policy

  •  Current studies mainly focus on undefined impact on patient care; evidence of more specific impacts is needed.
  •  Impact from an organisational perspective needs to be considered over the longer term.
  •  Standards for reporting evaluation studies need to be improved.
  •  Quality markers for assessing evaluation studies need to capture how well an element of the evaluation was conducted rather than whether it was conducted.


Recent reports have raised important questions about the challenges facing health librarians and suggested a need for evaluation at local and national levels.1,2 Rigorous evaluations of clinical librarian (CL) services will demonstrate their value and improve the evidence base of the profession, but CLs need guidance on evaluating their services before this can be achieved. This systematic review aims to provide guidance for future evaluations with the emphasis on measuring the impact of CL services. It also updates previous research on the effectiveness of clinical librarian services.

Related research

Three systematic reviews3–5 have previously examined the effectiveness of CL services. Winning & Beverley,3 building on an earlier selective review of clinical librarianship,6 found that CL services were well used and liked by clinicians, but that there was little evidence of how clinicians used the literature, of the impact of CL services on patient care, or of their cost-effectiveness. The review3 also highlighted the poor quality of reporting, deficiencies in the evaluation methodologies used, difficulties in comparing studies and the lack of a clear definition of impact. They noted that

“Impact may be interpreted as affecting a range of factors...”3 (p. 19)

but the majority of studies merely asked whether the information provided had generally impacted on patient care. However, they recognise that actually measuring a direct impact on patient care is “difficult if not impossible to do”3 (p. 19).

Wagner & Byrd4 confirmed these findings, noting that “comparative quantitative research methods or carefully and systematically constructed qualitative research methods have been used very rarely”4 (p. 30). Furthermore

“no study to date has attempted to measure the direct or indirect impact of CML [clinical medical librarian] services on the outcomes of patient care (such as hospital length of stay, patient mortality, adverse drug effects, etc.)…”4 (p. 30).

In contrast, in a sub-analysis of clinical librarian studies for a wider systematic review, Weightman and Williamson5 suggest there are indications that CL services directly benefit patient care, save clinicians time and are cost-effective. Weightman & Williamson7 also commented on the poor quality of the studies and concluded that, in designing impact studies, multi-methods (i.e. both quantitative and qualitative methods) should be employed. In particular, they suggest the use of the Critical Incident Technique (CIT) to collect data relating to specific instances where use of library services has had an impact in some way. They also recognize that evaluating impact on patient outcomes is difficult and discuss a recent taxonomy8 for assessing the value of library services to hospitals, which suggested there was no valid way of measuring the direct impact of the library on patient care. Based on the results of the review, a set of quality standards for a user survey approach to assessing the impact of library services on patient care was developed.7

The three systematic reviews highlight a range of weaknesses in previous evaluations of CL services, including poor reporting, bias and a lack of reliable or valid evaluation methods. Furthermore, they set out a number of recommendations and provide guidance for future evaluations. To take forward Hill’s recommendations,2 UK CLs need practical guidance on evaluating their services. The systematic review described below seeks to build on previous work to provide this practical guidance and, as further studies have been conducted, to provide updated evidence regarding the effectiveness and impact of CL services.


Objectives

In line with this overall aim, the objectives of the review were:

  1  To build on previous models9 of clinical librarianship and determine which models of clinical librarian services have been evaluated.
  2  To determine whose perspective has been evaluated when evaluating clinical librarian services.
  3  To determine the quality of the methods used to evaluate clinical librarian services.
  4  To determine what outcome measures have been used to evaluate clinical librarian services and establish their appropriateness.
  5  To update previous reviews evaluating the effectiveness of clinical librarian services.


Methods

The review was undertaken by a group of 8 librarians interested in clinical librarianship, led and mentored by a librarian with expertise in conducting systematic reviews. Each step in the review process was undertaken collaboratively, allowing each member to participate in, contribute to and gain experience of each element of the systematic review process, and to gain research skills. Group members worked in pairs at the screening, data extraction, critical appraisal, synthesis and report writing stages to build confidence as well as to ensure the reliability of the results. This contrasts with the approach often taken in the health sector (for example in Cochrane Collaboration reviews, where the review is undertaken by a team with each member contributing only their subject specialism: the librarian contributes to the searching, the statistician to the meta-analysis, and a subject specialist supervises a research assistant in the data extraction). A protocol was drafted and jointly agreed following scoping of the literature, critical appraisal of a test article and team discussions, to ensure that there was a joint philosophy and agreement on definitions, and that the review was relevant to the librarians’ practice.

Search strategy

The search strategy aimed to locate published and unpublished English language CL evaluations from 2001 onwards (the cut-off date of the searches in the previous systematic reviews).

A comprehensive search of twenty databases was undertaken, including the main library and information databases (LISA, Library Literature & Information Science Full Text and Library & Information Science Technology Abstracts) as well as key healthcare databases (Medline, Cinahl and Embase) (see appendix online). All searches were run during summer 2008 and updated in autumn 2009.

The search terms were based on a single concept, that is, terms to identify studies of the CL or similar roles using a variety of sources, including previous systematic reviews, brainstorming within the project team and examining database thesauri. An agreed list of free-text search terms (see appendix online) was used to devise search strategies adjusting for thesaurus terms specific to each database.

Additional methods included: scanning references of included publications; handsearching Health Information and Libraries Journal and Journal of the Medical Library Association; consulting several known Clinical Librarianship bibliographies; searching Google; contacting experts and calls for studies via the CILIP Health Libraries Group Newsletter and various email discussion lists.

Inclusion/exclusion criteria

Before establishing a set of inclusion and exclusion criteria, it was essential to reach consensus regarding a definition of a clinical librarian. Various definitions are described within the literature but, following discussion, the group agreed to use the ‘Hill definition’, since the Hill Report2 had been the impetus for the review: a CL seeks “to provide quality assured information to health professionals at the point of need to support clinical decision making”.2 Further clarification of each element of the definition was agreed by the group (Box 1) before the inclusion and exclusion criteria were established (Box 2).

Box 1.   Clarification of the Hill definition2 of a Clinical Librarian
Health professionals
 Any professional who provides services to support patient care
Point of need
 The place where the healthcare professional first requested support in any setting (e.g. acute, primary care). This could be within or outside the library or via a computer system. The point of need could be identified passively by the clinician themselves or proactively by the clinical librarian engaging with the clinical team (e.g. via journal club).
Quality Assured Information
 Broadly defined to include evidence-based clinical resources such as The Cochrane Library, NHS Evidence, plus original peer-reviewed journal articles and books. Includes the librarian being involved in the process of supporting/training others to quality assure their own information through journal clubs, training and critical appraisal. This also includes the librarian filtering information in some way, either by critical appraisal, reducing information overload to increase relevance or appropriateness for the decision scenario, or abstracting the information into a format or system for more effective integration into patient care.
Clinical decision making
 Studies where the main focus of the service was supporting patient care directly through clinical services or to managers whose decisions were about clinical care.
Box 2.   Inclusion/exclusion criteria

Inclusion criteria
 Studies that meet the Hill definition (see Box 1)
 Studies which are described as outreach but whose focus is to support patient care as defined above
 Evaluation includes at least one outcome measure
 Published post-2001
 Reports evaluation methodology
 English language

Exclusion criteria
 Studies that do not meet the Hill definition of a Clinical Librarian (see Box 1)
 Studies with a purely training focus
 Outreach which involves the provision of a remote library service with no specific link to patient care
 Descriptive articles with no clear service evaluation methodology
 Health science librarians providing a general hospital library service
 Published pre-2001
 Non-English language

Filtering and article selection

Figure 1 illustrates the searching, screening and extraction process. Two members scanned all references located via the database searches and filtered out those that were obviously irrelevant. From the remaining abstracts, 50 were chosen at random and distributed to each member of the group to scan, to ensure consistency in the selection of relevant papers. The titles and abstracts of the remaining references were independently scanned by at least two reviewers. Any disagreements were settled by group discussion. Full papers were obtained for those that appeared relevant. Each of these papers was checked against the inclusion criteria by team members working in pairs to ensure consistency.

Figure 1.

 Flow of literature through review process

Assessment of study quality and data extraction

A data extraction and critical appraisal template was developed and agreed by the team, based on established tools.7,10 Details from each study were extracted and independently critically appraised by at least two reviewers from the group of 8. Any discrepancies were resolved by discussion. All data were recorded on an Excel spreadsheet. Some responses were standardised for ease of synthesis.

Data synthesis

Due to heterogeneity in study design a formal statistical analysis was not deemed appropriate, therefore the findings are presented as a narrative summary.


Results

Number of studies located

A total of 2040 references were located, of which 857 were obviously irrelevant. On the basis of titles and abstracts, 91 appeared relevant to the aims of the review (see Fig. 1); after reading the full text, 63 were excluded because they did not meet the inclusion criteria.

On further detailed reading, six more failed to meet the inclusion criteria and four were duplicate publications. In total, 22 publications11–32 were located, representing 18 unique studies. Duplicate publications were located for three studies (study 1: Brookman et al.,16 Lovell;14 study 2: Glassington & Urquhart,18 Glassington;13 study 3: Urquhart et al.27,28,32). The first publication for each study noted above will be used to refer to all publications throughout this review. Eighteen studies11,12,15–26,29–32 were included in the review. These are summarised in Table 1 and form the basis of the results presented below. The results are expanded in the final project report (available from the authors on request).

Table 1.   Included studies (for each study: main aims; study design/methods/population; key results)

Booth et al.,15 2002, UK
 Main aims: Evaluation of a clinical librarian project in one NHS trust (two CL posts)
 Design/methods/population: All teams in which the clinical librarian service operated were surveyed over 12 months; acute setting. Questionnaires n = 72; interviews n = 5/13; CIT; diary; observation. Service users and non-service users
 Key results: Response rate n = 42/72 (58%). Impact of information reported on: direct patient management n = 22, teaching n = 19, CPD n = 17, audit n = 11. Respondents felt that the service had saved them time, improved access to information and provided information in a timely manner

Brookman et al.,16 2006, UK; Lovell,14 2005, UK
 Main aims: To evaluate the CL service
 Design/methods/population: Survey conducted over 12 months in an acute and community setting. Case study. External questionnaire n = 349; semi-structured interviews n = 9; internal questionnaire n = 411; Critical Incident Technique. Service users and non-users
 Key results: Response rates: external n = 86/349 (25%), internal n = 167/411 (41%). Service valued by users; information results in a change in knowledge which is disseminated; visibility of CL crucial to role

Dowse & Sen,17 2007, UK
 Main aims: To evaluate a Community Outreach Library Service
 Design/methods/population: Survey; case study. Length of evaluation and setting unclear. Questionnaire n = 250; interview n = 1 (Outreach Librarian). Service users and non-users surveyed
 Key results: Response rate n = 93/250 (37%). Time is a major constraint for both providers and users of the service

Freeth,12 2002, UK
 Main aims: To determine service uptake; to evaluate the impact of the CL; to identify which aspects of the service were valued and how the role should be developed
 Design/methods/population: Survey of 4 Trusts (3 acute, 1 primary care). Questionnaires n = 110; interviews n = 13/26; use of documents/records. Service users only
 Key results: Response rate n = 38/110 (35%). No. of users n = 256; no. of literature search requests n = 306. CL increased clinical knowledge and skills and saved users’ time

Glassington & Urquhart,18 2003, UK; Glassington,13 2003, UK
 Main aims: To explore how effective the CL can be in facilitating the implementation of EBP
 Design/methods/population: Survey of all staff across 2 departments in an acute trust over 5 months. Questionnaire n = 104; interviews n = 10; diary. Service users and non-users surveyed
 Key results: Response rate n = 38%. 99% knew about the CL service; 59% had used it; 47% said the service had saved them time; 81% were confident in the CL’s ability to perform an effective search to inform clinical decision making. CLs may be more effective supporting clinical governance/guideline development than individuals, due to barriers faced at individual level (lack of time, apathy towards EBP)

Gorring et al.,19 2009, UK
 Main aims: To evaluate a pilot clinical librarian service
 Design/methods/population: Pilot survey evaluation over 4 months in a mental health setting. Questionnaire n = 18; interviews n = 31; focus group n = ?; Critical Incident Technique. Service users only
 Key results: Response rate n = 14/18 (78%). Benefits reported in using the information provided by the CL to support EBP and CPD and to contribute to team meetings. Project had a positive impact in teams where the service was piloted and began the process of embedding evidence based information within clinical practice

Greenhalgh et al.,20 2002, UK
 Main aims: Evaluation of two different models of a clinical informaticist service providing evidence based answers to questions arising in clinical practice. Model 1: academically focused question/answer service. Model 2: service focused, aimed to engage users and promote a questioning culture
 Design/methods/population: Two individual case studies reporting 2 different models, evaluated over more than 12 months. Model 1: academic department of primary health care, n = 100 individuals. Model 2: general practice setting, n = 50 practices. Observation; questionnaire; interviews; use of documents. Service users and non-users
 Key results: Model 1: 22/100 submitted 60 questions; 15% reported a change in practice towards their current patient and 29% said their practice changed towards other patients with the same problem. Model 2: 58 individuals submitted 119 questions; achieved its aim of providing a credible, sustainable service that was accepted and engaged users

Jerome et al.,21 2008, USA
 Main aims: Evaluation of strategies for clinician adoption of a literature request feature integrated in an electronic medical record
 Design/methods/population: Survey over 11 months of primary care physicians. Service users and non-service users. Questionnaire n = 137; focus group n = 8
 Key results: Response rate n = 48/137 (35%). The 19 who used the service rated the following aspects on a Likert scale 1–5: information was relevant, mean 4.6 (range 4–5), and led to a change in clinical practice, mean 3.9 (range 3–5). Information was used for general education n = 18/19 (95%), confirming current treatment/diagnostic plan n = 11 (58%), implementing new/different treatment n = 6 (32%), adding a diagnostic test n = 5 (26%), changing duration of treatment n = 2 (11%), changing diagnostic test n = 2 (11%), cancelling diagnostic test n = 1 (5%), adding component to treatment n = 1 (5%)

Maden,22 2007, UK
 Main aims: To assess the impact of literature searches and information skills training
 Design/methods/population: Survey of 3 Trusts (2 acute, 1 primary care) over 6 months. Questionnaires: post-training performance n = 119, literature search/training impact evaluation n = 83; interviews n = 23/47; CIT. Service users only
 Key results: Response rates: post-training performance n = 119/119 (100%), literature search/training impact evaluation (78%). Literature searches were used to inform research (10/13, 77%), direct patient care (38%), service delivery (15%), audit (31%) and guideline development (15%), and to share with colleagues (46%). Service saved respondents time

Mulvaney et al.,23 2008, US
 Main aims: To determine the effectiveness of providing synthesized research evidence to inform patient care practices via an evidence based informatics program, the Clinical Informatics Consult Service (CICS)
 Design/methods/population: Experimental randomized trial conducted over 19 months in a university hospital. CICS n = 146 (49%) vs. no CICS n = 153 (51%). Questionnaire
 Key results: Response rate n = 226/299 (76%) (CICS n = 108/146 (74%), no CICS n = 118/153 (77%)). Intention to treat analyses showed that CICS consult interventions had greater actual and potential impact on clinical actions and satisfaction than no CICS. Immediate impact of information on patient care (scale 1–5; the higher the score, the more impact): CICS n = 103, mean = 2.87 vs. no CICS n = 46, mean = 2.74 (effect size 0.11, non-significant). Impact on future patient care: CICS n = 103, mean = 4.25 vs. no CICS n = 55, mean = 3.76 (effect size 0.52, P < 0.01). Specific impacts (CICS n = 94 vs. no CICS n = 21): use of new or different treatment n = 14 (14.9%) vs. n = 1 (4.8%), OR 8.20 (95% CI 1.04–64.00); add diagnostic test n = 6 (6.4%) vs. n = 3 (14.3%) ns; change diagnostic test n = 6 (6.4%) vs. 0 ns; cancel diagnostic test n = 6 (6.4%) vs. n = 1 (4.8%) ns; other diagnostic test n = 2 (2.1%) vs. 0 ns; change drug dose n = 6 (6.4%) vs. 0 ns; change drug n = 8 (8.5%) vs. n = 2 (9.5%) ns; add drug n = 5 (5.3%) vs. 0 ns; duration of treatment n = 12 (12.8%) vs. n = 4 (19%) ns; timing of treatment n = 11 (11.7%) vs. n = 4 (19%) ns; stop treatment n = 9 (9.6%) vs. n = 2 (9.5%) ns; add component of treatment n = 4 (4.3%) vs. n = 3 (14.3%) ns

Swinglehurst et al.,11 2001, UK
 Main aims: To develop and evaluate a pilot information service in which a clinical informaticist (GP with EBM training) provided evidence based answers to GPs and nurse practitioners
 Design/methods/population: Pilot evaluation survey of a service provided over 10 months, with implied evaluation at the end; primary care setting. Case study. Questionnaire n = 57 (60 questions asked by 22 unique individuals; 57 were answered); interviews n = 17. Service users only
 Key results: Response rate n = 54/57 (95%). Clinicians using the service were satisfied and would change their practice. Service provision led to the pursuit of questions which would otherwise have remained unanswered

Taylor & Hudson,25 2008, UK
 Main aims: To evaluate the Clinical Teams Outreach Librarian Project
 Design/methods/population: Pilot survey evaluation of the first 6 months of the service across 1 acute NHS Trust. Questionnaire n = 46. Service users only
 Key results: Response rate n = 15/46 (33%). Literature searching led to changes in patient care

Thornton & Allen,26 2005, UK
 Main aims: To evaluate how, why and by whom the CL service was used
 Design/methods/population: Pilot survey of service users in an acute trust over 6 months. Questionnaire n = ?, sample size unclear. Service users only
 Key results: Response rate n = 31 (61%). 60% of doctors used the CL service, mainly for patient care, to improve their knowledge, and for audits and research; 80% of midwives used it, mainly for guideline development; 60% of nurses used it for patient care, CPD/education or guidelines/policies. Over 90% of respondents agreed that information was relevant and useful

Urquhart, Durbin & Turner,27,28,32 2005, UK
 Main aims: To evaluate the CL service, in particular to: establish which aspects of the CL service were useful; estimate time (and money) saved; examine the effect of information skills training; examine the benefits to clinical practice
 Design/methods/population: Quasi-experimental evaluation and survey across 3 NHS Trusts (acute and community) over 18 months. Questionnaires: baseline n = 95; immediate feedback forms (i.e. pre/post information skills) n = 130; post-training (i.e. reflective) n = 75; final n = 74; control n = 150. Interviews n = 33. Reflective practice diary. Service users and non-service users
 Key results: Response rates: baseline questionnaire n = 69/95 (72.6%); immediate feedback forms n = 90/130 (69.2%); post-training questionnaire n = 24/75 (32%); final questionnaire n = 57/74 (77%); control questionnaire n = 123/150 (82%); literature search feedback form n = 34/218 (15.6%). Impact of literature search on: choice of diagnostic test n = 6 (17.6%); recognition of abnormal/normal condition n = 11 (32.4%); differential diagnosis n = 7 (20.6%); confirmation of proposed therapy n = 21 (61.8%); alternative treatment n = 16 (47.1%); minimized risk of treatment n = 16 (47.1%); revision of treatment plan n = 18 (52.9%); audit n = 18 (52.9%); improved quality of life n = 19 (55.9%); legal/ethical issues n = 17 (50%). Interviewees confirmed that the main impact of literature searches was on patient management (76%) and therapy (55%). It was estimated that for 23 interviewees the CL saved them 91.25 hours conducting a particular search, and that the CL saved teams 54.5 hours

Vaughn,29 2009, US
 Main aims: Primary objectives: to determine the impact of the CML on clinical decision making; to chart future directions of the CML service. Secondary objectives: to evaluate the quality and usefulness of the search results; to evaluate how well the CML service is used
 Design/methods/population: Survey of residents and faculty from 3 departments in an acute setting. Questionnaires: users and non-users n = 97, literature search n = 32; interviews n = 11; CIT. Service users and non-service users
 Key results: Response rates: users and non-users questionnaire n = 33/97 (31%); literature search questionnaire n = 19/32 (59%). 86% said information provided by the CML influenced a change in patient care, most often in choice of drugs or other treatments; 63% said it would influence the care of a patient they were currently treating. All who had used the CML service said the information received was relevant, accurate, useful, current, of clinical value and contributed to higher quality care

Verhoeven & Schuling,24 2004, Netherlands
 Main aims: To describe and examine the feasibility of an evidence based question answering service for GPs
 Design/methods/population: Case study evaluation over 12 months (service ran for 2 years) in a primary care setting. Questionnaire n = 61; Critical Incident Technique. Service users only
 Key results: Response rate n = 48/61 (87%). 81% of answers had an impact on the GP and 52% on the patient

Ward et al.,30 2001, UK
 Main aims: To examine the feasibility of a clinical librarian service (2 part-time CLs)
 Design/methods/population: User survey across 4 departments in an acute Trust over 6 months. Literature search questionnaire n = 136; evaluation questionnaire sample size not reported. Pilot study. Service users only
 Key results: Response rates: literature search questionnaire n = 14/136 (10%); evaluation questionnaire n = 15. Literature search results had an impact on: patient care (73%), teaching (64%), case presentations (60%), research (46%), management (30%) and publication (27%)

Whitmore et al.,31 2008, US
 Main aims: To examine the use, expectations and value of the informationist service
 Design/methods/population: Survey of scientists at a research institute attached to a hospital. Case report. Baseline questionnaire (2004) n = 74; questionnaire (2006) n = 181; interviews (2006) n = 13. Service users only
 Key results: Response rate (2006) n = 91/181 (50%). Service saved respondents time

Of the 18 studies included, 13 studies11,12,15–20,22,25,26,30,32 were undertaken in the UK, 4 studies21,23,29,31 in the US and one24 in the Netherlands. In line with the objectives of this review, the data were extracted, analysed and synthesised to provide evidence regarding the models of clinical librarian services, the evaluation perspective, the effectiveness and impact of clinical librarian services (outcome measures) and the quality of the evaluation methods.

Models of clinical librarian services

During initial discussions of the review parameters, a range of job titles and roles were noted amongst the team, leading to interest in whether there were different models of service provision and in the implications of this for their evaluation and subsequent impact. For each study the role of the CL was extracted. Using thematic analysis, two group members identified four models of clinical librarian service provision: question and answer service; question and answer service plus critical appraisal; outreach; and outreach plus critical appraisal. All four models provided information at the point of need (in line with the Hill definition) and two also provided critical appraisal and synthesis. The outreach plus critical appraisal model most closely resembles the ‘informationist’ described by Davidoff and Florence33 and reviewed by Rankin et al.34 Table 2 describes the four models.

Table 2.   Models identified within the systematic review

Information at the point of need
 Question and answer service (n = 1)20: A static service where users submit their requests via phone, electronically or in person; a literature search is conducted and a reply (usually search results) is provided to the user.
 Outreach (n = 12)12,15–20,22,25,26,29,32: The librarian uses a range of means and methods to provide information to users, which can include literature searches, training, and attendance at journal clubs or ward rounds. Involves a pro-active approach to engage users, perhaps as part of the team. Results of queries are often provided in the form of a literature search.

Information at the point of need plus critical appraisal and synthesis
 Question and answer service plus critical appraisal (n = 3)11,21,24: A static service where users submit their requests via phone, email or in person; a literature search is conducted and a reply containing a critically appraised summary of results is provided to the user.
 Outreach plus critical appraisal and synthesis = informationist (n = 3)23,30,31: The librarian uses a range of means and methods to provide information to users, which can include literature searches, training, and attendance at journal clubs or ward rounds. Involves a pro-active approach to engage users, perhaps as part of the team. Results of queries are often provided in the form of a literature search but include a synthesised critical appraisal.

Greenhalgh20 compared two different models of service provision. In total, 12 studies were classed as an outreach model; Mulvaney,23 Ward30 and Whitmore31 met the outreach plus critical appraisal and synthesis (‘informationist’) model; Swinglehurst,11 Jerome21 and Verhoeven24 provided a question and answer service with critical appraisal; and Greenhalgh20 also described a question and answer service. Although only four service models were identified, there was a wide range of job titles, with the same title sometimes representing different models. For example, the majority of UK librarians have the job title of Clinical Librarian and operate using the ‘outreach’ model; however, Ward et al.30 describe a UK service with the job title Clinical Librarian that more closely represents the ‘informationist’ model described by Rankin.34 Table 3 maps the models against the job titles and their country of origin and shows that the ‘informationist’ is more prevalent in the US, whereas clinical informaticist is a title used for two different models. Taylor25 uses the terms Clinical Librarian and Clinical Teams Outreach Librarian interchangeably, perhaps summing up an overall lack of agreement over roles and titles.

Table 3.   Overview of service models (study | job title | country)

Question and Answer
 Greenhalgh et al. (2002)20 | Clinical Informaticist | UK

Question and Answer plus Critical Appraisal
 Jerome et al. (2008)21 | Clinical Informatics Consult Service (Clinical Informationist) | US
 Verhoeven & Schuling (2004)24 | Informationist | Netherlands
 Swinglehurst et al. (2001)11 | Clinical Informaticist | UK

Outreach
 Booth et al. (2002)15 | Clinical Librarian | UK
 Brookman et al. (2006)16 | Clinical Librarian | UK
 Freeth (2002)12 | Clinical Librarian | UK
 Glassington & Urquhart (2003)18 | Clinical Librarian | UK
 Gorring et al. (2009)19 | Clinical Librarian | UK
 Urquhart et al. (2005)32 | Clinical Librarian | UK
 Vaughn (2009)29 | Clinical Librarian | US
 Thornton & Allen (2005)26 | Clinical Librarian | UK
 Maden (2007)22 | Clinical Information Specialist | UK
 Dowse & Sen (2007)17 | Outreach Librarian | UK
 Taylor & Hudson (2008)25 | Clinical Teams Outreach Librarian (used interchangeably with CL) | UK
 Greenhalgh et al. (2002)20 | Clinical Informaticist | UK

Outreach plus Critical Appraisal = Informationist
 Ward et al. (2001)30 | Clinical Librarian | UK
 Mulvaney et al. (2008)23 | Clinical Informatics Consult Service | US
 Whitmore (2008)31 | Informationist | US


When evaluating a service, it is important to consider the different perspectives of the stakeholders involved, as each will have different requirements and views of the service. The majority of studies included in the review (n = 17, 94%)11,12,15–19,21–26,29–32 took the user's perspective into account, usually by directly asking for the user's views of the service; only one study20 failed to consider the user's views. The library service perspective was considered by 11 studies12,15–18,21,24,26,29,30,32 (61%); however, only three studies20,25,32 included the organisational perspective in their evaluation.

Quality of methods

Previous reviews3–5 commented on the poor design of earlier evaluations of clinical librarian services. To determine whether evaluations have improved over time, and to provide insight into effective methods of evaluating clinical librarian services, the methods employed in the studies were analysed.

Study design characteristics

Of the 18 studies included in the systematic review, only Mulvaney23 performed a randomised controlled trial (RCT) and Urquhart32 adopted a quasi-experimental approach. The majority adopted a qualitative methodology (n = 16)11,12,15–22,24–26,29–31 usually characterised by surveys (n = 14),11,12,15–19,21,22,25,26,29,31,32 case studies (n = 7)11,12,16,17,20,22,24 and action research (n = 1).18 Five studies12,15,20,31,32 were evaluated by external assessors, one conducted both an internal16 and external14 evaluation and the remainder were internal evaluations. Five pilot studies11,19,25,26,30 were reported.

Multiple methods of data collection were reported across the studies. Self-completed questionnaires, using both descriptive statistics and more open-ended qualitative responses, were the most common method of data collection, followed by interviews (n = 14).11,12,15–22,29–32 Nine studies12,14,15,19,22–24,29,32 reported using the Critical Incident Technique (CIT), five12,14,15,20,21 used records/documentation in their evaluation and three15,18,32 utilised diaries. Less common data collection methods included focus groups19,21 and observation.20

Seven studies12,16,20,23,24,31,32 conducted their evaluation over a period of more than 12 months, and six11,18,21,22,26,30 took between 6 and 12 months. Two studies19,25 conducted their evaluation over a period of 1–5 months. In three studies15,17,29 the length of the evaluation period was unclear.

The timing of the evaluation is important when demonstrating actual, as opposed to intended, impact; for example, conducting the evaluation immediately after sending search results may be convenient, but is too soon to give respondents a chance to use the information and assess its impact. Ten studies12,15,17,20,22,25,26,29,31,32 did not indicate the timing of their evaluation; in the remaining eight studies there was wide variation in the timing: three studies11,16,30 sent their evaluation forms with the literature search results, one23 sent out the evaluation 3 days after the literature search results were sent, one24 "after a few weeks" (p. 30) and three studies18,19,21 conducted their evaluations at the end of a designated period.

Intended versus actual impact can also be determined by the phrasing of the evaluation questions: for example, 'Can you give me an example of what you used the information for?' can help demonstrate actual impact, while 'Can you give me an example of what you will use the information for?' demonstrates intended impact. For evaluation purposes, actual impact holds more weight than intended impact, which may or may not be realised. Twelve studies11,12,15,16,20,21,23,24,29–32 demonstrated evidence of actual impact. In the remaining six studies17–19,22,25,26 it was unclear whether actual or intended impact was being measured.

Assessment of quality

There are no agreed methods of assessing the quality of studies;7 however, following their systematic review, Weightman et al.5 developed a set of quality standards to help librarians conduct evaluations which assess the impact of library services on patient care.7 These were used here to assess quality (although this was not their original purpose). Table 4 summarises the number of studies meeting each of these quality standards.7

Table 4.   Quality of Methods
Quality measure (as defined by Weightman et al.7) | No. | %
Appoint researchers who are independent of the library service | 6 | 32
Ensure that all respondents are anonymous and that they are aware of this | 8 | 42
Survey all members of chosen user group(s) or a random sample | 16 | 84
Agree a set of questions that are objective, well used in previous research, and developed with input from library users | 4 | 21
Ask respondents to reply on the basis of an individual case of specific and recent library use/information provision rather than library use in general (Critical Incident Technique) | 9 | 47
Combine a questionnaire survey with a smaller, but also random, sample of follow-up interviews | 14 | 74

In six studies12,14,15,20,31,32 the evaluation was conducted by an independent researcher and seven studies12,14,15,18,26,29,32 considered anonymity of respondents. Fifteen studies11,15–24,26,29,30,32 surveyed the whole population or a random sample, four studies17,25,29,32 developed a set of objective questions, nine studies12,15,16,19,22–24,29,32 utilised the CIT and 13 studies11,12,15–19,21,22,29–32 used triangulation.

Using these criteria, it can be seen that overall the quality of the studies located was less than adequate. Only one study32 met all six quality measures, three studies15,16,29 met five measures, one study12 met four measures, four studies17–19,22 met three measures, the majority of studies11,20,21,23,24,26,30,31 met two measures and one study25 met just one measure.
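The tallying behind these quality ratings (counting how many of the six Weightman et al. measures each study meets) can be sketched as a short script. The measure names are paraphrases of Table 4, and the three studies shown follow the counts reported above; which specific measures they met is otherwise illustrative.

```python
# Tally how many of the six Weightman et al. quality measures each study meets.
# Measure names paraphrase Table 4; the per-study sets are illustrative,
# loosely following the counts reported in the text.
MEASURES = (
    "independent_researcher",
    "anonymous_respondents",
    "whole_population_or_random_sample",
    "objective_questions",
    "critical_incident_technique",
    "survey_plus_follow_up_interviews",
)

studies = {
    "study_32": set(MEASURES),                            # met all six measures
    "study_15": set(MEASURES) - {"objective_questions"},  # met five
    "study_25": {"objective_questions"},                  # met just one
}

scores = {study: len(met) for study, met in studies.items()}
for study, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{study} met {score}/6 quality measures")
```

Scoring each study against the same explicit checklist, rather than an overall impression, is what makes the cross-study comparison in the following paragraph possible.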

Furthermore, even in those which did meet most of the criteria, a number of flaws and omissions in the methodology were noted. Sample sizes ranged from 18 to 411, and the response rate of the evaluation questionnaires ranged from 10% to 95%, with eight studies12,16–18,21,25,29,30 reporting a response rate of less than 40%. Of the 13 studies that triangulated their data collection, in one study16 the interview sample was not representative and in a further three studies11,19,22 it was unclear how the interviewees/focus group members were selected.

Quality of reporting

The source population across all studies was clearly stated (nine studies15–18,20,21,29,31,32 looked at service users and non-service users; nine11,12,19,22–26,30 included service users only). The majority of studies reported a sample size and a response rate; however, many reported their figures as percentages, which often gave an inflated impression of the response rate when the actual number of respondents was low. The key characteristics of respondents were clear across all studies.

Methods of data collection were generally stated (see above). The quality of the reporting of data analysis techniques was less adequate: in 13 studies the analysis was either not stated or of poor quality. Two studies11,29 were selective in the reporting of their results, suggesting they had collected data on numerous outcomes relating to patient care (reduced length of stay, better informed decisions, etc.), yet failing to report them clearly in the results.

Outcome measures

Effectiveness of clinical librarian services.  A range of measures was used to assess the clinical librarian interventions, including outcome and impact measures (see Table 5). All studies reported the use of at least one measure; however, the measures varied widely across studies, making it very difficult to draw firm conclusions. A number used process measures, such as the number using the service (n = 12) or the time taken to respond to enquiries (n = 8). No studies attempted to assess the costs involved in providing a clinical librarian service, nor its cost effectiveness. Where studies sought to assess outcome or impact, results were classified as having a positive, negative or neutral effect (for example, if respondents had answered positively about a particular outcome, this was given a positive impact rating) and are reported in comparison with those identified in previous reviews (Table 5). This classification was used because the methods used to measure impact, the data collected and the way they were reported varied across the studies. For example, some used the CIT (asking about the impact of specific information), whereas others asked general questions on how the information may have been used; some reported impact as a percentage response (coupled with a low response rate), while others reported the number of respondents. This makes it difficult to draw comparisons between studies.

Table 5.   Effectiveness and impact of CLs
Outcome measured | Previous reviews (Winning, Wagner, Weightman, Toolkit) | This review: +ve | −ve | Neutral | Not assessed
Patient care: undefined impact on patient care | – | 13 (12,15,16,19–25,29,30,32) | – | 1 (14) | 4 (17,18,26,31)
Patient care: higher quality care | – | 2 (16,29) | – | – | 16 (11,12,15,17–26,30–32)
Patient care: better informed decisions | – | 12 (11,12,15,16,19–21,24,25,30–32) | – | – | 6 (17,18,22,23,26,29)
Patient care: diagnosis | – | 2 (21,32) | – | 1 (11) | 15 (12,15–20,22–26,29–31)
Patient care: choice of drugs/therapy | – | 5 (11,12,21,29,32) | – | – | 13 (15–20,22–26,30,31)
Patient care: reduced length of stay | – | – | – | – | 18 (11,12,15–26,29–32)
Patient care: advice to patients | – | 5 (11,12,19,24,32) | – | – | 13 (15–18,20–23,25,26,29–31)
Patient care: other | – | 3 (16,25,32) | – | – | 15 (11,12,15,17–24,26,29–31)
Benefits of CL: avoided referral | – | – | – | – | 18 (11,12,15–26,29–32)
Benefits of CL: avoided readmission | – | – | – | – | 18 (11,12,15–26,29–32)
Benefits of CL: saved health professional time | – | 9 (12,15–18,22,24,31,32) | – | 1 (23) | 8 (11,19–21,25,26,29,30)
Benefits of CL: saved money | – | 2 (12,15) | – | 1 (32) | 15 (11,16–26,29–31)
Benefits of CL: literature search results were relevant | – | 11 (11,12,15,16,19,21,23–26,29) | – | 1 (30) | 6 (17,18,20,22,31,32)
Benefits of CL: literature search results were useful | – | 14 (11,12,15,16,19–21,23–26,29,31,32) | – | – | 4 (17,18,22,30)
Benefits of CL: improved info literacy skills | – | 7 (12,17,20,22,26,31,32) | – | 1 (30) | 10 (11,15,16,18,19,23–25,29,30)
Benefits of CL: improved confidence | – | 5 (17,22,24,31,32) | – | – | 13 (11,12,15,16,18–21,23,25,26,29,30)

A number of studies indicated areas where CLs are effective or provide benefits to health professionals. In total, nine (50%) studies reported positive benefits in saving health professionals' time, usually based on perceived time savings rather than actual measurements of time saved. Three studies12,15,32 attempted to ascertain whether the CL saved money: Urquhart et al.32 suggest that, on a conservative estimate, the CL service would be cost neutral, while Booth et al.15 suggested savings of £26.78 per hour. Of the 12 studies which examined whether the literature search results were relevant, 11 were positive and the remaining study was neutral, indicating that CLs are effective in interpreting questions and providing the right type of results. Of those studies which ascertained whether the information provided was useful (n = 14, 74%), all responded positively, suggesting that the information provided by CLs is valued. In summary, this review provides evidence that CLs are effective in providing relevant and useful information and are perceived to save clinicians' time.
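The positive/negative/neutral tallying behind Table 5 can be sketched as a short script. The 'saved money' ratings below follow the counts reported above (two positive, one neutral); the study marked "not assessed" is an illustrative placeholder.

```python
from collections import Counter

# Sketch of the impact-rating classification used in the review: each study's
# finding for an outcome is rated positive, negative or neutral (or marked
# "not assessed"), then tallied across studies. The 'saved money' ratings
# follow the counts in the text; "study_11" is an illustrative placeholder.
findings = {
    "saved_money": {
        "study_12": "positive",
        "study_15": "positive",      # Booth et al.: savings of £26.78 per hour
        "study_32": "neutral",       # Urquhart et al.: roughly cost neutral
        "study_11": "not assessed",
    },
}

def tally(outcome):
    """Count ratings for one outcome, excluding studies that did not assess it."""
    ratings = [r for r in findings[outcome].values() if r != "not assessed"]
    return Counter(ratings)

print(tally("saved_money"))  # Counter({'positive': 2, 'neutral': 1})
```

Separating "not assessed" from the substantive ratings matters here: as the review notes, an outcome most studies never examined is not the same as an outcome examined and found neutral.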

Impact of clinical librarian services

The review also sought to ascertain how CLs made an impact on (or difference to) patient care. The majority of studies (14, 78%) examined impact by asking a general question on whether the CL service had made a difference to patient care; of these, 13 (68%) reported a positive impact and one reported a neutral impact, while four studies did not examine impact at all. The majority did not report on specific impacts. The largest specific impact reported was that CLs have a positive effect on better informed decisions; this was found in 12 studies (67%). Two studies (11%) suggested that CLs contribute to higher quality care; three (17%) reported an impact on diagnosis, two of them positive and one neutral, including confirming a diagnosis,21 choice of diagnostic test, recognition of an abnormal condition or information on a differential diagnosis,32 or a significant number of questions regarding diagnosis;11 and five11,12,21,29,32 (28%) reported an impact on the choice of drug or therapy. For most studies, however, specific impacts were not reported. No studies examined impact on avoiding referral or readmission, and three did not examine any type of impact on patient care.17,18,26 In summary, the majority of studies reported a positive impact on patient care; around a quarter of studies identified a positive impact on the choice of drug or therapy. On the whole, however, only a small number of studies were able, or sought, to quantify the impact made by the CL and establish where or how that impact could be made.

These results do not take into account the quality of individual studies. Further analysis suggests that the studies that did not measure impact at all were also those classed as lower methodological quality (meeting two or fewer quality measures). In contrast, a number of those reporting specific impacts were better quality studies (quality rating of 4+): four of the 12 studies which reported positive impacts on better informed decisions, three of the five studies reporting positive impacts on choice of drugs or therapy, and one of the three reporting an impact on diagnosis, which adds some weight to the conclusions drawn.


This review has identified four different models of CL provision, updated the evidence on the effectiveness and impact of CL services and analysed how services have been evaluated. In contrast to previous reviews,3–5 there is an increased trend towards the evaluation of CL services in the UK.


Four models of CL service provision have been identified, but the majority of studies (n = 12) fit an 'outreach' model (Table 2), 11 of which are UK based. It is worth noting that the term outreach is used differently in the UK from the US, where it has a 'mobile' connotation. This may indicate the emergence of a different model of CL in the UK, delivering services across the wider organisation, unlike in the US where CLs may be dedicated to two or three hospital departments. The CLs in nine of the UK studies use the job title Clinical Librarian which, in line with previous research (Harrison & Beraquet, Ward), is by far the most common title representing this model in our research. Clinical Information Specialist, Outreach Librarian and Clinical Teams Outreach Librarian are other titles which represent the same role as that performed by 'Clinical Librarians' operating within an outreach model.

A trend towards remote or electronic service delivery could also be seen,21,24 perhaps documenting a move away from the traditional model of clinical librarianship in which the librarian is physically present at the point of need.

Two of the services were delivered by GPs with additional training in librarianship11,24 and two21,23 by librarians with additional training in clinical specialties. Although only a small proportion of the studies represent these hybrid models (approximately 25%), it will be interesting to see whether the clinical librarianship role continues to evolve in this way.

Harrison and Beraquet35 recently described a model of UK clinical librarianship based on key activities (e.g. literature searching) and skills (e.g. good rapport with health professionals), identified from user surveys and user needs. Figure 2 extends this model in line with the evidence identified in the systematic review, focusing on how the CL service is delivered (e.g. outreach, question and answer). Further work examining these models, including their sustainability, may help to clarify a more representative definition for UK Clinical Librarians.

Figure 2.

 Model of Clinical Librarianship


The majority of the CL evaluations included in this review and in previous studies3–5 were undertaken to justify the continuation of a new CL service ('short term evaluations') and, as a consequence, evaluated their services largely from a library perspective based on users' views. However, as we enter a more difficult fiscal period, it is perhaps more important than ever to demonstrate how CL and library services in general impact not only upon their users but also upon their organisation. Harrison and Beraquet35 (p. 9) warn that demonstrating effective service delivery in strategic and financial terms, i.e. 'value and worth', is what matters in the NHS. A shift in the way impact evaluations are undertaken may therefore be required to demonstrate how library (and CL) services can impact on their own organisation by aligning themselves with local, regional and national drivers.

A number of Alignment Toolkits37,38 have been developed using case studies and local service profiles to support service justification in this way. One advantage of using these tools is that it is possible to record the impact of one-off projects/initiatives that CLs may be involved with in a more in-depth manner. Further work investigating the use of such toolkits and how they can complement existing impact evaluation methodologies is necessary.


Ascertaining whether CL services are effective can be undertaken in a number of ways. As this study demonstrates, the difficulty in evaluating CL services is that there are often too many variables in the models and outcomes being evaluated to create a 'one evaluation model fits all' approach. Previous systematic reviews4,5 highlighted the absence of comparative quantitative research methods to evaluate CL services (p. 30) and questioned the quality of previous CL evaluations,4 stating that they "do not rise to the level of 'best evidence' called for to support evidence based medicine or librarianship" (p. 31) and that "more high quality research is needed to demonstrate the value of these services" (p. 31). This review demonstrates that there is still a paucity of pre/post comparative studies, with a continuing trend to undertake user surveys (17/19 studies) to determine whether the CL service has met its objectives. Quantitative or experimental designs such as randomised trials can demonstrate the effectiveness of interventions, but only one study23 in this review used this approach and only one used a quasi-experimental approach.32

However, it may be necessary to question the nature of 'high quality' research designs within library and information practice. Within the evidence based medicine paradigm, RCTs are considered the highest level of primary research; however, demonstrating the effects of the services provided by CLs is likely to be difficult using such methods.39 This could be due to practical issues involved in allocating users to the CL service or to an alternative service for comparison; indeed, the RCT reported in this review failed to maintain randomisation owing to differences in enquiries between users.23 Furthermore, an enquiry is not a standard intervention (in comparison with a drug), and the approach needed to answer each enquiry is likely to be different, making comparisons impossible. It is unlikely to be the act of providing information alone that makes a difference to care or to the outcome of the patient; other factors (confounders) will also make a difference, and these need to be taken into account when interpreting the results of any study. Problems with confounders were highlighted in the quasi-experimental study in this review.32

These practical issues, which compromise the quality of experimental approaches, raise the question of whether experimental or quasi-experimental approaches are the most appropriate for demonstrating the effectiveness or impact of CL services. If not, how can evaluation methods be improved so that a high quality, yet appropriate and practical, study can be undertaken? Urquhart39 (p. 218) recommends accepting 'the limitations of the user survey approach' and aiming to reduce bias. This is in line with Weightman and Williamson's5 call for the inclusion of qualitative alongside quantitative methods in conducting impact studies, and their highlighting of the use of the CIT to capture evidence of specific instances of impact5 (p. 21). These approaches are further reinforced and advocated in their toolkit for assessing impact in libraries.7

This review suggests that, despite some flaws, CL evaluations are improving in their methodological approach, in contrast to previous reviews which found that "systematically constructed qualitative research methods have been used very rarely"4 (p. 30). The majority used a mixed method approach; 13 of these triangulated their data collection methods and nine utilised the CIT. However, it is important to note that, while the CIT is a powerful tool for demonstrating specific instances of impact,36 the timing of its use and the wording of the question are also important to capture evidence of actual rather than intended impact.

This review also highlights the difficulty in selecting appropriate markers for assessing study quality. On closer inspection, it was noted that, despite meeting a number of recommended quality criteria (Weightman et al.7), some studies still failed to undertake the methods in a robust and unbiased manner. Issues concerning researcher and response bias and low response rates3,5 still need to be adequately addressed. While it is acknowledged that many evaluations take place at a local, internal level, and the use of external evaluators is often beyond the scope of some services, more needs to be done to address the influence of researcher bias. Relatively simple and low cost options include using external postgraduate students to conduct an independent evaluation as part of a dissertation.14,16 Alternatively, libraries in neighbouring regions could collaborate to perform an evaluation of each other's hospital, using the same tools and methods. In addition to reducing researcher bias, this would lead to increased standardisation between studies, allowing better comparisons.

Compared with previous reviews,3 improvements can be seen in the reporting of samples and response rates, yet there is clearly room for improvement in the reporting of data collection and analysis. Some of the studies were intended for local consumption, which needs to be taken into account when critiquing the reporting, but given the increasing number of librarians involved in critical appraisal40 and the expansion of librarian roles from locating evidence to assessing its quality,41 it is worth reminding librarians of the need to be explicit when reporting their own findings.

Outcomes and effectiveness

‘Measurement is the key element in the evaluation process, because the credibility of the results will depend on the quality of the measures themselves and the methods used to capture the data’42 (p. 166). This review, like previous ones, found wide variation in the outcome measures used and the outcome data collected (see Table 5). A complication with measuring library services, however, is that they may not have immediate, tangible or direct outcomes, and therefore evaluating that contribution, or demonstrating its effectiveness or impact, is likely to be difficult or complex. Urquhart39 (p. 217) suggests that ‘looking for immediate patient care impacts may not be productive’; long term patient care impact may depend on organisational factors outside the CLs’ control, and ‘impact on patient care can only be measured indirectly in terms of helping health staff improve the quality of care’39 (p. 212). Previous systematic reviews3 recognise that actually measuring a direct impact on patient care is “difficult if not impossible”2 (p. 19) and that “no study to date has attempted to measure the direct or indirect impact of CML [clinical medical librarian] services on the outcomes of patient care”4 (p. 30). This review reports on 14 studies which examined non-specified impact, but a significant number (n = 6) identified impacts in specific areas, particularly in relation to better informed decisions, diagnosis and change in drug or therapy, providing some evidence that the situation is improving. Furthermore, the use of the CIT can highlight positive impacts of the contribution of CL services. When the results of the studies using the CIT are examined separately (n = 9), seven highlighted a positive impact on patient care and one a neutral impact; six impacted positively on better informed decisions, three on the choice of drugs/therapy and four on the advice given to patients. This review also adds evidence that CLs save clinicians’ time (nine studies in addition to the four reviewed by Weightman and Williamson5).

The measures used to evaluate CL services can be classified into those which measure process, outcome or impact. Outcome measures are used to determine the effectiveness of an intervention (whether the intervention works), whereas impact measures seek to establish whether the intervention has made a difference. Although noted as outcome measures, some of the measures cited in the studies were actually measuring process rather than the outcome or impact of interventions, for example usage statistics provide information regarding the process of the service not whether it works or makes a difference. Although process measures such as these are useful in indicating whether a service is used, they do not provide any information regarding whether the service is effective or makes an impact.

A move towards the use of the CIT to collect data on the specific outcomes listed in Table 5 would further improve the quality of CL evaluations and allow CLs to demonstrate where their contributions can specifically make an impact. Ensuring that the evaluation is conducted some time after the event and asks how the information was actually used (rather than intended use) will provide more robust evidence. Publication of these evaluations would build up the evidence base and provide further evidence surrounding the effectiveness and impact of CLs, whilst continued collection of feedback regarding the usefulness and relevance of search results would enable CLs to assess their effectiveness and the quality of their service provision.

Strengths and limitations of the review

Conducting a systematic review collaboratively represents a novel approach within evidence based library and information practice and provided an opportunity for capacity building in systematic reviewing and research skills. The collaboration and range of viewpoints of the eight participants led to consideration of wider perspectives (especially regarding the CL model as a whole rather than simply focusing on job titles) and greater objectivity. However, two of the studies22,26 included in this review were written by two of the authors; therefore, in an attempt to mitigate any bias, other team members evaluated these studies. Finally, the review was restricted to English language publications only.


Overall, this review has indicated that there are four clear models of clinical librarian service provision, of which the most common in the UK is an 'outreach' model. Providing 'information at the point of need' is core to the CL model, with an increasing trend for UK CLs to follow an outreach model in which the point of need may be physical or electronic and is satisfied by a range of means and methods of providing information to users, involving both a proactive and a responsive approach.

The review has provided limited evidence that CLs are effective in saving health professionals time, and the results of literature searches provided are relevant and useful, indicating that clinicians are happy with the quality of the services. CLs have a positive effect on clinical decision making by contributing to better informed decisions. There is limited evidence that CLs impact on diagnosis and the choice of drug or therapy.

Since the publication of the last review,5 there has been an improvement in the quality of studies, with a move towards a mixed methods approach to capture both performance and impact outcomes. In particular, the use of the CIT can demonstrate specific instances of impact, provided actual rather than intended impact is measured. However, more work needs to be done to ensure that the methods chosen are adequately conducted, limit bias and are reported explicitly. Assessing study quality is problematic, as markers (such as whether the data have been triangulated) which indicate the quality of a study may offer a simplistic approach, capturing whether something has been done rather than how well it was done.

Recent studies have focused on the benefits of the clinical librarian service to the clinician, measuring outcomes relating to time saving, confidence in using the literature and usefulness of results. Continued use of these outcome measures is useful in demonstrating effectiveness. However, the outcome measures defined in the Impact Toolkit7 would appear to give a more appropriate measurement of the impact of a clinical librarian service, provided it is recognised that CLs make a contribution to patient care rather than directly impacting on it. Given the economic climate, CL services (and library services in general) may also need to link their evaluations to organisational objectives and use appropriate outcome measures to demonstrate impact on these.

Further research

Further research is needed to investigate the use of Alignment Toolkits in demonstrating organisational impact.

Further examination of the CL model would aid the redefinition of the UK Clinical Librarian in a way that is representative of the role undertaken.

Comparisons of the effectiveness and impact of different models of CL services (e.g. those which have a critical appraisal element against those without) are needed.