The value and impact of information provided through library services for patient care: developing guidance for best practice
Alison Weightman, Support Unit for Research Evidence (SURE), Sir Herbert Duthie Library, Cardiff University, Heath Park, Cardiff CF14 4XN, UK. E-mail: email@example.com
Introduction: Previous impact tool-kits for UK health libraries required updating to reflect recent evidence and changes in library services. The National Knowledge Service funded development of updated guidance.
Methods: Survey tools were developed based on previous impact studies and a systematic review. The resulting draft questionnaire survey was tested at four sites, and the interview schedule was investigated in a fifth area. A literature search in ASSIA, Google Scholar, Intute, LISA, LISTA, Scirus, Social Sciences Citation Index (Web of Knowledge), and the major UK University and National Libraries Catalogue (COPAC) identified ways to improve response rates. Other expert advice contributed to the guidance.
Results: The resulting guidance contains evidence-based advice and a planning pathway for conducting an impact survey as a service audit. The survey tools (critical incident questionnaire and interview schedule) are available online. The evidence-based advice recommends personalizing the request, assuring confidentiality, and using follow-up reminders. Questionnaires should be brief, and small incentives, such as a lottery draw, should be considered. Bias is minimized if the survey is conducted and analysed by independent researchers.
Conclusion: The guidance is a starting point for a pragmatic survey to assess the impact of health library services.
This research study was funded by the National Knowledge Service, England, to develop guidance on how to estimate the impact of a health library based on best current evidence.
The project evolved from a previous systematic review1 of research studies that examined the value and impact of library services on health outcomes for patients and time saved by health professionals. The quality standards and other examples of good research practice from the papers included in the systematic review were used to develop suggestions for a practical, but low-bias user survey (see Box 1).
Box 1. Suggestions for a practical, but ‘low-bias’ impact study for health libraries (from Weightman and Williamson)1

- Appoint researchers who are independent of the library service.
- Ensure that all respondents are anonymous and that they are aware of this.
- Survey all members of chosen user group(s) or a random sample; consider those who decline at invitation as non-respondents.
- Agree a set of questions that are objective (e.g. changed drug therapy), well used in previous research, and developed with input from library users.
- Ask respondents to reply on the basis of a specific and recent instance of library use/information provision (i.e. an individual case) rather than library use in general (the critical incident technique).
- Combine a questionnaire survey with a smaller, but also random, sample of follow-up interviews.
Findings from an impact study of library services may be used to provide evidence of the need to retain, extend, or restructure library and information services. The findings should indicate why the library and information services are used and how the use of information obtained relates to patient care directly or indirectly through education and research. Results may also indicate whether the library service contributes to more efficient clinical decision-making, by making information more easily accessible, or through searching by librarians on behalf of health professionals (mediated searching). More importantly, impact surveys for health library (or knowledge) services should inform the planning of future services. The purpose of a National Service Framework for Quality Improvement for NHS-funded knowledge services in England2 is to establish the core and developmental requirements for such services, and confirm the principle that knowledge services need to work collectively.
Other library sectors in the UK are also involved in impact assessment. An initiative under the auspices of the Library and Information Research Group and the Society of College, National and University Libraries (in the UK and Ireland) assessed the impact of higher education libraries on learning, teaching, and research.3–5 For public libraries in the UK, the Laser Foundation commissioned research on methods for such libraries to demonstrate their contribution to society.6 An overview of impact assessment points out some of the pitfalls, such as the need to consider negative as well as positive impacts of service provision.7
In health libraries elsewhere, Cuddy8 surveyed library patrons, using the Abels et al. taxonomy9 that emphasizes contribution to organizational goals. The model relates service category, inputs, outputs, performance measures, and outcome measures (e.g. support of clinical decisions), and impact is defined specifically as a direct effect on patient care such as the reduction in length of stay.
It was agreed by the National Library for Health Library Services Development Group that more guidance for libraries was necessary, and that a nationally agreed questionnaire, for example, would support easier comparisons and aggregation of data sets.
The first stage was the development of a questionnaire and interview schedule. A critical incident approach was adopted to help the respondents focus on a particular occasion of information need and use. The critical incident technique has been extensively used over many years, provides a more accurate reflection of need and use than asking about use in general, and is established as viable for use in library research.1,10–14 The questions were based on previously developed survey tools and good practice identified from the systematic literature review. The resulting survey questionnaire and interview schedule were piloted within the UK.15
This article describes the testing of the survey methods, the lessons learned, and the implications for use of the survey instruments. Other evidence was also collected and collated, from a selective systematic review on factors influencing response rate, contemporaneous and related projects in health libraries, and expert statistical and clinical governance advice. The results and discussion are integrated to provide best practice guidance in the form of a planning pathway for an impact study. Detailed advice is given in the best practice guidance (available online). It is hoped that health libraries in the UK will use the planning pathway and online guidance in future impact surveys, to help further validate the research instruments and refine the best practice procedures.
Designing and testing the survey instruments
A draft questionnaire survey was tested at four sites, within a district general hospital, a shared services model (across several Trusts), a public health/commissioning support service, and a clinical librarian service, during 2006–2007. At the end of the questionnaire, respondents were asked to volunteer for interview, as required by the research ethics procedures, but very few did so. The interview schedule was therefore trialled with a convenience sample of four staff (a consultant, a medical scientist, a nurse, and a nurse manager) in a district general hospital during 2007. The methods are described in detail by Urquhart et al.15 The questionnaire had an introductory section for demographic information about staff group, and then the respondents were asked to reflect on a recent occasion when they had wanted information for clinical decisions. Questions explored why the information was needed, the format required, the type of information expected, the resources used (formal and informal information resources), success in answering the query, the immediate benefits (cognitive impact), time saved, library contribution to the search, and the likely clinical impacts of the information obtained. A final question asked about the respondent's information seeking habits, perceptions of confidence and competence, and use of library services. Most questions were based on questions used in previous large-scale UK impact studies, such as the Value project16,17 and EVINCE project,18 and subsequent smaller-scale impact studies, with some updating to reflect the current range of electronic resources and specialized resources. One question was intended, partly, to examine perceptions of the usefulness of clinical librarian services of the ‘clinical question answering’ variety. The categories asking for subjective estimates on time spent searching were based on previous evaluations for the National Library for Health19 and the North Wales clinical librarian project.20
Literature review: enhancing response rates
A literature review was also carried out to examine the current consensus on enhancing the response rate to questionnaire surveys of working age groups carried out by paper, e-mail, or Web-based methods. ASSIA, Google Scholar, Intute, LISA, LISTA, Scirus, Social Sciences Citation Index (Web of Knowledge), and the major UK University and National Libraries Catalogue (COPAC) were searched using the terms ‘questionnaire*’ or ‘survey*’ in the title, and ‘review’ or ‘response rate*’ or ‘good practice’ in the abstract for papers published from January 1990 to April 2007, regarding relative response rates to paper, e-mail, and web surveys.
Ethical/Clinical governance approval
Advice on ethical/clinical governance approval was sought from the Central Office for Research Committees (now National Research Ethics Service) at the start of the pilot research study. As a result, the project to test and develop the survey instruments was run as a research project, and therefore NHS research ethics procedures were followed in full.
During the project, guidance was changing on the distinction between audit and research, and it became clear that future local impact projects could be classed as audit projects, and full research ethics approval would not be necessary. Nevertheless, to assist libraries in their impact studies, guidance would still be necessary on some of the procedures they might follow.
Evidence from recent similar surveys
Evidence was also sought from library sites involved in impact research (via the e-mail list lis-medical). Finally, the tool-kit drew from a survey carried out in 2005, in the Birmingham and Black Country Strategic Health Authority area that examined library and information services support for clinical governance. This research21 was relevant to the tool-kit development as librarians had conducted interviews, using a critical incident approach, and one of the aims of the project was to enhance the librarians’ research skills. This helped to indicate how librarians might team up to audit services in each other's library sites.
Grading the evidence
The included evidence was graded for research type according to Health Evidence Bulletins Wales methodology22 based on the study design described. No formal critical appraisal was carried out.
The survey instruments
Valuable information obtained from the pilot led to improvements to the wording of the questionnaire based on the searching profiles identified. Some questions with potential for misunderstanding were transferred to the interview schedule.
The final version reflected the need to keep the questionnaire as short as possible. Some questions were merged and re-arranged. Responses to the original survey indicated that the questions that asked about assistance from library staff obtained inconsistent responses.15 The final versions of the invitations and questionnaire, and interview schedules are available in the best practice guidance (available online, in Appendices 1 and 2, with a sample consent form for the interview in Appendix 3).
Enhancing response rate: the literature review
Incentives. On balance, there appear to be benefits in including an incentive,23–25 particularly a cash rather than a non-cash incentive,25–30 but this is disputed by other research.31,32 One study33 found that entry into a lottery draw was not effective, while two others34,35 found that this increased the response rate. Enclosing a pen with a mailed questionnaire may36 or may not37 increase the response rate (largely type II/III evidence from randomized and non-randomized trials).
It is also important to remember the ethics of incentives to participation, and incentives should not appear as coercion.
Length of questionnaire and format of presentation. On the effect of questionnaire length on response rates, findings vary between an unclear or variable effect31,32,38,39 and an increased response to a shorter questionnaire24,34,35,40–44 (largely type II/III evidence from randomized and non-randomized trials).
Likewise, the colour of the survey may not have an influence45,46 (type II/III evidence from randomized and non-randomized trials), although Etter et al.47 noted an increased response rate to a pink questionnaire, but no effect with other colours (type II evidence, from a randomized controlled trial).
Other potentially positive influences include advance notification,28,31 (although this is not always supported32), the appearance of the questionnaire,48 presence and wording of a covering letter asking for help and stressing the importance of the survey,32 and an assurance of confidentiality.23 A questionnaire identification number can be effective49,50 (largely type IV evidence from observational studies).
Personalization of the covering letter was found to have a neutral effect by some,45 while others found personalized contact effective,31,51,52 and a handwritten signature on the accompanying letter for postal surveys may increase this effect32,51 (largely type II evidence, from randomized controlled trials).
Gendall48 recommended a ‘likeable’ simple, neutral cover design (type IV evidence, from observational studies).
Method of delivery. In terms of the choice of Web-based, e-mail, or postal questionnaires, there is relatively little research literature, although use of a Web-based form will save time, both in survey administration and analysis, and response rates/quality of response appear to be comparable53 to or better than54,55 a paper-based survey (type II/III evidence from randomized and non-randomized trials).
Reminders. The literature suggests that two to three reminders are appropriate56–60 (largely type IV evidence, from observational studies but including one systematic review of trials60), and telephone reminders may also be effective61 (type II evidence, from a randomized controlled trial).
Overall, the sum of the evidence lends further support to findings from an extensive systematic review published in 2002 on increasing response rates to postal questionnaires.62 The review found evidence for using a financial incentive, keeping the questionnaire short, personalizing the request, using coloured ink, providing stamped addressed envelopes and using recorded delivery, using reminders, and avoiding sensitive questions (type I evidence, systematic review of randomized controlled trials).
On the basis of the findings from this study, a planning pathway is proposed below for discussion and further development.
Ideally, collaborative development across a range of health care library sites within and even beyond the UK would help to validate the research instruments (see Appendices, online), as well as informing future development of library and knowledge services for health staff. In particular, techniques may be developed to help overcome the challenge of getting a random or representative sample of library users to volunteer for interview.
Further discussion should also take place on whether the tool-kit should be further developed to include user service quality measures as well as impact. Prior to the development of the impact tool-kit, the Library and Knowledge Development Network (now Services Development Group) had piloted and evaluated the LibQUAL+ survey63 in 2006. Recommendations64 from an evaluation meeting held at the end of the pilot concluded that LibQUAL+ would require adaptation to make it relevant for all NHS settings.
It is noted that the Medical Library Association has steered away from attempts to measure the direct impact of library services on health outcomes65 and has developed an extensive Benchmarking Network66 devoted largely to collecting a range of activity and performance data rather than measures of the impact of the individual library. However, despite the difficulties of measuring direct impact on patient care, evidence can be sought that includes patient care outcomes1 along with the social, research, and educational benefits of the library service. Further development of a pragmatic method for gathering this information is justified, as the impact of the library service on such outcomes (patient care, research, educational, social) should inform development of improved services.
Questionnaire and interview surveys: basic principles
The pilot project was designed on the principle that interviews usefully complement questionnaire surveys (evidence from the systematic review). Ethical considerations should guide the conduct of questionnaire and interview surveys by library staff, whether the survey is conducted as an audit or as research. Planning should therefore consider: (i) scope of the survey (will the findings be generalizable?), (ii) anonymity of responses (how can these be guaranteed?), (iii) confidentiality (how can this be assured for the participants?), (iv) conflicts of interest (will positive and negative impacts be considered fairly?), and (v) voluntary participation (participants should not feel coerced, and should have the freedom to withdraw from the survey).
Advice for UK health libraries may be sought from research governance for NHS-funded libraries or from university research ethics guidance for libraries funded by higher education.
Content of questionnaire
The questionnaire schedule (see online Appendices) includes revisions made after the piloting and detailed analysis of the returns. The online best practice guidance gives further details on tailoring the questions to the setting (type IV/V evidence based on surveys and expert guidance, themselves based on a systematic review of the literature).
Scope of survey: Sample size and sampling
There are websites that may assist with sample size calculations. For example, there is a Web-based calculator67 that gives definitions of confidence level and confidence interval, and explains the factors that affect confidence intervals. One can calculate the sample size required for a 95% confidence level and a confidence interval of 5% (i.e. if 25% of your sample said ‘yes’ to a question, the result for the whole population could be estimated as 25 ± 5% with 95% confidence). For a large population, the sample size required is 384; for a population of 1000, it is 278; and for a population of 500, it is 217. Randomized sampling (see online Appendix) minimizes bias, and to assess the impact of health library services in an organization, staff lists should be used rather than lists of library members (type V evidence, expert guidance).
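The sample sizes quoted above can be reproduced with the standard formula for estimating a proportion, n0 = z²p(1 − p)/e², applying a finite population correction for smaller populations. The sketch below is illustrative only (the function name and defaults are assumptions, not part of the cited calculator67); it assumes the most conservative case of p = 0.5 and z = 1.96 for a 95% confidence level:

```python
def sample_size(margin=0.05, z=1.96, population=None, p=0.5):
    """Sample size for estimating a proportion to +/- margin at the
    confidence level implied by z (1.96 ~ 95%).

    p = 0.5 is the most conservative assumption (largest sample).
    If a finite population size is given, the standard finite
    population correction n = n0 / (1 + (n0 - 1) / N) is applied.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population is None:
        return round(n0)
    return round(n0 / (1 + (n0 - 1) / population))

print(sample_size())                 # large population: 384
print(sample_size(population=1000))  # 278
print(sample_size(population=500))   # 217
```

Note that the required sample shrinks only modestly as the population falls, which is why even small organizations need a substantial proportion of staff to respond.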
Obtaining a good response is important, and there are ways the response rate can be enhanced. On the basis of the literature review outlined earlier, four good practice recommendations are proposed (see Box 2) (largely type II/III evidence, randomized, and non-randomized trials; see Results section).
Box 2. Evidence-based recommendations for maximizing the response rate

- Personalize the request, stressing the importance of the survey and assuring confidentiality.
- Send at least one, and ideally two or even three, reminders.
- If you amend the questionnaire, keep it brief.
- Consider the use of a financial incentive such as a lottery draw.
The interview schedule has been developed from schedules used in previous impact studies. The pilot study did not obtain more than face validity estimation of the interview schedules, a consequence, possibly, of the constraints stipulated. Experience gained with a survey assessing contribution of health libraries to clinical governance suggests that libraries could pair up, so that a pair of staff from one library could interview health professionals served by the other (type V evidence, expert guidance based on a systematic review of the literature).
Data analysis and presentation
More detailed guidance is provided in the online Appendix. The guidelines are not evidence based, but follow accepted style guides (type V evidence, expert guidance).
The impact tool-kit, resulting from and presented with this study, is a starting point for a pragmatic survey to assess the impact of health library services. It should ideally be piloted and further developed in a wider range of library sites across, and perhaps even beyond, the UK.
The research team thanks the National Knowledge Service for funding the study, and the members of the National Library for Health service development group who have contributed in various ways to the project, particularly Helen Bingham and David Peacock. Staff at the participating pilot sites in Gwent, Blackburn, Preston, and Leicester, particularly the librarian co-ordinators, human resources, and research governance provided assistance. We thank those who kindly agreed to help in the design of the interview schedule. Last but not least, we are grateful to the staff who returned questionnaires or took part in interviews.
Implications for Policy
- Libraries across the UK could develop this guidance nationally to measure the impact of their services.
- Impacts may be related to patient care outcomes.
- Impact studies may be regarded as service audit, for future improvement of services.
Implications for Practice
- Library services might team up to enable independent assessment of each other's services.
- Questionnaire surveys should be complemented by some interviews, if possible.
- The questionnaire and interview schedules should be further validated.