Abstract


In 1999, staff at the universities of Sheffield and Oxford commenced an unfunded project to examine whether it is feasible to apply critical appraisal to daily library practice. This aimed to establish whether barriers experienced when appraising medical literature (such as lack of clinical knowledge, poor knowledge of research methodology and little familiarity with statistical terms) might be reduced when appraising research within a librarian's own discipline. Innovative workshops were devised to equip health librarians with skills in interpreting and applying research. Critical Skills Training in Appraisal for Librarians (CRISTAL) used purpose-specific checklists based on the Users’ Guides to the Medical Literature. Delivery was via half-day workshops, based on a format used by the Critical Appraisal Skills Programme. Two pilot workshops in Sheffield and Oxford were evaluated using a brief post-workshop form. Participants recorded their objectives in attending, their general understanding of research, and whether they had read the paper before the workshop. They were asked about the length, content and presentation of the workshop, the general format, organization and learning environment, whether it had been a good use of their time and whether they had enjoyed it. Findings, which must be interpreted with caution, suggest that the workshops were enjoyable and a good use of participants' time. Although the scenario selected required no clinical knowledge, barriers remain regarding statistics and research methodology. Future workshops for librarians should include sessions on research design and statistics. Further developments will take forward these findings.


Introduction


The role of the health librarian in supporting evidence-based practice is well established.1,2 Increasingly, this role involves librarians in supporting critical appraisal3–5 by health professionals within their employing organization. Debate continues as to the extent to which librarians should themselves acquire critical appraisal skills.6,7 In Britain and North America, librarians have been involved from the beginning in organizing and supporting initiatives such as the Critical Appraisal Skills Programme and week-long Teaching Evidence-based Practice Workshops.8 At a local level, they complement critical appraisal skills workshops by training health practitioners to find the evidence. However, very few information professionals have undergone the intensive generic workshops that would prepare them for facilitating the development of appraisal skills. Our collective experience with librarians in most of the NHS Regions in England, as well as at a national level, suggests three particular barriers to greater librarian participation in critical appraisal:

  • a lack of clinical knowledge (the context);
  • poor knowledge of research methods and designs (the methods);
  • a lack of confidence in managing the statistics (the skills).

Recently, the evidence-based paradigm has migrated from medicine to social work, education and human resource management.9 In the year 2000, the term ‘evidence-based librarianship’ received increasing emphasis through articles in the health information literature and through exposure at several international conferences.10 Every day, library managers, and those working in ‘technical’ professional roles, face numerous decisions with regard to services and resources.11 ‘Should we concentrate on end-user training at the expense of mediated search services?’, ‘Should we introduce a clinical librarian initiative?’, ‘Should we subscribe to electronic journals instead of their printed equivalents?’, ‘Should I send my staff to a 1-day workshop or give them time to undertake a distance learning course?’. How do they resolve these decisions? By taking advice from colleagues, by following their professional judgement, by responding to anecdotal reports in the professional press—a myriad of ways developed through custom and practice. Evidence-based practice is an opportunity for the information profession to improve the quality of such decision making.

Background to the project

The seeds of the Critical Skills Training in Appraisal for Librarians (CRISTAL) Project can be traced to a meeting organized to advance a Librarian Development Programme for the NHS. Once it became apparent that funds would not be forthcoming to develop skills in interpreting and applying research evidence to health library practice, the authors decided to advance this agenda independently. They initiated an unfunded collaboration between the School of Health and Related Research (University of Sheffield) and the Health Care Libraries Unit (University of Oxford).12 Subsequently, a detailed proposal to establish whether it is feasible for health care librarians to apply critical appraisal skills in their day-to-day practice was firmed up amidst the appropriately evidence-based atmosphere of the Cochrane Collaboration Colloquium in Rome in October 1999. The CRISTAL Programme sought to capitalize on library professionals’ own knowledge of the context for their work, to introduce them to a rudimentary knowledge of research design and to present the necessary statistics in a way that was meaningful and non-threatening.

Guiding principles

The CRISTAL Programme was guided by two particular influences from mainstream evidence-based practice. Early in the development of evidence-based medicine, an Evidence-based Medicine Working Group had produced a series of Users’ Guides to the Medical Literature.13 Each guide was designed to address a particular question type (e.g. diagnosis, therapy, etc.) or study design (e.g. systematic review). Furthermore, each Users’ Guide was accompanied by a checklist to be used in carrying out the mechanics of critical appraisal. The CRISTAL project team believed that a series of similar guides, based on question types as opposed to study types, would enable librarians to ask meaningful questions of published research. A candidate list of question types included Evaluating User Education, Evaluating End User Searching, Assessing Clinical Librarian Projects, Identifying Information Needs, Evaluating Current Awareness Services and Assessing User Studies. From this list, two topics, Identifying Information Needs and Assessing User Studies (a generic guide examining studies that measure use of any library service), were prioritized for development on the basis of the prevalence of such studies and the importance of their topics.

The other major influence on the project had been the authors’ involvement in generic critical appraisal skills training for health professionals, as pioneered by the Critical Appraisal Skills Programme (CASP) in Oxford.14 Features such as the use of workshops (typically of half-day duration), a problem-based scenario, the use of checklists and an integrated approach to evaluation all figured prominently in delivery of the CRISTAL Programme. The CASP methods were judged particularly appropriate for translation to the health information context because:

  1. They are multidisciplinary in intent and have proved successful with all professional groups, including doctors, nurses, Professions Allied to Medicine, Maternity Services Liaison Committees, health service managers and consumer groups.
  2. They provide a standard approach to the important dimensions of reliability, applicability and validity.
  3. They are familiar to many librarians already involved in supporting critical appraisal or ‘finding the evidence’ workshops.

It was recognized that, as paralleled in the generic CASP programme, librarians would also need to be able to use databases to locate evidence in their professional literature (e.g. Library and Information Science Abstracts, ASSIA, Social Science Citation Index, CINAHL and MEDLINE). However, this was flagged as a topic for subsequent development.

Based on experiences from these existing initiatives, the following operating principles were agreed:

  1. Checklists would be specific to a type of study (e.g. information needs analysis, user study, etc.), not to a specific study design.
  2. Checklists would share the standard three CASP dimensions of validity (i.e. appropriateness of methods to the research question), reliability (i.e. the rigour with which the actual study was carried out) and applicability (i.e. the usefulness of the research to the user's own practice).
  3. Checklists would be orientated towards the practitioner, not towards the creation of an academic tool.

Methods


The work was carried out in three phases:

  1. Development of an initial critical appraisal tool.
  2. ‘Cross-over’ evaluation of the tool.
  3. Workshop-based evaluation.

Development of an initial critical appraisal tool

The project team resisted the assumption that the health information sector could readily adopt an approach to evaluating research from generic healthcare without significant modification. A dual approach was employed to incorporate perspectives grounded in the theory and practice of the health information professional:

  1. One investigator (ABo) reviewed existing critical appraisal checklists for their value to the proposed tools (an evolutionary approach).
  2. The other investigator (ABr) started from an example of a study to be appraised and used this to suggest appropriate appraisal criteria within the domains of validity, reliability and applicability (a revolutionary approach).

‘Cross-over’ evaluation of the tool

The resultant two sets of criteria were then integrated into a single list, prior to the pilot stage. Investigator One (ABo) used the tool with the original study identified by Investigator Two. Meanwhile Investigator Two (ABr) used the tool to appraise a second study of a similar type, identified by Investigator One. The integrated tool was then revised in the light of this ‘cross-over’ phase.

Workshop-based evaluation

Following the design and evaluation phases described above, two workshops were convened in Autumn 2000: one in Oxford and one in Trent. Two groups of librarians were chosen, primarily for convenience of access but also because, for historic reasons, they represented contrasting levels of prior involvement in critical appraisal (‘CASP-aware librarians’ and ‘CASP-neutral librarians’). Between 10 and 20 librarians at each venue were invited to workshops facilitated by the two investigators. They received free training as part of a continuing professional development programme in return for a commitment to the evaluation. At each workshop, participants used the Checklist for User Studies to appraise a paper. In the interests of consistency the same paper was used at each session (Fig. 1).15 A standard evaluation sheet was used at each workshop. Following the format of a standard CASP workshop, participants were asked to vote, at the beginning and end of the session, on the question posed by the scenario.

Figure 1. Scenario: Keeping a finger on the pulse

Evaluation

The original plans for evaluation, to be funded under the NHS Librarian Development Programme, were very comprehensive and involved multiple opportunities for evaluation, both of the instrument itself and of the workshop process:

  1. Matched data comparing the assessments by the two investigators (ABo and ABr). This would allow us to quantify the extent of agreement between the two investigators using the kappa statistic (illustrated in the sketch after this list).
  2. Matched data comparing assessments by a group of external evaluators. Again, the kappa statistic could be used to compare agreement between those involved in developing the instrument and those charged with interpreting it.
  3. Intra-workshop and inter-workshop comparisons between the two workshops. Comparisons were planned between both groups using a standard evaluation form. Variables to be collected would include years as a health librarian, extent of prior experience of CASP methods, number and type of journals read each month, extent of academic qualifications and previous experience of research projects.
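
For illustration only, here is a minimal sketch, in Python, of the kappa (Cohen's kappa) calculation envisaged in items 1 and 2 above. It assumes each investigator records a categorical (e.g. yes/no) judgement for every checklist question; the judgement lists shown are hypothetical, not data from the project.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
        # agreement and p_e the agreement expected by chance alone.
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: product of each rater's marginal
        # proportions, summed over all categories used.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
                  for c in set(rater_a) | set(rater_b))
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical yes/no judgements on the 12 checklist questions.
    abo = ['y', 'y', 'n', 'y', 'y', 'n', 'y', 'y', 'n', 'y', 'y', 'n']
    abr = ['y', 'y', 'n', 'y', 'n', 'n', 'y', 'y', 'y', 'y', 'y', 'n']
    print(round(cohens_kappa(abo, abr), 2))  # prints 0.6

Kappa runs from below zero (less agreement than chance) to 1 (perfect agreement), so a single figure would summarize how consistently two appraisers apply the checklist.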

Faced with the pragmatic considerations of an unfunded project, however, evaluation focused solely on data collected at the two workshops. Participants were asked about their objectives in attending the workshop and whether these had been met, about their general understanding of research, and whether they had read the paper before the workshop. They were also asked questions about the length, content and presentation of the workshop sessions and the general format, organization and learning environment. Finally, they were asked to indicate whether they felt that the workshop had been a good use of their time, and whether they had enjoyed it.

Objectives in attending

The workshops were attended by 25 participants. Of these, 22 reported that their objectives in attending the workshop were to learn how to appraise a piece of library research, and 23 expected it to contribute to their general professional development. Twenty-one reported the objective of increasing their understanding of research, and 18 to gain expertise to pass on to colleagues (Table 1).

Table 1. Learning objectives.

Objective                                                             Oxford (n = 14)   Sheffield (n = 11)
Learn how to critically appraise a piece of library research                12                 10
Increase understanding of research issues                                   11                 10
Contribute to general continuing professional development programme         12                 11
Gain expertise to pass on to colleagues                                     10                  8
Other                                                                        1*                 2†

*Other in this instance was ‘to understand what was trying to be achieved in these workshops and stage of development’ (CASP colleague).
†Other in these instances were ‘to meet other librarians in the Region and gain confidence in my new role’ and ‘to enhance my knowledge of critical appraisal of librarianship issues’.

Prior knowledge

Two of the 25 participants said that they had undertaken a lot of research, 14 had undertaken a little and nine had not undertaken any. Four reported that they read a lot of research papers as part of their job, 17 that they read a little and three that they did not read research papers (Table 2). The planning assumption that the Oxford group would be more sophisticated with regard to appraisal seems to be borne out by the differential responses between the two sites for both undertaking and reading research.

Table 2. General understanding of research.

Oxford (responses received: 14)                      Yes, a lot   Yes, a little   No
Have you undertaken research?                             2             9          3
Do you read research papers as part of your job?          3            11          0

Sheffield (responses received: 11)                   Yes, a lot   Yes, a little   No
Have you undertaken research?                             0             5          6
Do you read research papers as part of your job?          1             6          3 (+1 null response)

Four participants reported using existing user guides to help in appraising research papers. These included the CASP questions, the JAMA Users’ Guides, How to Read a Paper,16 Information for Evidence-based Care,17 the Cochrane Collaboration Reviewers’ Handbook and a checklist devised by the Cochrane Non-Randomised Methods Group.

In the Oxford workshop, 10 participants had read the paper carefully and four had only skimmed it. The Sheffield participants were not required to look at the paper beforehand, in an attempt to control for prior familiarity with the paper.

Objectives achieved

Twenty-two of the 25 participants felt that the small group session had been the right length, with 19 reporting that they had understood the meaning of the questions and four that they had not. Two questions from the appraisal checklist, ‘Are any limitations in the methodology (that might have influenced results) identified and discussed?’ and ‘What additional information do you need to obtain locally to assist you in responding to the findings of this study?’, had proved the most difficult to answer.

In general, participants found that the feedback session following the small group work helped clarify areas of uncertainty—23 reported this as having been achieved, and 19 felt that this session had been the right length.

Twenty-three of the 25 participants were happy with the workshop format. Nine felt that the workshop had been an excellent use of their time, 14 that it had been a good use of their time and two a fair use of their time. Interestingly, with regard to fulfilment of the learning objectives (Table 3), the Sheffield group felt that the session had met their objectives to a greater extent than did the Oxford group: seven of the 11 Sheffield participants, for example, reported that the first objective had been met ‘very much’, compared with three of the 14 Oxford participants. This suggests that prior knowledge and familiarity with research increase participant expectations of a critical appraisal session. Nevertheless, responses for both groups were concentrated in the ‘quite a lot’ and ‘very much’ categories.

Table 3. Objectives achieved.

                                                                       Not at all          Not much           Quite a lot         Very much
                                                                     Oxford Sheffield   Oxford Sheffield   Oxford Sheffield   Oxford Sheffield
Learn how to critically appraise a piece of library research            0      0           0      0          10      4           3      7
Increase understanding of research issues                               0      0           3      0          11      7           0      4
Contribute to general continuing professional development programme     0      0           1      1          11      8           2      2
Gain expertise to pass on to colleagues                                 0      0           1      0          11      7           0      2

Discussion


The pilot project addressed the need for a tool for appraising library-related literature and explored the feasibility of using workshops as an effective educational intervention. It has demonstrated that the appraisal tool, together with the workshop format, helped participants improve their understanding of research methods and their ability to use research to aid their decision making.

Participants reported two noteworthy factors associated with the ability to use the tool to judge the validity, reliability and applicability of the research paper: prior knowledge of statistical techniques and prior knowledge of research methodology. This was a particular issue when dealing with questions 6 and 7 on the checklist. This finding has informed the development of a 1-day programme on evidence-based librarianship in which the basic half-day critical appraisal workshop is augmented by two substantive sessions: Statistics for Petrified Librarians (STAPL) and Matching the Research Design to the Research Question. The 1-day format of this course has been run twice: once for health librarians in Wales and once for librarians in South-west England.

A further noteworthy factor associated with the ability to use the tool was having read the paper beforehand. Although 10 participants in the Oxford workshop reported that they had read the paper carefully beforehand, several commented that they would have liked to receive the paper earlier, stating that they needed more preparation time if they were to reach a decision about the paper. This finding is not unequivocal, however, as experience suggests that, regardless of how much time is allowed for reading the paper, some will always consider it insufficient. Similarly, participants indicated different preferred learning styles with regard to the optimal size and interactivity of their group.

Several participants alluded to the difficulty of assessing statistics as a major block to appraising the paper. This confirms other observations made by the authors concerning the participation of librarians, and indeed of all professional disciplines, in general critical appraisal sessions. Additional pre-workshop tools and preparation, such as worksheets or glossaries of terminology, may be necessary to enable participants to get the most from the learning possibilities of the workshop. One Oxford participant asked that basic terms be covered first, while a Sheffield participant suggested that we could perhaps ‘give a reference beforehand to a text which explains unfamiliar statistical terminology’. Such texts do in fact exist, the book A–Z of Medical Statistics: a companion for critical appraisal18 being one such example. These observations suggest that library course curricula may need to incorporate statistics as a core competence for potential information professionals.

Comments received substantiated the choice of small group work as supportive, inclusive and discursive. However, it has not been possible to reinforce learning through ongoing interaction or follow-up.

At the Oxford workshop, the feedback session discussed why such a large number of participants reported that they did not read papers as part of their job. Several factors contribute to this practice-research gap. There is a reported gap between the ideal availability of methodologically sound library and information science research and the reality, and problems were noted in gaining access to a relevant resource base. Participants were keen to find good examples of research, and suggestions for future developments included the availability of CATS (Critically Appraised Topics).19 Participants also expressed a need to improve the depth of general critical appraisal skills throughout the whole profession. However, such a need must be placed in the context of recent thinking on evidence-based practice in general, which suggests that not all practitioners will be able to undertake the complete evidence-based process. Instead, all can aspire to better ways of getting appraised, synthesized research reports to their profession in a much more readily accessible format, linked to identified work-based questions.20

One additional complication of the pilot workshops, which would not usually be encountered, is that they had two different, and not necessarily compatible, objectives: to appraise the value of the checklist itself and to use the checklist to appraise a paper. Usually, instrument development and instrument use are distinct; some participants highlighted this as a possible area for confusion. Respondents also found some duplication in the checklist questions. The project team has identified a need to validate the checklist further, and to refine it where necessary.

Conclusion


The findings from this unfunded pilot project suggest that there is indeed a need for critical appraisal sessions tailored to the specific needs of information professionals. Although the methods used do not differ substantially from those employed by generic critical appraisal sessions for health professionals, there does appear to be added value in using a checklist tailored to a particular information practice question and in considering a topic familiar to the audience. Nevertheless, librarians are no different from other professional groups in reflecting uncertainty with regard to statistical methods, interpretation of results and knowledge of research design. A useful spin-off from librarians acquiring critical appraisal skills within their own professional context might be that they would then feel more able to facilitate similar sessions with multidisciplinary groups within their organization.

Planned future developments

The CRISTAL project team has only developed two checklists to date, prioritized according to the volume and importance of the literature. However, developments in systematic reviews within health information suggest that other checklists may be easier to produce as a by-product of such reviews. For example, two systematic reviews presented at the Evidence-based Librarianship Conference in Sheffield and included in this issue will likely help to generate checklists for clinical librarianship and user training. Another suggestion is for a checklist to evaluate articles reporting the development of optimal filters. The authors intend to reproduce such checklists in their forthcoming book on Evidence-based Practice for Information Professionals. In the meantime, the two checklists reported in abridged form in this article (Tables 4 and 5) will be made available, with supporting hints and full documentation, from the Evidence-based Librarianship website at http://www.eblib.net.

Table 4. Twelve questions to help you make sense of a user study.

A. Is the study a close representation of the truth?
 1. Does the study address a clearly focused issue?
 2. Does the study position itself in the context of other studies?
 3. Is there a direct comparison that provides an additional frame of reference?
 4. Were those involved in collection of data also involved in delivering a service to the user group?
 5. Were the methods used in selecting the users appropriate and clearly described?
 6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?
B. Are the results credible and repeatable?
 7. What was the response rate and how representative was it of the population under study?
 8. Are the results complete and have they been analysed in an easily interpretable way?
 9. Are any limitations in the methodology (that might have influenced results) identified and discussed?
C. Will the results help me in my own information practice?
 10. Can the results be applied to your local population?
 11. What are the implications of the study for your practice?
     In terms of current deployment of services?
     In terms of cost?
     In terms of the expectations or attitudes of your users?
 12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?
Table 5. Twelve questions to help you make sense of an information needs analysis/information audit.

A. Is the study a close representation of the truth?
 1. Does the study address a clearly focused issue?
 2. Does the study position itself in the context of other studies?
 3. Is there a direct comparison that provides an additional frame of reference?
 4. Were those involved in collection of data also involved in delivering a service to the user group?
 5. Were the methods used in acquiring data on information needs appropriate and clearly described?
 6. Was the planned sample of users representative of all users (actual and eligible) who might be included in the study?
B. Are the results credible and repeatable?
 7. What was the response rate and how representative was it of the population under study?
 8. Are the results complete and have they been analysed in an easily interpretable way?
 9. What attempts have been made to ensure reliability of responses?
C. Will the results help me in my own information practice?
 10. Can the results be applied to your local population?
 11. What are the implications of the study for your practice?
     In terms of current deployment of services?
     In terms of cost?
     In terms of the expectations or attitudes of your users?
 12. What additional information do you need to obtain locally to assist you in responding to the findings of this study?

Once a substantial body of useful evidence-based materials has been identified as a result of this initiative it may be possible, given necessary resources, to pursue the authors’ plan to develop a reference management database of reviews in librarianship/information work, entitled REVEL (REViews of Evidence in Librarianship).

Finally, an already tangible result of the production of these checklists has been their use in systematic review activities. For example, researchers from the Information Resources section in ScHARR have already used the checklist on information needs analysis in a review of the information needs of visually impaired persons, and the checklist on user studies in their review of clinical librarianship published in this issue. In creating synergies between critical appraisal and systematic review activities, and between checklists developed within health information and their wider use within health care, the future of evidence-based information practice will become increasingly clear-cut!

Acknowledgements


We gratefully acknowledge the participation and support of the librarians of the former Oxford and Trent Regions, particularly Jennie Kelson who has shared with us in delivering the CRISTAL materials.

References

  1. McKibbon, K. A. Evidence-based practice. Bulletin of the Medical Library Association 1998, 86, 396–401.
  2. Tsafrir, J. & Grinberg, M. Who needs evidence-based health care? Bulletin of the Medical Library Association 1998, 86, 40–5.
  3. Landrivon, G. & Ecochard, R. Principles of the critical appraisal of medical literature. Health Information and Libraries 1992, 3, 29–34.
  4. Scherrer, C. S. & Dorsch, J. L. The evolving role of the librarian in evidence-based medicine. Bulletin of the Medical Library Association 1999, 87, 322–8.
  5. Dorsch, J. L., Frasca, M. A., Wilson, M. L. & Tomsic, M. L. A multidisciplinary approach to information and critical appraisal instruction. Bulletin of the Medical Library Association 1990, 78, 38–44.
  6. Jerome, R. N., Giuse, N. B., Gish, K. W., Sathe, N. A. & Dietrich, M. S. Information needs of clinical teams: analysis of questions received by the Clinical Informatics Consult Service. Bulletin of the Medical Library Association 2001, 89, 177–84.
  7. Gray, M. National electronic Library for Health. Vine 1999, 115, 57–61.
  8. Booth, A. Research. Health Information & Libraries Journal 2000, 17, 232–5.
  9. Trinder, L. & Reynolds, S. (eds) Evidence-Based Practice: a Critical Appraisal. Oxford: Blackwell Science, 2000.
  10. Booth, A. Spotlight on evidence-based librarianship. Bibliotheca Medica Canadiana 2002, 23, 84–5.
  11. Booth, A. Asking questions, knowing answers. Health Information and Libraries Journal 2001, 18, 238–40.
  12. Booth, A. & Brice, A. Research. Health Information and Libraries Journal 2001, 18, 175–7.
  13. Guyatt, G. & Rennie, D. Users’ Guides to the Medical Literature: Essentials of Evidence-Based Clinical Practice. Chicago, IL: American Medical Association, 2002.
  14. Ibbotson, T., Grimshaw, J. & Grant, A. Evaluation of a programme of workshops for promoting the teaching of critical appraisal skills. Medical Education 1998, 32, 486–91.
  15. Young, J. M. & Ward, J. E. General practitioners’ use of evidence databases. MJA 1999, 170, 56–9.
  16. Greenhalgh, T. How to Read a Paper, 2nd edn. London: BMJ Publishing Group, 2001.
  17. Roberts, R. Information for Evidence Based Care. Oxford: Radcliffe, 1999.
  18. Pereira-Maxwell, F. A–Z of Medical Statistics: a Companion for Critical Appraisal. London: Arnold, 1998.
  19. Wyer, P. C. The critically appraised topic: closing the evidence-transfer gap. Annals of Emergency Medicine 1997, 30, 639–40.
  20. Guyatt, G. H., Meade, M. O., Jaeschke, R. Z., Cook, D. J. & Haynes, R. B. Practitioners of evidence-based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000, 320, 954–5.