Assessing prescribing competence

Authors

  • John Mucklow,

    Corresponding author
    1. Prescribing Skills Assessment Office, British Pharmacological Society, 16 Angel Gate, City Road, London EC1V
      Dr John C. Mucklow, Prescribing Skills Assessment Office, British Pharmacological Society, 16 Angel Gate, City Road, London EC1V, UK. Tel.: +44 (0) 207 239 0176. Fax: +44 (0) 207 417 0114. E-mail: j.mucklow@prescribe.ac.uk
  • Lynne Bollington,

    1. Prescribing Skills Assessment Office, British Pharmacological Society, 16 Angel Gate, City Road, London EC1V
  • Simon Maxwell

    1. Clinical Pharmacology Unit, University of Edinburgh, Western General Hospital, Edinburgh EH4 2XU, UK


Abstract

Prescribing of medicines is the key clinical activity in the working life of most doctors. In recent years, a broad consensus regarding the necessary competencies has been achieved. Each of these is a complex mix of knowledge, judgement and skills. Surveys of those on the threshold of their medical careers have revealed widespread lack of confidence in writing prescriptions. A valid and reliable assessment of prescribing competence, separate from an overall assessment of medical knowledge and skill, would have many benefits for clinical governance and patient safety, and would provide a measure of the success of training programmes in therapeutics. Delivering such an assessment presents many challenges, not least of which are the difficulty in identifying a surrogate marker for competent prescribing in clinical practice and the challenge of ensuring that competence assessed in a controlled environment predicts performance in clinical practice. This review makes the case for an on-line OSCE as the most valid form of assessment and sets out the requirements for its development, scope, composition and delivery. It describes an on-going attempt to develop a national assessment of prescribing skills towards the end of undergraduate medical training in the UK.

Introduction

This review is one of a series devoted to aspects of prescribing. Earlier reviews in the series have presented evidence that the quality of prescribing is far from optimal. This evidence paints a picture of poor prescribing performance among some healthcare professionals, remedies for which have included restricting the range of medicines that inexperienced practitioners are authorized to prescribe, observing practitioners in practice and providing feedback, and targeted audits (and re-audits) of prescribing. This review is concerned with the assessment of competence among individual prescribers (and would-be prescribers).

What constitutes competent prescribing?

Prescribing medicines is the most common intervention (for good or bad) that most doctors make to improve the health of their patients. Given that this is such a fundamental part of medical practice, a prerequisite for developing appropriate and acceptable curricula and assessments is to define and agree with stakeholders what the relevant competencies are.

The process of defining prescribing competence is one that several agencies have attempted to tackle. The World Health Organization's Guide to Good Prescribing (1995) included a six-step model of prescribing (Table 1) [1]. In 2003, the British Pharmacological Society published its core curriculum for teaching safe and effective prescribing in UK medical schools [2]. More recently, there has been growing evidence of medication errors [3] and adverse drug reactions [4] and media concern about the quality of prescribing in UK hospitals, particularly among newly qualified doctors. Several surveys among medical undergraduates and foundation year doctors have revealed widespread lack of confidence in writing prescriptions, acknowledged weakness in the pharmacological knowledge underpinning prescribing and concern that their undergraduate teaching had not prepared them sufficiently to minimize the risk of adverse reactions and drug–drug interactions in their patients [5–8]. In response to these concerns, the General Medical Council (GMC) convened a meeting of interested parties, the outcome of which was the establishment of the Medical Schools' Council (MSC) Safe Prescribing Working Group [9]. Its terms of reference were to:

Table 1. WHO Six-step model of prescribing
Define the patient's problem
Specify the therapeutic objective
Verify the suitability of your P-drug (your personal choice of drug for this indication)
Write a prescription
Give information, instructions and warnings
Monitor (and stop?) the treatment
  • summarize what a Foundation Year 1 doctor (first year trainee doctor) must know and be able to do on their first day with regards to prescribing

  • suggest ways to support the development of this knowledge through undergraduate education and foundation training, including assessment

  • consider ways to support junior doctors in their prescribing.

The working group made a number of recommendations, which included a statement of competencies in relation to prescribing required by all Foundation doctors (Table 2) [9]. The GMC incorporated these competencies into its regulatory guidance on the outcomes and standards for undergraduate medical education (Tomorrow's Doctors; 2009) [10]. In addition to the GMC competencies, the National Prescribing Centre has produced a variety of competency documents for medical and non-medical prescribers (available at: http://www.npc.nhs.uk/) and the British Pharmacological Society (BPS) has published its own principles of good prescribing (Table 3) [11]. We believe that these various sources of advice can be condensed into a detailed list of prescribing sub-competencies (Table 4), each of which comprises a mixture of knowledge, judgement and skills.

Table 2. Competencies required of all Foundation doctors [9]
1. The ability to establish an accurate drug history
2. The ability to plan appropriate therapy for common indications
3. The ability to write a safe and legal prescription
4. The ability to appraise critically the prescribing of others
5. The ability to calculate appropriate doses
6. The ability to provide patients with appropriate information about their medicines
7. The ability to access reliable information about medicines
8. The ability to detect and report adverse drug reactions
Table 3. British Pharmacological Society 10 principles of good prescribing [11]
1. Be clear about the reasons for prescribing
   Establish an accurate diagnosis whenever possible (although this may often be difficult)
   Be clear in what way the patient is likely to gain from the prescribed medicines
2. Take into account the patient's medication history before prescribing
   Obtain an accurate list of current and recent medications (including over-the-counter and alternative medicines), prior adverse drug reactions and drug allergies from the patient, their carers or colleagues
3. Take into account other factors that might alter the benefits and risks of treatment
   Consider other individual factors that might influence the prescription (e.g. physiological changes with age and pregnancy or impaired kidney, liver or heart function)
4. Take into account the patient's ideas, concerns and expectations
   Seek to form a partnership with the patient when selecting treatments, making sure that they understand and agree with the reasons for taking the medicine
5. Select effective, safe and cost-effective medicines individualized for the patient
   The likely beneficial effect of the medicine should outweigh the extent of any potential harms, and whenever possible this judgement should be based on published evidence
   Prescribe medicines that are unlicensed, ‘off-label’ or outside standard practice only if satisfied that an alternative medicine would not meet the patient's needs (this decision will be based on evidence and/or experience of their safety and efficacy)
   Choose the best formulation, dose, frequency, route of administration and duration of treatment
6. Adhere to national guidelines and local formularies where appropriate
   Be aware of guidance produced by respected bodies (increasingly available via decision support systems), but always consider the individual needs of the patient
   Select medicines with regard to costs and needs of other patients (health care resources are finite)
   Be able to identify, access and use reliable and validated sources of information (e.g. British National Formulary), and evaluate potentially less reliable information critically
7. Write unambiguous legal prescriptions using the correct documentation
   Be aware of common factors that cause medication errors and know how to avoid them
8. Monitor the beneficial and adverse effects of medicines
   Identify how the beneficial and adverse effects of treatment can be assessed
   Understand how to alter the prescription as a result of this information
   Know how to report adverse drug reactions (in the UK via the Yellow Card scheme)
9. Communicate and document prescribing decisions and the reasons for them
   Communicate clearly with patients, their carers and colleagues
   Give patients important information about how to take the medicine, what benefits might arise, adverse effects (especially those that will require urgent review) and any monitoring that is required
   Use the health record and other means to document prescribing decisions accurately
10. Prescribe within the limitations of your knowledge, skills and experience
   Always seek to keep the knowledge and skills that are relevant to your practice up to date
   Be prepared to seek the advice and support of suitably qualified professional colleagues
   Make sure that, where appropriate, prescriptions are checked (e.g. calculations of intravenous doses)
Table 4. Prescribing sub-competencies
Make a diagnosis
Establish a therapeutic goal
Choose the therapeutic approach (in discussion with the patient)
Choose the drug
Choose the dose, route and frequency
Choose the duration of therapy
Write the prescription*
Inform the patient
Monitor drug effects
Review/alter prescription in the light of further investigation

*Even when rewriting an existing prescription, every item must be reviewed and checked for appropriateness: prescribing should never involve merely ‘transcribing’.

Why assess prescribing competence?

There are several reasons why well-validated and reliable assessments of prescribing competence would be valuable.

Clinical governance/patient safety reasons

Prescribing is a fundamental skill of a doctor and an assessment might serve as a marker of competence to enter into (or continue in) clinical practice. Such an assessment might identify individuals who pose a risk to patient safety and require particular support.

Educational reasons

A discrete assessment would provide a measure of the success of training programmes in one of the most challenging aspects of healthcare education. The assessment itself could contribute to training (formative assessment) if it provided targeted feedback on areas of poor performance. It is an old adage that ‘assessment drives learning’, and the mere establishment of a prescribing assessment (not yet a widespread concept in undergraduate medical education) might help to foster higher standards of attainment in prescribing.

The establishment of high quality prescribing assessments will be a resource-intensive exercise, requiring time from authors, peer reviewers, invigilators and markers, all with appropriate expertise in one of the more rapidly developing fields of practice. Such an undertaking requires a ‘critical mass’ that would be challenging even for a large academic medical centre. For these reasons there are strong arguments in favour of arranging such assessments at regional or national level: doing so would reduce duplication of effort, unify standards and allow the creation of a more reliable assessment. However, this approach inevitably causes concern amongst those who believe that academic diversity should allow for local development of curricula and assessments. We believe that these concerns are counterbalanced by two arguments in favour of a more collaborative approach. First, most advances in medicine, and especially in therapeutics, have come from improved standardization of practice (e.g. national guidelines). Second, the public expects uniformly high standards of practice in what is a ‘national’ health service.

The concept of national level assessment also resonates with the move towards outcomes-based undergraduate education signalled by the GMC in Tomorrow's Doctors [10]. Paragraph 117 of Tomorrow's Doctors suggests that:

Medical schools must have appropriate methods for setting standards in assessments to decide whether students have achieved the “outcomes for graduates”. There must be no compensatory mechanism which would allow students to graduate without having demonstrated competence in all the outcomes.

In other words, all medical schools must sign off their students as competent in each area of practice that is identified as a discrete outcome, including:

Diagnose and manage clinical presentations (paragraph 14)

Prescribe drugs safely, effectively and economically (paragraph 17)

without compensation from good performance in other areas. This could be achieved most easily by developing a discrete, identifiable assessment, separate from the overall assessment of medical knowledge and skill.

What assessments of prescribing competence exist?

In a systematic review of studies investigating whether educational interventions improve prescribing, Ross & Loke screened 3189 studies and found only 22 that met their criteria for acceptability (15 of them controlled trials) [12]. The assessments used included short-answer written tests (6/15), multiple choice question/single-best answer question (MCQ/SBAQ) tests (2/15), calculations (1/15), objective structured clinical examinations (OSCEs) with between one and nine stations (5/15) and real-world prescribing (1/15). However, the diversity of interventions and outcome measures used adversely affected the validity and generalizability of these studies. Only one intervention (the WHO Good Prescribing Guide) had been used in a variety of international settings and in students at different levels of attainment. In the majority of studies, assessments were too limited in their scope to be reliable, and the marking schemes used were inconsistent and arbitrary.

O'Shaugnessy et al., in a survey of clinical pharmacology and therapeutics (CPT) teaching in UK medical schools, found that knowledge of CPT was tested specifically in 27 schools (90%), using a mixture of coursework, portfolio work and written assessments, and that in 22 schools (73%) students also sat a practical test (OSCE). However, only 10 schools (33%) collected data on the prescribing performance of their graduates in the foundation year, by enquiring about prescriber confidence, assessing competence to manage patients taking core drugs, or through the reporting of medication errors [13]. This strongly suggests that assessment of prescribing skills at undergraduate level is an area requiring development. A lack of confidence in workplace readiness has persuaded many healthcare organizations to introduce small-scale prescribing assessments for new starters, usually run by their pharmacy departments. In addition, some isolated examples exist of on-going training programmes that monitor prescribing competency in junior doctors [14].

How might prescribing competence be assessed?

Prescribing competence can be assessed by observing practice in two general settings: the real world or a controlled environment. Table 5 summarizes some of the advantages and disadvantages of measuring prescribing performance in each. Miller's pyramid of clinical competence [15] identifies four levels of attainment, from knowledge (‘knows’) at the base, through understanding (‘knows how’) and competence (‘shows how’), to performance (‘does’) at the apex. Performance requires the prescriber to be competent, but competence alone does not guarantee consistent performance, which suggests that observing real-world prescribing (which assesses performance) should be preferred. However, observing practice against a predefined standard is by its nature opportunistic, subject to the vagaries of case mix, and complicated by the difficulty of defining an absolute standard and of evaluating individual performance in isolation from a team effort. By contrast, assessment in a controlled environment will always be somewhat artificial and assumes that competence will be transferred into satisfactory performance.

Table 5. Different settings in which prescribing performance might be measured
Real-world prescribing
 Standard: pre-defined norms (e.g. compared with national guidance)
 Advantages: takes account of real practice; gathered from routine data
 Disadvantages: confounding factors difficult to control (e.g. case mix, facilities, other support)
 Examples: aggregated prescribing data; clinical outcomes (e.g. cures, adverse effects, errors)

Controlled environment
 Standard: standard setting based on difficulty of the task and the perceived ability of the candidates
 Advantages: objectivity; minimizes confounding
 Disadvantages: time consuming; may not reflect performance in real clinical practice
 Examples: OSCE; MCQ; written cases

Some examples of assessment of prescribing by observed practice include:

  • Hospital prescribers. Audits by clinical pharmacists in hospitals are now commonplace as a measure of prescribing performance and are capable of identifying potentially serious prescribing errors. In their audit published in 2002, Dean et al. analysed their findings using human error theory and found that most mistakes were made because of slips in attention or because prescribers did not apply relevant rules [3].

  • General practitioners. Prescribing data for all UK general practitioners are collected routinely and compared against agreed prescribing indicators (e.g. percentage of proton pump inhibitor prescriptions stipulating the most cost-effective agent). The indicators reflect what is perceived to be best practice but cannot account for the actual case mix seen by the GP. They also reflect broader behaviour rather than the quality of individual prescriptions.

  • Non-medical prescribing. The introduction of independent prescribing by nurses, pharmacists and optometrists in the UK in 2006 was supported by a programme of training and assessment that far outstripped those hitherto provided for medical undergraduates [16]. The programme comprised a minimum of 26 days at a higher education establishment (HEE) plus 12 days ‘learning in practice’, during which a supervising designated medical practitioner (DMP) was required to provide the trainee with supervision, support and opportunities to develop competence in prescribing practice. The HEE would define the learning outcomes and competencies to be assessed by the DMP, using a random case analysis approach to verify that the trainee was competent to assume the prescribing role.

The remainder of this review considers the issues that must be addressed when designing assessments of prescribing competence in a controlled environment, including MCQs, short answer questions and OSCEs.

What factors contribute to the quality of a prescribing assessment?

The aim of a high quality assessment of prescribing competence should be to measure performance reliably, without bias, and provide assessments that are, as far as possible, relevant to the real world practice faced by the candidates. When creating an assessment there are several important questions to consider:

What format should the assessment take?

Questions that involve tick-box selection of answers, such as MCQs or extended-matching questions (EMQ), are most useful for testing factual recall and use of SBAQs allows testing of judgement. These question formats are very useful for testing large cohorts of students, as they are relatively quick and cost-effective to mark, particularly if automated. It is possible to provide fast, individualized feedback. However, these test formats are not as useful for assessment of skills. Short answer questions and prescribing exercises allow assessment of knowledge, judgement and skill, but are labour-intensive and time-consuming to mark. They also suffer from lack of objectivity, because the markers have to make individual judgements about performance that are not necessarily reproducible. Clinical assessments (e.g. OSCE) can test knowledge, judgement and skills (including communication), and are much closer to reality but consume significant manpower, time and space resources.

What environment should be used for the test?

Assessments that involve tick-box formats or short answers most commonly take place in a ‘real’ environment such as an examination hall, but many centres are now developing online assessments to offer rapid collation, analysis and feedback. Technological advances mean that it is increasingly possible to deliver more complex tasks in a virtual environment, including patient consultations and prescribing (the on-line OSCE).

What material should the assessment include?

Ideally, the selection of questions for an assessment should be determined by a blueprint developed from the relevant curriculum, ensuring that the key areas of knowledge set out in the curriculum are tested in a representative manner and thereby supporting the validity of the assessment. There is at present no nationally agreed curriculum for teaching safe and effective prescribing, but we believe that the model developed by the BPS in 2003, together with more recent attempts to identify a list of competencies, forms the basis for a blueprint specifying the competencies to be tested (e.g. prescribing, dose calculation). Opinion has diverged as to what constitutes prescribing, from the very limited act of filling decisions in on a prescription chart (‘transcribing’) to a broader approach involving several sub-competencies (see Table 4). We are of the firm opinion that the latter defines the competencies required of safe and effective prescribers and that they should form the basis of any prescribing assessment.

How long should the assessment be?

The length of the assessment and number of different observations can also vary and will always be a compromise between the time and resources available and the wish to have a reliable and reproducible assessment. The greater the number of items (observations), and therefore the duration of the assessment, the greater are the chances of making a reliable assessment of the competence of the candidate. Timing may also vary independently of the number of items. For instance, the time available may be unlimited, removing any time pressure from the candidate. Alternatively, timing may be limited to simulate the real world of clinical practice. This consideration is particularly important if supporting reference resources (e.g. a formulary) are to be allowed, because it will determine the extent to which the candidate is able to review this information at length.

What supporting resources should be provided?

Supporting resources that might be allowed to those being assessed include calculators and reference sources, such as formularies, textbooks and websites. These will normally be selected with regard to the working environment that exists when the relevant competencies are put into practice.

How should the test be marked?

Marking of assessments depends on their form and setting, but can be objective (e.g. automated computer-based marking, observer working to strict criteria) or subjective (e.g. observer with less strict criteria or ones that are more difficult to judge, post hoc review of written items). The more subjective the method and the greater the variety of acceptable responses (e.g. prescriptions), the more detailed must be the marking scheme.

How can the validity and reliability of the assessment be maximized?

The assessment should address agreed competencies and reflect the knowledge, skills and judgement required in the workplace, identified as important by relevant stakeholders. The level of complexity of the tasks set should be appropriate to the candidates, and tests of reasoning and judgement should be appropriate to the item style. The supporting resources allowed during the assessment should reflect those available in the workplace. There should be sufficient questions to inspire confidence in the accuracy of the assessment, as judged by indices of reliability (e.g. Cronbach's alpha, standard error of measurement).

How will the quality of the assessment items be ensured?

Writers trained in the principles of question authoring should prepare the question items. Draft material, edited to ensure it conforms to a uniform style, should be subjected to academic scrutiny by rigorous peer review before and during the process of selection for inclusion in the assessment itself. Feedback to item writers should be arranged to support the development of item writing competence and analysis of question performance should provide insights into question difficulty and optimal question design [17].

What standard of performance constitutes competence?

The pass standard should be determined by establishing one or more criterion-referenced cut scores using a recognized method (e.g. Angoff), based on the question ‘What level of performance could reasonably be expected of a minimally competent graduate?’ Determining a standard that allows a minimally competent graduate to pass yet maintains public confidence depends on:

  • the agreed outcomes of medical education,

  • being able to recognize the characteristics of the borderline group,

  • understanding the demands of the workplace and

  • stakeholder buy-in (which will influence panel selection).
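As an illustration of how an Angoff panel's judgements translate into a cut score, the sketch below (in Python, with entirely hypothetical ratings) computes the pass mark as the sum of the mean item-level estimates. In practice this would be carried out with dedicated standard-setting software, many more items and a larger judging panel.

```python
# Minimal Angoff calculation (illustrative only; ratings are invented).
# Each judge estimates the probability that a minimally competent
# graduate would answer each item correctly; the cut score is the sum
# of the per-item mean estimates.
from statistics import mean

# Rows = judges, columns = items (probabilities between 0 and 1).
ratings = [
    [0.7, 0.5, 0.9, 0.6],  # judge 1
    [0.8, 0.4, 0.8, 0.5],  # judge 2
    [0.6, 0.6, 1.0, 0.7],  # judge 3
]

item_means = [mean(col) for col in zip(*ratings)]  # expected score per item
cut_score = sum(item_means)                        # pass mark on a 4-item test

print(item_means)
print(cut_score)
```

With these invented figures the borderline candidate would be expected to score about 2.7 out of 4, so the pass mark would be set just below that value.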

What compensation should be given to candidates with disabilities?

Moves to reduce discrimination in education and the more sensitive measurement of disabilities such as dyslexia have required consideration of how disabled candidates should be treated within any assessment system. In many assessments, such candidates are given extra time to complete question items. Where the assessment of prescribing is concerned, there is no accepted way to handle this. Prescribing is a common and time-pressured task in most healthcare environments, and it might be argued that it is better to identify disabilities that might impact on performance and require additional support than to try to create an artificial ‘level playing field’ that is likely to be exposed as a fallacy in clinical practice.

In this review, we have tried to set out the relative merits of practice-based assessment and assessment in a controlled environment. Although practice-based assessment is the most rigorous, it is also the most labour-intensive and the most difficult to standardize for all students. For pragmatic reasons, a compromise must be struck between the assessment of performance and the assessment of knowledge alone. We believe the on-line OSCE provides the best compromise at present.

A national prescribing assessment?

The published evidence suggests that no validated, reliable and widely accepted measure of prescribing performance currently exists. Given that all medical schools are required to demonstrate that their students are competent, and that NHS Trusts are starting to set up multiple ad hoc assessments, there would seem to be a place for a recognized national assessment that all medical students are required to pass before graduating. A national prescribing assessment would pool academic resources, serve to raise and unify standards and could be used by other relevant prescribing groups, such as other grades of doctor, nurses, pharmacists, dentists and other health professionals.

In the UK, the Medical Schools Council and the British Pharmacological Society are currently collaborating in the development of a Prescribing Skills Assessment as a summative assessment of knowledge, judgement and skills related to prescribing medicines. It is intended primarily for final year medical students and will assess competencies in prescribing that map onto the outcomes identified in Tomorrow's Doctors 2009. It will test skills and deductive powers (as well as knowledge) relevant to early postgraduate practice. The competencies assessed include writing new prescriptions, reviewing existing prescriptions, calculating drug doses, identifying and avoiding both adverse drug reactions and medication errors, and amending prescribing to suit individual patient circumstances. The structure of the assessment is shown in Figure 1 and an outline of the proposed marking scheme in Table 6. The intention is that the assessment must be passed before qualification and subsequent assumption of NHS prescribing responsibilities. It will be available to be taken during the final year of training (on multiple occasions, if necessary) and will be delivered online. Candidates will have access to the national formulary and a calculator throughout the test.

Figure 1.

Structure of the Prescribing Skills Assessment

Table 6. Format of the Prescribing Skills Assessment

Station 1. Prescribing 1 (10 marks; one question item of 10 marks)
 Deciding on the most appropriate prescription (drug, dose, route and frequency) to write (on one of a variety of charts) for a single drug, based on the clinical circumstances and supplementary information.
Station 2. Prescribing 2 (10 marks; see Station 1)
Station 3. Prescription review 1 (8 marks; two question items of 4 marks each)
 Deciding which components of the current prescription list are inappropriate, unsafe or ineffective for a patient, based on their clinical circumstances.
Station 4. Planning management (8 marks; two question items of 4 marks each)
 Deciding which combination of therapies would be most appropriate to manage a particular clinical situation.
Station 5. Communicating information (6 marks; three question items of 2 marks each)
 Deciding the important information that should be communicated to patients about a newly prescribed medicine.
Station 6. Drug calculation skills (8 marks; four question items of 2 marks each)
 Making an accurate drug dosage calculation, with appropriate units of measurement, based on numerical information.
Station 7. Prescribing 3 (10 marks; see Station 1)
Station 8. Prescribing 4 (10 marks; see Station 1)
Station 9. Prescription review 2 (8 marks; see Station 3)
Station 10. Adverse drug reactions (8 marks; four question items of 2 marks each)
 Identifying likely adverse reactions of specific drugs, drugs likely to be causing specific adverse drug reactions, potentially dangerous drug interactions and deciding on the best approach to managing a clinical presentation resulting from the adverse effects of a drug.
Station 11. Drug monitoring (8 marks; four question items of 2 marks each)
 Deciding on how to monitor the beneficial and harmful effects of medicines.
Station 12. Data interpretation (6 marks; three question items of 2 marks each)
 Deciding on the meaning of the results of investigations as they relate to decisions about ongoing drug therapy.

Total: 100 marks

One possible concern with an assessment on this scale is its reliability, a measure of the accuracy of the examination, generally represented by Cronbach's coefficient alpha. The value of this coefficient depends on the number of items in the test, the variance of the overall test scores in the cohort and the sum of the variances of each test item; it is therefore highly dependent on the range of ability within the cohort of candidates. A value of >0.9 is considered optimal for a high-stakes examination, but this can be difficult to achieve where the range of candidate ability is narrow without increasing the number of questions to an impractical level. It has been argued that the standard error of measurement, a measure of the accuracy of an individual candidate's mark that is less dependent on the ability range of the candidates, is a preferable statistic [18].
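The relationship between these statistics can be made concrete in a few lines of Python. This is a minimal sketch using hypothetical scores (a real analysis would use a dedicated psychometrics package); the function names are illustrative, not part of any standard library.

```python
# Illustrative implementations of Cronbach's alpha and the standard
# error of measurement (SEM) for a candidates x items score matrix.
import math
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(scores[0])                            # number of items
    totals = [sum(row) for row in scores]         # total score per candidate
    item_vars = [variance(col) for col in zip(*scores)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

def standard_error_of_measurement(scores):
    """SEM = SD(total scores) * sqrt(1 - alpha)."""
    totals = [sum(row) for row in scores]
    return math.sqrt(variance(totals)) * math.sqrt(1 - cronbach_alpha(scores))

# Hypothetical marks for three candidates on a two-item test.
scores = [[2, 2], [1, 1], [0, 0]]
alpha = cronbach_alpha(scores)                 # items vary in lockstep here
sem = standard_error_of_measurement(scores)
```

The formula shows directly why a narrow ability range depresses alpha: as the variance of the total scores shrinks relative to the summed item variances, the bracketed term, and hence alpha, falls towards zero.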

Competing Interests

JM and LB undertake consultancy work for the Medical Schools Council.
