Abstract

Medication errors cause substantial harm to patients. We need good methods for counting errors, and we need to know how errors defined in different ways and ascertained by different methods are related to the harm that patients suffer. As errors arise within the complex and poorly designed systems of hospital and primary care, analysis of the factors that lead to error, for example by failure mode and effects analysis, may encourage better designs and reduce harms. There is almost no information on the best ways to train prescribers to be safe or to design effective computerized decision support to help them, although both are important in reducing medication errors and should be investigated. We also need to know how best to provide patients with the data they need to be part of initiatives for safer prescribing.


Introduction

Medication errors are important, can cause devastating harm (Figure 1), and are preventable [1]. We understand clearly enough that a medication error has occurred when a child dies from an intrathecal injection of vincristine given in place of methotrexate. However, there is still a great deal of debate on how best to define medication errors so that they may be counted, a necessary precursor to testing the efficacy of ways of reducing them. Even a robust definition, although there is evidence from studies of error scenarios suggesting which definitions are easiest to apply [2], cannot resolve the difficulties of counting errors [3].

Figure 1. The total number of lives lost per year plotted against the number of encounters per fatality, on a log–log scale. Iatrogenic errors are common and often fatal.

Where we are

Methods for counting errors have ranged from assessing the number of cases that are reported or that come to litigation, through analysis of prescriptions (‘chart review’) [4, 5] and covert observation of the administration of medicines [6], to assay of the concentrations of infused intravenous drugs [7, 8]. Few studies have compared one method with another [9], although major differences in reported error rates are at least as often the result of differences in definitions, or in how they are applied, as of true differences in underlying rates.

The confusion between errors and the harm they cause has led to a good deal of difficulty [10, 11]. Many studies of the incidences of errors have failed to evaluate the consequent harms or have estimated them from measures of the ‘potential seriousness’ of errors that were intercepted. Some studies have counted harms (as hospital admissions, for example) and have then tried to attribute them to preventable or unpreventable causes [12]. That has caused further difficulties, because ideas about what is or is not preventable, and whether all preventable harms are the result of error, are also the subject of considerable debate [13].

Efforts to prevent errors have fallen into two main classes: efforts to reduce the harm from specific errors, such as the intravenous administration of strong potassium chloride solution; and efforts to reduce the burden of errors of all sorts.

The specific approach can reduce harm from the most egregious errors. This has the advantage that confidence in the system of healthcare, which can be seriously damaged by the reporting of rare but emotionally distressing events, may be bolstered, and politicians can be seen to be responding to ‘unacceptable’ events. The edicts of the erstwhile National Patient Safety Agency tended to follow this line of reasoning. However, there are disadvantages to this piecemeal approach. Most obviously, the deeper causes of error remain unexplored, and the opportunity to correct systemic weaknesses is lost; furthermore, substantial resources may be needed to guard against rare events which, while deadly in themselves, contribute little to the overall burden of harm caused by error.

The general approach fits well with the appreciation, clearly described by James Reason and others, that human fallibility is inevitable [14, 15]. The very same cognitive processes that permit ‘trial-and-error’ solutions to problems that defy simple analysis and that allow us to learn short-cut solutions to recurrent problems that would otherwise be laborious to solve are those that make us error prone. Errors will occur if systems fail to make allowances for this. Imperfect systems permit human errors. However, the errors that occur and the harms they cause are in part a matter of chance. What is indisputable is that, within a given system, factors such as fatigue increase the likelihood that a human will err.

What we need – the definition and enumeration of error and harm

We need to be able to count errors if we are to evaluate the efficacy of methods to reduce them. Enumeration is not enough, however, because the aim is to minimize harm, not to minimize error rates. Failures to adhere to rules on writing prescriptions, or to administer doses of medicines at specified times, do not necessarily translate into harm. While some studies have sought reliable ways of classifying errors according to the harms that may result, most have been based on the imperfect tool of ‘expert opinion’. Pragmatic trials in which harms are counted as the outcome have their own difficulties: serious harm is likely to be rare, so large populations and sensitive methods will be needed to study it effectively. One important area of research, therefore, is the inter-relation between errors, defined in different ways and ascertained by different methods, and the harm they cause.
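To make the distinction concrete, here is a minimal sketch in Python, using wholly hypothetical numbers, of how two definitions of error applied to the same set of reviewed prescriptions can yield very different error rates, while under either definition most counted errors cause no harm:

```python
# Hypothetical illustration: the same incidents counted under two
# definitions of 'error' give very different rates, and enumeration
# alone says little about harm. All numbers are invented.

prescriptions = 10_000   # prescriptions reviewed (assumed)
strict_errors = 850      # any deviation from prescription-writing rules
clinical_errors = 120    # deviations judged clinically significant
harms = 9                # incidents that actually harmed a patient

for label, n in [("strict definition", strict_errors),
                 ("clinical definition", clinical_errors)]:
    print(f"{label}: error rate {n / prescriptions:.1%}; "
          f"{harms / n:.1%} of counted errors led to harm")
```

On these invented figures the stricter definition greatly inflates the apparent error rate while diluting its association with harm, which is precisely why the relation between definition, method of ascertainment, and harm needs empirical study.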

If such research allows a reasonably clear relation to be established between errors detected and harm avoided, then we can proceed to test hypotheses related to harm prevention.

The design of systems

The improved system

The system of medical care in hospital can be chaotic and fragmented. Patients admitted as an emergency may visit the Emergency Department, the Medical Admissions Unit, diagnostic departments, theatres or catheter laboratories, and a specialist ward, all in one day. At each place they may be met by staff whom they do not know (and who do not know them), with whom they will not interact further, and who may not have interacted with others responsible for their care, except through the fallible media of the clinical record and the prescription chart. It should come as no surprise that errors arise and are propagated in such a system, and not only medication errors but errors in all parts of medical care.

Whether the system can be disentangled so as to make the number of interactions smaller and the potential for error less is unclear, but it seems likely that careful analysis of the system, and of the opportunities for failure it presents, would allow a more robust system to be designed. In the wider world of error reduction, failure mode and effects analysis and other techniques have been widely applied, and there is increasing recognition of the value of such analysis in healthcare [16, 17], as the sketch below illustrates. Yet most healthcare processes have evolved without any consideration of their weak points. Future research into improving hospital processes is clearly required.
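To show what such an analysis involves, the sketch below applies the conventional failure mode and effects analysis calculation, in which each failure mode receives a risk priority number, the product of scores for severity, occurrence, and detectability, so that redesign effort can be directed at the highest-risk steps first. The process steps, failure modes, and scores are invented for illustration only:

```python
# A minimal FMEA sketch for a hospital medication process.
# The failure modes and 1-10 scores are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    mode: str
    severity: int    # 1 (negligible) to 10 (catastrophic)
    occurrence: int  # 1 (rare) to 10 (frequent)
    detection: int   # 1 (always caught) to 10 (never caught)

    @property
    def rpn(self) -> int:
        # Risk priority number: the conventional FMEA product of the scores.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("prescribing", "dose not adjusted for renal function", 8, 5, 6),
    FailureMode("transcribing", "chart copied incorrectly", 6, 4, 5),
    FailureMode("administration", "dose given to the wrong patient", 9, 2, 7),
]

# Rank failure modes so that redesign targets the highest risks first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {fm.rpn:3d}  {fm.step}: {fm.mode}")
```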

The improved prescriber

Psychologists distinguish two broad groups of errors: mistakes, which are errors of knowledge and planning; and slips or lapses, which are errors in the execution of plans (Figure 2). Errors of knowledge should be amenable to education. For example, it should be possible to teach prescribers that warfarin interacts with many other drugs, including macrolide antibiotics and azole antifungals. It should also be possible to teach prescribers those elements of therapeutics that are required for planning treatment. In the UK, initiatives such as Prescribe [18] and Script [19] are designed to provide junior doctors with the required knowledge. However, a systematic review of educational initiatives in therapeutics [20] provided very little evidence on the subject, and only one approach, the World Health Organization personal formulary [21], demonstrated an improvement in prescribing. We need good evidence on the training programmes that most efficiently generate safe prescribers.

Figure 2. A classification of error, based on the psychological approach of reference 14.

A second, and rather undervalued, defence against cognitive medication errors is to institute checking. In experimental studies of error, independent checking detects 90% of errors [22]. Clinical pharmacists have retained careful checking, for example at the level of dispensing; and ward pharmacists are regarded by junior doctors as a buffer between their prescriptions and patient harm [23]. However, drug rounds on hospital wards are now conducted by solitary nurses, and no one checks junior doctors' prescriptions outside daytime hours. No formal examination has been made of the value of a second nurse or of out-of-hours pharmacy services, although these and other checks and controls merit evaluation.
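The arithmetic behind checking is worth making explicit. If a single check detects 90% of errors [22], and successive checks were truly independent, the proportion of errors surviving n checks would be 0.1 to the power n. Real checkers are rarely fully independent, so the figures in this minimal sketch are an upper bound on the benefit:

```python
# Residual error rate after n independent checks, assuming each
# check detects 90% of errors [22]. Independence is an assumption;
# in practice checkers influence one another, so these figures
# overstate the benefit of double-checking.
p_miss = 0.10  # probability that one checker misses a given error

for n_checks in (0, 1, 2):
    print(f"{n_checks} check(s): {p_miss ** n_checks:.1%} of errors survive")
```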

Computers and computerized decision support

Computer-assisted prescribing (computerized physician/provider/prescriber order entry; CPOE) reduces error rates [24]. This is no surprise, because many errors in the past have been the result of illegible or incomplete prescriptions. Computers can solve both problems. They can also provide information through the use of decision support and enforce prescribing rules. The evident attractions of an ever-watchful guardian conceal a fundamental difficulty. Human beings tire of being told things they do not need to know. When decision support software warns of potential interactions, many of which are theoretical and most of which cause little or no harm, ‘alert fatigue’ leads users to ignore warnings [25], as we ignore car alarms. The need to warn of potential harm has to be balanced against the need to avoid alert fatigue [26]. Yet there is no empirical evidence to suggest what the optimal sensitivity and specificity of medication decision alerts might be. Given that the goal is to minimize harm to patients, it could be that very sparse alerts to very harmful errors would be much more effective than very frequent alerts to errors that generally cause no harm or trivial harm. This requires investigation.
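The trade-off can be explored numerically. The sketch below, with entirely hypothetical parameters, compares two alert policies for 100 000 orders containing one seriously harmful error per 1000: a noisy policy that alerts on every theoretical interaction, and a sparse policy restricted to likely serious harm. Alert fatigue is modelled crudely as a 'heed rate' that falls as alert volume rises; every number is an assumption, not a measurement:

```python
# Hypothetical comparison of two decision-support alert policies.
# 'heed_rate' crudely models alert fatigue: the noisier the alert
# stream, the smaller the fraction of alerts acted upon.

def alert_policy(orders, error_rate, sensitivity, false_alert_rate, heed_rate):
    errors = orders * error_rate
    true_alerts = errors * sensitivity
    false_alerts = (orders - errors) * false_alert_rate
    harms_averted = true_alerts * heed_rate
    return true_alerts + false_alerts, harms_averted

orders = 100_000
harmful_error_rate = 0.001  # 1 seriously harmful error per 1000 orders

# Policy A: alert on every theoretical interaction; most alerts ignored.
alerts_a, averted_a = alert_policy(orders, harmful_error_rate,
                                   sensitivity=0.99,
                                   false_alert_rate=0.20, heed_rate=0.10)

# Policy B: alert only on likely serious harm; alerts taken seriously.
alerts_b, averted_b = alert_policy(orders, harmful_error_rate,
                                   sensitivity=0.80,
                                   false_alert_rate=0.002, heed_rate=0.90)

print(f"Policy A: {alerts_a:,.0f} alerts, ~{averted_a:.0f} harms averted")
print(f"Policy B: {alerts_b:,.0f} alerts, ~{averted_b:.0f} harms averted")
```

On these invented numbers the sparse policy fires roughly seventy times fewer alerts yet averts several times more harms; whether real systems behave this way is exactly the hypothesis that needs empirical testing.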

Engagement of the patient

The patient is the one player who is often ignored in the dramas that lead to medication tragedies. There is a plausible explanation for this: doctors are keen to hide from patients their own propensity to make errors. That is not entirely unreasonable, because faith in one's doctor can help towards cure. However, it does mean that the person for whom the error matters most, and who suffers the harm that results, is unable to play a full role in preventing it.

One area of research that could improve safety without a major increase in costs is greater patient involvement in medical decision making, which in turn means providing patients with sufficient information to protect themselves against the errors of others. It is accepted that the most vulnerable patients, for example those who are unconscious in intensive care units, will not be able to protect themselves, but close family members can be encouraged to take responsibility for them and for children and others unable to act on their own behalf.

Implementation of change for safety

No changes will reduce harm if they are implemented ineffectively; and some will reduce harm in a small segment of the system, while increasing it elsewhere. The changes required in complex systems inevitably have unintended consequences [27]. Research on the socio-technical aspects of implementing changes for safety will therefore be an essential part of reducing harm by introducing changes in healthcare, especially as those in healthcare are often loath to change systems that seem to have worked after a fashion and to whose failings they have become largely inured.

Conclusions

Research into medication errors should be aimed at reducing harm and not simply at reducing error rates. Many errors cause little or no harm but are symptomatic of systemic failures. Redesign of systems could therefore potentially reduce the rates of both common but harmless errors and rare but harmful errors. Prescribers need to be taught effectively how to prescribe safely, but we do not know how to do this. Computerized decision support has the potential to block seriously harmful errors, but that potential has not been fully realized, in part because we do not know what the optimal sensitivity and specificity might be for decision support systems. Patients and carers need to be constructively involved in error recognition and harm prevention. And when we find from experiment which changes are likely to reduce harm, then we need to know how best to implement those changes so that the benefits are realized in practice.

REFERENCES
