To provide a brief introduction to Critical Incident Reporting Systems (CIRS) as used in human medicine, and to report the discussion from a recent panel meeting with 23 equine anaesthetists in preparation for a new CEPEF-4 (Confidential Enquiry into Perioperative Equine Fatalities) study.
Moderated group discussions and a review of the literature.
The first group discussion focused on the definition of ‘preventable critical incidents’ and/or ‘near misses’ in the context of equine anaesthesia. The second group discussion focused on categorizing critical incidents according to an established framework for analysing risk and safety in clinical medicine.
While critical incidents do occur in equine anaesthesia, no critical incident reporting system that systematically collects and analyses them is currently in place.
Conclusions and clinical relevance
Critical incident reporting systems could be used to improve safety in equine anaesthesia – in addition to other study types such as mortality studies.
Critical incident reporting systems (CIRS) and analysis of adverse events are evolving and have become established in many fields of human medicine (Ahluwalia & Marriott 2005; Choy 2008). Anaesthesiology was the first medical discipline to start integrating and adapting approaches developed by the aviation industry (Webb et al. 1993; Gaba 2000; Cooper & Gaba 2002; Haller et al. 2005; Huebler et al. 2007; Kantelhardt et al. 2011; Staender 2011). Cooper (1978) was the first to apply a modified critical incident technique in anaesthetic practice. In contrast to classical risk factor studies, which typically focus on estimating an outcome-related risk (e.g. mortality) and the identification of risk factors, Cooper focused on understanding the process of error. The critical-incident analysis technique used by Cooper was originally developed by Flanagan, a psychologist, in the context of training military pilots (Flanagan 1954). Both aviation and anaesthesiology are potentially high-risk activities and share several characteristics. The flight of a plane taking off from point A and landing at point B could be compared to an anaesthetic procedure, commencing with induction of anaesthesia and ending with recovery, separated by a maintenance phase. Safety checks prior to flights are similar to pre-anaesthetic examinations and risk assessments. Although intensively investigated and documented in comprehensive databases, not all aviation accidents with fatal consequences can be attributed to identifiable causes (http://aviation-safety.net/database/, accessed 05.06.2013). Similarly, in anaesthesia it is sometimes difficult or even impossible to identify the definitive cause of death or other perianaesthetic complications. Both disciplines require intensive training with regard to safety concepts; joint training courses for pilots and anaesthetists have been organized in the past (Staender et al. 1997).
In both disciplines, multiple issues (human, equipment/machines, organizational) might contribute to failures. In aviation, 70% of accidents involve human error (Helmreich 2000). Similarly, in human anaesthesia, human error is involved in approximately 70% of incidents (Fox et al. 1993; Webb et al. 1993; Kantelhardt et al. 2011). However, studies undertaken in aviation and human hospitals indicate that errors committed by pilots, physicians and nurses are, in most cases, not the cause but the result of a number of different inciting factors (Haller et al. 2005). The similarities between the two disciplines, and the lessons to be learned, are described in more detail elsewhere (Helmreich 2000; Reason 2005).
Reason, a psychologist, proposed a model of accident causation in healthcare that shifts the traditional focus on human error (the ‘person approach’) towards a ‘system approach’ (Reason 2000). The person approach, prevalent in medicine (and veterinary medicine), focuses on unsafe acts (comprising errors and procedural violations) by those, such as anaesthetists and surgeons, interacting directly with the patient. These unsafe acts result from a range of mental lapses including ‘forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness’. In this approach, the measures adopted to counteract unsafe acts are often limited to those addressing individuals – and blaming them – rather than the system which created the potentially unsafe conditions. In stark contrast, the system approach focuses on the conditions under which individuals work and tries to build defences to avert errors and mitigate their effects. The basic premise of the system approach is that humans are fallible and errors are to be expected, even in the best organization. Complex systems have many defensive layers: some are engineered (e.g. oxygen failure alarms, the Pin Index Safety System), others rely on people (surgeons, anaesthetists, pilots, control room operators, etc.) and yet others depend on procedures (anaesthetic machine checklists, surgical safety checklists) and administrative controls (appropriate staffing and duty hours, or standard operating procedures [SOPs]). Ideally, each defensive layer would be intact. Reason illustrates the system approach with the so-called ‘Swiss cheese model’ of system accidents (Reason 2000). In this model, each defensive layer is analogous to a slice of Swiss cheese, with the holes representing defensive breaches. However, unlike in cheese, the holes are dynamic, continually opening, shutting and shifting location according to conditions (Fig. 1).
For a bad outcome to occur, the holes in multiple layers must line up, allowing the effects of multiple defensive breaches to result in patient harm. According to the model, serious adverse events and complications are often preceded by a chain of individually minor errors and problems, in turn influenced by a wide variety of contributory factors (Van Beuzekom et al. 2010). The holes in the defensive layers arise for two reasons: active failures and latent conditions (Reason 2000). Active failures are unsafe acts committed by people in direct contact with the patient or system, and have been further described as slips, lapses, and rule-based and knowledge-based mistakes (Reason 1995, 2005). In contrast, latent conditions or latent risk factors refer to decisions taken within an organization (by senior managers or clinicians) which create the conditions for unsafe acts to occur. These include inadequate or inappropriate staffing, heavy workload, employment of inadequately trained or prepared personnel, poor supervision, a stressful environment, poor communication, poor equipment maintenance and conflicts of priorities (economic versus clinical needs) (Mahajan 2010). A combination of latent conditions and active failures is implicated in most adverse outcomes, and latent conditions may be present but unidentified for some time until other conditions change and promote a system failure.
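The rarity of aligned breaches can be illustrated numerically. If each defensive layer is treated (simplistically) as failing independently with some probability, the chance that every layer is breached at once is the product of the individual probabilities. The sketch below uses hypothetical per-layer probabilities chosen purely for illustration; they are not estimates from this study or the cited literature.

```python
# Illustrative sketch of the Swiss cheese model: with several independent
# defensive layers, the probability that every layer is breached
# simultaneously is the product of the per-layer breach probabilities.
# All numbers below are hypothetical, chosen only for illustration.
from math import prod

breach_probabilities = {
    "engineered (alarms, pin index)": 0.05,
    "people (anaesthetist, surgeon)": 0.10,
    "procedures (checklists)": 0.10,
    "administrative (staffing, SOPs)": 0.20,
}

p_all_aligned = prod(breach_probabilities.values())
print(f"P(all holes align) = {p_all_aligned:.6f}")  # 0.0001 with these numbers
```

Even with individually modest failure rates, the joint probability is small, which is consistent with the model's point that harm requires several defences to fail together, and that removing any single hole breaks the chain.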
Based on Reason's model of organisational accidents, Vincent et al. developed a framework for analyzing risk and safety in clinical medicine (Vincent et al. 1998). According to Vincent, factors influencing clinical practice comprise: institutional context, organizational and management factors, work environment, team factors, individual staff factors, task factors and patient characteristics. The authors of the framework claim that ‘instead of focusing simply on the actions of the staff involved and on patient characteristics, we can examine the whole gamut of possible influences’. They propose that ‘the framework enables a systematic and conceptually driven approach to the development of organisational risk assessment instruments’, recognizing that much work ‘still needs to be done to standardise the procedures of data gathering and analysis, and to validate the approach’ (Vincent et al. 1998).
Why report critical incidents – particularly if the patient was not harmed – instead of focusing only on those incidents which actually cause harm or even death? The rationale for a broader reporting system, including events not associated with an adverse outcome such as death, is given by the so-called ‘safety pyramid’ described by Heinrich (1931), who observed that ‘in a workplace, for every accident that causes a major injury, there are 29 accidents that cause minor injuries and 300 accidents that cause no injuries’ (http://www.skybrary.aero/index.php/Heinrich_Pyramid, accessed 5/6/2013). These conclusions were based on observations of 550,000 accidents (Koebberling 2005). Given these proportions, a precursor of a harmful event is far more likely to be detected than the harmful event itself, with obvious implications for patient safety. This has been shown in paediatric anaesthetic practice, where a total of 150 incidents were reported after implementation of a near-miss reporting system, compared with four incidents entered during the same period in the preceding year (Guffey et al. 2011). In one report describing the implementation of a CIRS, the ‘outcome’, once a critical incident was noted, analysed, and appropriate measures taken to prevent future occurrence, was that no further incidents were noted (Dominguez Fernandez et al. 2008).
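Heinrich's proportions can be made concrete with a small calculation showing why near-miss reporting yields far more data points than harm-only reporting. The numbers are the canonical 1:29:300 pyramid ratios from the source above, not data from this study.

```python
# Heinrich's 'safety pyramid' ratios: per set of 330 accidents,
# 1 causes a major injury, 29 cause minor injuries, 300 cause none.
major, minor, no_injury = 1, 29, 300
total = major + minor + no_injury  # 330

p_major = major / total          # share of events a harm-only system sees
p_precursor = no_injury / total  # share of no-injury precursors

print(f"Share of major-injury events:  {p_major:.3f}")
print(f"Share of no-injury precursors: {p_precursor:.3f}")
print(f"Precursors per major injury:   {no_injury // major}")
```

Under these ratios, over 90% of reportable events are injury-free precursors, so a system that captures near misses observes roughly 300 times more opportunities to learn than one triggered only by major harm.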
To date, no CIRS has been implemented in veterinary medicine, in either equine or small animal anaesthesia. Here we present a recent panel meeting discussion dedicated to critical incident reporting in equine anaesthesia, held in preparation for a new CEPEF (Confidential Enquiry into Perioperative Equine Fatalities) study (Bettschart-Wolfensberger & Johnston 2012).
During the AVA Spring Meeting 2012 in Davos, Switzerland, a panel meeting with 22 invited participants, all veterinarians, took place. The participants represented different levels of expertise in equine anaesthesia (residents, American and European Diplomates), with the majority working or having worked in university hospitals. The panel meeting included moderated plenary discussions and two group work sessions. The first group work consisted of a placemat activity, a cooperative learning strategy allowing participants to think about, record and share their ideas. The aim was to define the term ‘preventable critical incident’ and/or ‘near miss’ in the context of equine anaesthesia and, if possible, to define categories useful for classifying critical incidents. In preparation for the second group work, participants were asked to note critical incidents anonymously. The second session consisted of a moderated group work with the aim of allocating the collected critical incidents according to the framework for analysing risk and safety in clinical medicine developed by Vincent et al. (1998).
Some participants indicated that a form of reporting system existed at their home institution. Patient records are often very detailed but, with the exception of actual or potential litigation, are not analysed systematically. There was widespread awareness of critical incidents and near misses amongst participants, but reporting methods varied widely. There was agreement that human errors and failures also happen in equine anaesthesia and are well known, ‘but nobody speaks about them’. Fear of punishment after admitting that an error had been made was given as a reason for avoiding incident reporting. There was also the understanding that accepting errors as normal would help to identify them earlier (before they can contribute to multiple holes aligning, as described by the Swiss cheese model). Difficulties in communication, especially with surgeons prior to and during anaesthesia, were mentioned several times as a factor potentially promoting the developments leading up to a critical incident. Within the first group work, the different groups reached similar results, with similar definitions and categorizations. A ‘preventable critical incident’ was defined as ‘an impairment of physiological functions potentially leading to death or permanent damage’. Another definition was ‘an event which could lead to injury of people, death of the horse or loss of its use’. Among the potential categories for classifying critical incidents, localisation in terms of where and when the critical incident occurred (e.g. airway obstruction) was mentioned. Human error, machine dysfunction and missing or inadequate monitoring were also identified by all groups. One suggested approach was to classify incidents into patient, human, equipment, drug and procedural factors. Examples of preventable critical incidents are given in Table 1.
Unlike the first group work, the second, which asked participants to allocate the reported critical incidents (provided by the participants) to the framework for analysing risk and safety in clinical medicine, proved difficult. It was apparent that the critical incident examples were not easily attributable to a single category as defined by Vincent et al., but rather to several.
Table 1. Examples of preventable critical incidents as suggested by the panel
Aspiration during/following anaesthesia
Wrong drug/dose/mode of administration
Intra-arterial application of drug
Machine dysfunction/missing monitoring
Untreated hyperkalemia before anaesthesia
Horse waking up (e.g. during MRI)
No blood transfusion ready with acute bleeding
Hypoglycemia in neonates
Fracture in recovery phase
Although several CIRS were said to exist, apparently no systematic approach including structured reporting, collation and analysis (Choy 2008) is in place at the institutions represented by the panel members. Instead, flexible use of various reporting systems was described, similar to the situation in human medicine (Haller et al. 2011a,b). The first group work, using the placemat activity, worked well, with each group able to define what a critical incident meant for the participants. An open and lively debate indicated that critical incidents do occur in equine anaesthesia and present potentially stressful events for all those involved. Communication as a contributing factor is also consistently mentioned in the human medical field (Cooper 1978; Kantelhardt et al. 2011). As reflected in the panel discussion, various methods to categorise critical incidents (and near misses) are possible and have been reported in the literature. Some of the proposed categories refer to the location where the incident occurred (e.g. in hospital versus in the field), while others refer to the organ system involved (Freestone et al. 2006). There is also overlap between these classifying categories and those described by Maaloe et al. (2006). Other groups have proposed categories for classifying critical incidents which resemble those used by Vincent et al. (1998).
The described difficulty – and therefore the challenge to successful CIRS implementation – of mentioning, sharing and learning from human errors without blaming the individual who committed the error is consistent with many reports of CIRS in human medicine (Freestone et al. 2006; Dominguez Fernandez et al. 2008; Haller et al. 2011a,b).
The second group work, the attempt to allocate and analyse the noted critical incidents according to the framework published by Vincent et al., was more difficult. There are at least three potential reasons for this. First, the non-standardised way in which the perceived incidents were reported may have precluded further causal analysis and allocation to one of Vincent's categories, because the information given was not detailed enough. In addition, there is still no general agreement on how to define a ‘critical incident’ in a way that could be applied in all clinical disciplines (Koebberling & Bernges 2007). This lack of clarity about what should be reported, and the lack of definition regarding the scope and nature of adverse events, have been described as the major barrier to extrapolating meaningful data from CIRS at a national or international level (Rooksby et al. 2007; Ragg 2011). Before a CIRS can be successfully implemented in equine anaesthesia, a clear definition of a critical incident has to be provided, together with information that allows root cause analysis. A memorable definition of a critical incident might be an ‘Oh S***!’ moment (Rooksby et al. 2007), but this approach is highly equivocal given the variable temperament and emotional expression of individual anaesthetists. A second reason for the perceived difficulty in using the framework of Vincent et al. might be that the question posed for the second group work was not useful or was inappropriately formulated. This was highlighted by one participant's comment that ‘it is not possible to allocate an incident to only one of Vincent's categories, because if you wait long enough (or go further upstream in the chain of events) several if not all categories will be involved’. It might be more useful to ask to what extent each of the different categories was causally involved.
A third reason for the difficulty in using Vincent's framework might be its general unsuitability for root cause analysis in equine anaesthesia. Although Vincent et al. also reviewed other frameworks in the human medical field, and derived the major factors from the study of numerous medical publications on error, adverse outcomes and risk management to ensure that all factors of potential relevance to medicine were included, equine anaesthesia might still present specific challenges not encountered in human medicine. Finally, the presence of a dedicated and appropriately trained risk officer has been considered essential for the implementation of a functioning CIRS (Ahluwalia & Marriott 2005).
Although CIRS have been advocated and recommended many times as a suitable measure to increase patient safety, critical voices argue that the success of incident reporting, though obvious in aviation and other high-risk industries including nuclear power and oil and gas, is ‘yet to be seen in health-care systems’ (Mahajan 2010). One might also ask why, after the seminal paper of Cooper (1978), it took so long, i.e. until the beginning of the new millennium, for CIRS to be introduced into human medicine (Brun 2005; Missbach-Kroll et al. 2005; Hubler et al. 2006; Dominguez Fernandez et al. 2008; Guffey et al. 2011; Kantelhardt et al. 2011).
Under-reporting is known to be a major weakness of CIRS (Haller et al. 2011a,b). The major barriers to critical incident reporting and learning are fear of punitive action, poor safety culture in an organisation, lack of understanding about what should be reported, difficulties in data entry, lack of awareness of how the incidents will be analysed, and how the reports will ultimately lead to changes which will improve patient safety (Mahajan 2010; Guffey et al. 2011). All of these might well apply to equine anaesthesia, too.
Additionally, a criticism raised several times is that the reported frequencies of specific incidents and their underlying causes may reflect the problem awareness, interests and understanding of those who report, rather than the true frequency of such events (Brun 2005).
In the future, a pilot study involving a small number of equine hospitals might be an option to test the appropriateness of a CIRS in equine anaesthesia, possibly as an integral part of a future CEPEF-4 study. Several types of online documentation system exist, at least one of which was developed in collaboration with experts from flight safety (Kantelhardt et al. 2011). We conclude that critical incidents also happen in equine anaesthesia and that measures are required to address them. We trust that a CIRS will serve as a practical tool to improve safety in equine anaesthesia, in addition to retrospective mortality and morbidity studies.
We would like to thank all panel participants for their contribution to a lively discussion and sharing with us their personal experience: Jiske L'Ami, Andrew Bell, Perrine Benmansour, Leah Bradbury, KW Clarke, R Eddie Clutton, Sophie Cuvelliez, Sabine Kästner, Elizabeth Leece, Thijs van Loon, Paul MacFarlane, Miguel Gozalo Marcilla, Alejandra Garcia de Carellan Mateo, Frances Reed, Stephanie von Ritgen, Stijn Schauvliege, Claudia Spadavecchia, Polly Taylor.