Disaster triage: Evidence, consistency and standard practice

Authors


  • Peter Aitken, MBBS, FACEM, EMDM, MClinEd, Associate Professor, Senior Staff Specialist; Gerard FitzGerald, MBBS, MD, FACEM, FRACMA, Professor.

  • The views expressed in this Editorial reflect those of the authors only, and do not necessarily represent the views of any organisations mentioned.

See also pp. 321–328

This issue (Field and Norton)1 and the last issue (Varshney et al.)2 of Emergency Medicine Australasia contain two articles that address the thorny issue of disaster triage tags.

Triage has military origins, with the initial purpose being to direct care to those most likely to be able to return to the battlefield. Conversely, modern disaster triage attempts to ensure both the prioritisation of care for those who need it most and a fair distribution of resources, so that health responders are able to ‘do the most for the most’.

Despite the relatively simple concepts on which triage is based, there are a number of issues yet to be resolved. Much of this comes back to a lack of evidence to support and drive change. For disaster triage to be most effective there need to be clearly defined, consistent and understood categories; an accepted and evidence-based guide to reproducible, consistent and defensible decision making; and a method of recording and displaying the information. The disaster triage tag is simply the end-product of this process and provides documentation of the decision made.

The main issues of concern and controversy include the methods of standardising triage assessments through algorithms or guidelines, the use of the ‘expectant’ category and which triage tag is most useful or functional. As with any process or procedure, it boils down to the standard questions of why, when, where, how and who. We would suggest that, in analysing these issues, there are three priority questions:

  1. When should patients be triaged?
  2. How should they be triaged?
  3. How should that triage decision best be documented?

The ability to address these issues can then be guided by a number of disaster management principles:

  1. The greatest good for the greatest number implies maximising the use of relatively scarce resources.
  2. The principle of familiarity implies we should stick to what we know best.
  3. The comprehensive, all-hazards, all-agencies approach begs for standardised, scalable approaches.

Triage is a continuous process reinforced at particular steps in the patient journey. These key points are the times when more standardised approaches are necessary. These steps include the priority for extrication, for transportation from the scene, for clinical care, for theatre and the priority for bed access. The processes within health services should not differ in substance from those that occur on a daily basis, so the only fundamental issue applies to the management of patients in the field and on first hospital reception.

Extrication is a dynamic process that is determined by the choices made by patients, bystanders, first responders and rescuers. Patients will walk or run from a scene of danger, and those who do not are either trapped or incapable because of injury or other impediment. Emergency services confront this on a daily basis, and the real challenge is to apply these daily rules to more major and catastrophic events.

A simplistic approach would be that those who are walking should keep walking to a safe location and those who are not should be categorised into those who are dead, those who are trapped and need extrication, and those who are not, but need assistance to evacuate. We would suggest this practical approach to extrication triage is both rational and normal, and needs no further complexity.

The key point of first triage occurs on the arrival of patients at the location(s) of forward medical treatment and evacuation. It is at this point that patients are usually assigned an assessment based on their clinical urgency. The categorisation used in these circumstances has a limited evidence base.

Most work has been done on algorithms, a number of which are in current practice. The ‘Sieve and Sort’ approach, based on the Revised Trauma Score and as taught in Major Incident Medical Management and Support courses, is most widely used in Australia.3 A number of others exist, including START4 (Simple Triage And Rapid Treatment), and more recently SALT5 (Sort, Assess, Lifesaving interventions, Treatment/transport) and the Field Triage Score, based on pulse and Glasgow Coma Scale motor status alone,6 all of which are mainly used in the USA. The need for validation of these algorithms is driven by concerns about both ‘under-triage’ and ‘over-triage’,4 and the best use of resources.
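To illustrate how such algorithms encode triage decisions as a short series of physiological checks, the START logic can be sketched as follows. This is a simplified teaching sketch only, not an operational tool; the thresholds follow commonly published descriptions of START, and the function and parameter names are our own.

```python
# Simplified sketch of the START (Simple Triage And Rapid Treatment)
# algorithm. Thresholds follow commonly published START descriptions;
# this is an illustration, not an operational decision aid.

def start_triage(walking, breathing_after_airway_open, resp_rate,
                 radial_pulse_present, obeys_commands):
    """Return a START triage category for a single casualty."""
    if walking:
        return "MINOR (green)"        # ambulatory: delayed assessment
    if not breathing_after_airway_open:
        return "DECEASED (black)"     # apnoeic after airway opened
    if resp_rate > 30:
        return "IMMEDIATE (red)"      # respiratory distress
    if not radial_pulse_present:
        return "IMMEDIATE (red)"      # inadequate perfusion
    if not obeys_commands:
        return "IMMEDIATE (red)"      # altered mental status
    return "DELAYED (yellow)"         # stable enough to wait

# Example: a non-ambulant casualty breathing at 35 breaths/min
print(start_triage(False, True, 35, True, True))  # → IMMEDIATE (red)
```

The point the sketch makes is that each algorithm reduces a complex clinical judgement to a handful of reproducible observations, which is precisely what makes inter-rater reliability testable.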

Under-triage risks injured patients not receiving care in an appropriate time frame, with increased morbidity and mortality. Over-triage risks overwhelming available resources through increased numbers of high triage category patients, again increasing morbidity and mortality. Triage accuracy has also been shown to affect the ability of trauma teams to cope with heavy casualty loads and to reduce the likelihood of resource saturation.7 The optimum algorithm will ensure that those most in need of care, with survivable injuries, receive it first, and that resources are not wasted on futile attempts to save those with unsalvageable injuries. The issue of resource allocation and defensibility of practice is linked to patient categories, especially the use of the ‘expectant’ category, which remains controversial. Whether this applies equally in different systems is another unanswered question. Regardless, the ideal algorithm also needs to be consistent, with inter-rater reliability, and based on evidence to defend practice.

If the issue of familiarity is considered, then two schools of thought emerge. One is that, given the infrequency that such events occur, a more sustainable and defensible option would be to use the system in regular use. In the ED context that would be the Australasian Triage Scale (ATS). Even in catastrophic circumstances, the ATS could be applied and if necessary the category one patients might simply be deemed unsustainable or expectant. Conversely, prehospital care providers, who increasingly are using tags during multi-casualty incidents, would suggest that the ED continues with the disaster triage tag after hospital arrival.

This leads us to two other questions: who should undertake triage, and how should it be documented? Studies have shown that first-year medical students who received brief START training achieved triage accuracy scores similar to those reported for emergency physicians, registered nurses and paramedics in previous studies, although not in direct comparison trials.8

However, given the stresses of a mass casualty or disaster response, we would propose that, in accordance with the principle of familiarity, the people who normally triage should triage in a disaster. They are experienced and familiar and therefore more efficient. A paramedic should perform initial triage, whereas who should triage on arrival at a casualty clearing post or hospital is less well defined.

However, we also recognise that there will be times when the scale of catastrophe is such that triage decisions will imply abandonment, and therefore greater seniority might be required, not just to ensure a defensible standard, but also to limit debate: someone with grey hair and their hands tied behind their back. The former to brook no argument, and the latter to stop them treating the first patient. Ideally, if resources and time allow, this should be a consensus decision between clinicians, and one supported by senior management and the health incident controller.

So finally, how do we document patients in a disaster? We recall similar conversations from the past when the relative merit of various colours, tag structure and content were hotly disputed or debated by people, such as ourselves, who had become instant experts in colour schemes and form design. Unfortunately, there is little real science to inform this question, so the authors of both papers are to be commended for attempting to provide a more solid basis to the evidence.1,2

There are a number of characteristics of the ideal triage tag. Visibility is essential so that responders can see the tag, preventing duplication of effort. Tags also need to be flexible: just as a patient's condition can deteriorate or improve with treatment, disaster triage tags need to be able to be upgraded or downgraded. They also need to provide space for recording information (such as drugs given), so that patient information is kept together. Finally, they need to be water- and weather-proof, so that information is not lost to the elements. The ‘ideal’ triage tag does not appear to have been invented, based on the number of options used across various Australian states and territories,9 and the failed attempts to develop a consistent national approach to disaster triage tags through Standards Australia. Australia is not alone in this quest for a national standard.10

The papers by Field and Norton1 and Varshney et al.2 show that although there is no clear demarcation in terms of relative effectiveness, utility, sustainability or any other factor, including cost, that would provide best evidence of the ‘perfect’ tag, some are obviously preferred to others. Thus, it is all relative. We should pick one and concentrate our energies on sustaining it, by ensuring sufficient supplies and training personnel in its use. Recent technological changes have provided opportunities for creative solutions, such as the SMART tag, and these advances might help with utility and functionality. The SMART tag, which was one of the preferred tags in these studies, has also been adopted for use by the majority of ambulance services in Australia, and is thus likely to emerge as a surrogate national standard.

Competing interests

PA is a Section Editor for Emergency Medicine Australasia. GF is on the Editorial Board of Emergency Medicine Australasia.
