ACADEMIC EMERGENCY MEDICINE 2012; 19:98–101 © 2011 by the Society for Academic Emergency Medicine

Abstract

Objectives:  The objective was to assess the incidence of various technical errors committed by emergency physicians (EPs) learning to perform focused assessment with sonography in trauma (FAST).

Methods:  This was a retrospective review of the first 75 consecutive FAST exams for each EP from April 2000 to June 2005. Exams were assessed for noninterpretable views, misinterpretation of images, poor gain, suboptimal depth, an incomplete exam, or backward image orientation.

Results:  A total of 2,223 FAST exams done by 85 EPs were reviewed. Multiple noninterpretable views or misinterpreted images occurred in 24% of exams for those performing their first 10 exams, 3.6% for those performing their 41st to 50th exams, and 0% for those performing their 71st to 75th exams (Cochran-Armitage trend test = 10.5, p < 0.0001). A single noninterpretable view, poor gain, suboptimal depth, incomplete exam, or backward image orientation occurred in 48% of exams for those performing their first 10 exams, 17% for those performing their 41st to 50th exams, and 5% for those performing their 71st to 75th exams (Cochran-Armitage trend test = 11.6, p < 0.0001).

Conclusions:  The incidence of specific technical errors committed by EPs learning to perform FAST at our institution decreased with hands-on experience. Interpretive skills improved more rapidly than image acquisition skills.

Focused assessment with sonography in trauma (FAST) is an important tool in the evaluation of abdominal trauma.1–6 To further our understanding of training in FAST, the purpose of this study was to assess the incidence of specific technical errors of clinicians learning to perform FAST.

Methods

Study Design

This was an institutional review board–approved retrospective study of FAST exams from April 2000 to June 2005 to assess for specific technical errors committed by the emergency physician (EP) performing the FAST exam.

Study Setting and Population

The study was conducted at an American College of Surgeons–designated Level I trauma center with 75,000 annual adult visits and an emergency medicine residency program, where FAST is done by trauma protocol as part of the secondary survey. Images are maintained for the purposes of quality assurance (QA), educational feedback, credentialing, and Medicare-compliant billing.

During the study period, FAST was done using an Aloka SSD-1400 with a 3.5- to 5-MHz curved linear array probe to evaluate 1) Morison’s pouch, 2) the splenorenal recess, 3) the suprapubic window, and 4) the subxiphoid window for free fluid (FF) with static images and physician documentation meeting the American College of Emergency Physicians (ACEP) and American College of Radiology (ACR) guidelines.1,7 As part of the routine QA process, every exam was subsequently reviewed within 2 weeks for physician feedback regarding image quality and interpretation.

The physicians performing FAST were resident and attending EPs who had completed an emergency ultrasound (EUS) course involving 6 hours of didactics on US physics; knobology; abdominal, cardiac, and vascular access applications; and 4 hours of hands-on practice on normal volunteers, including at least three FAST exams. This included a discussion of Bahner’s description of essential factors related to image quality and acquisition.8 In addition, EPs completed a separate 4-hour course on first-trimester obstetric ultrasonography cosponsored by the departments of obstetric radiology and emergency medicine.

Study Protocol

The first 75 FAST exams for each physician were obtained, since only two operators performed more than 75 exams. Each study was reviewed by one of two EP-sonographers using a standardized data sheet to document 1) if each view was interpretable; 2) if it had correct interpretation; 3) if it had appropriate gain, depth, and orientation; 4) if it had correctly labeled structures; and 5) if the study was complete.
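
As a concrete illustration, the review criteria above could be captured in a structure like the following Python sketch. The class and field names are our own invention, since the paper does not reproduce the data sheet itself:

    from dataclasses import dataclass, field

    # One record per FAST view on the standardized review sheet.
    # Field and class names are illustrative, not taken from the study.
    @dataclass
    class ViewReview:
        view: str                      # "hepatorenal", "splenorenal", "suprapubic", or "subxiphoid"
        interpretable: bool            # 1) anatomic structures clearly identifiable
        correct_interpretation: bool   # 2) presence/absence of free fluid read correctly
        appropriate_gain: bool         # 3) gain does not obscure structures
        appropriate_depth: bool        # 3) region of interest within focal range, fills >= half the screen
        correct_orientation: bool      # 3) probe marker toward the patient's right or cephalad
        structures_labeled: bool       # 4) relevant structures correctly labeled

    @dataclass
    class ExamReview:
        operator_id: str
        exam_number: int               # 1-75 within this operator's series
        views: list = field(default_factory=list)

        def is_complete(self) -> bool:
            # 5) a complete exam documents all four standard FAST views
            return {v.view for v in self.views} >= {
                "hepatorenal", "splenorenal", "suprapubic", "subxiphoid"}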

Exams done by two physicians were entered under the more experienced physician, but also tracked for the junior physician to ensure accurate assessment of individual physician experience. For example, if an exam was done by a resident performing her 23rd exam with the assistance of an attending physician performing her 40th exam, it was tracked for the sake of this study as the 40th exam of the attending but also noted for the resident so that her next exam was counted as her 24th exam.
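
This counting rule can be stated compactly in code. The sketch below is a minimal illustration with hypothetical names, assuming a simple running counter per operator:

    from collections import defaultdict

    # Running exam count per operator; a two-physician exam increments
    # both operators' counts but is attributed to the more experienced one.
    # Function and variable names are illustrative.
    exam_counts = defaultdict(int)

    def record_exam(operator_ids):
        for op in operator_ids:
            exam_counts[op] += 1
        senior = max(operator_ids, key=lambda op: exam_counts[op])
        return senior, exam_counts[senior]

    # The example from the text: a resident on her 23rd exam assisted by
    # an attending on her 40th exam is attributed as the attending's 40th.
    exam_counts["resident"], exam_counts["attending"] = 22, 39
    print(record_exam(["resident", "attending"]))   # ('attending', 40)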

Study Measurements

Images were “noninterpretable” if anatomic structures could not be clearly identified or the reviewing experts could not otherwise read them diagnostically, precluding diagnosis (Data Supplement S1, Figures 1 and 2, available as supporting information in the online version of this paper). Images were “misinterpreted” if the presence or absence of FF was misidentified as determined by the reviewing sonographers (e.g., mistaking pleural fluid for perihepatic FF or anterior pericardial fat for a traumatic effusion). A priori, “equivocal” or “indeterminate” exams were classified as “misinterpreted.”

“Poor depth” was defined as viewing the region of interest outside the focal range of the probe, having the image occupy less than half the screen, or having the image too deep (Data Supplement S1, Figures 3–7). “Poor gain” was defined as gain settings that obscured anatomic structures or made image interpretation difficult (Data Supplement S1, Figure 3), or gain-related obscuration of the near or far field that prevented evaluation of a relevant area of interest (e.g., the diaphragmatic-hepatic space; Data Supplement S1, Figures 6–8). “Backward image orientation” was defined as having the probe marker directed toward the patient’s left or caudally (Data Supplement S1, Figure 9).

Criterion Standard.  The criterion standard was review by one of two expert EP-sonographers who were blinded to the identity of the physician and his or her experience level. The first completed a college certification course in ultrasonography, was certified by the American Registry of Diagnostic Medical Sonographers (ARDMS) and the American Board of Emergency Medicine (ABEM), and performed over 400 prior FAST exams. The second expert completed an EUS fellowship, was board-certified by ABEM, and had performed over 500 prior FAST exams.

Data Analysis

Data were collected in Microsoft Excel (Microsoft Corp., Redmond, WA) and translated into native SAS format using DBMS/Copy (Dataflux Corp., Cary, NC) for analysis with SAS version 9.1 (SAS Institute, Cary, NC). It was determined a priori to sort the data by operator into groups of 10 exams (e.g., exams 1–10, 11–20, and so on, up to 71–75) and to assess the relationship between increasing number of exams and the proportion of errors using the SAS TREND option, which implements the Cochran-Armitage trend test. The null hypothesis assumes no linear trend in the binomial proportion of errors across the increasing number of exams; the alternative hypothesis assumes such a linear trend exists. Reported p-values are two-sided, with alpha set at 0.05. No statistical adjustments (e.g., Bonferroni corrections) were made for multiple comparisons.

Additionally, the SAS PROC GENMOD procedure was used to account for the fact that examinations clustered within one operator are typically more similar to each other than to those performed by another operator; the analyses therefore modeled intraoperator cluster correlation rather than assuming independence among all observations. We planned a priori to calculate Cohen’s kappa (κ) statistic for the interrater reliability of the two reviewing experts by having them perform blinded, concurrent review of 200 FAST exams selected by a random number generator.
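
For readers without SAS, the Cochran-Armitage statistic is straightforward to reproduce from first principles. The following Python sketch assumes equally spaced group scores (a common convention; the paper does not state which scores SAS assigned):

    import numpy as np
    from scipy.stats import norm

    def cochran_armitage(errors, totals, scores=None):
        # Two-sided Cochran-Armitage test for a linear trend in binomial
        # proportions across ordered groups (here, blocks of ~10 exams).
        r = np.asarray(errors, dtype=float)    # error counts per group
        n = np.asarray(totals, dtype=float)    # exams per group
        s = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, float)
        N, p_bar = n.sum(), r.sum() / n.sum()
        t = np.sum(s * (r - n * p_bar))                      # trend statistic
        var_t = p_bar * (1 - p_bar) * (np.sum(n * s**2) - np.sum(n * s)**2 / N)
        z = t / np.sqrt(var_t)
        return z, 2 * norm.sf(abs(z))                        # two-sided p-value

    # Table 1's "poor gain" row: z comes out near -12.8 (negative because the
    # error proportion falls with experience); the table reports the magnitude.
    errors = [225, 85, 47, 35, 12, 8, 0, 0]
    totals = [632, 472, 364, 297, 220, 133, 83, 22]
    print(cochran_armitage(errors, totals))

The planned interrater reliability check is similarly standard; κ could be computed with a routine such as sklearn.metrics.cohen_kappa_score applied to the two experts’ paired readings of the 200 jointly reviewed exams.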

Results

A total of 2,323 FAST exams were done by 85 physicians during the study period; 2,223 (96%) were available for review and were included in this study. Table 1 depicts the incidence of each error. Noninterpretable views were distributed by location as follows: 50% splenorenal, 30% subxiphoid, 15% hepatorenal, and 5% suprapubic.

Table 1. Specific Errors, n (%)

Error                            Exams 1–10  11–20    21–30    31–40    41–50    51–60    61–70    71–75   Cochran-Armitage   OR for Error per Each Increase
                                                                                                           Trend Test         in Exam Group (95% CI)
Multiple noninterpretable views  135 (21)    50 (11)  27 (7)   14 (5)   8 (4)    5 (4)    0 (0)    0 (0)   9.8 (p < 0.0001)   0.77 (0.72–0.82), p < 0.0001
Single noninterpretable view     77 (12)     46 (10)  29 (8)   11 (12)  16 (7)   9 (7)    4 (5)    0 (0)   4.2 (p < 0.0001)   0.92 (0.88–0.97), p = 0.0008
Misinterpreted exams             37 (6)      20 (4)   12 (3)   7 (2)    1 (1)    0 (0)    0 (0)    0 (0)   5.1 (p < 0.0001)   0.82 (0.78–0.86), p < 0.0001
Poor gain                        225 (36)    85 (18)  47 (13)  35 (12)  12 (6)   8 (6)    0 (0)    0 (0)   12.8 (p < 0.0001)  0.75 (0.71–0.79), p < 0.0001
Suboptimal depth                 98 (16)     40 (9)   13 (4)   15 (5)   10 (5)   5 (4)    1 (1)    0 (0)   7.2 (p < 0.0001)   0.82 (0.77–0.88), p < 0.0001
Missing views                    40 (6)      17 (4)   16 (4)   19 (6)   5 (2)    6 (5)    3 (4)    1 (5)   1.4 (p = 0.16)     1.0 (0.9–1.0), p = 0.2
Mislabeled structure(s)          19 (3)      14 (3)   3 (1)    2 (1)    0 (0)    0 (0)    0 (0)    0 (0)   11.6 (p < 0.0001)  0.78 (0.72–0.84), p < 0.0001
Backward orientation             107 (17)    95 (20)  53 (15)  42 (14)  26 (12)  15 (11)  10 (12)  0 (0)   3.4 (p = 0.001)    0.93 (0.90–0.97), p < 0.0001
Number of physicians             85          52       42       35       28       17       12       6
Number of exams                  632         472      364      297      220      133      83       22

Poor depth was associated with image noninterpretability or misinterpretation (odds ratio [OR] = 3.1, 95% confidence interval [CI] = 2.2 to 4.4; 53 of 292 studies with multiple noninterpretable or misinterpreted views had poor depth, compared to 129 of 1,931 without). Poor gain showed a trend toward association with noninterpretability or misinterpretation (OR = 1.3, 95% CI = 1.0 to 1.8; 65 of 292 with multiple noninterpretable or misinterpreted views, compared to 347 of 1,931 without). The combination of poor depth and poor gain, however, was strongly associated with image noninterpretability or misinterpretation (OR = 6.0, 95% CI = 3.7 to 10.0; 30 of 292 studies with multiple noninterpretable or misinterpreted views, compared to 36 of 1,931 without). The reviewing experts had a high level of agreement: κ = 0.98 (95% CI = 0.96 to 1.0) for noninterpretability and misinterpretation of images and κ = 0.95 (95% CI = 0.89 to 1.0) for the remaining errors.
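
These ORs and CIs follow directly from the reported 2×2 counts via the standard log-odds (Woolf) interval, as the minimal Python check below illustrates (the function name is ours):

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # OR = (a/b) / (c/d) with a Woolf (log-scale) confidence interval;
        # a, b = exposure present/absent in one group; c, d in the other.
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
        return or_, lo, hi

    # Poor depth vs. multiple noninterpretable/misinterpreted views:
    # 53 of 292 error exams had poor depth, versus 129 of 1,931 without.
    print(odds_ratio_ci(53, 292 - 53, 129, 1931 - 129))
    # -> approximately (3.1, 2.2, 4.4), matching the reported OR and CI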

Discussion

Our study differs from prior work because we defined “error” in terms of image quality and technique, rather than the detection of FF or need for laparotomy. This was done for several reasons. First, all patients underwent FAST as part of an existing trauma protocol, but not necessarily computed tomography or operative exploration, thus precluding such comparison. Second, the technical performance of FAST may add another dimension to training models. Finally, technical errors may affect FAST accuracy as suggested by the association between image interpretability and poor depth or gain.

This study involved 100% of the EPs in our ED. Early on, the most common errors were poor gain or depth and multiple noninterpretable views. Poor gain and depth relate primarily to image acquisition and improved greatly with training. Physicians learning FAST should pay special attention to these aspects of image quality because they are associated with image interpretability.

The splenorenal and subxiphoid views accounted for 80% of noninterpretable images and may be specific aspects of FAST that require special attention during training. In our experience, the splenorenal recess is difficult to assess, since it lies more posterior than Morison’s pouch and the spleen provides a smaller acoustic window than the liver.

It appears that physicians acquire the ability to interpret FAST images earlier than the technical skills required to actually perform the exam. Misinterpreted exams occurred in less than 5% of cases once operators had performed at least 10 exams, whereas noninterpretable views occurred in over 5% of exams performed by operators with up to 60 exams of experience. It may be that the simplicity of looking for an anechoic stripe on FAST lends itself to easy interpretation, which could therefore be tested with a written exam of FAST images. Likewise, since it appears to take more experience to reliably obtain appropriate images, a separate practical exam may be required to assess operators’ technical skills in performing FAST.

Limitations

First, this study involved both selection and spectrum biases, as only exams documented for education or clinical management were reviewed. However, this presents a “best-case scenario” for FAST, and our findings would likely persist in a prospective study of consecutive patients.

Second, while the FAST images met the ACEP and ACR documentation guidelines,1,7 still images may not accurately depict the true quality of an exam. Although image recording is the basis for medicolegal documentation, it may be that video or bedside review would demonstrate a lower rate of noninterpretable images.

Third, while the operators in this study completed similarly structured initial training, three different EP-sonographers provided the lectures and made minor variations in their lectures from year to year. It is unknown how this may have affected our results. Likewise, residents were not required to participate in any other EUS courses; additional training would likely have altered the incidence of errors observed.

Conclusions

The incidence of specific technical errors committed by EPs learning to perform focused assessment with sonography in trauma at our institution decreased with hands-on experience. Interpretive skills improved more rapidly than image acquisition skills.

References

Supporting Information

Data Supplement S1. Technical errors in FAST.

Filename: ACEM_1242_sm_DataSupplementS1.docx
Format: Word document (.docx)
Size: 999 KB
Description: Supporting info item

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.