Presented at the New York American College of Emergency Physicians Regional Conference, Lake George, NY, July 5, 2009; and the American College of Emergency Physicians Scientific Assembly, Boston, MA, October 5, 2009. Abstract presented during plenary session of New York Ultrasound Symposium, New York, NY, April 14, 2010.
Comparison of a Multimedia Simulator to a Human Model for Teaching FAST Exam Image Interpretation and Image Acquisition
Article first published online: 15 APR 2011
© 2011 by the Society for Academic Emergency Medicine
Academic Emergency Medicine
Volume 18, Issue 4, pages 413–419, April 2011
Damewood, S., Jeanmonod, D. and Cadigan, B. (2011), Comparison of a Multimedia Simulator to a Human Model for Teaching FAST Exam Image Interpretation and Image Acquisition. Academic Emergency Medicine, 18: 413–419. doi: 10.1111/j.1553-2712.2011.01037.x
The authors have no relevant financial information or potential conflicts of interest to disclose.
Supervising Editor: James Miner, MD.
- Issue published online: 15 APR 2011
- Received June 18, 2010; revisions received August 23 and September 15, 2010; accepted September 28, 2010.
Objectives: This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam.
Methods: This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition.
Results: Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44) and a human model group (n = 48). A Bonferroni adjustment set the threshold for significance at p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71–87, vs. median 78, IQR 62–86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12–18, vs. median 16, IQR 14–20; p = 0.95), trainees' confidence in obtaining images on a 1–10 visual analog scale (median 5, IQR 4.1–6.5, vs. median 5, IQR 3.7–6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7–5.4, vs. median 4.5 minutes, IQR 3.4–5.9; p = 0.044).
Conclusions: There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models.
The Focused Assessment with Sonography for Trauma (FAST) exam is an essential tool in the evaluation of trauma patients.1,2 It is recognized by the American College of Surgeons and the American College of Emergency Physicians as a required aspect of residency training.3,4 However, there is a paucity of literature examining the quality of various teaching strategies designed to meet this educational mandate.5–8 Multiple authors have suggested the use of simulation in teaching and assessing resident competency in procedures, hypothesizing that it improves patient safety and the quality of care.9–16
Although the best model for practical FAST instruction has not been determined, the requirement to incorporate bedside ultrasound (US) education in emergency medicine curricula has resulted in the development of US simulation software. Few studies have examined the effectiveness of multimedia simulation in teaching FAST image interpretation,17 and no studies have examined its effectiveness in teaching image acquisition.
This study examined the difference between two educational interventions in a brief practical setting for initial training. Rather than attempting to certify trainees as competent after our brief educational intervention, we assessed the trainees' basic set of new skills by evaluating both image interpretation and image adequacy. We hypothesized that there is no difference in the basic skills of image interpretation and acquisition between students trained using the two educational interventions.
Methods
Study Design
This was a prospective, blinded, controlled study conducted on a consecutive sample of fourth-year medical students with no prior US training. Institutional review board (IRB) approval was obtained. The requirement for written consent was waived by the IRB. Verbal consent was obtained from all participants.
Study Setting and Population
The study site was a medical school affiliated with an urban tertiary care medical center with Level I trauma center designation. Consecutive fourth-year medical students were enrolled in the study during their required emergency medicine rotation. The study interventions were performed on 1 day during each 4-week medical school rotation, from November 2008 to October 2009. Participants were verbally queried as a group on their prior US experience before study enrollment. The sole exclusion criterion was prior practical US training, such as a radiology rotation with an US focus or an echocardiography rotation. Outside corroboration was not sought to validate subject responses.
Study Protocol
All study participants received a standardized 1-hour didactic lecture covering the principles of the FAST exam. The lecture was given by the same investigator to each group of students. The lecture was an excerpt of the FAST lecture from the Society for Academic Emergency Medicine’s narrated US lecture series.18 This lecture covered image acquisition techniques and pitfalls and included normal and pathologic FAST exam findings.
For the practical session, the participants were separated into two groups based on the last digit of their Social Security numbers. The first group practiced the FAST exam on an UltraSim (MedSim, Ft. Lauderdale, FL), a multimedia US simulator with both normal and pathologic exam programs. The simulator creates real-time dynamic US exams by matching the orientation of the probe on a mannequin to a stored three-dimensional image data set from a real patient, producing a realistic image on the simulator screen. The second group practiced on normal human models using a SonoSite MicroMaxx US machine (SonoSite, Bothell, WA). Normal human models were chosen for this group, instead of models with pathologic findings, to mirror the most readily available practical model used for training. Participants were allowed to practice on their assigned models until they felt confident in their ability to perform the exam; we did not limit the number of exams or the time each individual took for practice. Participants were also allowed to observe the practice sessions of the other individuals in their group. Two investigators, experienced in emergency US, proctored the practical sessions.
Outcome Measures
Participants were evaluated with regard to their ability to acquire and interpret FAST images. Four outcome measures were recorded, with image interpretation as the primary outcome and adequacy of image acquisition, confidence of image adequacy, and time to image acquisition as secondary outcomes.
To evaluate image interpretation, participants reviewed 10 previously recorded FAST exam video clips. For each clip, they identified whether the exam was overall positive or negative (2 points) and whether each of the four views was individually positive or negative (2 points each), for up to 10 points per exam and a maximum possible score of 100.
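To make the rubric concrete, the sketch below scores a single exam under these rules; the function, field names, and data layout are hypothetical illustrations, not part of the study materials.

```python
# Illustrative scoring of one prerecorded FAST exam: 2 points for the correct
# overall call plus 2 points for each correctly called view, for a maximum of
# 10 points per exam (100 points across the 10 exams).

VIEWS = ["right_upper_quadrant", "left_upper_quadrant", "cardiac", "pelvic"]

def score_exam(response: dict, answer_key: dict) -> int:
    """Return 0-10 points for one exam; 'response' and 'answer_key' map
    'overall' and each view name to 'positive' or 'negative'."""
    points = 2 if response["overall"] == answer_key["overall"] else 0
    points += sum(2 for view in VIEWS if response[view] == answer_key[view])
    return points
```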
To evaluate adequacy of image acquisition, both groups performed the four-view FAST exam on a standardized normal patient (body mass index 32.3) using a SonoSite MicroMaxx US machine. Using the same human model allowed for standardization of testing between study participants. Participants recorded still images of each of the four quadrants of the FAST exam that they deemed adequate to demonstrate the key elements of each view. Appropriate gain and depth were established on the machine for each view by the same investigator who provided the didactic lecture, and participants were instructed that these parameters could be adjusted at their discretion. Acquired images were graded according to an a priori set of criteria (Table 1) by two blinded investigators with extensive experience in emergency US; one of these investigators had proctored the simulator practical group. The scoring system was designed to give equal weight to each of the four views while allowing for independent grading of the key elements in each view, as no previously validated method for assessing image adequacy existed to serve as a guide for this study. A third blinded expert in emergency US reviewed images when the two initial graders disagreed.
Table 1: Image Acquisition Grading Criteria (maximum 24 points; 6 points per view)

| View | Key Element | Points |
| --- | --- | --- |
| Right upper quadrant | Visualization above the hemidiaphragm and costophrenic angle | 3 |
| | Visualization of two-thirds of the kidney–liver interface | 3 |
| Left upper quadrant | Visualization above the hemidiaphragm and costophrenic angle | 2 |
| | Visualization of part of the spleen–diaphragm interface | 2 |
| | Visualization of greater than two-thirds of the spleen–renal interface | 2 |
| Cardiac | Appreciation of the area around the entire heart | 6 |
| Pelvic | Visualization of the retrovesicular or retrouterine space | 6 |
The total times to acquire images were recorded for all participants. After finishing the image acquisition task, the students recorded their level of confidence in the quality of their images. Confidence was measured with a 10-cm continuous visual analog scale, with the 10-cm mark demonstrating a very high level of confidence and the 0-cm mark demonstrating a very low level of confidence.
Data Analysis
Data were analyzed using Minitab statistical software (Minitab, Inc., State College, PA). A power analysis determined that a sample size of approximately 70 students (35 per group) would provide 80% power, at an alpha of 0.05, to detect an absolute difference in interpretation scores between groups of two-thirds of the within-group standard deviation; this effect size was selected as the smallest difference that would be of substantive clinical significance. Because four comparisons were made, the unadjusted family-wise error rate was 0.19, so a Bonferroni correction was applied, setting the threshold for significance at p = 0.0125.
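For reference, both figures follow directly from the four planned comparisons at $\alpha = 0.05$:

$$\alpha_{\text{familywise}} = 1 - (1 - 0.05)^{4} \approx 0.19, \qquad \alpha_{\text{Bonferroni}} = \frac{0.05}{4} = 0.0125$$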
Scores for image interpretation, scores for image acquisition, levels of confidence, and times to complete image acquisition were compared between the two groups using the Mann-Whitney test, and results are reported as medians with corresponding interquartile ranges (IQR). Inter-rater reliability for the evaluation of image adequacy was calculated with the kappa statistic and Kendall’s correlation coefficient.
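The analysis was performed in Minitab, but the same comparisons can be reproduced with standard open-source routines. The sketch below uses placeholder data, not the study's data; the exact Kendall statistic computed is not specified in the text, so Kendall's tau is used here as an assumption.

```python
# Illustrative re-creation of the study's tests; the study itself used Minitab.
from scipy.stats import mannwhitneyu, kendalltau
from sklearn.metrics import cohen_kappa_score

# Placeholder data, not the study's data.
simulator_scores = [80, 71, 87, 78, 90, 65]   # interpretation scores, group 1
human_scores = [78, 62, 86, 75, 81, 70]       # interpretation scores, group 2
rater1 = [6, 4, 6, 2, 5, 6]                   # one rater's image grades
rater2 = [6, 4, 4, 2, 5, 6]                   # second rater's image grades

# Two-sided Mann-Whitney U test comparing the two training groups.
stat, p = mannwhitneyu(simulator_scores, human_scores, alternative="two-sided")

# Inter-rater reliability: Cohen's kappa credits exact agreement only, while
# Kendall's tau also credits near-agreement between ordinal grades.
kappa = cohen_kappa_score(rater1, rater2)
tau, _ = kendalltau(rater1, rater2)

print(f"Mann-Whitney p = {p:.3f}; kappa = {kappa:.3f}; tau = {tau:.3f}")
```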
Results
Ninety-two medical students were enrolled in the study (Figure 1). We exceeded the sample size established by our power calculation because of open enrollment of medical students during their required emergency medicine rotation, and we did not turn away students who wished to participate. Three students who were initially approached declined participation, and one student arrived too late to enroll. Forty-eight students were allocated to the human model group and 44 to the simulation group. Five students in the simulation group left the study after the image interpretation test without notifying the investigators or giving a reason for departure; these five participants were excluded from all evaluations except the image interpretation test score. No participants were excluded because of prior US experience.
There were no differences in scores for image interpretation (p = 0.16), scores for image acquisition (p = 0.955), levels of confidence (p = 0.36), or time to acquire images (p = 0.044) between the human model and simulator groups. The median scores and IQRs (25th through 75th percentiles) are reported in Figures 2 through 5. The data were not normally distributed for the outcomes of image interpretation, image acquisition, time to acquire images, or the simulator group’s level of confidence, as assessed by the Anderson-Darling test.
The two raters agreed 72% (63/87) of the time when grading the participants’ acquired images. Concordance by the kappa statistic was 0.689, which represents substantial agreement. Kendall’s correlation coefficient, which also accounts for the degree of disagreement between the two raters, was 0.765.
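For readers unfamiliar with the statistic, Cohen's kappa discounts the observed agreement $p_o$ (here $63/87 \approx 0.72$) by the agreement $p_e$ expected from chance alone:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$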
Discussion
The FAST exam is a required part of emergency medicine training. Successfully incorporating FAST training into emergency medicine education, and ultimately practice, requires that an established and proven regimen of training exist. As demand for effective training increases, the most efficient methods of skill acquisition need to be examined and incorporated into training programs.19
Most educators agree that a FAST curriculum needs to incorporate three components: didactic, practical, and experiential.2,20 Many studies have examined didactic content2 and the experiential case numbers needed to reach proficiency,5–8,21 but little research addresses the different practical models for training and their effectiveness in imparting the skills of image interpretation and, particularly, image acquisition.
FAST educators have customarily endorsed the use of practical human models that allow for pathologic exams with actual or simulated fluid.5,6,21–23 A consensus conference of the American College of Surgeons notes that while normal models are important, trainees should be exposed to a minimum, though unspecified, number of pathologic exams.4 The American College of Emergency Physicians also recommends a certain number of abnormal examinations in a practitioner’s experiential phase of training prior to credentialing.2
One small study by Gracias et al.23 showed that the interpretive abilities of physicians (nine faculty/fellows, two residents) with fewer than 30 examinations improved when known positive peritoneal dialysis patients were included in the practical portion of FAST training courses. Finding patients with such pathologic conditions presents a challenge to US course organizers and an even larger dilemma for those teaching individual practitioners on multiple occasions. Recognition of the potential value of positive examinations in practical sessions, combined with the difficulty of recruiting US models, has set the stage for simulation. Salen et al.17 showed no statistical difference in the interpretive abilities of 20 residents exposed to peritoneal dialysis patients in comparison to multimedia simulated examinations for their practical sessions. However, do these simulated examinations demonstrating pathology really improve interpretation and image acquisition when compared to normal patient models?
Our data did not demonstrate any difference between the interpretive abilities of the participants exposed to simulated exams demonstrating pathology and those trained on normal human models. This lack of difference between the two groups was not attributable to a general lack of knowledge or to curriculum failure, given that both groups had relatively high interpretive exam scores. These results mirror the findings of Knudson and Sisley,9 who trained 74 nonrandomized residents using pathologic multimedia simulated examinations compared to normal human models and also found no difference in interpretive abilities between the groups. Perhaps the benefit of actual or simulated pathologic exams very early in training has been overemphasized. It is also possible that the equivalency suggested between peritoneal dialysis models and multimedia simulated exams is misleading. Either way, the relative merit of supplying positive multimedia simulated US exams for beginner pathologic sessions has to be called into question given the significant costs associated not only with machine purchase but also with maintenance and software upgrades.
With these data, we believe we are the first to validate a multimedia US simulator as sufficient to teach the hand–eye skills necessary for image acquisition on a human, with proficiency no different from that gained by practicing on a human model. Our results showed no difference in image acquisition between participants trained on a real-time US simulator and those trained in a traditional practical fashion on a normal human model. Taken together, these measures imply that an instructor can effectively teach trainees practical image acquisition skills without recruiting a human model. If our findings prove applicable to other examination types, sensitive exams, such as transvaginal US, could be taught effectively through simulation early in training, avoiding exposing patients and participants to the awkwardness of initially learning this skill at the bedside. Although medical students were used as an US-naïve population in this study, our results may be applicable to other US-naïve populations seeking US training.
Limitations
Several limitations arise from the study design. The participants were aware that their actions would be part of data analysis and were cognizant of the four measured outcomes. Also, although medical students serve as a plentiful sample of novice US users, this group has several potential problems. First, this consecutive group of medical students was enrolled during a required fourth-year rotation in emergency medicine; the sample therefore included students pursuing a variety of specialties, with varying educational goals. Second, it is not clear whether results derived from a population of medical students can be generalized to other US-naïve groups. Finally, the number of study participants available varied from month to month, leading to differences in how many exams participants were exposed to; we did not limit or independently record the number of practice attempts undertaken by individual participants or quantify their overall practice times. During the hands-on practical sessions, trainees in the simulator group practiced with a curvilinear probe provided with the machine for use with the FAST software, whereas trainees in the human model group practiced with a phased-array probe, as the MicroMaxx used in the study did not have a curvilinear probe. The MicroMaxx and phased-array probe were subsequently used for the practical test, potentially giving the human model group a confounding advantage through familiarity with the machine and the probe.
These results were obtained from a single center. Although we attempted to use a generic didactic curriculum by selecting a previously recorded and readily available resource,18 the practical teaching may be very specific to the proctors and not generalizable to another group with different proctors. Although we tested all the components that are thought necessary to scan patients in a clinical setting, including ability and time to obtain images, ability to interpret images, and confidence with the examination, these measures may not reflect actual clinical performance. A standardized patient was used to allow for reproducible assessment across both groups, but this did not replicate the variable scan environment inherent to actual clinical scenarios.
Conclusions
There was no difference in the efficacy of multimedia simulation in comparison to normal human models in teaching novices the skills of both image interpretation and image acquisition for the FAST exam. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models.
Acknowledgments
The authors thank Sandra L. Werner, MD, for her assistance with data analysis, as well as Paul Feustel for his help with statistical analysis. The authors also thank Kenneth Nipps for volunteering as the standardized human model throughout the study.
References
- 3. Accreditation Council for Graduate Medical Education. Resident Competency Guidelines. Available at: http://www.acgme.org/acWebsite/RRC_110/110_guidelines.asp#res. Accessed January 22, 2011.
- 4. American College of Surgeons. Ultrasound Examinations by Surgeons. Available at: http://www.facs.org/fellows_info/statements/st-31.html. Accessed January 22, 2011.
- 18. Society for Academic Emergency Medicine narrated lecture series. Focused Assessment with Sonography in Trauma: The FAST Exam. Available at: http://www.saem.org/saemdnn/Education/EducationResources/NarratedUltrasoundLectures/FocusedAssessmentwithSonographyinTrauma/tabid/645/Default.aspx. Accessed January 16, 2010.