Highlights in Emergency Medicine Medical Education Research: 2008


  • Susan E. Farrell MD, EdM,

  • Wendy C. Coates MD,

  • Gloria J. Kuhn DO, PhD,

  • Jonathan Fisher MD, MPH,

  • Philip Shayne MD,

  • Michelle Lin MD

  • From the Office of Graduate Medical Education, Partners Healthcare System, Center for Teaching and Learning, Harvard Medical School, and the Department of Emergency Medicine, Brigham and Women’s Hospital (SEF), Boston, MA; the Department of Emergency Medicine, Harbor-UCLA Medical Center, Los Angeles Biomedical Research Institute at Harbor-UCLA (WCC), Torrance, CA, and University of California, Los Angeles–David Geffen School of Medicine (WCC), Los Angeles, CA; the Department of Emergency Medicine, Wayne State University (GJK), Detroit, MI; the Department of Emergency Medicine, Beth Israel Deaconess Medical Center (JF), Boston, MA; the Department of Emergency Medicine, Emory University School of Medicine (PS), Atlanta, GA; and the University of California, San Francisco, Department of Emergency Medicine, San Francisco General Hospital (ML), San Francisco, CA.

Address for correspondence and reprints: Susan E. Farrell, MD, EdM; e-mail: sefarrell@partners.org.


Objectives:  The purpose of this article is to highlight medical education research studies published in 2008 that were methodologically superior and whose outcomes were pertinent to teaching and education in emergency medicine.

Methods:  Through a PubMed search of the English language literature in 2008, 30 medical education research studies were independently identified as hypothesis-testing investigations and measurements of educational interventions. Six reviewers independently rated and scored all articles based on eight anchors, four of which related to methodologic criteria. Articles were ranked according to their total rating score. A ranking agreement among the reviewers of 83% was established a priori as a minimum for highlighting articles in this review.

Results:  Five medical education research studies met the a priori criteria for inclusion and are reviewed and summarized here. Four of these employed experimental or quasi-experimental methodology. Although technology was not a component of the structured literature search employed to identify the candidate articles for this review, 14 of the articles identified, including four of the five highlighted articles, employed or studied technology as a focus of the educational research. Overall, 36% of the reviewed studies were supported by funding; three of the highlighted articles were funded studies.

Conclusions:  This review highlights quality medical education research studies published in 2008, with outcomes of relevance to teaching and education in emergency medicine. It focuses on research methodology, notes current trends in the use of technology for learning in emergency medicine, and suggests future avenues for continued rigorous study in education.

The performance of successful medical education research is a unique academic endeavor, requiring in-depth knowledge of both educational theory and traditional research methods. As defined by the Carnegie Foundation’s Ernest Boyer, medical education research embodies scholarship in teaching, focusing not on the act of teaching or the design of educational innovations, but rather on the scientific investigation and assessment of the effects of teaching and educational efforts.1 Leaders in academic medicine in the United States and abroad have sought to define the contours of educational research,2–4 and a number of commentaries have addressed how medical education research studies should be assessed, suggesting the use of metrics adapted from traditional bench and clinical research.5–7

Recent activities in emergency medicine (EM) are fostering the advancement of medical education research as an avenue for publication and promotion, including the creation of the Educational Research Task Force by the Society for Academic Emergency Medicine, and the sponsorship by the Council of Emergency Medicine Residency Directors, in conjunction with the Association of American Medical Colleges, of a mentored Medical Education Research Certificate program. These efforts promote medical education research that identifies important and testable educational hypotheses, studied with valid and reliable methods suitable to the population of interest in EM education.

The Alliance for Clinical Education has produced summary reviews of education-related research in internal and family medicine, and these models served as a basis for this article’s work.8,9 However, this article specifically focuses on and highlights medical education research studies that are methodologically superior and produce results that are pertinent to teaching and education in EM. It is hoped that readers will benefit from unbiased summaries of these studies and that they may serve as a resource and an endorsement of the performance of high-quality medical education research in our specialty.


Methods

Five contributing reviewers and authors were identified by one of the project organizers (ML) and invited to participate in the project based on their years of teaching and national committee service in undergraduate and graduate medical education, writing, and research in EM.

Potential candidate articles were independently identified by two authors (SEF and ML) through a PubMed database Boolean search using the following medical subject heading terms: emergency medicine and medical education, medical student, internship, housestaff, resident, undergraduate medical education, graduate medical education, and continuing medical education. The search was limited to English-language articles published in 2008. Sixty-four articles were identified. Abstracts were reviewed, and all commentaries, editorials, policy or consensus papers, and reviews or descriptions of curricula were eliminated in favor of medical education research studies. Medical education research studies were defined as hypothesis-testing investigations and measurements of educational interventions. Studies pertinent to teaching and learning in EM, or applicable to teaching and learning in an emergency department, were included. Based on these criteria and limitations, a total of 30 articles were independently included and agreed upon by both authors (SEF and ML).

A reviewer’s rating score sheet was adapted based on that of the Research in Medical Education symposium of the Association of American Medical Colleges and applying a number of the criteria used in the Alliance for Clinical Education study reviews.8,9 All articles were to be independently rated based on eight equally weighted anchors. To achieve the goal of identifying superior educational research, four of the eight anchors explicitly addressed research methodologic criteria: “clarity of the study question,” “applicability of the research design to the study question,” “data collection methods,” and “data analysis.” Additional rating criteria included “relevance to teaching,” “generalizability of the results,” “innovation of the study,” and “clarity of writing.” Possible rating scores for each anchor consisted of a numerical rating: 5 = excellent, 4 = good, 3 = fair, 2 = unsatisfactory, and 1 = not applicable. Based on this rating method, the maximum possible rating score for any article was 40.

The 30 articles were listed in alphabetical order by first author and, along with the definitions of the rating scores, were made available to all six authors for review. Each author independently reviewed and rated all 30 articles on each of the eight anchors, and a total rating score was calculated for each article. Using each reviewer’s total rating scores, a rank list of all 30 articles was created for each reviewer. Because the articles were rated independently of each other, more than one article could be assigned the same total rating score. All six rank lists were compared.

It was established a priori that articles to be highlighted in this review must have high total rating scores assigned by five of the six reviewers (83% agreement). It was also agreed that any article coauthored by one of the reviewers would not be reviewed by that author; for such articles, the threshold was 80% ranking agreement among the other five reviewers. Of note, this standards-based review methodology did not predetermine the number of articles that could be designated in the “highlighted” category. In theory, all articles could have been included based on the total rating scores assigned by the reviewers; however, it was decided not to make the ranking determination based on normative standards, but rather to rate each article on its own merit, independent of the other articles. The six rank lists were analyzed, and 83% agreement (five of six authors) in article rating was achieved within the top 10 ranked articles on each rank list.

Results and discussion

Five medical education research studies met the a priori criteria and are summarized here in alphabetical order, by the first author’s name. All 30 articles are listed in Data Supplement S1 (available as supporting information in the online version of this paper).

Baskin C, Seetharamu N, Mazure B, et al. Effect of a CD-ROM-based educational intervention on resident knowledge and adherence to deep venous thrombosis prophylaxis guidelines. J Hosp Med. 2008; 3:42–7.

Background:  Resident educational requirements are challenged by the necessities of patient care, greater nonteaching demands on the part of faculty, and mandated limitations to resident work hours. Computer-based self-learning may contribute to the educational needs of residents within the context of today’s postgraduate medical training system. This study sought to evaluate the effectiveness of a computer-based CD-ROM self-teaching tool on residents’ knowledge of, and adherence to, anticoagulation protocols.

Methodology:  A CD-ROM of modules designed to teach anticoagulation guidelines was developed by a consensus of faculty experts at the study institution. A total of 117 residents from 11 departments, including EM, were administered a multiple-choice test to evaluate their knowledge of the guidelines before and after the educational intervention. Patient charts were also reviewed before and after the intervention to determine the proportion of charts that indicated adherence with venous thromboembolism (VTE) prophylaxis guidelines. The intervention was repeated in a subsequent year, comparing a study cohort to a control cohort of PGY-1 interns at a comparable institution in the hospital system.

Results:  Across specialties, there was a statistically significant increase in multiple-choice test scores after viewing the CD-ROM teaching modules. Compared to a control group, the improvement in test scores for those residents who received the teaching intervention was statistically significant. Based on chart review, adherence to VTE protocols increased from 75% to 95% of applicable patient charts, and this change was maintained up to 7 months after the educational intervention.

Limitations/relevance for future educational advances:  This single-institution study was able to complete pre- and postintervention testing on only 44% of enrolled residents. It is unclear how comparably matched the subsequent intervention and control groups were in the follow-up study, and although this study does indicate a positive impact of the educational intervention, the CD-ROM modules were not compared to other modalities for teaching the material. However, the study does indicate that computer-based self-learning interventions can be effective in achieving some resident educational objectives. Given the constraints of time and EM shift work, the role of self-learning as it relates to EM training and resident learning objectives warrants further study.

Strengths of the study:  This article exemplifies the measurement of a pre- and posteducational intervention that is relevant to specific patient care outcomes. The authors were able to assess residents’ knowledge using changes on an objective measurement test. More importantly, by assessing VTE protocol compliance, a clinically relevant patient care indicator, they were able to assess a change in learner behaviors associated with the educational intervention. Examining a change in the actual observed rate of VTE protocol compliance during the study was beyond the scope of this investigation, but would be an important outcome that would additionally bridge the gap between educational research and improvements in clinical care. Finally, the authors attempted to control for the effects of training on changes in knowledge by repeating their intervention in a case–control cohort study and showed a statistically significant postintervention improvement.

Chenkin J, Lee S, Huynh T, Bandiera G. Procedures can be learned on the Web: a randomized study of ultrasound-guided vascular access teaching. Acad Emerg Med. 2008; 15:949–54.

Background:  Web-based, self-directed learning is a flexible, interactive multimedia method for achieving educational objectives within the reality of time and physical space constraints. Web-based formats for teaching factual information have been shown to be effective in promoting knowledge acquisition. This study compared the effectiveness of a Web-based tutorial to a didactic instruction session in the teaching of ultrasound-guided vascular access (UGVA) techniques.

Methodology:  An interactive Web-based tutorial designed to teach UGVA techniques was developed by the authors, pilot-tested, and revised based on expert feedback. Junior EM residents and faculty were invited to participate if they had completed basic ultrasound training, but had no training in UGVA techniques. After a precourse written examination to test their knowledge of UGVA, 21 participants were randomized to the Web-based tutorial or a 1-hour didactic session, both of which consisted of the same content material. Participants subsequently had 2 hours of independent skills practice with no instructor present. After a 2-week rest period, during which time each group had access to either the Web material or written tutorial information, participants repeated the written examination, and their skills were tested in a four-station, pilot-tested objective structured clinical examination (OSCE). The participants’ OSCE performances were scored on a checklist of critical actions and global ratings by physician examiners who were blinded to the participants’ educational intervention.

Results:  There was no significant difference in the mean postintervention OSCE scores or the written examination scores between the Web-based and didactic training groups. Both groups had similar improvements in their written examination scores. There was no statistically significant difference in the groups’ ratings of the effectiveness of the teaching format to which they were exposed.

Limitations/relevance for future educational advances:  This single-site study had fairly small enrollment, which limits its external validity. The authors did not control for the learning effects of the 2 hours of independent skills practice, compared to the learning that occurred as a result of the Web-based and didactic teaching sessions. The addition of a preintervention OSCE, in addition to the written examination, would strengthen the ability to measure a change in skills as a result of the two teaching methods. However, the study indicates that Web-based self-learning may be at least as effective as didactic sessions in transmitting some forms of knowledge. As in the Baskin study, independent educational experiences that can be adapted to the time constraints and schedules of EM training warrant further investigation.

Strengths of the study:  This article is an investigation of educational interventions based on randomization of study participants into two groups. Pre- and posttesting was used to assess changes in participants’ knowledge. The translation of the educational interventions to an improvement in learners’ behaviors was measured at OSCE skills stations and judged by examiners who were blinded to participants’ educational interventions during scoring.

Lampe CJ, Coates WC, Gill AM. Emergency medicine subinternship: does a standard clinical experience improve performance outcomes? Acad Emerg Med. 2008; 15:82–5.

Background:  All graduating medical students should have exposure to, and clinical skills with which to initially care for, a broad range of emergent illnesses. EM clerkships can provide this necessary experience. However, medical students who participate in an EM subinternship may have a varied clinical experience, depending on the number of patients for whom they care and the variety of clinical complaints to which they are exposed. This study sought to determine if a standardized clinical experience would improve students’ general EM knowledge.

Methodology:  Fourth-year medical students were assigned to the control group (CG) or treatment group (TG) on a rotating monthly basis. Students in the CG saw patients based on their own interest and faculty guidance. Students in the TG were additionally instructed to identify and see at least one patient with each of 10 common predetermined ED chief complaints. All students kept logs of their patient encounters. On the first day of the rotation all students were tested on their knowledge of the 10 common chief complaints using a written exam. All students completed a final written examination, which tested their general EM knowledge, including knowledge pertinent to the 10 chief complaints.

Results:  Thirty-seven students completed the study. Controlling for sex, specialty choice, and pretest scores, students in the TG had a 7-percentage-point greater change in their posttest scores compared to students in the CG. A greater percentage of students in the TG saw more of the prescribed clinical cases.

Limitations/relevance for future educational advances:  This single-site study had a small enrollment. Concomitant clinical experiences to which students may have been exposed were not controlled for. However, this study does indicate that a semistructured approach to students’ clinical activities in EM may improve their overall general knowledge of emergency medical care. The impact of such an experience on an individual student’s ability to manage actual urgent and emergent clinical problems warrants investigation.

Strengths of the study:  This article illustrates the effect of a structured clinical education intervention on knowledge acquisition. The authors controlled for a number of factors that may have impacted changes in students’ posteducation exam scores.

Wenk M, Waurick R, Schotes D, et al. Simulation-based medical education is no better than problem-based discussions and induces misjudgment in self-assessment. Adv Health Sci Educ Theory Pract. 2009; 14:159–71.

Background:  The use of simulation-based teaching has grown exponentially in medical schools and in EM residencies in the past 5 years. Teaching through simulation provides a hands-on and safe environment for learning. However, the educational objectives for which simulation is best suited are still being defined and simulation has not definitively been shown to be superior to other teaching modalities. This German study compared simulation-based teaching to problem-based discussion in the acquisition of knowledge and the skills of rapid sequence intubation (RSI).

Methodology:  After three teaching sessions on a high-fidelity simulator, 32 fourth-year medical students in this prospective study were randomized to either a simulation-based teaching (SBT) or a problem-based discussion (PBD) group. Both groups learned about RSI using a case-based method. Pre- and postintervention measures of students’ confidence and knowledge were assessed using a five-point Likert scale and a multiple-choice exam, respectively. Postintervention, all students were tested on their RSI knowledge and videotaped performing the learned skills in the context of cases managed in a high-fidelity simulator. Students’ management skills were independently scored on a checklist by two reviewers who were blinded to the students’ educational interventions.

Results:  Postintervention multiple-choice exam and simulation case scores did not differ between the SBT and PBD groups. The authors noted that SBT-trained students rated their confidence higher than did PBD-trained students, although their actual skills were no better than those of the PBD-trained students.

Limitations/relevance for future educational advances:  Because this single-site study had a small enrollment, its external validity is limited. It does, however, use sound methodology to suggest that low-fidelity (paper-based) simulation experiences may be as effective as high-fidelity, mannequin-based simulation training for some of the knowledge content and skills acquisition that are central to EM training. Future educational research should define how various methods of training can best serve the learning needs of EM trainees.

Strengths of the study:  This article is an example of educational research using randomization of study subjects. The authors designed the study to assess not only learner confidence but, more importantly, learner competency, using two objective, external measures: a multiple-choice test and a standardized checklist scored by two independent, blinded reviewers. All students were familiarized with the simulation equipment before randomization into the two study groups, thus addressing the potentially confounding factor of equipment familiarity during postintervention skills testing.

Youngblood P, Harter PM, Srivastava S, Moffett S, Heinrichs WL, Dev P. Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simul Healthc. 2008; 3:146–53.

Background:  Simulated team training in crisis scenarios, such as trauma resuscitations, is relevant to both undergraduate and graduate medical education. Because high-fidelity simulators are expensive and resource-intensive, an alternative, lower-cost, computer-based, interactive virtual emergency department (VED) was developed, and its effect on learners’ trauma team work abilities was studied. This novel study compared virtual simulation with high-fidelity patient simulation (HFPS) in teaching trauma team leadership and management skills.

Methodology:  Thirty recent medical graduates and senior medical students received online instructional materials related to trauma team leadership and trauma care. They were subsequently randomized to training in either the VED or HFPS group, in which they completed four learning cases as a member of a trauma team. In each study arm, trainees’ performance in trauma resuscitation was assessed by three independent evaluators using a standardized behaviorally anchored scoresheet before and after the educational intervention.

Results:  Trainees in both study groups demonstrated a statistically significant improvement in their trauma performance scores from before to after the learning experiences. There was no statistically significant difference between the trainees’ postintervention scores when comparing the two teaching modalities.

Limitations/relevance for future educational advances:  The small sample size and the technologic resources used in this pilot study limit the generalizability of the results. However, the implication that computer-based virtual simulation and traditional high-fidelity simulation teaching may be similar in effectiveness in educating trainees in teamwork in trauma resuscitation warrants further research. Future study should investigate how well the outcomes of teaching methods translate to trainees’ patient care behaviors in the clinical setting.

Strengths of the study:  This article is another example of a comparative study of learners’ behaviors after being randomized to two educational interventions. Learners’ trauma skills were assessed by three independent evaluators using a performance assessment instrument with a detailed scoring rubric. Interrater reliability was measured and was acceptable (κ = 0.71). This well-written article highlights an innovative alternative to high-fidelity simulation in team training for high-risk scenarios.

Trends in medical education research

Among the 30 studies reviewed for this article, there was a striking trend toward the use of technology in EM-related educational research. Although technology was not a component of the structured literature search employed to identify the candidate articles for this review, 14 of the articles identified, including four of the five highlighted articles of 2008, employed or studied technology as a focus of the educational research. Nine articles focused on simulation technology in education, while five additional articles relied heavily on other forms of technology as a learning tool (see Data Supplement S1). Researchers employed or investigated simulation in many ways.

One article reported on the growth of simulation-based training in EM training programs (Okuda et al.10). Four studies indicated that learner attitudes toward this modality are favorable (Grant et al.,11 Hicks et al.,12 Wang et al.,13 and Wenk et al.14). Three investigated the efficacy of a specific educational intervention conducted with simulation as the learning modality (Binstadt et al.,15 Brett-Fleegler et al.,16 and Schlicher and Ten Eyck17). Six articles focused on technology as a teaching tool. Three studied an educational intervention on a CD-ROM (Baskin et al.18) or on the World Wide Web (Chenkin et al.,19 and Ricks et al.20). Three studies relied on virtual reality as the basis of their educational intervention, in which teamwork training was a high-priority learning outcome (Creutzfeldt et al.,21 Hedrick and Young,22 and Youngblood et al.23).

Twenty-two articles focused on residents as learners, while nine included medical students as subjects (Data Supplement S1). An analysis of the trends in study methodology slightly favored observational studies (13/30; 43%) over the more rigorous experimental or quasi-experimental design (8/30; 27%). However, of the five articles presented here, four were in the experimental or quasi-experimental design category. Other than the prevalence of articles related to technology, there were no other notable topical trends. Table 1 summarizes the characteristics of all 30 articles.

Table 1.
Topic Trends for the Highlighted Articles of 2008*

Topic                                                     All articles (n = 30)   Highlighted articles (n = 5)
Learner group
  Medical students                                                  9                          3
Study methodology
  Survey (self-assessment, attitudes)                               9                          0
  Experimental or quasi-experimental                                8                          4
  Evaluation of learner                                             5                          1
  Medical (DVT, geriatrics)                                         2                          1
  Practice management (communication, errors,
    malpractice, overcrowding, rounds)                              5                          0
  Technology (other than simulation)                                6                          3

CPR = cardiopulmonary resuscitation; DVT = deep venous thrombosis.
*Numbers do not equal 100% as categorizations were not mutually exclusive.

Overall, of the 30 studies reviewed, only 36% noted some form of funding support for the research. Three of the highlighted articles in this review reported external funding. Of note, 83% of the Canadian educational studies were funded projects.


Limitations

In the authors’ attempt to standardize the article review process, a rating scale was created in which a rating of “not applicable” was equal to “1,” while an “unsatisfactory” rating was equal to “2.” It was recognized during the review that this created the possibility that an unsatisfactory article could achieve a higher total rating score than an article whose content was considered “not applicable.” Because of this possible inconsistency, each reviewer’s independent ratings were recalculated, excluding any anchor rated “not applicable.” The recalculations did not change the top-ranked or highlighted articles, as none of them had received an “unsatisfactory” rating on any anchor. Future iterations of this project will use an adjusted rating scale to avoid the potential for this anomalous result.

It is sobering that for this review, only 30 articles published in 2008 met the criteria of hypothesis-testing studies of educational interventions. Although the initial PubMed search was intended to be extensive, it is possible that the inclusion criteria may have been too narrow, missing some articles. However, it is equally or perhaps more likely that high-quality medical education research constitutes a small proportion of published research. This may in part be due to a lack of funded support and resources for the completion of these academic endeavors. Only 36% of the studies reviewed for this article were supported by funding. It has been noted that current levels of funding are inadequate to support the need for broad, rigorous testing of the usefulness of educational innovations,22 despite the fact that such research requires resources that are common to all superior investigational work: advanced training in the performance of medical education research, funding, protected time, and recognition.23


Conclusions

This review highlights quality medical education research studies published in 2008, with outcomes of relevance to teaching and education in EM. It sheds light on the strengths, weaknesses, and unique challenges of performing and publishing EM education research, while also noting current trends in the use of technology for teaching and learning and future avenues for continued research.

The five highlighted articles met rigorous scoring criteria that emphasized methodology and can serve as model resources for EM education researchers. As previously noted, many educational studies are performed in the absence of funding, the lack of which negatively affects study feasibility and design considerations.24 This fact may be reflected here in the small number of 2008 publications that met inclusion criteria for this review. Only 36% of the articles were funded medical education research. Yet, of the five articles meeting reviewers’ criteria as superior studies, three were funded medical education research. Among these studies, methodologic flaws still exist. To develop EM education research at an outstanding level, the challenge to our specialty, as to all specialties, is to provide faculty development research resources, departmental support and publicity, and extramural funding to EM education researchers.25

In the past 30 years, EM has gained respect for the quality and quantity of both its basic science and its clinical research. High-quality, rigorous educational research, a clear and appropriate challenge to the academic community, is feasible, and its results should be similarly disseminated to EM educators. Its outcomes help educators achieve their goal of being the most effective teachers, employing teaching methods grounded in valid and reliable evidence.