Keywords:

  • undergraduate medical education;
  • graduate medical education;
  • continuing medical education;
  • education research;
  • emergency medicine

Abstract

Objectives:  The objective was to critically appraise and highlight methodologically superior medical education research specific to emergency medicine (EM) published in 2009.

Methods:  A search of the English language literature in 2009 querying Ovid MEDLINE In-Process & Other Non-Indexed Citations, Ovid MEDLINE 1950 to Present, Web of Science, Education Resources Information Center (ERIC), and PsycINFO identified 36 EM studies that were hypothesis-testing or observational investigations of educational interventions. Six reviewers independently ranked all publications on 10 criteria, including four related to methodology, chosen a priori to standardize evaluation across reviewers. This was a refinement of the methods used to appraise the medical education research published in 2008.

Results:  Seven studies met the standards as determined by the averaged rankings and are highlighted and summarized here. This year, 16 of 36 (44%) identified studies had funding, compared to 11 of 30 (36%) identified last year; five of seven (71%) highlighted publications were funded in 2009 compared to three of five (60%) highlighted in 2008. Use of technology in medical education was reported in 14 identified and four highlighted publications, with simulation being the most common technology studied. Five of the seven (71%) featured publications used a quasi-experimental or experimental design, one was observational, and one was qualitative. Practice management topics, including patient safety, efficiency, and revenue generation, were examined in seven reviewed studies.

Conclusions:  Thirty-six medical education publications published in 2009 focusing on EM were identified. This critical appraisal reviews and highlights seven studies that met a priori quality indicators. Current trends are noted.

ACADEMIC EMERGENCY MEDICINE 2010; 17:S16–S25 © 2010 by the Society for Academic Emergency Medicine

Medical education research that uses sound research methods and is focused on emergency medicine (EM) topics can serve as a resource to further the expertise of EM educators and researchers. An annual review of this literature is necessary because of the rapid changes taking place in the field of medical education. This research exposes educators to new methodologies, theories, and techniques that can be used to improve teaching, provide a foundation for future medical education research by other scholars, and advance the field of medical education as a discipline. The execution of medical education research requires in-depth knowledge of education theory, research methodology, and an understanding of current educational needs and opportunities. Medical education research, which focuses on the scientific investigation and assessment of the effects of teaching and educational efforts, can often provide the explanation as to why success or failure occurs in a particular educational situation.1

Medical education scholars have suggested the use of methodologies and metrics adapted from traditional bench and clinical research to perform and assess medical education research.2–5 The Research in Medical Education Symposium of the Association of American Medical Colleges (AAMC) developed criteria for evaluating the quality of educational research submitted for publication and presentation at the national AAMC meeting. In 2008 we used these criteria to highlight medical education publications focused on EM that used sound research methods. We also assessed trends in EM education research methods.2

This article reviews and highlights medical education research studies published in 2009 that are pertinent to teaching and education in EM and that are methodologically superior. This article is intended to serve as an unbiased summary of excellent research. It is hoped that educators and researchers in EM will find this a valuable resource for their own efforts.

Methods

Article Identification

A medical librarian assisted with performing the initial literature search in the medical and social sciences literature domains and supplied medical subject heading (MeSH) and keyword terms. The databases Ovid MEDLINE In-Process & Other Non-Indexed Citations and Ovid MEDLINE 1950 to Present were queried through a Boolean search strategy using the following MeSH terms: emergency medicine and medical education, medical student, internship, housestaff, resident, undergraduate medical education, graduate medical education, and continuing medical education. Keyword variants for the MeSH terms were included in the search for comprehensiveness. Boolean searches of other databases, including Web of Science (emergency medicine and education), Education Resources Information Center (ERIC) (emergency medicine), and PsycINFO (emergency medicine and education), were performed using keyword searching and, where possible, the databases' controlled vocabularies. Publications were limited to English language articles published in 2009.
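
The exact database syntax is not reproduced in the article, but the Boolean structure of the strategy can be sketched. A minimal sketch, assuming simple quoted-phrase queries; the term lists are the ones named above, and the helper function is ours:

```python
# Illustrative reconstruction of the Boolean search strategy described above.
# The exact Ovid syntax was not published; the terms below are the MeSH terms
# and keyword variants named in the text, combined as the text describes.

MESH_EDUCATION_TERMS = [
    "medical education", "medical student", "internship", "housestaff",
    "resident", "undergraduate medical education",
    "graduate medical education", "continuing medical education",
]

def build_query(anchor: str, terms: list[str]) -> str:
    """Combine an anchor concept with education terms: anchor AND (t1 OR t2 ...)."""
    ored = " OR ".join(f'"{t}"' for t in terms)
    return f'"{anchor}" AND ({ored})'

if __name__ == "__main__":
    # MEDLINE-style query (MeSH terms plus keyword variants)
    print(build_query("emergency medicine", MESH_EDUCATION_TERMS))
    # Web of Science and PsycINFO used the broader pair: emergency medicine AND education
    print(build_query("emergency medicine", ["education"]))
    # ERIC was searched on the single concept: emergency medicine
    print('"emergency medicine"')
```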

Inclusion and Exclusion Criteria

Publications on the education of medical students, residents, and academic and nonacademic attending physicians were included. Medical education studies were defined as hypothesis-testing investigations and measurements of educational interventions using either quantitative or qualitative methods. Publications were excluded if they were opinion, comments, literature reviews, descriptive, or reports on education of prehospital personnel or if the study could not be generalized to EM training outside of the country in which it was performed.

Data Collection and Analysis

Three authors (SF, GK, ML) independently screened abstracts of all retrieved publications. Retrieved publications were maintained in a database. All differences in opinion were resolved by discussion. Publications that met inclusion criteria were posted in a shared folder online for all six authors to independently score.

Scoring

In 2008, in preparation for writing a review of medical education research pertaining to EM, the authors of "Highlights in emergency medicine medical education research: 2008"2 held an a priori discussion to determine categories for scores in reviewing potential candidate publications. There were four categories that addressed research methods, specifically "clarity of the study question," "applicability of the research design to the study question," "data collection methods," and "data analysis." Additional rating criteria included "relevance to teaching," "generalizability of the results," "innovation of the study," and "clarity of writing."2 Possible rating scores ranged from 1 to 5. In preparation for identifying superior educational publications from 2009, the categories and scores used in 2008 were reevaluated. The categories for methodology were modified to "study design," "implementation of study design," "data collection," and "data analysis." Additional categories were "introduction," "discussion," "limitations," "innovation of project," "relevance of project," and "clarity of writing." Each of the categories was scored based on predefined criteria in an effort to make scoring as objective as possible (see Table 1). Assessment of qualitative research methodology was based on published recommendations for strength and appropriateness of methods and differed from the review of quantitative research only in how strength of methodology was determined.6–9 Scoring for "introduction," "discussion," "limitations," "innovation of project," "relevance of project," and "clarity of writing" was done in the same manner for quantitative and qualitative research (see Table 2).

Table 1. Emergency Medicine Quantitative Medical Education Research Scoring Sheet

Introduction (maximum domain score 3) — one point for each:
   Description of background literature (1)
   Clearly frames the problem (1)
   Clear objective/hypothesis (1)

Study design (maximum 2) — pick the most appropriate score:
   Not appropriate for hypothesis (0)
   Appropriate design, but not best method (1)
   Excellent design for question asked (2)

Implementation of study design (maximum 4) — pick the most appropriate score:
   No pretest or posttest (0)
   Posttest only (1)
   Pretest and posttest (2)
   Experimental and control groups with nonrandom assignment (3)
   Experimental and control groups with random assignment (4)

Data collection (maximum 4) — add institutions + response rate:
   1. Institutions — pick the most appropriate score:
      Single institution (0)
      Two institutions (1)
      More than two institutions (2)
   2. Response rate — pick the most appropriate score:
      Response rate < 50% or not reported (0)
      Response rate 50%–74% (1)
      Response rate ≥ 75% (2)

Data analysis (maximum 3) — add appropriateness + sophistication:
   1. Appropriateness of analysis:
      Inappropriate for study design or type of data (0)
      Appropriate for study design and type of data (1)
   2. Sophistication of analysis:
      Descriptive analysis only (1)
      Beyond descriptive analysis (2)

Discussion (maximum 3) — one point for each:
   Data support the conclusion (1)
   Conclusion clearly addresses hypothesis/objective (1)
   Conclusions placed in context of the literature (1)

Limitations (maximum 2) — pick the most appropriate score:
   Limitations not identified accurately (0)
   Some limitations identified (1)
   Limitations well addressed (2)

Innovation of project (maximum 2) — pick the most appropriate score:
   Previously described methods (0)
   New use for a known assessment (1)
   New assessment methodology (2)

Relevance of project (maximum 2) — pick the most appropriate score:
   Impractical for most programs (0)
   Relevant to some programs (1)
   Highly generalizable (2)

Clarity of writing (maximum 3) — pick the most appropriate score:
   Unsatisfactory (0)
   Fair (1)
   Good (2)
   Excellent (3)

Total maximum score: 28
Table 2. Emergency Medicine Qualitative Medical Education Research Scoring Sheet

Introduction (maximum domain score 3) — one point for each:
   Description of background literature (1)
   Clearly frames the problem (1)
   Clear objective/hypothesis (1)

Study design (maximum 2) — pick the most appropriate score:
   Not appropriate for hypothesis (0)
   Appropriate design, but not best method (1)
   Excellent design for question asked (2)

Implementation of study design (maximum 4) — add method + subjects:
   1. Method — pick the most appropriate score:
      Structured interview (0)
      Direct observation (1)
      Structured interview and direct observation (2)
   2. Subjects — pick the most appropriate score:
      Appropriate subjects, one group (1)
      Appropriate subjects, multiple groups (2)

Data collection (maximum 4) — add method of collection + data sources:
   1. Method for collecting and recording data:
      Written records of observations/interviews (1)
      Written records plus audiotaping of interviews (2)
   2. Data sources — pick the most appropriate score:
      Data from multiple sources (i.e., interviews, historical documents, videos of interviews) (1)
      Data collected until saturation reached and verified with subjects for accuracy (2)

Data analysis (maximum 3) — add appropriateness + sophistication:
   1. Appropriateness of analysis:
      Data coded (1)
      Data coded and then verified with experts or subjects (2)
   2. Sophistication of analysis:
      Descriptive analysis only (0)
      Beyond descriptive analysis (1)

Discussion (maximum 3) — one point for each:
   Data support the conclusion (1)
   Conclusion clearly addresses hypothesis/objective (1)
   Conclusions placed in context of the literature (1)

Limitations (maximum 2) — pick the most appropriate score:
   Limitations not identified accurately (0)
   Some limitations identified (1)
   Limitations well addressed (2)

Innovation of project (maximum 2) — pick the most appropriate score:
   Previously described methods (0)
   New use for a known assessment (1)
   New assessment methodology (2)

Relevance of project (maximum 2) — pick the most appropriate score:
   Impractical for most programs (0)
   Relevant to some programs (1)
   Highly generalizable (2)

Clarity of writing (maximum 3) — pick the most appropriate score:
   Unsatisfactory (0)
   Fair (1)
   Good (2)
   Excellent (3)

Total maximum score: 28
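
For readers who wish to apply the rubric, Table 1 reduces naturally to a small data structure and a scoring function. A minimal sketch, assuming the quantitative rubric; the variable and function names are ours, not the authors':

```python
# Minimal encoding of the Table 1 (quantitative) rubric, with a scorer that
# checks each domain score against its maximum. The structure mirrors the table.

QUANTITATIVE_RUBRIC_MAX = {
    "introduction": 3,        # 1 point each: background, framing, objective
    "study_design": 2,        # 0-2, pick most appropriate
    "implementation": 4,      # 0 (no pre/posttest) up to 4 (randomized controls)
    "data_collection": 4,     # institutions (0-2) + response rate (0-2)
    "data_analysis": 3,       # appropriateness (0-1) + sophistication (1-2)
    "discussion": 3,          # 1 point each: support, addresses objective, context
    "limitations": 2,
    "innovation": 2,
    "relevance": 2,
    "clarity_of_writing": 3,
}  # total maximum = 28

def total_score(domain_scores: dict[str, int]) -> int:
    """Sum domain scores after validating each against the rubric maximum."""
    for domain, score in domain_scores.items():
        maximum = QUANTITATIVE_RUBRIC_MAX[domain]
        if not 0 <= score <= maximum:
            raise ValueError(f"{domain}: {score} outside 0..{maximum}")
    return sum(domain_scores.values())

assert sum(QUANTITATIVE_RUBRIC_MAX.values()) == 28
```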

Reviewers were excluded from scoring their own publications. All reviewers read the first five publications selected for review in alphabetical order to test the scoring rubric for clarity and consistency in scoring. After reading the first five publications, each reviewer varied the order in which the remaining publications were read in an attempt to prevent bias resulting from reviewer fatigue. Each reviewer independently reviewed and rated the remaining publications, and a total rating score was calculated for each article.

All rating scores were entered into Microsoft Excel 2007 (Microsoft Corp., Redmond, WA). Using each reviewer's total rating score for each article, a rank list of all publications was created for each reviewer. The rankings were then averaged to prevent overvaluing of any one reviewer's scoring. All publications with a mean rank of less than 10 were included in the final analysis. This was a refinement of the criteria used to appraise medical education publications published in 2008.2
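
The rank-then-average procedure is compact enough to sketch. The scores below are invented, and pandas is our choice of tool (the authors used Excel):

```python
# Sketch of the rank-then-average procedure described above, assuming a
# reviewers x articles matrix of total rating scores (0-28 per the rubric).

import pandas as pd

# rows = articles, columns = reviewers; values = total rating scores
scores = pd.DataFrame(
    {"reviewer_1": [27, 20, 24], "reviewer_2": [26, 22, 21]},
    index=["article_A", "article_B", "article_C"],
)

# Within each reviewer's column, rank articles so the highest score gets rank 1.
ranks = scores.rank(ascending=False, axis=0)

# Average each article's ranks across reviewers so no single reviewer dominates.
mean_rank = ranks.mean(axis=1)

# Articles with a mean rank below 10 were included in the final analysis
# (trivially all three in this toy example).
highlighted = mean_rank[mean_rank < 10].sort_values()
print(highlighted)
```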

Results

A total of 250 papers satisfying the search criteria were retrieved from MEDLINE (209 publications), Web of Science (40 additional publications), ERIC (one), and PsycINFO (none). Seven papers that met a priori criteria and had a mean rank < 10 are highlighted for review. They are presented in alphabetical order by surname of the first author.

Review of Publications

Adler MD, Vozenilek JA, Trainor JL, Eppich WJ, Wang EE, Beaumont JL, Aitchison PR, Erickson T, Edison M, McGaghie WC. Development and evaluation of a simulation-based pediatric emergency medicine curriculum. Acad Med. 2009; 84:935–41.

Background: The infrequency of severe childhood illness limits opportunities for EM providers to learn from real-world experience. High-fidelity simulation offers an evidence-based educational approach to develop and practice infrequently used but life-saving clinical skills. This study reports reliability data derived from checklist rating instruments developed to assess residents’ pediatric resuscitation skills.

Methodology: This was a two-phase, randomized trial with a wait-list control condition. The authors developed and tested the reliability and validity of a checklist rating instrument and, using that instrument, assessed pediatric resuscitation skills after a simulation-based curriculum. The authors assessed interrater agreement using intraclass correlation and compared group performance using mixed-model analysis of variance.

Results: Sixty-nine of 81 (85%) residents completed both assessments and their scores were analyzed. Training year was significantly associated with better performance. There was limited effect from the instructional intervention based on final learner evaluations, with statistically significant but modest improvement in second-session scores for two of three evaluation cases. The authors attributed small improvements in knowledge to 1) reduction of frequency and intensity of simulation-based instruction due to participants’ scheduling conflicts and 2) learners’ inability to transfer learning to new situations when the test and teaching cases were similar but not identical. Intraclass correlation surpassed 0.78.

Strengths of the study: This was an elegant randomized multicenter study. It produced a valid, reproducible pediatric EM instructional and evaluation method, although the educational gains were modest. The authors suggest that the study provides evidence of the pitfalls of insufficient training when scheduling constraints force fewer instructional sessions or compress intense instruction into a limited time frame.

Relevance for future educational advances: This study can serve as a model for producing valid and reliable performance assessment tools. It also demonstrates a method for constructing case content for high-fidelity simulation training. These methods are extremely resource-intensive and may require programs to pool or cooperatively share resources for instruction and evaluation. It is sobering to learn of the modest educational gains achieved for the time and effort expended.
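
The interrater-agreement analysis named in this study's methodology can be sketched briefly. The library (pingouin) and the checklist totals below are our illustrative choices; the paper does not specify its software:

```python
# A minimal sketch of interrater agreement via intraclass correlation, using
# long-format checklist scores. All data here are invented for illustration.

import pandas as pd
import pingouin as pg

residents = [f"r{i}" for i in range(1, 7)]
scores_a = [18, 24, 12, 20, 15, 22]   # hypothetical checklist totals, rater A
scores_b = [17, 25, 14, 19, 16, 21]   # hypothetical checklist totals, rater B

ratings = pd.DataFrame({
    "resident": residents * 2,
    "rater": ["a"] * 6 + ["b"] * 6,
    "score": scores_a + scores_b,
})

# Intraclass correlation across raters; the study reported ICC above 0.78.
icc = pg.intraclass_corr(data=ratings, targets="resident",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```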

Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009; 169:1420–3.

Background: Simulation-based education may improve trainee skills in central venous catheter (CVC) insertion. This study sought to determine the effect of simulation-based instruction in CVC insertion on the incidence of catheter-related bloodstream infections (CRBSI) in a medical intensive care unit (MICU).

Methodology: Before rotating in two adult MICUs, 92 second- and third-year internal medicine and EM residents underwent simulation-based instruction in CVC insertion. Instruction included a pretest simulation scored with a 27-item checklist; 5 hours of training combining didactic, ultrasound-guided, and simulation practice; and a posttest. Rates of CRBSI per 1,000 catheter-days were measured for 16 months before and after the implementation of the educational intervention, both in the MICUs and in a surgical ICU (SICU) where simulation-based training was not used to educate residents.

Results: Over the 32-month study period, the rate of CRBSIs in the MICUs decreased from 3.2 to 0.5 per 1,000 catheter-days (p = 0.001), equivalent to an 84.5% reduction in CRBSIs after the administration of simulation-based CVC insertion instruction. The corresponding rates in the SICU were 4.86 and 5.26, respectively.

Strengths of the study: Although a single-institution, observational, cohort design, the study indicates that the implementation of this educational intervention for residents was associated with a significant reduction in CRBSIs in the MICUs where trained residents rotated. To determine if changes in patient mix may have contributed to the change in infection rates, the authors compared Charlson comorbidity scores for MICU patients before and after the educational intervention: despite an increase in scores, indicating a sicker patient population, infection rates decreased after simulation-based CVC insertion instruction was implemented.

Relevance for future educational advances: Simulation-based training can be resource-intensive. This study shows that the focused use of simulation to support clinically relevant education can be associated with a positive effect on measurable patient care outcomes. This study makes the important leap from an educational intervention to its effect on patient outcomes. While many educational research projects measure changes in knowledge, behavior, or performance, improving patient outcomes is an important criterion standard for the proof of concept of education research in clinical medicine.
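
The rate arithmetic in this study is easy to verify. A minimal check; the raw counts below are hypothetical, chosen only to reproduce the published rates:

```python
# Worked check of the CRBSI rate figures quoted above.

def rate_per_1000_catheter_days(infections: float, catheter_days: float) -> float:
    """Infections per 1,000 catheter-days."""
    return infections / catheter_days * 1000.0

# Hypothetical raw counts that would yield roughly the published MICU rates:
print(rate_per_1000_catheter_days(16, 5000))   # -> 3.2 (preintervention)
print(rate_per_1000_catheter_days(3, 6000))    # -> 0.5 (postintervention)

# Relative reduction from the published rates (3.2 -> 0.5 per 1,000 catheter-days):
print(f"{(3.2 - 0.5) / 3.2:.1%}")  # 84.4%; the reported 84.5% reflects unrounded rates
```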

Bernstein SL, Boudreaux ED, Cabral L, Cydulka RK, Schwegman D, Larkin GL, Adams AL, McCullough LB, Rhodes KV. Efficacy of a brief intervention to improve emergency physicians’ smoking cessation counseling skills, knowledge, and attitudes. Subst Abuse. 2009; 30:158–81.

Background: Tobacco use and tobacco-related illnesses are common among emergency department (ED) patients. Studies have found that adult smokers in the ED are interested in cessation and in receiving cessation interventions while in the ED. Tobacco cessation counseling and tobacco-related morbidity are not part of the core curriculum taught to U.S. emergency physicians (EPs). This study examined the effect of a brief educational intervention on EPs' knowledge, attitudes, and behaviors regarding screening ED patients for tobacco use, delivering a brief intervention, and referring them to treatment.

Methodology: EPs received a 1-hour lecture (including role-play) on the health effects of smoking and strategies to counsel patients. After the lecture, cards promoting a national smokers’ quit line were placed in EDs, to be distributed by providers. Providers completed pre-/postintervention questionnaires. Patients were interviewed pre-/postintervention to assess provider behavior.

Results: Three hundred thirty-seven providers and 1,168 patients (463 pre- and 683 postintervention) were enrolled, with a 61% response rate. Postintervention, providers showed statistically significant improvement in knowledge of Public Health Service guidelines and in attitudes (provider survey), were more likely to consider tobacco counseling part of their role, and felt more confident in counseling. Data from the patient interviews and chart reviews showed that, postintervention, providers were more likely to ask patients about smoking, make a referral, and document smoking counseling. Postintervention, 30% of smokers were given a quit line referral card. A brief educational intervention significantly improved ED-based tobacco interventions.

Strengths of the study: This was a large multicenter study that demonstrated that a very brief educational intervention of lecture and role-play significantly changed providers’ knowledge and attitudes regarding tobacco use, counseling, and referral of patients to smoking cessation programs. Most importantly, provider behavior significantly changed based on postintervention patient interviews and chart review.

Relevance for future educational advances: This multicenter study demonstrated that brief traditional methods of instruction significantly changed provider knowledge, beliefs, and behavior. It is a model of low-cost, low-technology intervention that can be used by many emergency programs. It remains to be seen if such simple solutions can produce sustainable change, as the follow-up period was only 2 weeks.

Donoghue AJ, Durbin DR, Nadel FM, Stryjewski GR, Kost SI, Nadkarni VM. Effect of high-fidelity simulation on Pediatric Advanced Life Support training in pediatric house staff: a randomized trial. Pediatr Emerg Care. 2009; 25:139–44.

Background: High-fidelity simulation is well suited to training for critical clinical situations that are uncommon, but that require a high degree of preparedness. This study sought to compare the effects of high-fidelity simulation (SIM) and traditional mannequin (MAN) training on the cognitive performance of pediatric house staff in Pediatric Advanced Life Support (PALS).

Methodology: Fifty-one pediatric house staff were randomized to PALS training with either SIM or MAN methods. Both groups completed a pretest of cognitive performance (scored 0 to 100) in mock code scenarios using their assigned method, a didactic session reviewing PALS algorithms, and a posttest using the same assigned method. Differences between groups in pre- to posttest cognitive performance were measured.

Results: Both groups demonstrated improvement in their cognitive performance after PALS training. However, the SIM-trained group showed a significantly greater improvement in cognitive performance after training with high-fidelity simulation scenarios, compared to traditional MAN-training (score improvement SIM 11.1 vs. MAN 4.8, p = 0.007).

Strengths of the study: This multicenter study used randomization to compare the effect of high-fidelity simulation and traditional mannequin-based PALS training on pediatric residents’ cognitive performance and decision-making during pediatric mock code scenarios.

Relevance for future educational advances: Simulation-based training can be resource-intensive, and critics have argued that it may not be worth the expenditure of resources. This study indicates that training in simulation-based scenarios that better approximate reality may improve residents’ thinking and decision-making related to uncommon, but life-threatening clinical events. It begins to answer those who have called for proof that the high cost of high-fidelity simulation is justified based on improved learning.

Goldman E, Plack M, Roche C, Smith J, Turley C. Learning in a chaotic environment. J Workplace Learn. 2009; 21:555–74.

Background: Emergency medicine residents learn much of the art and skill of clinical EM while in the process of providing patient care in the ED. However, the relationship between an individual resident’s activities and motivation to learn and the contextual factors of the ED environment that promote or hinder learning are not well understood. This study sought to explore and identify the learning processes of EM residents working in a busy ED and to understand the relationships between factors that facilitate learning in this potentially chaotic workplace.

Methodology: Semistructured interviews were used to determine EM residents' thinking about how they learn while working in the ED. Interviews were transcribed, analyzed, and coded using a phenomenologic approach to understand four components of the learning process: learning catalysts, learning strategies, learning challenges, and supports. The credibility of emerging themes was supported by analysis from both physician and nonphysician investigators, peer debriefing, peer and member checks through reporting of results, and data collection sufficient to ensure saturation of themes.

Results: Analysis of the data revealed four different types of learning episodes in the ED and a number of factors that facilitate resident learning. The learning episodes that were identified included: 1) active participation in the ED environment through patient care and management; 2) short, focused, specific learning moments; 3) repetitive cycles of clinical experiences with similar patient symptoms or situations; and 4) intense clinical experiences that evoked a high level of interpersonal exchange. Factors that facilitated learning included the resident’s expectations for learning, self-directed practice, eliciting feedback, self-reflection, the behaviors of other residents and faculty (such as directed learning activities), and the contextual factors of the ED environment, such as patient volume.

Strengths of the study: This qualitative design was informed by situated (workplace) learning theory, chaos theory, and an understanding of the practice of clinical EM. Validity and reliability of the study were assured through the use of triangulation by multiple physician and nonphysician investigators, purposive sampling of residents for diverse opinions, peer debriefing, review of interview data, sufficient data collection to ensure theme saturation, and code-checking and reporting of initial findings to participants.

Relevance for future educational advances: This qualitative study identifies four types of learning episodes, and the factors that facilitate them, that contribute to EM residents' learning in the ED. It provides a better understanding of how residents learn while providing patient care and can heighten EM educators' awareness of these learning episodes so that their teaching potential can be maximized during residents' daily work in an academic ED.

Larsen DP, Butler AC, Roediger HL 3rd. Repeated testing improves long-term retention relative to repeated study: a randomized controlled trial. Med Educ. 2009; 43:1174–81.

Background: The retention and retrieval of factual medical knowledge are important both for urgent clinical decision-making and for success on summative tests. This study sought to evaluate whether repeated testing of material taught in a pediatric and EM conference would enhance residents' long-term retention of the material when tested 6 months later.

Methodology: Forty pediatric and EM residents participated in an interactive teaching session on status epilepticus (SE) and myasthenia gravis (MG). Residents were then randomized into two groups: one repeatedly tested on SE while studying MG, and the other repeatedly tested on MG while studying SE. Both groups underwent testing and received feedback on the material twice at 6-week intervals and again at 6 months.

Results: Although recall dropped off in both groups at 2 weeks, at 6 months the group that had undergone repeated testing on assigned material had improved retention compared to the group that had only studied the same material. Across both groups, repeated testing produced final test scores averaging 13 percentage points higher than those produced by repeated study (39% vs. 26%, p < 0.001).

Strengths of the study: Although conducted at a single institution, this study used randomization to compare the effect of two methods designed to enhance residents’ cognitive knowledge and recall at 6 months after an initial teaching session.

Relevance for future educational advances: All practicing physicians are faced with the requirement to pass tests of knowledge as part of in-training, qualifying, or continuing certification examinations. This study suggests that repeated long-term testing and feedback enhance recall and may provide strategies to assist residents who struggle to learn, maintain, and recall an ever-growing body of factual medical knowledge.

Ten Eyck RP, Tews M, Ballester JM. Improved medical student satisfaction and test performance with a simulation-based emergency medicine curriculum: a randomized controlled trial. Ann Emerg Med. 2009; 54:684–91.

Background: Although many medical schools are introducing simulation into their curricula, conclusive quantitative assessment of the effectiveness of simulation compared to traditional modes of instruction is lacking. This study sought to determine the effect of a simulation-based curriculum on fourth-year medical student test performance and satisfaction during an EM clerkship.

Methodology: This was a randomized controlled study using a crossover design for curriculum format and an anonymous end-of-rotation satisfaction survey. Students were randomized into two groups, one starting the rotation with simulation and the other with group discussion; at midrotation, each group crossed over to the opposite format. All students completed the same multiple-choice end-of-rotation examination, and the authors assessed paired samples of the number of questions missed for material taught in each format. Students rated satisfaction on a five-point Likert scale framed as attitude toward simulation compared with group discussion, with scores ranging from 5 (strong agreement with a statement) to 1 (strong disagreement).

Results: Ninety students (99%) completed the multiple-choice test. Significantly fewer questions were missed for material presented in simulation format compared with group discussion, with a mean difference per student of 0.7 (95% confidence interval [CI] = 0.3 to 1.0; p = 0.006). This corresponds to mean scores of 89.8% for simulation and 86.4% for group discussion. Eighty-eight (97%) students completed the satisfaction survey. Students rated simulation as more stressful (mean = 4.1; 95% CI = 3.9 to 4.3), but also more enjoyable (mean = 4.5; 95% CI = 4.3 to 4.6), more stimulating (mean = 4.7; 95% CI = 4.5 to 4.8), and closer to the actual clinical setting (mean = 4.6; 95% CI = 4.4 to 4.7) compared with group discussion.

Strengths of the study: This was a randomized controlled study that showed a significantly higher end-of-rotation examination score favoring simulation. Interestingly, students perceived that while simulation was more stressful, it was also more enjoyable and closer to the clinical setting.

Relevance for future educational advances: This study indicates that scores and student satisfaction are both improved with the use of simulation for instruction. Given the high cost and labor-intensive nature of this instructional modality, educators should consider whether educational gains justify use of this instructional strategy. Future studies will need to determine if the educational gains in high-fidelity simulation laboratories translate to improved performance in the clinical setting.
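
The paired analysis described in this study's methodology lends itself to a brief sketch. The summary does not name the exact statistical test; a paired t-test is one reasonable assumption, and the per-student counts below are invented for illustration:

```python
# Sketch of a paired comparison of per-student missed-question counts for
# simulation- vs. discussion-taught material (all data simulated).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
missed_discussion = rng.poisson(3.0, size=90)   # missed items, discussion material
missed_simulation = np.clip(missed_discussion - rng.poisson(0.7, size=90), 0, None)

diff = missed_discussion - missed_simulation    # positive favors simulation
t, p = stats.ttest_rel(missed_discussion, missed_simulation)
print(f"mean difference per student = {diff.mean():.1f}, p = {p:.3f}")
```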

Trends in Medical Education Research in 2009

We also observationally characterized the features of the current year's publications. Each publication was assessed for author type (EM, medical education, other medical specialty), study funding, type of journal publication, research subjects (medical students, residents, practicing physicians, other), study design (observational, survey, experimental, quasi-experimental, qualitative), and topic of research.
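
A minimal sketch of this feature coding, with illustrative field names and rows; tallying such a table yields the counts reported in Table 3:

```python
# One row per identified 2009 publication, coded on the dimensions listed above.

import pandas as pd

publications = pd.DataFrame([
    {"author_type": "EM", "funded": True, "subjects": "residents",
     "design": "quasi-experimental", "topic": "technology (simulation)"},
    {"author_type": "EM", "funded": False, "subjects": "medical students",
     "design": "survey", "topic": "practice management"},
    # ... one row for each of the 36 identified publications
])

print(publications["funded"].sum(), "funded studies")
print(publications["design"].value_counts())
print(publications["topic"].value_counts())
```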

Several interesting trends were identified in this year's 36-article database (see Table 3). One positive trend was that 16 of 36 studies (44%) received some type of funding, compared to 11 of 30 (36%) studies in 2008.2 Reed et al.10 reported a correlation between the quality of published medical education research studies and funding. It is interesting to note that five of the seven (71%) studies highlighted this year received some type of funding,11–15 which seems to corroborate the conclusion by Reed et al. that funding enhances the quality of research.10 Only 17 (57%) of last year's articles appeared in EM journals, and only three (10%) appeared in medical education journals.2

Table 3. Trends for the Reviewed Publications of 2009

                                           All Publications (n = 36)   Highlighted (n = 7)
Funding                                    16                           5
Learner group
   Medical students                        11                           1
   Residents                               26                           5
   Attending physicians                     7                           1
   Patients                                 1                           —
   Multiple subjects                        7                           —
Study methodology
   Survey                                   7                           0
   Observational                           16                           1
   Quasi-experimental/experimental         11                           5
   Qualitative                              1                           1
Topics of study
   ACGME competencies
      Assessment                            8                           —
      Professionalism                       7                           —
   Pediatrics                               5                           2
   Practice management (patient safety,
      efficiency, revenue generation)       7                           2
   Technology (simulation)                 14 (12)                      3 (3)

ACGME = Accreditation Council for Graduate Medical Education.

In contrast, 23 (64%) of this year’s articles appeared in EM journals, while five (14%) were published in medical education journals.11,15–18 The remainder appeared in journals that focused on the topics of the individual articles. Thirty-two (89%) of the 2009 studies had at least one EM author. The trend toward technology as the focus in medical education research in EM continued this year. Fourteen studies (39%) were based on technology,11,12,18–29 of which 12 (33%) used simulation.11,12,18–20,22–28 Last year, 14 of 30 (47%) articles focused on technology, including four of five (80%) of the highlighted articles.2 As in 2008, simulated topics tended to be those that rarely occurred or that were difficult to teach in daily practice. There has been a call for demonstration of cost-effectiveness of high-fidelity simulation in teaching.30 Seven studies included evaluation of instruction using high-fidelity simulation,12,19,22,25–27,31 with all showing greater learner satisfaction with simulation and equal or slightly higher scores in the simulation group, one showing significant improvement after training with high-fidelity simulation,22 and one showing significantly improved patient outcomes.31

Subjects of the research were medical students in 11 (31%) of the studies, residents in 26 (72%), faculty in seven (19%), and patients in one, while seven (19%) studies had multiple subjects. Methodology included seven surveys (14%), 16 observational (44%), 11 experimental or quasi-experimental (31%), and one qualitative. Five of the seven (71%) highlighted articles were a quasi-experimental or experimental design,11,13,15,22,27 one was observational,12 and one was qualitative.14

The most noticeable difference in the research topics was the preponderance of practice management topics in this year’s articles. Overall, there were seven articles (19%) that aimed to teach or evaluate these topics,12,17,32–36 which ranged from patient safety topics to efficiency and revenue generation. In 2008, there was only one that specifically addressed these topics.2 Other topics addressed the Accreditation Council for Graduate Medical Education (ACGME) Core Competencies, especially those that are difficult to assess. Eight of the articles focused on evaluation of learners or programs.11,16,24,26,32,35,37,38 Seven studies focused on professionalism and careers in EM.18,23,32,36,39–41 Pediatric topics accounted for five of the articles;11,19,21,22,42 many employed technology as their basis for experimentation, especially on topics related to critically ill pediatric patients who do not present frequently to the ED.

Limitations

There were alterations in the process used to identify publications, including the addition of a trained librarian and of more databases. The apparent increase in EM medical education publications relative to 2008 may therefore reflect the improved search process rather than a true increase in the number of publications.

Although this year's article search of the MEDLINE, ERIC, and PsycINFO literature sources was meant to be extensive, the article inclusion criteria may have been too narrow, missing some publications. Bias is possible whenever research is rated, and the selection and scoring of publications were not blinded, which may have introduced bias.

To minimize bias, the reviewers standardized their individual article ratings through a priori discussion of rating definitions and agreements, and no reviewer scored a paper on which he or she was an author. The reviewers who selected the papers for review (SF, GK, ML) had no research of their own under consideration. The use of rankings limited the variance inherent in individual reviewer ratings.

Conclusions

This critical appraisal of the literature provides a snapshot of exemplary EM educational research of 2009 and highlights advances in the field. The same six-person team that rated EM medical education publications of 2008 systematically rank-ordered 2009 educational research publications, based on methodologic rigor, clarity of writing, and innovation. The seven publications highlighted represent methodologically superior research conducted in 2009. Current trends in educational research are also reported.

Each of the highlighted research publications contributes to the growing field of medical education research relevant to EM while demonstrating methods to control, justify, or minimize the limitations inherent in this type of research. By highlighting the unique strengths of these high-quality publications, we hope to encourage educators to conduct methodologically sound educational research.

Comparing the literature of 2008 to 2009, the number of published educational research papers meeting our criteria increased from 30 to 36, and the number of funded studies increased from 11 to 16. It is hoped that this encouraging trend toward high-quality and funded educational research in EM will continue. Support of researchers performing medical education research focused on EM will assist academic emergency physicians in implementing innovative educational approaches, based on the most valid and effective evidence.

References
