Critical Appraisal of Emergency Medicine Educational Research: The Best Publications of 2011

The authors have no relevant financial information or potential conflicts of interest to disclose.

Address for correspondence and reprints: Jonathan Fisher, MD, MPH; e-mail: jfisher2@bidmc.harvard.edu.

Abstract

Objectives

The objective was to critically appraise and highlight medical education research studies published in 2011 that were methodologically superior and whose outcomes were pertinent to teaching and education in emergency medicine (EM).

Methods

A search of the English-language literature in 2011 querying PubMed, Scopus, Education Resources Information Center (ERIC), and PsycINFO identified EM studies that used hypothesis-testing or observational investigations of educational interventions. Five reviewers independently ranked all publications based on 10 criteria, including four related to methodology, chosen a priori to standardize evaluation across reviewers. This method was used previously to appraise the medical education research published in 2008, 2009, and 2010.

Results

Forty-eight educational research papers were identified. Comparing the literature of 2011 with that of 2008 through 2010, the number of published educational research papers meeting the criteria increased over time from 30, to 36, to 41, and now to 48. Six medical education research studies met the a priori criteria for inclusion as exemplary and are reviewed and summarized in this article. The number of funded studies remained fairly stable over the past 4 years, at 13 (2008), 16 (2009), 9 (2010), and 13 (2011). As in past years, research involving the use of technology accounted for almost half (n = 22) of the publications. Observational study designs accounted for 28 of the papers, while nine studies featured an experimental design.

Conclusions

Forty-eight EM educational studies published in 2011 and meeting the criteria were identified. This critical appraisal reviews and highlights six studies that met the a priori quality indicators. Current trends and common methodologic pitfalls in the 2011 papers are noted.


Like medicine itself, our understanding of medical education is constantly evolving. Medical education research attempts to provide an evidence-based approach to training the physicians of tomorrow. Publication of medical education research exposes educators to new educational theories, methods, and innovations that can be used to improve teaching, provide a foundation for future medical education research, and advance the field of medical education as a discipline. The execution of high-quality medical education research requires in-depth knowledge of educational theory, research methodology, and current educational needs and opportunities. Medical education research, which focuses on the scientific investigation and assessment of the effects of teaching and educational efforts, can often explain why success or failure occurs in a particular educational situation.[1] Educational research should be held to the same scientific rigor as clinical and basic science research.

Educational research in emergency medicine (EM) has benefited recently from increased attention and emphasis. Both the Society for Academic Emergency Medicine (SAEM) and the Council of Emergency Medicine Residency Directors (CORD) recently announced funding grants for educational research. SAEM, CORD, the American College of Emergency Physicians, and the American Academy of Emergency Medicine all provide opportunities to report results in their journals and to present research at their academic assemblies. From 2009 to 2011, Academic Emergency Medicine (AEM) published education-focused supplements sponsored by CORD and the Clerkship Directors in Emergency Medicine (CDEM). Recently, SAEM and AEM hosted the consensus conference “Education Research in Emergency Medicine: Opportunities, Challenges and Strategies for Success.” The goal of the conference was to bridge the gap between educators and researchers to advance the science of educational research. Much of the work product of this conference is published in the December 2012 issue of AEM.

Medical education scholars have suggested the use of methodologies and metrics adapted from traditional bench and clinical research to perform and assess medical education research.[2-6] The Research in Medical Education (RIME) Symposium of the Association of American Medical Colleges (AAMC) developed criteria for evaluating the quality of educational research submitted for publication and presentation at the national AAMC meeting. We used modified RIME criteria to scientifically appraise and rank all of the EM educational research published in the prior year and to highlight the studies that received the best scores based on a priori criteria.[2, 3, 7] We also assessed trends in EM education research methods.

The same reviewers used previously published criteria to critically analyze and rank the EM educational research from 2011. The focus of this article is to review and highlight the 2011 studies that are pertinent to teaching and education in EM and methodologically superior, and to identify trends in EM educational research over recent years. This article is intended to serve as an unbiased summary of high-quality educational research. It is hoped that educators and researchers in EM will find it a valuable resource for their own efforts. By highlighting high-quality educational research that employs stronger methodology and focuses on higher-level outcomes, we hope to raise the bar for educational research in EM to be on par with the clinical and basic science of EM.[8, 9]

Methods

Article Identification

A medical librarian (LM) performed the literature search in the medical and social sciences literature domains and supplied medical subject heading (MeSH) and keyword terms. MEDLINE was searched through PubMed using a Boolean search strategy that incorporated the following MeSH terms: emergency medicine and medical education, medical student, internship, housestaff, resident, undergraduate medical education, graduate medical education, and continuing medical education. Keyword variants for the MeSH terms were included in the search for comprehensiveness. Boolean searches of other databases, including Scopus (“medical education” and emergency), Education Resources Information Center (ERIC; emergency medicine), and PsycINFO (emergency medicine and education) were performed using keyword searching and, where possible, the databases' controlled vocabularies. Publications were limited to English-language papers published in 2011. Searches were run in December 2011, January 2012, and February 2012. In addition, the CORD/CDEM AEM education supplement was reviewed for educational research papers.
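
The authors do not reproduce their exact query strings. As a rough illustration only, the sketch below runs a comparable Boolean search against PubMed's public E-utilities API; the MeSH clauses simply mirror the headings listed above and are not the authors' actual query.

```python
# Illustrative only: a Boolean PubMed search resembling the strategy
# described above, via the NCBI E-utilities esearch endpoint.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

query = (
    '"emergency medicine"[MeSH] AND '
    '("education, medical"[MeSH] OR "students, medical"[MeSH] OR '
    '"internship and residency"[MeSH] OR '
    '"education, medical, undergraduate"[MeSH] OR '
    '"education, medical, graduate"[MeSH] OR '
    '"education, medical, continuing"[MeSH])'
)

params = {
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",        # filter on publication date
    "mindate": "2011/01/01",   # English-language 2011 papers were targeted
    "maxdate": "2011/12/31",
    "retmax": 500,
    "retmode": "json",
}

resp = requests.get(ESEARCH_URL, params=params, timeout=30)
pmids = resp.json()["esearchresult"]["idlist"]
print(f"retrieved {len(pmids)} PubMed IDs")
```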

Inclusion and Exclusion Criteria

Publications on the EM education of medical students, residents, academic and nonacademic attending physicians, and other emergency providers were included. Medical education studies were defined as hypothesis-testing investigations and measurements of educational interventions using either quantitative or qualitative methods. A publication was excluded if it was an opinion piece, a comment, a literature review, a purely descriptive report, or a report on the education of prehospital personnel, or if the study could not be generalized to EM training outside the country in which it was performed.

Data Collection and Analysis

One author (JF) screened abstracts of all retrieved publications and applied the exclusion criteria. Two authors (GK, ML) reviewed and approved the selection. Retrieved publications were maintained in an EndNote X5 (Thomson Reuters, New York City, NY) database. All differences in opinion were resolved by discussion. All of the publications that met inclusion criteria were scored independently by each of the five reviewers.

Scoring

Using the criteria developed in 2009[2] and modified in 2010 and 2011,[3, 7] the reviewers scored papers in 10 categories. This year's revision made the subcategories binary in an attempt to improve the ease and consistency of scoring. Categories for methodology were “study design,” “implementation of study design,” “data collection,” and “data analysis.” Additional categories were “introduction,” “discussion,” “limitations,” “innovation of project,” “relevance of project,” and “clarity of writing.” Each category was scored on predefined criteria to make scoring as objective as possible (Table 1). Possible scores ranged from 0 to 26.

Table 1. EM Educational Research Scoring System
Introduction (1 point for each criterion met; maximum 3)
  Description of background literature: 1
  Clearly frames the problem: 1
  Clear objective/hypothesis: 1

Measurement (give a point for each criterion met; maximum 4)
  Pretest/posttest design:
    No pretest or posttest: 0
    Posttest only: 1
    Pretest and posttest: 1
  Groups:
    Both experimental and control group: 1
    Random assignment to groups: 1

Data collection (institutions + response rate; maximum 4)
  Institutions (give a point for each criterion met):
    Single institution: 0
    At least two institutions: 1
    More than two institutions: 1
    (A survey of many institutions should receive both points.)
  Response rate (give a point for each criterion met):
    Response rate < 50% or not reported: 0
    Response rate > 50%: 1
    Response rate ≥ 75%: 1

Data analysis (appropriateness + sophistication; maximum 3)
  Appropriateness of analysis:
    Inappropriate for study design or type of data: 0
    Appropriate for study design and type of data: 1
  Sophistication of analysis:
    Descriptive analysis only: 1
    Beyond descriptive analysis: 1

Discussion (1 point for each criterion met; maximum 3)
  Data support the conclusion: 1
  Conclusion clearly addresses hypothesis/objective: 1
  Conclusions placed in context of the literature: 1

Limitations (give a point for each criterion met; maximum 2)
  Limitations not identified accurately: 0
  Some limitations identified: 1
  Limitations well addressed: 1

Innovation of project (give a point for each criterion met; maximum 2)
  Previously described methods: 0
  New use for a known assessment: 1
  New assessment methodology: 1

Relevance of project (give a point for each criterion met; maximum 2)
  Impractical to most programs: 0
  Relevant to some programs: 1
  Highly generalizable: 1

Clarity of writing (give a point for each criterion met; maximum 3)
  Unsatisfactory: 0
  Fair: 1
  Good: 1
  Excellent: 1

Total possible score: 26
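
To illustrate how the rubric yields a 0 to 26 total, the sketch below encodes Table 1 as a set of binary criteria and sums a reviewer's judgments; the field names are hypothetical, not taken from the authors' scoring sheets.

```python
# Hypothetical encoding of the Table 1 rubric. Each criterion is a
# binary judgment; a paper's total is the count of criteria met.
RUBRIC = {
    "introduction":    ["background_lit", "problem_framed", "clear_objective"],
    "measurement":     ["has_posttest", "has_pretest", "control_group",
                        "random_assignment"],
    "data_collection": ["at_least_two_institutions", "more_than_two_institutions",
                        "response_rate_over_50", "response_rate_at_least_75"],
    "data_analysis":   ["analysis_appropriate", "at_least_descriptive",
                        "beyond_descriptive"],
    "discussion":      ["data_support_conclusion", "addresses_objective",
                        "placed_in_literature"],
    "limitations":     ["some_identified", "well_addressed"],
    "innovation":      ["new_use_of_known_method", "new_methodology"],
    "relevance":       ["relevant_to_some", "highly_generalizable"],
    "clarity":         ["at_least_fair", "at_least_good", "excellent"],
}

def score_paper(judgments: dict) -> int:
    """Sum the binary criteria marked True; possible range 0-26."""
    return sum(
        1
        for criteria in RUBRIC.values()
        for criterion in criteria
        if judgments.get(criterion, False)
    )

# The rubric totals 26 points, matching the stated maximum.
assert sum(len(c) for c in RUBRIC.values()) == 26
```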

Reviewers were excluded from scoring publications where there was significant conflict of interest (own publication, own institution, or had a vested interest in the authors or work). Publications were listed alphabetically by first author and each reviewer was assigned a different place to start on the list in an attempt to prevent bias resulting from reviewer fatigue. Each reviewer independently reviewed and rated the publications, and a total rating score was calculated for each article. All rating scores were entered into Microsoft Excel 2010 (Microsoft Inc., Redmond, WA). Using each reviewer's total rating score for each article, a rank list of all publications was created for each reviewer. The rankings were then averaged to prevent overvaluing of any one reviewer's scoring. The a priori criteria for papers to be featured as exemplary were: 1) the average of all reviewers' rankings of an article placed the article's rank in the top ten and 2) all reviewers or all but one reviewer ranked the article in their individual top ten ranking.
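
The rank-and-select procedure can be summarized in a few lines. The sketch below assumes a hypothetical scores mapping from each reviewer to that reviewer's per-paper rubric totals; it restates the procedure above and is not the authors' code.

```python
# Rank papers per reviewer, average the ranks, and apply the two
# a priori criteria for featuring a paper as exemplary.
from statistics import mean

def select_exemplary(scores: dict) -> list:
    """scores[reviewer][paper] -> total rubric score (0-26)."""
    papers = list(next(iter(scores.values())))
    # Per-reviewer rank lists: rank 1 is that reviewer's highest score.
    ranks = {
        reviewer: {
            paper: rank
            for rank, paper in enumerate(
                sorted(papers, key=lambda p: -by_paper[p]), start=1)
        }
        for reviewer, by_paper in scores.items()
    }
    n_reviewers = len(ranks)
    exemplary = []
    for paper in papers:
        avg_rank = mean(r[paper] for r in ranks.values())
        top_ten_votes = sum(r[paper] <= 10 for r in ranks.values())
        # Criterion 1: average rank in the top ten.
        # Criterion 2: all reviewers, or all but one, ranked it top ten.
        if avg_rank <= 10 and top_ten_votes >= n_reviewers - 1:
            exemplary.append(paper)
    return sorted(exemplary)
```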

Results

A total of 440 papers satisfied the search criteria. Forty-eight papers[10-57] met the inclusion criteria and were scored by each of the five reviewers, with a range of mean scores from 10.25 to 22.75. Six papers that met a priori criteria and had a mean rank of ≤ 6 were considered methodologically superior and are highlighted for review.[21, 25, 32, 33, 47, 58] They are presented in alphabetical order by the surname of the first author.

Branzetti JB, Aldeen AZ, Foster AW, Courtney DM. A novel online didactic curriculum helps improve knowledge acquisition among non-emergency medicine rotating residents. Acad Emerg Med. 2011;18:53–9.

Background

Housestaff in academic emergency departments (EDs) include not only EM residents but also rotating non-EM residents. Few departments, however, offer targeted didactic curricula for the latter. The primary objective of the study was to determine the effect of an online, six-lecture video series on the medical knowledge of rotating non-EM residents.

Methods

This was a prospective, randomized study conducted on rotating non-EM residents at a single ED over 9 consecutive months. Each participant completed one of two 42-question multiple-choice tests as a pretest before randomization into either the control (no videos) or experimental (videos) group. The other 42-question examination was given as a posttest after 2 weeks. The primary outcome measure was the difference between pretest and posttest scores. Student's t-test was used to compare the mean scores, and a linear regression model examined the effect of study group, pretest type, and resident type on the change in scores.
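
Under assumed column names, the reported analysis could be reproduced along the following lines with scipy and statsmodels; this is a sketch of the stated approach, not the authors' code.

```python
# Two-sample t-test on score changes, then a linear model of change
# on study group, pretest version, and resident type.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("branzetti_scores.csv")        # hypothetical file
df["change"] = df["posttest"] - df["pretest"]   # primary outcome

videos = df.loc[df["group"] == "videos", "change"]
control = df.loc[df["group"] == "control", "change"]
t_stat, p_value = stats.ttest_ind(videos, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Categorical predictors are handled automatically by the formula API.
model = smf.ols("change ~ group + pretest_version + resident_type", data=df).fit()
print(model.summary())
```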

Results

The residents with access to the online video series (n = 29) showed a 17.3% improvement in test scores, compared to 1.6% in the control group (n = 25). Eighty-one percent of the surveyed participants felt that the online lectures enhanced their education.

Strengths of the Study

In this prospective study, the authors conducted a power analysis and achieved the target sample size of at least 22 residents per study arm. Randomization of both the participants and the tests, along with the intention-to-treat analysis, reduced bias.

Relevance for Future Educational Advances

These encouraging results suggest that asynchronous, video-based lectures can serve as an effective learning resource for trainees in the ED, which may include EM residents, medical students, nursing students, and midlevel providers. The six videos can be found at http://bitly.com/I8FQqS.

Damewood S, Jeanmonod D, Cadigan B. Comparison of a multimedia simulator to a human model for teaching FAST exam image interpretation and image acquisition. Acad Emerg Med. 2011;18:413–9.

Background

Ultrasound (US), specifically the focused assessment with sonography for trauma (FAST), is a valuable and frequently used modality in the evaluation of trauma patients in the ED. Despite an educational mandate for training in this skill, there is currently no widely accepted instructional strategy for teaching the FAST exam. This study compared the use of human models with US simulation software. Postinstruction trainee skills were measured by evaluating image interpretation and image adequacy.

Methodology

This was a prospective, blinded, controlled education study using medical students as a US-naive population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Each group was allowed unlimited practice time until they felt confident in their ability to perform the exam. Four outcome measures were then assessed: 1) correct image interpretation of 10 prerecorded FAST exams for positive or negative findings, 2) adequacy of image acquisition by performance of a four-view FAST exam on a standardized normal patient, 3) perceived confidence of image adequacy using a 10-cm analog scale, and 4) time to image acquisition. Power analysis determined that a sample size of approximately 70 students (35 per group) was needed to have 80% power to detect an absolute difference in interpretation scores between groups that was two-thirds of the within-group standard deviation.
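
The quoted power calculation is easy to verify. Assuming a two-sided α of 0.05 (not stated explicitly above), statsmodels returns roughly 36 subjects per group for an effect size of two-thirds of the within-group SD at 80% power, consistent with the approximately 35 per group (70 total) that the authors targeted.

```python
# Check the reported sample size target for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=2 / 3,        # difference of 2/3 of the within-group SD
    power=0.80,
    alpha=0.05,               # assumed; not stated in the summary above
    alternative="two-sided",
)
print(f"required n per group: {n_per_group:.1f}")  # ~36 per group
```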

Results

The final study population comprised 48 subjects who trained on a normal human model and 39 subjects who trained on a multimedia simulator. There were no differences between the human model and simulator groups in scores for image interpretation (p = 0.16), image acquisition (p = 0.955), levels of confidence (p = 0.36), or time to acquire images (p = 0.044).

Strengths of the Study

This was a well-designed randomized controlled trial using four different outcome measures. The outcomes were a combination of subjective and objective measures using multiple raters. This study demonstrated that subjects trained to perform a FAST exam using simulation software performed similarly to those trained using normal human models when evaluated for acquisition and interpretation of US images, confidence in performance of the exam, and time to perform the procedure. This suggests that proficiency gained with the software translates to proficiency when performing the exam on a patient. Simulation software is also convenient for instruction: trainees can practice on their own schedule and repeat sessions to improve proficiency. The software can be designed to demonstrate a variety of pathologies, and both normal and abnormal findings can be programmed as necessary, a situation that is difficult to replicate with human models.

Relevance for Future Educational Advances

If these findings can be replicated among practitioners with variable knowledge and experience in US and across various US procedures, such training would be convenient for trainees, provide exposure to multiple pathologic states, and allow trainees to practice until they feel confident of success on evaluation. Studies should be performed to determine whether distributed practice, compared with clustered practice, results in greater proficiency and confidence among trainees with various levels of US experience.

Humbert AJ, Besinger B, Miech EJ. Assessing clinical reasoning skills in scenarios of uncertainty: convergent validity for a script concordance test in an emergency medicine clerkship and residency. Acad Emerg Med. 2011;18:627–34.

Background

A script concordance test (SCT) is a relatively novel method for assessing clinical reasoning. Each test item starts with a short clinical vignette, followed by a single additional data point. The learner is then asked to judge the effect of this data element on the proposed diagnosis, test, or therapy on a Likert scale of –2 to +2. The objective of the study was to determine whether a 59-item SCT, content-mapped to the national fourth-year emergency medicine (EM) clerkship curriculum,[58, 59] could differentiate among fourth-year medical students (MS4), EM residents, and EM experts.
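
The study summary does not reproduce its scoring key. For orientation, SCTs are conventionally scored with the aggregate method, in which credit for each response is proportional to the fraction of expert panelists who chose it; the sketch below shows that general scheme, not this study's specific key.

```python
# Conventional aggregate SCT scoring: the modal panel response earns
# full credit; other responses earn partial credit in proportion to
# how many panelists chose them.
from collections import Counter

def sct_item_credit(panel_responses: list, examinee_response: int) -> float:
    """Credit in [0, 1] for one item on the -2..+2 Likert scale."""
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return counts.get(examinee_response, 0) / modal_count

def sct_score(panel_key: list, answers: list) -> float:
    """Percentage score across all items."""
    credits = [sct_item_credit(panel, answer)
               for panel, answer in zip(panel_key, answers)]
    return 100 * sum(credits) / len(credits)

# Example: 12 hypothetical panelists on one item.
panel = [1, 1, 1, 1, 1, 1, 0, 0, 0, 2, 2, -1]
print(sct_item_credit(panel, 1))  # 1.0 (modal response, 6 of 12)
print(sct_item_credit(panel, 0))  # 0.5 (3 votes vs. the modal 6)
```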

Methods

This single-site, observational study enrolled MS4 students rotating in a required EM clerkship over a 12-month period, volunteer EM residents, and volunteer EM physician experts, each of whom completed the SCT. The SCT results were compared with USMLE Step 2 CK scores for the medical students and with American Board of Emergency Medicine in-training examination scores for the residents.

Results

The SCT scores differentiated among the training levels of participants. The mean (±SD) scores were 60% (±6.2%) for 319 MS4 students, 70% (±5.4%) for 40 EM residents, and 79% (±2.9%) for 12 EM experts. Pearson correlation coefficients showed that SCT scores were significantly correlated with USMLE scores for MS4 students and with in-training examination scores for EM residents.

Strengths of the Study

To the authors' knowledge, this was the first study to assess medical students with an SCT in the field of EM. Furthermore, it provided solid evidence of convergent validity by showing a strong correlation with other established measures of clinical knowledge.

Relevance for Future Educational Advances

The SCT can serve as a practical tool to measure clinical reasoning in uncertain scenarios, in addition to high-fidelity simulations, oral examinations, and objective structured clinical examinations. As the least resource-intensive of these tools, SCTs provide a practical and viable option in the assessment of medical students and residents in EM.

Ilgen JS, Bowen JL, Yarris LM, Fu R, Lowe RA, Eva K. Adjusting our lens: can developmental differences in diagnostic reasoning be harnessed to improve health professional and trainee assessment? Acad Emerg Med. 2011;18(Suppl 2):S79–86.

Background

A majority of medical errors stem from cognitive errors rather than from insufficient data or lack of knowledge. Current theories suggest that clinicians employ a mix of analytic and nonanalytic reasoning (such as pattern recognition and heuristics) to identify clinical presentations. This study attempted to measure whether the type of diagnostic reasoning used reliably distinguishes clinicians across levels of experience.

Methodology

Clinicians with various levels of training, from third-year medical students to faculty in EM and internal medicine, were categorized as novice (medical students), intermediate (first- and second-year residents), or experienced (senior residents and faculty). Participants were prospectively tested on a block-randomized selection of previously validated clinical scenarios categorized as either simple or complex. Participants gave their first diagnostic impression on the first six scenarios and then were led through careful, sequential diagnostic steps to reach a diagnosis on the next six scenarios (directed searches). The outcome measured was diagnostic accuracy.

Results

A total of 115 of 444 possible subjects participated (26%), a low response rate but an adequate number of subjects. No difference in accuracy was detected for specialty, age, or sex as confounding variables. Experience and case complexity determined accuracy as expected. Accuracy was higher under directed search conditions than for first impressions. There was no significant difference between mean scores for intermediate subjects and experienced ones. The authors reported that “differences in diagnostic accuracy followed expected patterns by experience levels and vignette complexity, supporting the argument that these scores are a valid reflection of reasoning performance. Reliability studies yielded intriguing results, suggesting instructional conditions influence the psychometric properties of the instrument.”[33]

Strengths of the Study

This was a prospective, block-randomized trial that used objective measurements to assess whether diagnostic accuracy improves when thought processes are actively organized.

Relevance for Future Educational Advances

This was a novel, prospective study demonstrating that a directed search strategy could improve diagnostic accuracy in case vignettes. It would have been enhanced by a better response rate. The measurements reliably identified more experienced clinicians. The results suggest that clinical reasoning can be taught and improved upon by training clinicians in analytic reasoning to enhance nonanalytic diagnosis.

Roppolo LP, Heymann R, Pepe P, et al. A randomized controlled trial comparing traditional training in cardiopulmonary resuscitation (CPR) to self-directed CPR learning in first-year medical students: the two-person CPR study. Resuscitation. 2011;82:319–25.

Background

Self-directed CPR training is shorter, more convenient, and more cost-efficient than traditional CPR training, and studies suggest that it may even be superior. A concern, however, is that self-directed learning may not prepare health care providers for the teamwork required during two-person CPR, which is common in acute patient care. This study compared aspects of two-person CPR performance after training with two self-directed CPR methods versus traditional CPR training.

Methodology

This was a randomized, single-blind, controlled study at a single institution. Medical students with no CPR training in the previous 5 years were randomly assigned to either traditional CPR training or one of two self-directed CPR training programs. Students were videotaped while being tested in two-person teams using standard CPR simulation. The videos from the CPR skills testing were reviewed by a three-person consensus panel, blinded to group assignment, using a standardized performance checklist. Efficacy of performance (compression and ventilation skills) and measures of team performance were compared.

Results

A total of 180 medical students met eligibility requirements and participated. There was no difference between groups in the quality of chest compressions or ventilations. Students from the two self-directed courses were significantly better able to initiate the switch between performing ventilations and compressions than students who completed the traditional course (84% and 81% vs. 66%). There was no difference between groups in the number of students who failed to clear personnel from the bedside before shocking the patient. Thus, the self-directed learning groups achieved a high level of success in initiating the “switch” to two-person CPR while performing no worse than traditionally trained students on the other measures.

Strengths of the Study

This study used excellent, prospective methodology to address a clear hypothesis. Students were evaluated in a blinded fashion using concrete, objective assessments. The topic is highly relevant to medical educators in an age of proliferating online teaching aids. The final result demonstrates the equivalency of an alternative training method to traditional CPR classes, one that can be administered more efficiently and conveniently at the learner's discretion.

Relevance for Future Educational Advances

This study is an excellent model for the evaluation of alternate skills training methods. Comparing clinical performance (even if simulated) following a training exercise is superior to measuring learner attitudes or testing medical knowledge. The only better method would be to measure actual performance during real-life resuscitations, which is not generally feasible.

Yarris LM, Fu R, LaMantia J, Linden J, Hern G, Lefebvre C. Effect of an educational intervention on faculty and resident satisfaction with real-time feedback in the emergency department. Acad Emerg Med. 2011;18:504–12.

Background

Feedback is recognized as necessary for efficient learning, valued by adult learners, and mandated by the Accreditation Council for Graduate Medical Education in its EM guidelines. However, the best mechanism for delivering feedback is not specified, and there are multiple challenges to providing feedback in the ED. This study sought to determine whether an educational intervention combined with a standardized feedback card would improve faculty and resident satisfaction with feedback delivery in the ED.

Methodology

This was a cluster-randomized, controlled study of 15 EM residency programs. An educational intervention was created combining a feedback curriculum with a card system designed to promote timely, effective feedback. Sites were randomized either to receive the intervention or to continue their current feedback method. Participating faculty and residents completed a pre–post intervention Web-based survey. The primary outcome was overall feedback satisfaction on a 10-point scale. Additional items addressed specific aspects of feedback. Responses were compared using a generalized estimating equations model, adjusting for confounders and baseline differences between groups. The study was designed to achieve at least 80% power to detect a one-point difference in overall satisfaction (α = 0.05).
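
Assuming hypothetical column names, a generalized estimating equations analysis of this design might look as follows in statsmodels, with responses clustered by site because the program, not the individual, was the unit of randomization; this is a sketch, not the authors' model specification.

```python
# GEE model of post-intervention satisfaction, clustered by residency
# program site (the randomization unit), with exchangeable correlation.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("feedback_survey.csv")  # hypothetical file

model = smf.gee(
    "satisfaction_post ~ intervention + satisfaction_pre + role",
    groups="site",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```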

Results

Response rates for pre- and postintervention surveys were 65.9 and 47.3% (faculty) and 64.7 and 56.9% (residents). Residents in the intervention group reported a mean overall increase in feedback satisfaction scores compared to those in the control group (mean increase 0.96 points, standard error [SE] ± 0.44, p = 0.03) and significantly higher satisfaction with the quality, amount, and timeliness of feedback. There were no significant differences in mean scores for overall and specific aspects of satisfaction between the faculty physician intervention and control groups.

Strengths of the Study

This study used a randomized, controlled design to test the effect of a feedback intervention on faculty and resident satisfaction with feedback compared with the traditional methods used by programs. Overall resident satisfaction with feedback, as well as resident satisfaction with the quality, amount, and timeliness of feedback, was significantly higher in the intervention group than in the control group, although faculty satisfaction did not increase.

Relevance for Future Educational Advances

Although this study demonstrated an increase in resident satisfaction, it was not designed to determine whether knowledge increased or behavior changed as a result of the feedback provided. Residents viewed feedback as an exchange of information rather than as an impetus for behavior change; faculty, in contrast, viewed feedback as an impetus for changing behavior. Studies exploring faculty perception of and satisfaction with methods of providing feedback may help align learner and instructor views of feedback. Future educational studies are needed to determine how best to link feedback valued by the learner with subsequent changes in behavior.

Discussion

Trends in Medical Education Research in 2011

Over the past 4 years, the number of EM education research papers has continued to grow. Comparing the literature of 2011 to that of 2008 through 2010, the number of published educational research papers meeting our criteria increased from 30, to 36, to 41, and then to 48.

In 2011, most articles relevant to EM were published in our specialty's journals. Five appeared in general medical education journals, including three in Teaching and Learning in Medicine. Most authors were emergency physicians (81%), but nine studies included authors with primary affiliations in medical education, and 12 (25%) involved collaboration with investigators outside the specialty of EM. The most common clinical collaborators were pediatricians (five; 10% of all studies). A total of 13 (27%) of the studies received some form of funding. Common sources were federal grants, university-sponsored grants or faculty funding, and outside organizations or foundations. Seven of the funded studies (54%) had at least one coauthor from another specialty.

Most of the research was conducted in the United States. The spike reported last year, when 27% of the papers originated internationally, was attenuated. Two studies were conducted in Canada[42, 50] and others in Iran,[11] France,[13] the Republic of Korea,[24] Botswana,[42] and Australia.[54] Study subjects were most often medical students (20; 42%) or residents (23; 48%); however, faculty, fellows, nurses, and patients were studied 35% of the time (17 papers). Six studies (12.5%) had a combination of subjects. Observational studies prevailed in 2011 (28; 58.3%). This is a noteworthy shift from last year, when survey methodology accounted for a large percentage (46%) of published studies in our literature.[7] This year, 10 articles (21%) used surveys as their primary methodology. Notably, nine (19%) of the 2011 studies featured experimental designs, in which testable hypotheses were generated and evaluated. As in past years, articles that employed technology (22; 46%), particularly simulation (14; 29%), dominated.

An emerging trend was the study of broader medical education topics, including curriculum evaluation (18; 37.5%) and learner satisfaction (18; 37.5%). Asynchronous learning techniques were studied in six articles (12.5%). Seven articles (14.6%) focused on pediatric topics, while six (12.5%) studied workplace efficiency.

An analysis of the six articles highlighted this year reveals several interesting trends. As in previous years, these studies were more likely to have been funded (67%). This supports Cook's assertion that funded medical education research yields higher-quality outcomes.[60] Strikingly, experimental designs again prevailed, appearing in four of the six featured studies. The topics of these studies varied, underscoring that excellent methodology in study design can be applied across a wide variety of investigations. A summary of this year's trends is provided in Table 2.

Table 2. Trends for the Articles Published in 2011
Variable                                All Publications (n = 48)   Highlighted (n = 6)
Funding                                 13                           4
Learner group*
  Medical students                      20                           4
  Residents                             23                           4
  Other                                 17                           2
Study methodology
  Survey                                10                           0
  Observational                         28                           2
  Experimental or quasi-experimental     9                           4
  Qualitative                            1                           0
Topics of study*
  Technology                            22                           3
  Curriculum evaluation                 18                           1
  Learner satisfaction                  18                           2
  Evaluation and feedback               17                           3
  Simulation                            14                           1
  Pediatrics                             7                           0
  Learning theory                        7                           2
  Asynchronous learning                  6                           2
  Workplace efficiency                   6                           0
  Ultrasound                             4                           1

*It is possible to exceed 100% for these categories because of multiple learner populations or study topics.

Limitations

Limitations of this analysis of the literature remain similar to those of previous years. Although this year's search was meant to be extensive, reviewing the MEDLINE, Scopus, ERIC, and PsycINFO literature databases, it is possible that the article inclusion criteria were too narrow, missing some publications.

When rating any research, it is possible for bias to exist. Although reviewers did not assess papers that they had helped write, the selection and scoring of publications were not blinded, which may have introduced bias. To minimize bias, the reviewers attempted to standardize their individual ratings through a priori discussion of the rating definitions and rating agreements. The use of rankings limited the variance inherent in individual reviewer ratings.

Conclusions

This critical appraisal of the EM literature provides a snapshot of exemplary educational research in 2011 and highlights advances and trends of research in the field. The six publications highlighted represent methodologically superior research published in 2011.

Each of the highlighted publications contributes to the growing field of medical education research relevant to EM while demonstrating methods to control, justify, or minimize the limitations inherent in this work. Highlighting the unique strengths of these high-quality publications is meant to encourage educators to conduct methodologically sound educational research. Growth and support of medical education research focused on EM will assist academic emergency physicians in implementing innovative educational approaches based on the most valid and effective evidence, and ultimately improve patient care.
