Conference Attendance Does Not Correlate With Emergency Medicine Residency In-Training Examination Scores
Presented at the annual meeting of the Society for Academic Emergency Medicine, New Orleans, LA, May 2009.
CoI: The author states that there are no financial conflicts of interest.
Address for correspondence and reprints: H. Gene Hern, Jr., MD, MS; e-mail: email@example.com.
Objectives: The Residency Review Committee for Emergency Medicine (EM) requires residents to have greater than 70% attendance at educational conferences during residency training, but it is unknown whether attendance improves clinical competence or scores on the American Board of Emergency Medicine (ABEM) in-training examination (ITE). This study examined the relationship between conference attendance and ITE scores. The hypothesis was that greater attendance would correlate with a higher examination score.
Methods: This was a multi-center retrospective cohort study using conference attendance data and examination results from residents in four large county EM residency training programs. Longitudinal multi-level models, adjusting for training site, U.S. Medical Licensing Examination (USMLE) Step 1 score, and sex, were used to explore the relationship between conference attendance and ITE scores according to year of training. Each year of training was studied, as was the overall effect of mean attendance on examination score.
Results: Four training sites reported data on 405 residents during 2002 to 2008; 386 residents had sufficient data to analyze. In the multi-level longitudinal models, attendance at conference was not a significant predictor of in-training percentile score (coefficient = 0.005, 95% confidence interval [CI] = –0.053 to 0.063; p = 0.87). Score on the USMLE Step 1 examination was a strong predictor of ITE score (coefficient = 0.186, 95% CI = 0.155 to 0.217; p < 0.001), as was female sex (coefficient = 2.117, 95% CI = 0.987 to 3.25; p < 0.001).
Conclusions: Greater conference attendance does not correlate with an individual resident’s ITE score. Conference attendance may represent an important part of EM residency training but perhaps not of preparation for the ITE.
Medical knowledge is one of the six core competencies that the Accreditation Council for Graduate Medical Education (ACGME) has identified as essential for assessment of resident physicians’ outcome performance.1 The American Board of Emergency Medicine (ABEM) in-training examination (ITE) is a criterion-referenced multiple choice test given to all emergency medicine (EM) residents annually to assess their preparation in medical knowledge for the ABEM qualifying examination. The ITE is a single-session written examination containing 225 multiple-choice questions that takes approximately 4.5 hours to complete. The questions are drawn from the Model of the Clinical Practice of Emergency Medicine,2 which defines the scope of medical knowledge required for emergency physicians. The examination is administered to more than 4,500 EM residents every year at the end of February. “It is a standardized examination that residents and program faculty can use to judge an individual resident’s progress toward successful ABEM certification. There is a strong relationship between in-training and qualifying examination scores. Physicians with higher in-training scores have a higher likelihood of passing the qualifying examination, and those with lower scores have a lower likelihood of passing the qualifying examination.”3
Residency programs help resident physicians master the necessary knowledge through a variety of methods, including didactic sessions, reading assignments, tests, and reinforcement of clinical lessons and practice-based improvement. The Residency Review Committee for Emergency Medicine (RRC-EM) of the ACGME has long recognized the important role that residents’ educational conferences play by explicitly supporting them in the special program requirements for EM.4 These requirements include an average of at least 5 hours of didactic sessions per week, protection from clinical duties to attend conference, and a minimum average conference attendance of 70% for each resident.
Other specialties have examined the association between learning habits, such as conference attendance, and medical knowledge acquisition as measured according to an ITE, with mixed results.5–10 Despite the enormous resources required of EM training programs to provide quality didactic conferences that rise to the RRC-EM standards, there has been no assessment as to whether attendance at these sessions affects resident performance on the ABEM ITE. The purpose of this study was to test the correlation between conference attendance and ITE scores for EM residents.
Study Design and Population
This was a retrospective cohort study of residents at four EM residency programs. Each program has been training residents for more than 20 years. Institutional Review Board approval was granted at each participating institution.
Each program abstracted didactic conference attendance over the course of 6 years. Each program linked this to individual residents’ percentile scores and percentage correct on the ABEM ITE, along with U.S. Medical Licensing Examination (USMLE) scores and sex. After de-identifying the data, each participating program sent a spreadsheet to the coordinating investigators (HGH, CPW), who merged the data and submitted it for statistical analysis to the investigating statistician (HA). The ITE is a yearly examination taken by almost all EM residents; although ABEM does not require participation, all four study programs take the examination on the same day in February.
To account for the lack of independence in observations of the same individual over the course of residency, as well as between observations at the same training site, we specified longitudinal multi-level models to describe the relationship between conference attendance and ITE scores. Using the Stata 10.0 (StataCorp, College Station, TX) command “xtmixed,” we adjusted for identified covariates, including USMLE Step 1 score as a proxy for test-taking skills, sex, and age at the start of training.
Because multi-level modeling explores effects over time, and the relationship between attendance and ITE score could behave differently in one year of residency than in another, we then stratified the data by training year and specified linear regression models, adjusting for clustering of the data by site and controlling for the same covariates. In these models, because each individual took only one test per year, there was no clustering of the data by individual.
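The random-intercept structure described above can be sketched in code. The following is an illustrative Python example using statsmodels’ mixed-effects model (the study itself used Stata’s “xtmixed”); the simulated dataset and all variable names are hypothetical, constructed only to show the model specification, not to reproduce the study’s data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_residents, n_years = 60, 4

# Simulated longitudinal data: one row per resident per training year
df = pd.DataFrame({
    "resident": np.repeat(np.arange(n_residents), n_years),
    "year": np.tile(np.arange(1, n_years + 1), n_residents),
})
df["attendance"] = rng.uniform(70, 100, len(df))  # % conferences attended
df["usmle1"] = np.repeat(rng.normal(220, 15, n_residents), n_years)
df["female"] = np.repeat(rng.integers(0, 2, n_residents), n_years)

# Simulated ITE percentile driven by USMLE score and sex (not attendance),
# plus a per-resident random intercept and observation-level noise
df["ite_pct"] = (0.19 * df["usmle1"] + 2.1 * df["female"]
                 + np.repeat(rng.normal(0, 3, n_residents), n_years)
                 + rng.normal(0, 5, len(df))).clip(1, 99)

# Mixed-effects model: fixed effects for the covariates, random intercept
# per resident to account for repeated measures on the same individual
model = smf.mixedlm("ite_pct ~ attendance + usmle1 + female",
                    df, groups=df["resident"]).fit()
print(model.params[["attendance", "usmle1", "female"]])
```

With attendance having no true effect in the simulated data, its fitted coefficient is near zero while the USMLE Step 1 coefficient is clearly positive, mirroring the pattern of results the study reports.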
Our dataset contained information on 405 residents, but only 368 had sufficient data to analyze. In the multi-level longitudinal models, attendance at conference was not a significant predictor of in-training percentile score (coefficient = 0.005, 95% confidence interval [CI] = –0.053 to 0.063; p = 0.87). Score on the USMLE Step 1 examination was a strong predictor of ITE score (coefficient = 0.186, 95% CI = 0.155 to 0.217; p < 0.001), as was female sex (coefficient = 2.117, 95% CI = 0.987 to 3.25; p < 0.001).
In the year-by-year regression analysis, we found that, in each of post-graduate years 1, 2, 3, and 4, the relationship between conference attendance and ITE percentile score was not significant (p = 0.20, 0.30, 0.06, and 0.30, respectively).
There were 15 residents with USMLE Step 1 scores but insufficient attendance or ITE data for inclusion. Their median Step 1 score was 231 (interquartile range [IQR] 197–242), compared with 220 (IQR 209–234) for the study group. Of 18 residents with sex reported but otherwise insufficient data, 72% were male, compared with 55% in the analyzed group. Thus, this non-random group of excluded residents may have had slightly higher Step 1 scores and may have been more likely to be men.
The multi-level longitudinal modeling allowed us to analyze individual residents specifically but did not allow for comparisons between programs. Therefore, no data on individual program differences are available.
Studies of trainees in other specialties (internal medicine,5–7,9 general surgery,8 and family medicine10) have recently examined the effect of conference attendance on written examination scores. The results have been extremely varied. To our knowledge, this is the first study that has directly examined the effect of conference attendance in EM training programs on the ABEM ITE. Our study did not demonstrate a correlation between conference attendance and performance on the examination. We believe that this is an important finding.
The mastery of a specific subset of facts and demonstration of medical knowledge competence are essential goals in all residency programs. ITEs are a well-established and reliable knowledge assessment tool. Although medical knowledge is easily assessed, the specific activities that affect knowledge acquisition and their relative importance are significantly more difficult to evaluate. ITEs assess medical knowledge, even though that knowledge may have been obtained outside of the training program or the specific didactic educational activities offered. In addition to the didactics and clinical experiences provided during training, an individual resident’s prior experiences, attitudes, beliefs, and behaviors will affect knowledge acquisition. Recent study results have demonstrated a positive correlation between ITE score and conference attendance and between ITE score and professional behaviors,7,9 although prior studies did not.5,6 We were unable to identify other studies that conclusively demonstrated the effect of other specific practices or curriculum components on the assessment of medical knowledge. The identification of any practices that have a significant correlation to objective knowledge assessment, such as the ABEM ITE, would be valuable to residency program directors and accreditation organizations.
Our results also suggest that the USMLE Step 1 score was a strong predictor of ITE score, as was female sex. The correlation between USMLE score and ITE score has been published before, not only in EM but also in internal medicine.11,12 The relationship between female sex and ITE score has not been studied, but recent multi-level analyses suggest that women outperform men on the USMLE examinations, both Step 1 and Step 2.13–15
These findings in no way suggest that the ITE is not a valuable tool for knowledge assessment and preparation for ABEM written certification examinations. The ITE is a consistent predictor of performance on the ABEM certification examination, and is an important part of the preparation of EM residents for ABEM certification.16 The correlation between ITE and post-residency certification examinations is well documented in other specialties, as well.17–20
Attendance at EM conferences probably provides benefits other than ITE score improvement. Opportunities to discuss topics such as variance in practice patterns, evidence-based medicine, procedure instruction, and simulation likely add value to conference attendance. In addition, morbidity and mortality conferences provide opportunities to discuss errors, quality improvement, interpersonal communication skills, and any number of other important facets of EM practice.
This study was done retrospectively; our analysis could account only for previously available data. We could not control for each resident’s independent reading or other education.
The multi-site nature of the project expands the applicability of our results but creates a limitation, as well. Each site has its own distinct preparation process for the ITE and places varying importance on the result of the examination, which may have confounded our results. In addition, attendance at sessions specifically aimed at preparing for the examination (rather than general EM education) was not tracked. Individual programs may have vastly different didactic programs, as well. This, in addition to the variation in actual ITE preparation, is a substantial limitation. The model we chose to assess individual ITE scores also does not allow for between-program analyses.
To meet the RRC-EM standards, each program requires individual residents to meet a minimum attendance threshold (which may differ by program). This “artificial minimum” may have restricted the range of attendance values observed, limiting our ability to detect a correlation.
Finally, the quality of educational opportunities and approaches may differ greatly according to program, which cannot be controlled for. Each program had some didactic time devoted to ITE preparation (practice questions, online testing), but none offered more than 1 hour per week. In addition, reading groups, residents under remediation, and special preparation sessions cannot be accounted for. The multi-site nature of this study is intended to minimize that effect, but the applicability of these results to an individual program may be limited.
There was no correlation between EM conference attendance and ABEM ITE score for EM residents at these four EM residencies. There was a correlation between USMLE score and ITE score and between female sex and ITE score.