Keywords:

  • education;
  • medical;
  • graduate;
  • emergency medicine;
  • internship and residency

Abstract

Objectives:  Effective feedback is critical to medical education. Little is known about emergency medicine (EM) attending and resident physician perceptions of feedback. The focus of this study was to examine perceptions of the educational feedback that attending physicians give to residents in the clinical environment of the emergency department (ED). The authors compared attending and resident satisfaction with real-time feedback and hypothesized that the two groups would report different overall satisfaction with the feedback they currently give and receive in the ED.

Methods:  This observational study surveyed attending and resident physicians at 17 EM residency programs through web-based surveys. The primary outcome was overall satisfaction with feedback in the ED, ranked on a 10-point scale. Additional survey items addressed specific aspects of feedback. Responses were compared using a linear generalized estimating equation (GEE) model for overall satisfaction, a logistic GEE model for dichotomized responses, and an ordinal logistic GEE model for ordinal responses.

Results:  Three hundred seventy-three of 525 (71%) attending physicians and 356 of 596 (60%) residents completed the survey. Attending physicians were more satisfied with overall feedback (mean score 5.97 vs. 5.29, p < 0.001) and with timeliness of feedback (odds ratio [OR] = 1.56, 95% confidence interval [CI] = 1.23 to 2.00; p < 0.001) than residents. Attending physicians were also more likely to rate the quality of feedback as very good or excellent for positive feedback, constructive feedback, feedback on procedures, documentation, management of ED flow, and evidence-based decision-making. Attending physicians reported time constraints as the top obstacle to giving feedback and were more likely than residents to report that feedback is usually attending initiated (OR = 7.09, 95% CI = 3.53 to 14.31; p < 0.001).

Conclusions:  Attending physician satisfaction with the quality, timeliness, and frequency of feedback given is higher than resident physician satisfaction with feedback received. Attending and resident physicians have differing perceptions of who initiates feedback and how long it takes to provide effective feedback. Knowledge of these differences in perceptions about feedback may be used to direct future educational efforts to improve feedback in the ED.


Introduction

Performance evaluation is an important part of any training program and is a cornerstone of medical residency training. Current Accreditation Council for Graduate Medical Education (ACGME) common program requirements state that a residency training program must provide objective assessments of resident physician competence, use multiple evaluators, document progressive resident performance, and provide evaluation of performance with feedback.1 ACGME emergency medicine (EM) program requirements mandate “effective, ongoing evaluation” by the training program and “timely and regular performance feedback” by faculty.2 Furthermore, adult learners in medical education value feedback, defined as “an informed, non-evaluative, and objective appraisal of performance that is aimed at improving clinical skills rather than estimating the student’s personal worth.”3 Despite the importance that both educators and learners have placed on resident performance evaluation and feedback, the best method by which to deliver evaluation and feedback has not been specified. This has led to much heterogeneity in evaluation protocols of residency training programs.4,5

Although there is no recognized standard for how much feedback is necessary or ideal, delivering and encouraging feedback is recommended as a component of effective clinical teaching.6 One study found consistent views of EM residents that “providing feedback” is an important principle of effective clinical teaching.7 Elements of effective feedback have been proposed as “timely, specific, and respectful,” and some specific techniques for delivering feedback have been published.8–10

Providing effective feedback to adult learners in the emergency department (ED) can be challenging. Education in ambulatory medicine has been characterized as “variable, unpredictable, and lacking continuity.”11 The fast-paced nature of the specialty and the unpredictable educational conditions in the ED make delivery of consistent and frequent feedback during attending–resident interactions difficult. ED patient volume and over-crowding serve to minimize time and availability of faculty and residents and have the potential to negatively affect teaching and feedback delivery.12

To our knowledge, no studies have examined the perceptions of EM residents or faculty about feedback in the ED. It is crucial for educators to understand these baseline perceptions so that interventions aiming to improve the quality of feedback may be designed and assessed. The purpose of this study was to obtain a baseline assessment of the perceptions of resident and attending physicians about feedback in the ED. We hypothesized that attending and resident perceptions of feedback in the ED would differ and that attending physicians would report higher overall satisfaction with feedback given than residents would report with feedback received.

Methods

Study Design and Population

This was a multi-center cross-sectional survey study of resident and faculty perceptions regarding feedback in the ED. (Surveys are available as online Data Supplements: Resident Survey DS1, Faculty Survey DS2.) The local Institutional Review Board at each site reviewed and approved this study.

The study was conducted at 17 ACGME-accredited EM residency programs in the United States. All residents and core clinical faculty in each program were eligible to participate. Study investigators were exempt from participating. Each site had one to two study investigators. All attending physicians had a primary teaching appointment and worked the majority of their clinical shifts with residents. Participation was voluntary and confidential.

Survey Design and Administration

Eligible resident and attending physicians received an invitation to participate and a study information sheet via e-mail. Web-based surveys were administered to resident and attending physicians at each program during a 1-month period between November 2007 and March 2008 using SurveyMonkey (a commercially available web-based survey program; http://www.surveymonkey.com) to assess perceptions about receiving and giving feedback. Completion of the survey was considered consent to participate. During each site’s survey month, two additional e-mailed reminders were sent to encourage participation.

The surveys were designed using the tailored design method and reviewed by a group of clinician–educators with experience in instrument development to maximize the construct validity of the instrument.13 The web-based method was selected because the study population had convenient access to this method of survey administration. Content was determined based on the available literature and on focus groups conducted by the principal investigator with residents and faculty at the principal site, a 3-year residency program that also served as the pilot site. The survey was piloted and further revised based on feedback from residents and faculty about ambiguous items. Common items on the two surveys were worded identically except for grammatical changes needed to reflect that the items referred to receiving feedback on the resident survey and giving feedback on the faculty survey.

The survey defined feedback as “a specific and timely appraisal of a resident’s performance in the ED, verbally communicated to them during or directly after their ED shift by an attending who has been working with them” and gave several examples of what feedback may include and what is not considered feedback. Respondents were asked to rate overall feedback satisfaction on a 10-point scale and to rate their satisfaction with three specific aspects of feedback on a 4-point Likert scale (extremely dissatisfied, somewhat dissatisfied, somewhat satisfied, extremely satisfied). Respondents were also prompted to rate the quality of different types of feedback on a 5-point Likert scale (poor, fair, good, very good, excellent), with an N/A category for the case of inadequate information. Multiple-choice questions addressed perceptions about the time required to give effective feedback and the frequency of feedback given and received on an ordinal scale and (for the attending physician survey only) barriers to giving feedback. A separate item addressed who usually initiates feedback in the ED. In addition to overall satisfaction with feedback, attending physicians were also asked to rate their “comfort” with giving feedback on a 10-point scale. An additional item asked attending physicians whether they felt that residents would state that they consistently gave feedback that met the residents’ expectations, with possible responses being “yes,” “no,” and “unsure.”

Data Analysis

The primary outcome was overall resident and attending physician satisfaction with feedback in the ED, measured on a 10-point scale. To facilitate analysis, Likert-scale responses to questions regarding satisfaction with specific aspects of feedback were collapsed into two categories as satisfied (somewhat or extremely satisfied) versus dissatisfied (somewhat or extremely dissatisfied). Responses rating the quality of specific aspects of feedback were dichotomized as very good to excellent versus good, fair, or poor. We chose to compare the top two categories with the bottom three for these items because we felt that this dichotomy would best differentiate feedback aspects that were above average from those that were average or below. In analyzing the item regarding who initiates feedback, the percentages of subjects who responded that the attending physician usually initiates feedback were compared between both groups.
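The collapsing of Likert responses described above can be sketched as follows (a minimal illustration; the function names are ours, not the study's instrument coding, though the category labels come from the survey):

```python
# Sketch of the response-collapsing step described in the text.
# Category labels are from the survey; function names are illustrative.

def dichotomize_satisfaction(response: str) -> int:
    """1 = satisfied (somewhat or extremely satisfied), 0 = dissatisfied."""
    return int(response in ("somewhat satisfied", "extremely satisfied"))

def dichotomize_quality(response: str) -> int:
    """1 = very good or excellent, 0 = good, fair, or poor.
    N/A responses are assumed to be excluded before this step."""
    return int(response in ("very good", "excellent"))

print(dichotomize_satisfaction("somewhat satisfied"))  # 1
print(dichotomize_quality("good"))                     # 0
```

This dichotomy places "good" in the lower group, matching the stated goal of separating above-average ratings from average-or-below ones.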

Resident and faculty characteristics were summarized using descriptive statistics. Outcome variables, including overall satisfaction and comfort, were analyzed on a 10-point scale, and satisfaction was compared between the faculty and resident physicians using a linear generalized estimating equation (GEE) model. The assumptions for normality and homogenous variances were checked and satisfied. For dichotomized responses, a logistic GEE model was used to compare the attending and resident physician responses. For ordinal responses, a cumulative ordinal logistic GEE model was used.14 Responses from faculty and residents of the same EM program were not independent, and the GEE approach accounts for this correlation between responses within a single program. To facilitate the interpretation of the models, we also calculated the proportions of response categories for attending and resident physicians from the GEE logistic regression models. All analyses were performed using SAS 9.1.3 (SAS Institute, Inc., Cary, NC).

Results

Seventy-one percent (373/525) of attending physicians and 60% (356/596) of residents completed the survey. Resident and attending characteristics are described in Tables 1A and 1B. Approximately 60% of the resident respondents were male, and most (72%) were trained in academic settings. Seventy-five percent of the attending respondents were male, and most described themselves as clinician–educators (63%) who held the rank of clinical instructor or assistant professor (68% combined). Of 313 attending physicians who answered an item addressing previous feedback training, 119 (38%) reported that they had received formal instruction on how to give feedback. One hundred eighteen respondents specified the type of instruction received; reported training opportunities included sessions at a national meeting (n = 25, 21.2%), an Emergency Medicine Foundation/American College of Emergency Physicians teaching fellowship (n = 21, 17.8%), a faculty development seminar at the home institution (n = 47, 39.8%), formal training as part of a master’s degree or certificate track program (n = 7, 5.9%), or other opportunities (n = 18, 15.3%).

Table 1A. Resident Respondents’ Characteristics

Characteristic                            n (%)
Male, n = 350*                            209 (59.7)
Year of training, n = 350*
  1                                       123 (35.1)
  2                                       117 (33.4)
  3                                        94 (26.9)
  4                                        16 (4.6)
Residency setting, n = 347*
  Academic                                250 (72.0)
  Community                                50 (14.4)
  Equal mix of both                        34 (9.8)
Anticipated practice setting, n = 350*
  Academic                                 35 (10.0)
  Community                               136 (38.9)
  Equal mix of both                       170 (48.6)

*The denominator for each characteristic reflects the number of total respondents who answered each question. For residency setting and anticipated practice setting, percentages do not equal 100% because the remainder of respondents chose “other.”

Table 1B. Attending Respondents’ Characteristics

Characteristic                            n (%)
Male, n = 308*                            231 (75.0)
Academic rank, n = 311*
  Clinical instructor                      56 (18.0)
  Assistant professor                     156 (50.2)
  Associate professor                      60 (19.3)
  Professor                                30 (9.6)
Time teaching residents, years, n = 311*
  0–5                                     118 (37.9)
  6–10                                     69 (22.2)
  11–15                                    39 (12.5)
  15–20                                    38 (12.2)
  > 20                                     47 (15.1)
Academic track, n = 309*
  Clinician educator                      194 (62.8)
  Clinician researcher                     49 (15.9)
  Clinician administrator                  59 (19.1)
  Other                                     7 (2.3)
Primary setting, n = 311*
  Academic                                228 (73.3)
  Community                                41 (13.2)
  Equal mix of both                        39 (12.5)

*The denominator for each characteristic reflects the number of total respondents who answered each question. For some characteristics, percentages do not equal 100% because the remainder of respondents chose “other.”

Attending physicians reported higher overall satisfaction with feedback in the ED than residents (mean score 5.97 vs. 5.29; p < 0.001). Attending physicians were more likely to indicate satisfaction with the timeliness of feedback given (n = 212/313, 68%) than residents were with feedback received (n = 203/354, 57%; odds ratio [OR] = 1.57, 95% confidence interval [CI] = 1.23 to 2.00; p < 0.001). There were no statistically significant differences between attending and resident responses regarding satisfaction with overall quality (p = 0.07) or amount of feedback (p = 0.28). In addition, attendings rated their comfort with giving feedback at a mean of 6.70 (95% CI = 6.50 to 6.91), which was highly correlated with their overall satisfaction (Pearson correlation coefficient = 0.78; p < 0.001).

When rating the quality of specific types of feedback, faculty were more likely than residents to rate feedback as very good or excellent (Table 2). For example, 50% of faculty rated the quality of positive feedback as very good or excellent, compared with 36% of residents (OR = 1.79, 95% CI = 1.38 to 2.33; p < 0.001). For constructive feedback, the proportions were 29% and 22% for faculty and residents, respectively (OR = 1.42, 95% CI = 1.13 to 1.79; p = 0.002). Faculty also reported higher ratings for quality of feedback about procedural skills, documentation, management of ED flow, and evidence-based decision-making (Table 2).

Table 2. Comparison of the Ratings of Faculty and Residents on the Quality of Specific Types of Feedback Given or Received

Quality of Specific Types of Feedback           Faculty  Resident  OR (95% CI)         p-value
Positive feedback                               0.50     0.36*     1.79 (1.38, 2.33)*  <0.001*
Constructive feedback                           0.29     0.22*     1.42 (1.13, 1.79)*  0.002*
Feedback on fund of knowledge                   0.19     0.15      1.26 (0.88, 1.80)   0.20
Feedback about communication skills             0.24     0.23      1.05 (0.73, 1.50)   0.80
Feedback about professionalism                  0.24     0.24      0.98 (0.68, 1.42)   0.92
Feedback about procedural skills                0.48     0.34*     1.78 (1.30, 2.42)*  <0.001*
Feedback about documentation                    0.36     0.28*     1.45 (1.03, 2.03)*  0.03*
Feedback about management of ED flow            0.29     0.21*     1.58 (1.14, 2.19)*  0.006*
Feedback about evidence-based decision making   0.28     0.18*     1.79 (1.20, 2.65)*  0.004*

Note: Faculty and resident columns give the proportions rating each type of feedback as very good or excellent. *Statistically significant results.
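The GEE-adjusted odds ratios in Table 2 track closely with the crude odds ratios implied by the marginal proportions, which offers a quick sanity check (a sketch; the small remaining gap reflects the model's adjustment for within-program clustering):

```python
def crude_odds_ratio(p_faculty: float, p_resident: float) -> float:
    """Unadjusted odds ratio implied by two marginal proportions."""
    return (p_faculty / (1 - p_faculty)) / (p_resident / (1 - p_resident))

# Positive feedback, from Table 2: 50% of faculty vs. 36% of residents
# rated quality as very good or excellent.
print(round(crude_odds_ratio(0.50, 0.36), 2))  # 1.78, vs. the GEE estimate of 1.79
```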

Faculty were more likely to answer that, during their most recent shift in the ED, they gave feedback that met their expectations than residents were to answer that they received feedback that met their expectations during the last shift worked (64% vs. 39%; OR = 2.80, 95% CI = 1.89 to 4.16; p < 0.001). When asked who usually initiates feedback, faculty were much more likely to state that faculty usually initiate feedback than were residents (82% vs. 39%; OR = 7.09, 95% CI = 3.53 to 14.31; p < 0.001).

Table 3 shows faculty and resident responses on the frequency of feedback given and received and the length of time required to give effective feedback, based on the ordinal logistic GEE model. Attending physicians were more likely to report giving feedback frequently than residents were to report receiving it frequently (OR = 10.45, 95% CI = 7.38 to 14.79; p < 0.001). When asked how long it takes to give feedback that meets their expectations, faculty were much less likely than residents to choose the shorter time categories (OR = 0.25, 95% CI = 0.18 to 0.34; p < 0.001).

Table 3. Comparison of Faculty and Resident Responses on Frequency of Feedback and Length of Time Needed for Feedback Given or Received

                                           Faculty  Resident  OR (95% CI)*        p-value*
Frequency of feedback                                          10.45 (7.38–14.79)  <0.001
  Every shift                              0.42     0.07
  Not every shift, but every 2–3 shifts    0.46     0.36
  Not every shift, but every 4–5 shifts    0.09     0.33
  Every 6 shifts or less frequently        0.03     0.24
Length of time needed for quality
feedback, minutes                                              0.25 (0.18–0.34)    <0.001
  0–2                                      0.27     0.60
  3–5                                      0.53     0.34
  6–10                                     0.14     0.04
  > 10                                     0.05     0.01

*The odds ratio (OR) and p-value are estimated from the ordinal generalized estimating equation logistic regression model with cumulative logit link.
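Because the ordinal model uses cumulative logits, a crude check on Table 3 is to form the odds ratio at each cutpoint of the cumulative distributions; the roughly constant values, near the reported 10.45 (which additionally accounts for clustering), are consistent with the proportional odds assumption. A sketch using the published marginal proportions:

```python
from itertools import accumulate

def cumulative_odds_ratios(p_faculty, p_resident):
    """Crude cutpoint-by-cutpoint odds ratios from two ordinal distributions
    (categories ordered most-frequent-feedback first; last cutpoint omitted
    because the cumulative proportion there is 1)."""
    cf = list(accumulate(p_faculty))[:-1]
    cr = list(accumulate(p_resident))[:-1]
    return [round((f / (1 - f)) / (r / (1 - r)), 2) for f, r in zip(cf, cr)]

# Frequency of feedback, from Table 3 (faculty vs. resident proportions).
faculty  = [0.42, 0.46, 0.09, 0.03]
resident = [0.07, 0.36, 0.33, 0.24]
print(cumulative_odds_ratios(faculty, resident))  # [9.62, 9.72, 10.21]
```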

Table 4 lists attending physician perceptions of barriers to giving effective feedback in the ED. Time constraints were the most frequently cited reason for failing to give feedback (83.7%), although lack of privacy (44.8%), concern for effects on attending–resident rapport (33.0%), and not thinking of giving feedback (31.4%) were also common responses.

Table 4. Perceived Barriers to Giving Effective Feedback in the ED (Attending Responses)

Question: Which factors act as barriers to your providing consistent, quality feedback to residents? (Check all that apply)

Response, n = 306                                                  n (%)
Time constraints                                                   256 (83.7)
The lack of a private place to talk                                137 (44.8)
I don’t want to harm my rapport with the resident
  by giving constructive feedback                                  101 (33.0)
I just don’t think of it during my shifts                           96 (31.4)
The residents don’t accept feedback well                            63 (20.6)
I don’t feel skilled at giving feedback                             61 (19.9)
Other                                                               41 (13.4)

Discussion

The ACGME has identified six core competencies considered essential to residency training. Medical educators and learners acknowledge the importance of real-time feedback in assessing and improving many of the core competencies. In many instances, this feedback is informal and difficult to deliver in a timely and effective manner in the fast-paced, unpredictable environment of the ED. To the best of our knowledge, this is the first large multi-residency study to explore resident and faculty perceptions of this process as it occurs in the ED. Our findings suggest that the current informal process for providing feedback in the ED is not meeting learners’ needs in the sample population. Attending faculty and residents reported moderate overall satisfaction with the current process, with faculty reporting higher overall satisfaction. Faculty were more likely to report satisfaction with several aspects of feedback, including timeliness and the quality of positive feedback, constructive feedback, and feedback about procedural skills, documentation, management of ED flow, and evidence-based decision-making.

The findings that may have the most practical importance are the disagreements regarding who usually initiates feedback and how often residents report getting feedback. Faculty usually reported that they initiated feedback (82%), whereas only 39% of residents reported that the attending physician initiated the process. Faculty were also much more likely to report that they gave feedback every shift than residents were to report receiving feedback every shift. This suggests either that faculty believe they are giving feedback when they are not or that residents are not recognizing the information they receive as feedback. These perception differences are important to consider when designing future educational interventions to improve the feedback process. Implementing feedback training sessions for faculty and residents and instituting feedback “scripts” or preambles may help to increase learner recognition of feedback. In addition, a formal process that facilitates feedback while considering the learner’s needs may help bridge the gap in resident and faculty perceptions of whether effective feedback is being given. One study examined the use of a written versus an oral format for feedback in the outpatient clinic setting and found that residents did not have a strong preference and that the oral format took more time (10–15 minutes vs. 2–5 minutes).15 In that study, feedback was given formally, halfway through the academic year, and timeliness of feedback, which is critical for improvement, was not evaluated. Another study found that the use of a card to stimulate feedback was effective in improving student satisfaction with feedback in the surgical setting.16

Findings of the attending subgroup analysis may help inform faculty development efforts to improve attending physician comfort with providing feedback. The most important barriers to giving feedback identified by faculty were time constraints and the lack of a private area in which to give feedback. Therefore, to improve feedback, these time and privacy constraints must be addressed. Given that staffing levels are unlikely to change in the current economic environment, identifying facilitators of feedback and educating faculty about more efficient ways to give it are essential. It may be helpful to note that residents were more likely than faculty to report that it takes 0 to 2 minutes to give feedback that meets their expectations. There is a substantial literature on the “1-minute preceptor” model of teaching in the clinical arena, which may be applied to feedback.17

Faculty also noted concerns about negatively affecting their rapport with residents and not remembering to give feedback during the shift as barriers to giving effective feedback. In addition, faculty reported lower comfort with providing constructive feedback than any other specific type of feedback, although overall comfort with providing feedback was correlated with overall satisfaction with feedback given in the ED. This reinforces the notion that teaching faculty specifically how to provide constructive feedback may substantially affect their overall comfort and satisfaction with providing feedback to learners.

The discrepancies in perceptions between faculty and residents in the aspects of feedback noted above also suggest the need for the development of a collaborative educational process between learners and educators in regard to feedback. Learners and educators could together explore these differences in perception, suggest processes and programs for feedback that may narrow these differences, and collaboratively train in the giving and receiving of feedback.

Limitations

There may be selection bias in that the faculty and residents who chose to respond to the survey may have different perceptions from non-responders. Indeed, 62.8% of attendings who responded were on a clinician–educator academic track (those who would be expected to be invested in resident education and to prioritize feedback). Therefore, our findings may underestimate the true discrepancy between attending and resident perspectives on feedback. Although external validity may be limited with our sample of 17 sites, the study sample included large and small, urban and community sites representing a wide range of geographic areas. Therefore, the findings are likely to be generalizable.

The outcomes measured in this study (resident and attending satisfaction with feedback) are subjective, although to our knowledge there are no more objective measures of feedback effectiveness currently in use or under study (i.e., no criterion standard exists). Given that the subjects were advanced adult learners, we propose that their perception of whether feedback meets their expectations is the most relevant outcome measure. Future studies may seek to determine whether there is an association between resident satisfaction with feedback and more objective performance measures, that is, whether residents who perceive feedback positively perform better than those who do not.

Conclusions

Effective feedback is a critical component of medical education. Although evidence supports the need for performance evaluation and feedback in medical education, data about the perceptions of EM residents and faculty about feedback are lacking. This observational study reveals a substantial difference in the perceptions of EM residents and faculty regarding feedback delivered in the ED. EM faculty report greater satisfaction with the timeliness, quality, and frequency of feedback delivered than EM residents. Furthermore, there is a disconnect between the perceptions of faculty and residents about who initiates feedback and how much time is necessary to deliver effective feedback. These results suggest that there is room for improvement in the current informal feedback delivery process and may help inform further efforts to improve the effectiveness of educational feedback in the ED.

References

  1. Accreditation Council for Graduate Medical Education. Website homepage. Available at: http://www.acgme.org/acWebsite/home/home.asp. Accessed Sep 21, 2009.
  2. Accreditation Council for Graduate Medical Education. Emergency Medicine Guidelines. Available at: http://www.acgme.org/acWebsite/RRC_110/110_guide0lines.asp. Accessed Sep 21, 2009.
  3. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–81.
  4. Reisdorff EJ, Hayes OW, Carlson DJ, Walker GL. Assessing the new general competencies for resident education: a model from an emergency medicine program. Acad Emerg Med. 2001;7:752–7.
  5. Rodgers KG, Manifold C. 360 degree feedback: possibilities for assessment of the ACGME core competencies for emergency medicine residents. Acad Emerg Med. 2002;11:1295–9.
  6. Bandiera G, Lee S, Tiberius R. Creating effective learning in today’s emergency departments: how accomplished teachers get it done. Ann Emerg Med. 2005;45:253–61.
  7. Thurgur L, Bandiera G, Lee S, Tiberius R. What do emergency medicine learners want from their teachers? A multicenter focus group analysis. Acad Emerg Med. 2005;12:856–61.
  8. Richardson BK. Feedback. Acad Emerg Med. 2004;11:1283.e1–5.
  9. Hewson MG, Little ML. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998;13:111–6.
  10. Sherbino J, Bandiera G. Improving communication skills: feedback from faculty and residents. Acad Emerg Med. 2006;13:467–70.
  11. Irby DM. Teaching and learning in ambulatory care settings: a thematic review of the literature. Acad Med. 1995;70:898–931.
  12. Atzema C, Bandiera G, Schull MJ, Coon TP, Milling TJ. Emergency department crowding: the effect on resident education. Ann Emerg Med. 2005;45:276–81.
  13. Dillman D. Mail and Internet Surveys: The Tailored Design Method, 2nd ed. New York, NY: John Wiley & Sons, Inc., 2000.
  14. Hosmer DW, Lemeshow S. Applied Logistic Regression, 2nd ed. New York, NY: John Wiley & Sons, Inc., 2000, pp. 288–308.
  15. Elnicki DM, Layne RD, Ogden PE, Morris DK. Oral versus written feedback in medical clinic. J Gen Intern Med. 1998;13:155–8.
  16. Paukert JL, Richard ML, Olney C. An encounter card system for increasing feedback to students. Am J Surg. 2002;183:300–4.
  17. Neher JO, Gordon KC, Meyer B, Stevens NG. A five-step “microskills” model of clinical teaching. J Am Board Fam Pract. 1992;5:419–24.

Appendix

Appendix 1

Emergency Medicine Education Research Group (EMERGe) members include (alphabetical):

Steve Bowman, MD, Jennifer Brown, MD, Patrick Brunett, MD, Matthew Frederick, MD, Deepi Goyal, MD, Deborah Gutman, MD, MPH, H. Gene Hern, MD, MS, Jeremy Hess, MD, James Juarez, MD, Randy King, MD, Nicholas Kman, MD, Terry Kowalenko, MD, Chris Kyriakedes, MD, Joseph LaMantia, MD, Cedric Lefebvre, MD, Judith A Linden, MD, Samuel Luber, MD, Daniel Martin, MD, David Nestler, MD, MS, Jennifer Oman, MD, Marcia Perry, MD, Alison Southern, MD, Janis Tupesis, MD, David Wald, DO, Lalena Yarris, MD, MCR

Supporting Information

Surveys are available as online Data Supplements: Resident Survey DS1, Faculty Survey DS2.

Please note: Wiley Periodicals Inc. is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

Filename                         Format  Size  Description
ACEM_592_sm_FacultySURVEY.pdf    PDF     71K   Supporting info item
ACEM_592_sm_ResidentSURVEY.pdf   PDF     55K   Supporting info item
