ACADEMIC EMERGENCY MEDICINE 2012; 19:968–974 © 2012 by the Society for Academic Emergency Medicine
Objectives: The objective was to evaluate whether a standardized consultation model in the emergency department (ED), the 5 Cs of Consultation (Contact, Communicate, Core Question, Collaboration, and Closing the Loop), would improve physicians’ ability to relay appropriate information and communicate successfully during a consultation.
Methods: This was a prospective, randomized study at a large, academic, urban, tertiary care medical center in Chicago. Forty-three emergency medicine (EM) and EM/internal medicine (EM/IM) residents were randomized into two groups, an intervention group and an unstructured group, stratified by postgraduate year (PGY). Intervention group participants received an interactive educational session on the 5 Cs of Consultation, a standardized consultation model. Intervention and unstructured groups placed two simulated consultation phone calls, based on pretested simulated patient cases, to a standardized consultant. Three raters, naive to the consultation model and blinded to group assignments, individually assessed recordings of each call using a seven-item, five-point global rating scale (GRS). Finally, an attending surgeon and an attending psychiatrist each rated respective cases using a single global rating to provide validity evidence for the scale.
Results: Residents trained with the 5 Cs model communicated significantly better, regardless of PGY and clinical case. The intervention group had significantly higher mean GRS scores than the unstructured group (4.1 vs. 3.5, F(1,39) = 33.5, p < 0.0001). Secondary analysis of the recordings suggested that encounters with more 5 Cs behaviors tended to receive higher GRS scores.
Conclusions: A standardized educational model increased the effectiveness of consultation communication from the ED. Residents trained with the 5 Cs of Consultation scored better on consultation assessments compared with untrained residents. Training programs should consider adopting standardized consultation models.
Communication is a critical component of health care, yet it remains a major contributing factor to medical errors. The Joint Commission found that communication issues accounted for 70% of all reported sentinel events between 1995 and 2005.1 In the emergency department (ED), ineffective communication and a lack of standardized processes continue to impede consultations and patient hand-offs between physicians.2,3 Furthermore, poor communication results in costly medical errors and delays in treatment.4–6 Transitions in care (“sign-outs” or “hand-offs”) are vulnerable moments in which inadequate communication can lead to adverse events and diminished patient care.6,7 Consultations are another vulnerable moment. In the ED, consultations comprise a significant component of patient care, occurring in 20% to 40% of patient admissions.4 Therefore, consultation proficiency is an important component of providing optimal patient care.2,5,8,9 An effective consultation relies on the ability to reliably ask for expert advice. Emergency physicians (EPs) consult inpatient specialists for transfer of care and/or continued care beyond the scope of the ED. The EP faces unique challenges to communication, including multiple and often overlapping patient encounters, unscheduled care, incomplete historical data, unpredictable presenting conditions, and variable practice settings. These barriers, compounded by time constraints, can lead to difficulties and delays in effective consultation.3,10,11
There is a large and distinct gap in the existing consultative process. Eisenberg et al.11 write, “The current system is radically asynchronous and contains too much risk for slippage and miscommunication ….” Furthermore, this lack of standardization can lead to inefficiency and may increase patient morbidity, mortality, and health care costs.7 Additionally, many emergency medicine (EM) residents do not receive formal training in consultations and struggle to learn this competency on the job,3,7,12 despite the recognition by the Accreditation Council for Graduate Medical Education (ACGME) that interpersonal and communication skills are a core competency in medical education.13 In a recent survey of EPs, 29% reported a lack of a clear consultation protocol; this survey also found that the majority of EPs believed residents were inadequately trained in consultation.14
We aim to use the existing transition-of-care literature to build a structurally sound conceptual framework for consultations. We hypothesized that a standardized ED consultation model, the 5 Cs of Consultation, would improve the quality of consultations among physicians.14 The 5 Cs of Consultation (see Table 1) was developed from a detailed qualitative analysis of ED consultations.14 It was further shaped by a business model that described the consultative process15 and subsequently provided a structured framework for ED consultations. The 5 Cs are: Contact, Communicate, Core Question, Collaboration, and Closing the Loop. Each C is further subdivided into specific action items necessary for successful consultation.
Table 1. The 5 Cs of Consultation

|Component|Description|
|---|---|
|Contact|Introduction of consulting and consultant physicians. Building of relationship.|
|Communicate|Give a concise story and ask focused questions.|
|Core Question|Have a specific question or request of the consultant. Decide on reasonable time frame for consultation.|
|Collaboration|A result of the discussion between the emergency physician and the consultant, including any alteration of management or testing of patient’s status.|
|Closing the Loop|Ensure that both parties are on the same page regarding the plan and maintain proper communication about any changes in the patient’s status.|
In addition to testing whether learning the model would improve consultations by residents, we also sought to determine whether resident training level predicted the ability to provide effective consultations. We hypothesized that without training, residents do eventually develop these skills, and senior residents would demonstrate more effective communication skills than their junior counterparts.
This was a prospective, randomized study, using an educational intervention to assess consultation effectiveness as measured by a global rating scale (GRS). This study was approved by the institutional review board of the University of Illinois at Chicago. All study participants gave informed consent prior to taking part in this study.
Study Setting and Population
The setting was a large, urban, academic tertiary care center in Chicago, Illinois, in May 2010. All EM (postgraduate year [PGY] 1–3 levels) and EM/internal medicine (IM) residents (PGY1–5 levels) at the institution were eligible for participation. Forty-seven residents were eligible to participate; however, four declined to enroll in the study. Forty-three residents (32 men and 11 women) were randomly assigned to two groups stratified by PGY level using a computer-generated random number list (Figure 1). Participants were aware they were enrolled in a study to assess consultation skills among resident physicians and were grouped as junior- (PGY1–2) or senior- (PGY3–5) level resident physicians.
Study Protocol and Educational Intervention
The intervention consisted of a 90-minute interactive education session focusing on the 5 Cs of Consultation. The session included a didactic review of the consulting model and interactive, practice role-play cases for residents with formative feedback. Additionally, practice cases were reviewed in the group and consultation demonstrations were conducted. At the end of the session, each resident participated in a verbal simulated case, unrelated to those used for assessment, to ensure comprehension. All residents successfully completed the practice cases. Formative feedback was given to all participants in a group setting. The intervention group (19 residents) also received a laminated card outlining the 5 Cs of Consultation to be used at their discretion following the interactive educational session. Of note, the intervention group was asked not to share information regarding training with residents in the unstructured group. Upon study completion, the intervention session was offered to all EM and EM/IM residents for educational purposes. The unstructured group (24 residents) received a more traditional 90-minute didactic session related to communication and consultation (not the 5 Cs curriculum) and two selected consultation peer-reviewed journal articles.4,12
In the 2 weeks following the intervention, residents in each group made two phone calls (simulated consultations) to a standardized attending consultant. A single, dual board–certified EM/IM physician with extensive knowledge of the consultative process and 10 years’ experience served as the standardized consultant for all cases. Of note, the physician had no prior knowledge of the standardized consultation model. Residents identified themselves only by an assigned code number during the consultations, and the consultant was blinded to the residents’ group assignment and PGY level. All consultations were audio-recorded by the consultant.
Case Development. The two clinical case scenarios were developed with a small group of content experts in the fields of EM and medical education through a modified Delphi method, an iterative process used to find consensus among experts with various perspectives.16,17 Attributes such as relevance, realism, engagement, challenge, and instructional value were included in case development.18
A multitude of consultation types occur regularly in the ED: consultation for admission or “transfer of care,” consultation for opinion only, and consultation for treatment or special procedure, among others.4 Two clinical case scenarios were developed based on two unique types of consultation: an intervention consult and a specialist recommendation. Intervention consults are calls placed by an EP to a consultative service when a specific intervention or procedure is necessary. Specialist recommendations are calls seeking guidance when a patient’s management is outside the scope of knowledge of the EP. The intervention consult was surgical, involving an apparent acute appendicitis, while the specialist recommendation case was psychiatric, involving a patient with schizophrenia for whom guidance on further care was required. The cases were pilot-tested with Chicago-based EM residents not otherwise participating in the study, as well as with surgery and psychiatry attending physicians, whose feedback was incorporated into the final version of the cases (Data Supplement S1, available as supporting information in the online version of this paper).
An independent group developed a GRS instrument for this study using a modified Delphi method.16 The GRS was developed independently of the 5 Cs consulting model. GRS items were developed using common themes identified in the consultation literature,2,12 as well as recommendations from an expert panel of eight people (both EPs and consultants). The seed or source participant was a health communication expert (PhD) who was a member of the panel. Three iterations were necessary to establish the GRS items. Following development, the GRS was reviewed by a small group of surgical and psychiatric consultants in the proper context (with sample narrative cases). Changes were made accordingly to improve the clarity of the scale. The GRS was then used to assess the overall effectiveness of resident consultation skills, as determined by three independent raters, blinded to resident assignment to intervention or unstructured group and naive to the consultation model.
The GRS elicited an overall score based on seven five-point category items (Data Supplement S2, available as supporting information in the online version of this paper). For the GRS, which was normally distributed (skew = −0.24, 95% CI = −0.74 to 0.28; kurtosis = −0.40, 95% CI = −1.42 to 0.62), an overall rating for each case was computed from the mean of the seven items. The overall ratings for the two cases were then averaged to produce a final GRS score for each resident.
Finally, to address the possibility that the GRS items might inadvertently cue raters as to 5 Cs behaviors, a single, blinded, attending psychiatrist and a single, blinded, attending surgeon then rated each consult in their specialty (surgeon rated the surgical cases and psychiatrist rated the psychiatric cases) using a single five-point Likert-type rating ranging from “not at all effective” to “extremely effective.”
Consultant and Rater Training. The standardized consultant was trained on the two cases using a standard protocol similar to standardized patient training for an objective structured clinical examination.19 Mock encounters for both cases were conducted with the standardized consultant. A second researcher formally reviewed all of the recordings, assessing for uniformity of the standardized consultant portrayal. The consultant adhered to the script in 95% of the encounters. Three attending physician raters were trained on use of the GRS, without reference to the consultation model. Mock encounters with the raters were conducted to assess inter-rater consistency during training. The additional raters (surgeon and psychiatrist) were asked to assess the overall consultation using a single global rating, without any prior knowledge of the study or its intent and without any specific training. Finally, to ascertain whether any improvement in the groups could be attributed to behaviors specifically taught in the 5 Cs model, a separate group of three EM attending physicians, not otherwise involved in the study, were trained to use a 12-item checklist of 5 Cs behaviors to rate the recordings of the consultations.
Raters were given the consultation audio recordings and independently completed their assessments of the 86 calls within 2 weeks. Raters had the opportunity to review the cases as many times as needed to perform their assessments.
Raters were only provided the resident’s study number during the consultation and were blinded to the identity of the participant. A researcher not involved with data collection, rating, or interpretation of the study was in possession of the master list linking resident identities to their assigned numbers.
All statistical analyses were performed with Stata 9.2 (StataCorp, College Station, TX) and SAS 9.1 (SAS Institute, Cary, NC). A statistician blinded to the data collection process and given data stripped of any identifiable information performed the analysis. Initial analysis was performed to estimate the inter-item and inter-rater reliability. Cronbach’s alpha was computed across the individual GRS items to determine inter-item reliability (internal consistency reliability) and across raters to determine the inter-rater intercorrelation (rater reliability).
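For readers unfamiliar with the computation, Cronbach’s alpha can be sketched in a few lines. The ratings below are hypothetical, invented purely for illustration; they are not study data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    items: list of k lists, each holding one item's scores across
    the same set of encounters.
    """
    k = len(items)
    # Summed scale score per encounter
    total = [sum(scores) for scores in zip(*items)]
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(total))

# Hypothetical ratings: 7 GRS items scored 1-5 across 6 encounters
ratings = [
    [4, 3, 5, 2, 4, 3],
    [4, 4, 5, 2, 3, 3],
    [5, 3, 4, 1, 4, 2],
    [4, 3, 5, 2, 4, 4],
    [3, 4, 4, 2, 3, 3],
    [4, 3, 5, 3, 4, 3],
    [5, 4, 5, 2, 4, 3],
]
print(round(cronbach_alpha(ratings), 2))  # 0.95 for these made-up ratings
```

High internal consistency, as reported for the GRS, simply means the individual item scores covary strongly relative to the variability of the summed scale.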
Linear mixed models were fitted to predict GRS scores. Predictors were case (two levels: surgery and psychiatry), rater (three levels), PGY level (two levels: PGY1–2 and PGY3–5), and intervention (two levels), as well as interaction between PGY level and intervention. Case and rater were modeled as random effects using a Kronecker covariance structure with an unstructured matrix for rater and a compound symmetry matrix for case.20
We hypothesized a significant main effect of intervention. Considering the possibility that the intervention might have greater effect on less senior residents, we included an interaction term, but the study was underpowered to detect this outcome. Considering the possibility that residents might improve naturally in consultation skill without training, we also planned to analyze the PGY level in the unstructured group and predicted performance would improve with increasing seniority.
A priori sample size calculations for the primary hypothesis (main effect of group) were based on power for a between-subjects t-test with α = 0.05 and an effect size of 1 standard deviation (SD). These assumptions yielded a sample size of 17 residents per group to obtain 80% power. Because mixed modeling of clustered data is generally more efficient than simple t-tests, we expected this to be a conservative calculation. For our secondary outcomes at this sample size, we obtain over 90% power to detect the simple effect of PGY in the unstructured group residents, but only 53% power to detect the expected interaction between intervention and PGY.
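The stated sample size of 17 per group can be reproduced with a standard normal-approximation formula, here with Guenther’s small-sample correction for the two-sample t-test. This sketch is ours, for illustration, and is not part of the study analysis:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test.

    Normal approximation plus Guenther's correction (z_{1-a/2}^2 / 4),
    which closely matches exact t-test power tables.
    """
    z = NormalDist().inv_cdf
    za = z(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    zb = z(power)          # about 0.84 for 80% power
    n = 2 * ((za + zb) / effect_size) ** 2 + za ** 2 / 4
    return ceil(n)

# Effect size of 1 SD, alpha = 0.05, 80% power
print(n_per_group(1.0))  # 17
```

With an assumed effect size of 1 SD, the formula yields 16.7, rounding up to the 17 residents per group reported above.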
Characteristics of Study Subjects
A total of 43 residents completed 86 calls that were subsequently rated. Forty-four percent of residents (19 of 43) were randomized to the intervention group and underwent the 5 Cs training module, whereas 56% (24 of 43) were randomized to the unstructured group and did not receive the 5 Cs training. Table 2 shows the sex and year of training of the participating residents.
|Characteristic|Intervention Group (n = 19)|Control Group (n = 24)|Total (N = 43), n (%)|
|---|---|---|---|
|PGY3|3|8|11 (26)|
Our data analysis involved three steps: 1) estimating reliability of the instrument, 2) evaluating the primary outcome (differences between intervention and unstructured groups), and 3) examining interactions and simple effects.
The inter-item reliabilities of the GRS (Cronbach’s alpha) were 0.90, 0.89, and 0.87 for raters 1, 2, and 3, respectively; each exceeds 0.70, a widely accepted minimum value.21 Cronbach’s alpha across raters (inter-rater reliability) for overall GRS score as a single measure was 0.71, but raters had different mean ratings, suggesting differences in their use of the scales. Accordingly, we averaged GRS items to form an overall GRS score for each encounter by each rater, but included rater as a covariate in regression models to control for rater differences.
Main Effect of Intervention
Controlling for PGY level, case, and rater covariates, residents in the intervention group had significantly higher mean GRS scores than those in the unstructured group (4.1 vs. 3.5, F(1,39) = 33.5, p < 0.0001; see Table 3). Table 3 shows the regression coefficients and standard errors for two regressions predicting the GRS. In the main effects model (columns 2 and 3), predictors included trial arm, case, dummy variables for rater, and resident seniority (PGY1 or 2 vs. PGY3 or higher). Both models were fitted with random effects of case and rater and adjusted for clustering of respondents. After controlling for case and rater covariates, there was no significant association between PGY level and GRS score (p = 0.85). There was also no significant effect of case on GRS score (p = 0.32). Mean consultation length was 125 seconds (95% CI = 115 to 135) in the unstructured group and 107 seconds (95% CI = 96 to 118) in the intervention group, a statistically significant difference (t(84) = 2.42, p = 0.018).
Table 3. Regression Models Predicting Overall GRS Score

|Predictor|Main Effects (95% CI)|Main Effects and Interaction (95% CI)|
|---|---|---|
|Intervention (vs. control)|0.67* (0.46 to 0.88)|0.75 (0.40 to 1.10)|
|GRS rater (1 vs. 3)|1.05* (0.86 to 1.24)|1.05 (0.86 to 1.24)|
|GRS rater (2 vs. 3)|1.25* (1.10 to 1.40)|1.25 (1.10 to 1.40)|
|Case (psychiatry vs. surgery)|−0.05 (−0.04 to 0.14)|−0.05 (−0.04 to 0.14)|
|PGY (1/2 vs. 3+)|−0.01 (−0.22 to 0.24)|0.06 (−0.23 to 0.35)|
|Intervention × PGY||−0.16 (−0.29 to 0.61)|
|Intercept|2.72 (2.51 to 2.93)|2.70 (2.47 to 2.93)|
Interaction and Simple Effects of PGY Level
We found no significant interaction between PGY level and intervention for the GRS score (F(1,39) = 0.44, p = 0.51). The simple effect of PGY level in the unstructured group was also not significant for GRS score (F(1,117) = 0.07, p = 0.79); this was also true when PGY1 data were evaluated separately from other data.
5 Cs Behaviors and Ratings. Inter-rater reliability for the checklist scores was 0.94 by Cronbach’s alpha and 0.79 by intraclass correlation coefficient using a two-way mixed model. Intervention group residents used significantly more 5 Cs behaviors on average across cases than the unstructured group residents (10.7 vs. 7.0, F(1,39) = 196, p < 0.0001). There was a significant correlation between mean (across raters) GRS and checklist scores for residents in both the surgery (r = 0.59) and the psychiatry (r = 0.71) cases (n = 43, p < 0.0001), with coefficients of determination (r²) of 0.3481 and 0.5041, respectively.
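As a check on the arithmetic, the reported coefficients of determination are simply the squared correlations. The sketch below verifies this and illustrates the Pearson computation on hypothetical paired checklist/GRS scores (invented for illustration, not study data):

```python
from math import sqrt
from statistics import fmean

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = fmean(x), fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(ssx * ssy)

# The reported r^2 values are the squared correlations:
assert round(0.59 ** 2, 4) == 0.3481  # surgery case
assert round(0.71 ** 2, 4) == 0.5041  # psychiatry case

# Hypothetical paired (checklist, GRS) scores to exercise the function
checklist = [10, 7, 11, 6, 9, 12, 8]
grs = [4.1, 3.4, 4.3, 3.2, 3.9, 4.5, 3.6]
r = pearson_r(checklist, grs)
print(round(r, 2), round(r ** 2, 2))
```

An r² of 0.35 to 0.50 means roughly a third to half of the variance in global ratings is shared with the count of 5 Cs behaviors.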
Specialist Raters. Mean ratings of the psychiatry consultations by a blinded psychiatrist using a simple effectiveness scale were higher for residents in the intervention group (mean ± SD = 4.1 ± 0.99) than for those in the unstructured group (mean ± SD = 3.38 ± 1.10; t(41) = 2.26, p = 0.029). Similarly, mean ratings of the surgical consultations by a blinded surgeon using the same scale were higher for residents in the intervention group (mean ± SD = 4.1 ± 0.94) than for those in the unstructured group (mean ± SD = 3.29 ± 0.62; t(41) = 3.41, p = 0.002).
To our knowledge, this is the first randomized study demonstrating the effectiveness of using a standardized model for clinical consultation. Residents trained in the 5 Cs of Consultation scored significantly better, regardless of whether a surgical or psychiatric case was presented and independent of postgraduate training level. Greater use of 5 Cs behaviors, as measured by a behavioral checklist, was associated with higher global ratings (which were independent of the consultation model), including higher ratings by blinded specialists in surgery and psychiatry using a very simple rating scale that provided no possible cuing about the intervention.
Notably, and contrary to our hypothesis, the data show no natural progression in consulting skills with increasing PGY level, either overall or among residents unexposed to the intervention. We speculate this stems from a lack of formal education in consultation and perpetuated behaviors learned in early medical training. This finding highlights the need for an intervention to improve skills, as they are not simply acquired during traditional residency training.
Despite the statistically significant improvement in consultations demonstrated with the intervention, this study has several limitations. The study depended on residents’ ability to recall the 5 Cs and their subcomponents following the teaching session, yet retention of the information was not measured; determining retention of the 5 Cs model is important in deciding how often to conduct reinforcement training. Only one residency program and one specialty in a large urban academic hospital participated in the trial. Although use of a single blinded and standardized attending physician for all consults reduced bias in the assessment, the consultant may not have represented the typical consultant. Resident performance on these simulated scenarios may also have been atypical, as residents knew that calls were being recorded.22,23 Additionally, there was no comparison of resident performance before and after the intervention, allowing for the possibility that, despite randomization, the intervention group was better and more experienced in the art of consultation at baseline. Further, the two forms of instruction may not have been equivalent, introducing potential bias: the interactive format itself, rather than the 5 Cs content, may have produced higher performance. Moreover, although the rating scores established a measurable difference between the two groups, it is uncertain whether higher GRS scores translate to improvements in the quality of the consultation and, most importantly, in the outcome of the patient. Finally, to date, there is no evidence illustrating that better consultation leads to improved patient outcomes.
Having demonstrated that attending physicians can discriminate better from worse consultations, and that residents trained in a consultation model performed better consultations than those not so trained, exploring the relationship between consultation quality and patient or system outcomes becomes a logical and important focus for future research. Specifically, future studies might record actual consultations after the intervention to determine whether the intervention translates to changes in practice, decreases the time required to communicate with the consultant in a busy ED, or both. ED residents and the consultants who receive the consultations could also serve as raters in such studies.
Our study demonstrates the effectiveness of the 5 Cs model as a means of standardizing communication during consultation. Improved inter-physician communication can be essential to safe and more efficient patient care. Instruction in a standardized model such as the 5 Cs of Consultation improves the information delivery of resident consultations in the ED and could prove useful in other clinical settings that rely on effective communication between physicians. Medical educators should consider widespread use of standardized consultation models, such as the 5 Cs.