Patterns of interaction during rounds: implications for work-based learning


Dr Jennifer M Walton, Department of Paediatrics, University of Alberta, 8213 Aberhart Centre, 11402 University Avenue NW, Edmonton, Alberta T6G 2J3, Canada. Tel: 00 1 780 407 1385; Fax: 00 1 780 407 7136; E-mail:


Medical Education 2010;44:550–558

Objectives  In-patient rounds are a major educational and patient care-related activity in teaching hospitals. This exploratory study was conducted to gain better understanding of team interactions during rounds and to assess student and resident perceptions of the utility of this activity.

Methods  Data were collected by a non-participant observer using a novel, personal digital assistant (PDA)-based data collection system. Medical students and residents completed surveys related to the utility of rounds for patient care, education and ward administration. Analyses included descriptive and correlational statistics and the use of social network analysis to describe and measure patterns of interaction.

Results  Eighteen different rounds were observed. On average, rounds were 106 minutes long and included discussion of 22.1 patients. Three different patterns of verbal interaction were observed. In most cases, the attending physician was most talkative and many students and residents spoke infrequently. More time was devoted to patients discussed earlier in the round, regardless of diagnosis. Observed teaching was primarily factual and teacher-centred. Attending physician-dominated sessions were rated more highly for educational utility than those that were more interactive.

Conclusions  In-patient rounds are an example of an opportunity for powerful work-based learning. In this study, we used a novel method of observational data collection and analysis to examine this activity and found that it may not always live up to its educational potential. Rounds are time-consuming and are generally dominated by the attending physician. Individuals who are not directly involved in a case often participate only minimally. Participants felt that rounds were most useful for patient care and, contrary to expectations, students and residents viewed attending physician-dominated sessions as more educational. To improve the educational impact of rounds, the order of patient discussion should be planned to highlight specific teaching points, preceptors (teaching staff) should ensure that all team members are actively engaged in the process, and learning should be made explicit.


Introduction

Although there are many espoused benefits of work-based learning in medical education,1,2 there has been little systematic examination of the content and structure of these common experiences. In many medical schools, the in-patient ward is one of the main settings for the teaching of paediatrics to both medical students and postgraduate trainees. In this complex learning environment, one of the major daily clinical and educational activities is ward rounds, where the attending physician, residents and medical students meet to discuss patients admitted under their care. Relatively few studies have looked at this activity from an educational perspective. The studies that have done so describe a diverse range of round formats (from conference room discussions to traditional bedside rounds), intended purposes (primarily teaching versus ‘work’) and leadership (attending physician versus senior resident).3–7 Teaching behaviours of attending physicians have been examined with a number of methodologies, including video-recording and coding,5,8 direct observation and behaviour counting,6,8,9 and time studies.7 Learner perceptions of rounds have been examined with both questionnaires3,10–12 and qualitative methods,13 but little attention has been paid to the impact of patterns of group interaction on the process.

Despite the heterogeneity of the literature on ward rounds, several educational concerns have been repeatedly raised. Failure to ensure a common understanding regarding the purpose of rounds along with ambiguity about who will assume the leadership role may lead to confusion and frustration amongst team members.4,5 Furthermore, despite the espoused educational function of rounds, time studies show that most time is devoted to discussions of patient care.11 When teaching is observed, it tends to be instructor-centred, consisting mostly of mini-lectures on topics of the attending physician’s or senior resident’s choice.5–7,13,14 Rounds are almost always dominated by attending physicians, with varying degrees of involvement by residents, and students act primarily as passive observers.11,14 Recent studies have demonstrated that greater student engagement during discussions on rounds is associated with higher ratings of teaching quality.11,12 However, these studies relied on subjective assessments of student involvement by observers or on retrospective reports by students themselves.

In this study, we used a novel, personal digital assistant (PDA)-based data collection tool15 to record detailed observations of in-patient rounds in a tertiary care paediatric hospital. We focused on how time was allocated and on the patterns of interaction amongst team members. We combined these data with information gathered through surveys of student and resident perceptions of rounds in the hope of determining whether there are specific features or patterns of interaction that correlate with housestaff perceptions of the utility of this activity for patient care, education and ward administration.


Methods

Study design

This was an exploratory study which involved two methods of data collection: direct observation of rounds by a non-participant observer, and the administration of written questionnaires to be completed by residents and medical students.

Setting and context

This study was conducted on the general paediatric wards of a tertiary care, university-affiliated children’s hospital. There were two 20–25-bed wards, each staffed by a medical team consisting of an attending physician, a senior paediatric resident, one or more junior paediatric residents, one or more first-year family medicine residents and four to six medical students. Rounds at this institution are held in a ‘side room’, away from direct patient contact, and are intended to serve both clinical and educational functions. During rounds, all patients (newly admitted, plus those already known to the team) are discussed. Teaching is generally informal and occurs during discussions related to specific patients.


Sampling

Observations were conducted to give as broad a sample as possible of rounds on an in-patient paediatric service. Given the seasonal nature of in-patient paediatrics, data were collected in winter, spring and summer 2005. Observations were made of as many different teams and attending physicians as was practical during this timeframe; we ensured that each team was observed on a minimum of two occasions, separated by at least 3 days. The choice of specific observation days was largely based on convenience; however, an attempt was made to collect data on ‘typical’ weekdays, when no time pressures such as those imposed by grand rounds or protected teaching time for paediatric housestaff were anticipated.

Consent and ethics

Prior to each session, all team members were informed of the presence of the observer and the nature of the data to be collected. Questionnaire completion was voluntary and anonymous. This study was reviewed and approved by both the institutional review board of the medical school and the research ethics board of the hospital in which it was conducted.

Data collection


A single non-participant observer (JMW) performed all observations. Data were collected on a PDA device (Palm Tungsten E, Palm OS Version 5.2.1; Palm, Inc., Sunnyvale, CA, USA) running customised database forms developed with HanDBase Version 3.0 (DDH Software, Wellington, FL, USA). This method allows the observer to rapidly enter and simultaneously code data, and the automatic time stamping of each entry facilitates calculation of the duration of observations.15 The unit of data collection was the utterance, an element of conversation spoken by a single individual with a single function (e.g. to present data, to pose a question, to answer a question). An initial coding scheme specific to in-patient rounds was created from those used in other observational studies. The final coding scheme was derived following several iterations of pilot testing and modification (Table 1). The speaker, the patient under discussion, and the type and duration of utterance were recorded for every utterance. Inter-observer reliability for the instrument was determined by calculating the kappa value for two independent observers who simultaneously coded a portion of one session.
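To illustrate how automatic time stamping of each entry supports duration calculations, the following sketch models a timestamped utterance record. The `Utterance` class and its field names are hypothetical illustrations, not the study's actual HanDBase schema; the duration of each utterance is approximated as the gap to the next entry's time stamp:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Utterance:
    timestamp: datetime  # automatic time stamp applied when the entry is made
    speaker: str         # e.g. 'attending', 'senior resident', 'student'
    patient: int         # patient number under discussion
    utype: str           # utterance type, e.g. 'question (factual)', 'answer'

def utterance_durations(log):
    """Approximate each utterance's duration as the interval to the next entry."""
    return [(nxt.timestamp - cur.timestamp).total_seconds()
            for cur, nxt in zip(log, log[1:])]
```

Because every entry carries its own time stamp, speaking time per individual and time per patient can be recovered afterwards by summing these intervals over the relevant subset of entries.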

Table 1.  Coding scheme for observations

Patient
  Patient no.
  Admission status: new admission / known to team
  Complexity: simple / complex

Speaker
  Attending physician
  Senior paediatric resident
  Junior paediatric resident
  Family medicine resident
  Medical student

Topic of discussion
  Physical findings
  Laboratory results
  Management plan
  Social factors

Type of utterance (examples)
  Presentation of patient data: ‘This is a 6-month-old girl admitted from the emergency department with bronchiolitis…’
  Question, factual (low level): ‘What are the three most likely causes of this problem?’; ‘What did the X-ray show?’
  Question, interpretive (higher level): ‘What do you think we need to do next for this patient?’
  Answer: ‘The X-ray showed …’; ‘I think we should …’
  Direction: ‘Order an X-ray’; ‘Do a blood culture then start antibiotics’
  Lecture: ‘Bronchiolitis is a viral illness…’
  Feedback: ‘No’; ‘Good job’; ‘You need to take a more thorough history here because…’


The survey component of this study consisted of a nine-item, self-administered questionnaire distributed to students and residents immediately after each observed session. Items were presented using a 5-point Likert scale-based format and asked participants to rate their perceptions of the primary purpose of that session, its utility for patient care, education and team administration, and the degree of disruption caused by the presence of the observer. Open-ended responses were also invited. Questions were pilot-tested for content and clarity by a group of medical educators prior to study commencement.

Data analysis

Once collected, data were downloaded from the PDA to spreadsheet software on a desktop computer for analysis. Descriptive statistics (mean, standard deviation [SD], mode and range) were calculated for a number of variables, including time allocation, total speaking time per individual, and the frequencies and durations of each type of utterance. Results were calculated for individual observations and individual teams, and for the dataset as a whole. ANOVAs, chi-squared tests and frequency tables were used to compare means across teams and types of team members (attending physician, resident and student). Post hoc testing, where appropriate, was conducted using Fisher’s exact test.

Patterns of interaction between group members were analysed using techniques of social network analysis (SNA). This emerging field aims to describe and measure relationships between individuals.16 Diagrammatic representations of verbal interaction were generated by calculating the number of ‘interactions’ that occurred between each pair of team members. An ‘interaction’ between two individuals occurred when one of the pair spoke, followed by the other. Interactions were plotted by drawing connecting lines between interacting individuals on a schematic of a table, with thicknesses proportionate to the frequency of that interaction. The initial examination of schematics from all sessions revealed three distinct patterns of interaction. The dominant interaction pattern for each session was determined by asking seven independent individuals to classify each diagram as matching one of three prototypical patterns. The modal classification for each session was used for further analyses. In addition, the Freeman normalised graph centrality index, which is used in SNA to measure the degree to which interactions are mediated through a single individual within a network,17 was calculated for each observed session using a software package for SNA (UCINET 6 for Windows; Analytic Technologies, Harvard, MA, USA).
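The two steps above, counting pairwise ‘interactions’ from the ordered sequence of speakers and summarising the network with Freeman’s centralization index, can be sketched as follows. This is an illustrative sketch only: it uses binary degrees (number of distinct interaction partners), the speaker labels are hypothetical, and UCINET’s implementation may treat ties or weights differently:

```python
from collections import Counter

def interaction_counts(speakers):
    """Count 'interactions': one individual speaking immediately after another.
    `speakers` is the ordered list of who produced each utterance."""
    pairs = Counter()
    for a, b in zip(speakers, speakers[1:]):
        if a != b:  # consecutive utterances by the same person do not count
            pairs[frozenset((a, b))] += 1
    return pairs

def freeman_degree_centralization(speakers):
    """Freeman's normalised degree centralization, as a percentage:
    0% for a perfectly even network, 100% for a 'star' in which every
    interaction involves one central individual."""
    pairs = interaction_counts(speakers)
    members = {m for pair in pairs for m in pair}
    n = len(members)
    # binary degree: number of distinct partners each member interacted with
    degree = {m: sum(1 for pair in pairs if m in pair) for m in members}
    d_max = max(degree.values())
    return 100 * sum(d_max - d for d in degree.values()) / ((n - 1) * (n - 2))
```

On this definition a session in which every exchange involves the attending physician yields 100%, while a session in which interactions are spread evenly around the table approaches 0%.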

Questionnaire data were analysed by calculating mean Likert scores for each question. Correlations between observational data and survey responses were calculated using Pearson’s correlation statistic for normally distributed data, and Spearman’s rank statistic for non-parametric data.
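Spearman’s rank statistic is simply Pearson’s correlation applied to ranks, with tied values sharing their average rank. The sketch below (standard-library Python only, an illustration rather than the statistical software actually used in the study) makes that relationship explicit:

```python
from statistics import mean

def _ranks(xs):
    """Assign ranks 1..n, giving tied values the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rank correlation: Pearson's r computed on the ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks enter the calculation, any monotone relationship (e.g. round duration rising with patient count) scores 1.0 even when the relationship is not linear, which is why the rank statistic suits non-parametric data.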


Results

Reliability of the data collection instrument

To determine the inter-rater reliability of the data collection system, two observers (the principal investigator, along with a doctor familiar with the usual format of rounds but naïve to the instrument and the purpose of the study) concurrently observed a portion of one additional session not included in the final dataset. The kappa value for speaker identity during this 49-minute period was 0.61, indicating substantial inter-observer agreement.18 For type of utterance, the initial kappa value was only 0.38. Closer inspection revealed that many of the disagreements resulted from confusion of similar categories (clarification with factual question and substantive with single word feedback). When data from similar categories were combined, the kappa value rose to 0.51, indicating moderate inter-observer agreement18 for type of utterance. These categories were therefore grouped together for all further analyses.
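The kappa statistic compares the observed proportion of agreement between the two coders with the agreement expected by chance given each coder’s category frequencies. A minimal sketch follows; the utterance codes and the `merge_categories` helper are hypothetical illustrations of how combining similar categories (as was done here) raises agreement:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance.
    rater1 and rater2 are parallel lists of category codes, one per utterance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement: product of each rater's marginal category proportions
    p_chance = sum(c1[k] * c2[k] for k in c1) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

def merge_categories(codes, mapping):
    """Recode similar categories into one before recomputing kappa."""
    return [mapping.get(c, c) for c in codes]
```

For example, if one coder records ‘clarification’ where the other records ‘factual question’, merging the two categories before recomputing kappa removes that source of disagreement, mirroring the grouping applied in the analysis.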


In total, six teams (each led by a different attending physician) were observed; one team was observed on four separate occasions, four teams on three occasions each, and one team on two occasions, for a total of 18 observed sessions. On average, 9.3 team members were present for each observed session (range 8–11, mode 9). Either an attending physician or senior paediatric resident was always present; both were present for 14 of the 18 (77.8%) observations. One or two (mean 1.1) first-year paediatric residents, one or two (mean 1.6) first-year family medicine residents, and four to six (mean 4.9) medical students were present for each observed round.

General characteristics of rounds

The average length of rounds was 106 minutes (range 61–158, SD 24 minutes). Round duration was positively correlated with the total number of patients discussed (rs = 0.48, P < 0.05), and a mean of 22.1 patients (range 18–26, SD 2.2) were discussed per session. On average, 3.4 (range 0–6, SD 1.5) of these patients were new admissions. There were no significant differences between teams with respect to round length, number of patients discussed or number of new admissions, so all further analyses were conducted on the dataset as a whole.

Allocation of time

Significantly more time was spent discussing new admissions than patients already known to the team (14.9 minutes versus 2.5 minutes per patient; P < 0.0001). In the majority of sessions (65%), all new admissions were discussed first, followed by patients previously known to the team. In the remaining sessions, patients were discussed in the order in which they appeared on the ward list. The order of discussion did not appear to be planned around specific teaching points or particularly interesting cases. However, there was a significant effect of order of discussion on the amount of time devoted to each patient. On average, the greatest amount of time was spent on the first patient discussed, and discussion time decreased with each subsequent patient until it reached a plateau at the sixth patient (Fig. 1). Nearly half the patients (49.8%) were discussed for < 2 minutes each.

Figure 1.

 Mean duration of discussion of each patient in relation to order of presentation, averaged over all 18 observations

Roles of team members during rounds

In almost all cases, the most talkative team member was the attending physician. The second most talkative individual was, in nine cases, the team member (generally a family medicine resident or medical student) who had been on call the previous night and was therefore most familiar with the newly admitted patients. Averaged over observations, the attending physician spoke for 35.8% of each session, the senior resident for 14.1% and each junior resident for 9.6%. Each family medicine resident spoke for 6.7% of a session and each medical student for 3.9%; however, one or two particularly talkative individuals positively skewed these averages (Fig. 2).

Figure 2.

 Proportion of time each team member spent talking during rounds. Solid bars represent data from individual observation sessions and shaded bars represent data averaged over all observations for each type of team member. Note that the composition of the teams varied across the sessions observed. Sr = senior paediatric resident; Jr = junior paediatric resident; FMR = family medicine resident; S = medical student

Patterns of utterances

When medical students, junior paediatric residents and family medicine residents spoke, they most commonly presented patient data and answered questions. The majority of questions (54%) were posed by the attending physician and related to factual details of the case under discussion. Only 18.4% of questions required analysis of data or the demonstration of deeper thinking processes. When attending physicians and senior residents spoke, they spent most of their speaking time delivering ‘lectures’ and directing patient care (Fig. 3). On average, there were 6.7 ‘lectures’ lasting > 30 seconds per round, 95% of which were conducted by either the attending physician or the senior resident. Very little time (30 seconds per round, on average) was devoted to the provision of feedback to students or residents, and the mean duration of instances of feedback was only 5 seconds.

Figure 3.

 Comparison of amounts of time spent on different types of utterance during observed sessions. The relative contributions of each type of team member are represented by different patterns within each column

Patterns of group interaction

Visual analysis of the interaction maps revealed three distinct patterns of interaction (Fig. 4). In the ‘attending-dominated’ pattern, the majority of interactions were filtered through the attending physician, with few direct interactions between other team members. During ‘shared-dominance’ rounds, most interactions were filtered through one or two individuals (usually the attending and a paediatric resident), with many interactions between these individuals, and few direct interactions between other team members. During ‘widely interactive’ rounds, interactions occurred between many different team members. The most common pattern observed was ‘shared dominance’, seen in 10 of the 18 sessions. Five of 18 were ‘attending-dominated’ and three were classified as ‘widely interactive’. The Freeman normalised graph centrality index, a statistical measure of the degree of inequality of interactions within a network,17 was highly correlated with subjective assessments of interaction patterns (r = 0.76, P < 0.001). The mean graph centrality indices for ‘attending-dominated’, ‘shared dominance’ and ‘widely interactive’ rounds were 12.9% (SD 1.22), 10.45% (SD 2.08) and 5.85% (SD 2.61), respectively. Individual ward teams did not demonstrate consistent patterns of interaction over multiple observations, nor were consistent pattern changes seen over time.

Figure 4.

 Examples of interaction patterns noted during observations classified as: (a) attending physician-dominated; (b) of shared dominance, and (c) widely interactive. Each diagram represents an entire observed session. Arrows denote the direction of interaction (from a speaker to the individual who spoke next), and line thickness is proportional to the number of times each interaction occurred during the observed session. A = attending physician; F = family medicine resident; Sr = senior paediatric resident; Jr = junior paediatric resident; S = medical student

Student and resident perceptions

Of the 151 questionnaires distributed to residents and students, 126 were returned, giving an overall response rate of 83.4%. The presence of the observer was felt to be minimally disruptive (mean score 1.1/5). The majority of students and residents (83%) felt that patient care was one of the primary purposes of rounds. By contrast, only 23% rated education and 19% rated ward administration as primary functions of rounds. Trainees also felt that rounds were more useful for patient care (mean score 4.4/5, mode 4) than for either education (mean 3.6/5, mode 3) or ward administration (mean 3.6/5, mode 3) (P < 0.001). There were several correlations between trainee perceptions of rounds and observed interactions and behaviours. Rounds were rated more highly for patient care when fewer interpretive questions were asked (P < 0.05) and when less time was devoted to answering questions (P < 0.01). Rounds were rated as more useful for administration when more time was devoted to ‘directions’, less time to answering questions, and when the attending physician spoke less (all P < 0.05). Finally, rounds had higher perceived educational utility when more time was devoted to lectures (P < 0.02), when the attending physician spent more time speaking (P < 0.05), and when the interaction pattern was more centralised as measured by the Freeman centrality index (P < 0.01).


Discussion

In postgraduate and clinically based undergraduate medical education, the everyday workplace, in which teachers conduct their clinical, research and teaching activities, and where they interact with faculty staff, colleagues and students, is where learning most often takes place.2 In this study, we examined a common work-based learning experience in medical education, namely, team rounds on an in-patient unit. By applying SNA techniques to observational data gathered during rounds, we were able to generate visual representations and statistical descriptions of the patterns of interactions we observed. In combination with post-observation surveys of participants, we correlated observed features of rounds with housestaff perceptions of the utility of rounds for patient care, education and administration, respectively. We found that several features of this work-based learning activity could be improved to increase its educational value.

In work-based learning environments such as in-patient teaching units, learners are socialised into communities of practice and learn to apply their knowledge and skills in a context-specific setting.1,19 It has been proposed that the most effective work-based learning occurs when students are given opportunities to have their questions answered and to be involved in discussions to stimulate active elaboration and reflection.20 Our results suggest that patterns of interaction frequently observed during rounds may not be optimal for supporting learning. The most commonly occurring pattern was one of ‘shared dominance’, in which the attending physician and another individual (usually the senior resident) dominated the discussion. There were few direct interactions between other group members and often several individuals, usually students or more junior residents, were essentially silent observers of the process. This high level of involvement by the attending physician, and only peripheral involvement of many others, is consistent with findings in other studies of rounds in teaching hospitals.5,10,13,14 If students and residents are to gain the maximum benefit from work-based experience, rounds should be structured and facilitated to allow them to be as engaged in the process as possible.

The balance between learning and task completion is one that must always be considered in a work-based learning environment. In a busy clinical environment, time is a precious commodity and it is important to consider how it may best be allocated to optimise both educational and work-related functions. Our observations suggest that there may be simple strategies to optimise learning while still attending to the task at hand (patient care). We found that the order in which patients were presented for discussion played a major role in the content of rounds. The bulk of time was spent on those patients discussed early in the session, whereas those discussed nearer to the end received only cursory attention. In many cases, the order of discussion was determined by the order in which patients appeared on the print-out of the ward census. This meant that there were often long discussions of relatively simple cases, but more complex and interesting cases that came up later in the session were discussed only briefly as a result of time pressures. A simple strategy of planning the order of discussion so that specific patients with interesting features are covered early might improve the learning experience without taking time away from essential patient care activities.

A final recommendation from this study relates to helping teachers and trainees to re-conceptualise their notions of ‘teaching’ and ‘learning’ in the clinical environment. Based on information derived from previous studies of rounds and other small-group learning situations,11,21–23 we expected that trainees would rate highly interactive sessions and those with more interpretive questioning as most educational. However, we found that highly interactive sessions were actually rated as less educationally useful than those in which the attending physician spoke more and dominated the session. Furthermore, as other studies have also reported,5–7,13,14 teaching behaviours were overwhelmingly teacher-centred and fact-based. As products of a relatively traditional educational system and medical school curriculum, students (and faculty) may perceive ‘education’ as primarily a teacher-centred activity, and thus may not recognise the learning potential of interactive discussion. In order to improve the educational experience in this work-based setting, it would be constructive to shift perceptions of the role of clinical teacher by de-emphasising the activity of knowledge transmission and placing more emphasis on the structuring and supporting of clinical experience and reflection on clinical experience.

There are several important limitations to this study. Although data were collected from multiple teams on multiple occasions, our results reflect the practice and culture of a single institution only. We did not make enough observations of each team to determine whether individual teams had characteristic patterns of interaction, or to differentiate between teams in terms of how well they fulfilled the multiple roles of ward rounds. Furthermore, we assessed only student and resident perceptions of the utility of rounds and did not attempt to objectively measure how sessions functioned in achieving their multiple purposes. Finally, although attempts were made to ensure that the coding scheme and data collection instrument had reasonable inter-observer reliability, all observations were performed by the same individual, which makes it difficult to exclude an element of observer bias.

In summary, several concrete suggestions can be made for improving the educational value of in-patient rounds based on the results of this study. Rounds should be facilitated so that as many group members as possible are actively involved, while being sufficiently controlled to ensure that teaching points are clearly made. Whenever possible, the order of patient discussion should be pre-planned so that important clinical issues and specific teaching points are addressed earlier in the rounds, before time pressures take over. Finally, the differences and benefits of work-based learning may need to be made more explicit to students who are accustomed to more formal, classroom-based learning. This study may also serve as a starting point for several areas of further investigation. Student and teacher perceptions of the educational impact of patterns of interactions in small-group settings could be explored using qualitative methods. Examining educational outcomes in relation to observed patterns of interaction would be useful in determining whether there are ‘ideal’ patterns of interaction for learning in specific settings. Finally, providing teachers and group members with a visual picture of their group interactions may prove to be a powerful feedback tool and enable facilitators to modify patterns of interaction to better support learning.

Contributors:  JMW developed the study questions, designed the data collection instruments, collected and analysed the data, and drafted and edited the article for submission. YS provided guidance and advice regarding the study questions and methodology, assisted with interpretation of results, critically reviewed and revised multiple drafts of the paper and approved the final manuscript.

Acknowledgements:  the authors thank the members of the McGill Centre for Medical Education for their assistance in the development of the coding scheme and questionnaires and in the analysis of the interaction diagrams, and Dr Peter McLeod for his helpful comments on an earlier version of this article.

Funding:  none.

Conflicts of interest:  none.

Ethical approval:  this study was approved by the Institutional Review Board, McGill University, and the Research Ethics Board of Montreal Children’s Hospital, Montreal.