Thomas D. Fletcher is an Assistant Professor in the Department of Psychology at the University of Missouri, St. Louis, where he teaches courses in motivation theory, organizational psychology, multivariate statistics, and psychometric theory. His research interests involve the study of motivation broadly defined and research methods useful to I/O psychology. More information about Dr. Fletcher can be found at www.umsl.edu/~fletchert
Address: Department of Psychology, University of Missouri, St. Louis, One University Boulevard, St. Louis, MO 63121 USA
Debra A. Major is Professor of Industrial/Organizational Psychology at Old Dominion University. Her team effectiveness research interests include situation awareness, distributed teamwork, and multidisciplinary teams. She also researches career development issues, including barriers faced by women and minorities, developmental relationships at work, and work-family conflict.
Address: Old Dominion University, Psychology Department, Norfolk, VA 23529 USA
Based on McGrath and Hollingshead’s adaptation of media richness theory and a model of team performance, a laboratory study was designed to compare the effects of three communication modalities of increasing richness (i.e., audio only, shared workspace, face-to-face) on a complex psychomotor/intellective task. When teams worked face-to-face, they reported teamwork behaviors to a greater extent than when they worked via audio, and team members perceived their performance to be greater face-to-face than when using audio alone. The use of a shared workspace enhanced some aspects of perceived team processes, such that distributed teams reported teamwork behaviors to a greater extent than when using audio alone. Teams also committed fewer errors when using a shared workspace than when using audio alone. Practical implications and limitations are discussed.
Anyone who has spent long hours on the phone with a help desk, in some unknown location, attempting to rectify technical difficulties with information technology, can appreciate the difficulty in collaborating across distance. Often IT call center or help desk technicians will have a computer to manipulate in front of them as they direct the individual in need of assistance. However, the technician cannot directly see what the individual in need is doing. Mentoring a junior teammate on a technical matter can be difficult across distance because of a similar lack of monitoring. One cannot fully direct another individual in need of assistance if the mentor cannot see the mistakes that individual might be making. The first author witnessed precisely this phenomenon while on a consulting project with a large governmental research organization. The leader of a project team working in a distributed environment was unable to identify a team member’s mistakes and provide assistance. Because of the team’s interdependence, the entire team was affected. Had this team been trained in team processes (Salas, Bowers, & Edens, 2001) and been collocated, the problems that arose would have been significantly diminished.
Relying on the extant teamwork literature, this paper considered key team processes and how they may be affected by working at a distance. Based on media richness theory, differences among three basic modes of communication (i.e., face-to-face, audio, and shared workspace) available to distributed teams were compared in terms of their influences on team effectiveness. While operating face-to-face is often considered the most effective means, it is not always practical. Therefore, viable alternatives to operating face-to-face are sought. Combining teamwork theory and media richness theory, hypotheses regarding the impact of communication modalities on teamwork processes and team performance were developed and tested.
The challenge of teamwork at a distance
A great deal of conceptual, theoretical, and empirical research concerning the processes of high performing teams has emerged over the past decade (see Militello, Kyne, Klein, Getchell, & Thordsen, 1999; Paris, Salas, & Cannon-Bowers, 2000). The bulk of this research has focused on collocated teams. In the present effort to examine distributed teamwork, we utilized the Teamwork Components Model (TCM) developed by Dickinson and his colleagues (Dickinson et al., 1992; Dickinson & McIntyre, 1997; Rosenstein, 1994). This input-throughput-output model seems especially relevant to distributed teamwork given its emphasis on communication. In addition, the model holds that a team’s level of coordination is a function of the throughput variables: monitoring, feedback, and backup. These processes, together with communication, and their roles in distributed teamwork are each described in turn.
To compensate for individual deficiencies in team performance, constant vigilance is required of team members (Militello et al., 1999). Therefore, it is essential that members be not only individually competent in their own tasks but also proficient in understanding other team members’ responsibilities (Dickinson & McIntyre, 1997). The monitoring of others’ activities assumes that members are able to view and recognize the performance effectiveness of those monitored (Fleishman & Zaccaro, 1992; Militello et al., 1999). This becomes difficult when the members are geographically distributed.
Provided team members are able to engage in performance monitoring, it is expected that they should likewise be able to provide information about the status of other teammates’ functioning. Feedback refers to the giving, seeking, and receiving of performance-related information among the members of a team (Dickinson & McIntyre, 1997). Empirical support exists for the positive effect of feedback on team performance (Brehmer & Allard, 1991; Rasker, Post, & Schraagen, 2000).
In addition to providing feedback, team members must also be able to provide technical assistance when gaps and inefficiencies are noted (Dickinson & McIntyre, 1997; McIntyre & Salas, 1995). Likewise, team members must also be prepared to seek help when needed (McIntyre & Salas, 1995). Indeed, providing feedback and backup assistance to others depends on adequate monitoring and proficiency in the other team members’ tasks as well as a means to provide such assistance when distances are spanned.
Under the general rubric of nonverbal communication (NVC), some researchers have sought to understand the patterns of NVC in controlled settings (Bekker, Olson, & Olson, 1995; Reid et al., 1999). Others have used more qualitative techniques to understand the behaviors of collaborators in their natural environments (May & Carter, 2001; Olson & Teasley, 1996). As a result, researchers have identified relevant gestures of collaborators, the role of objects for sketching and communicating, and the problem of deictic speech (i.e., speech whose meaning depends on the context in which it is spoken). Each of these becomes a concern for enabling team process in distributed environments.
Physical gestures such as hand and arm movements are thought to increase the richness of information conveyed while communicating face-to-face. For example, discussants often point to objects they are talking about for emphasis and use big arm movements to convey information. Following a review of the NVC literature, Bekker et al. (1995) developed a taxonomy of gestures that may extend to design teams. The gestures included hand and arm movements of participants while interacting. They observed gestures for emphasis of speech and pointing to be the most frequent.
While verbal by definition, speech can often be situation specific and understood only in the context of physical gestures. The problem of deixis (i.e., pointing or specifying from the perspective of a participant in an act of speech) is considerable in distributed environments. Deictic speech involves words that cannot by themselves give the sentence meaning. For instance, saying “give him that over there” is meaningless without the context. To know what him, that, or there refers to, one must be present and see what is being pointed at. Barnard, May, and Salber (1996) demonstrated that the problem of deixis is considerable while videoconferencing.
Sketching, or the use of sparse drawings, facilitates communication in several ways (e.g., idea sharing, coordinating). Reid et al. (1999), in a study of undergraduate engineering design teams, noted the important functions of sketching for collaboration. The study also revealed that designers engage in two basic kinds of argument sequences. Highly interactive exchanges between two or more participants usually involved nonvisual communication. The second type of argument was one in which a single speaker held the floor for extended periods of time, often making use of visuals such as sketching, pointing, and gesturing. Gutwin and Greenberg (1998) discuss the role that visible objects play in communication. In consequential communication, the characteristic movements of an action communicate its character and content to others. Similarly, in feedthrough, information is conveyed by the feedback produced when objects are manipulated. Therefore, communication may take place through simple observation of the environment, without an idea being consciously expressed verbally or physically.
Media richness theory
Communication can occur by various means, each with varying degrees of richness (Daft & Lengel, 1984). At present, there are four basic communication modes utilized in the workplace: face-to-face meetings, audio or telephone exchanges, video-mediated conferences, and computer-mediated text transfers. Using media richness theory, McGrath and Hollingshead (1993) developed a grid of task and media fit to explain the moderating effect of task type on media richness and performance. Briefly, their model suggests that there is an optimal fit for the information richness required of a task and the media chosen to mediate that task. For example, text-based computer messaging is a “good fit” for generating ideas, but not for negotiating conflicts; likewise, video systems offer the optimal level of richness for judgment tasks but are insufficient for negotiating tasks and too rich for generating ideas. There has been some support for this model in recent years (Suh, 1999). One technology not originally incorporated in the grid, but certainly a relevant medium for many tasks, is the shared workspace. Although the term shared workspace in its most general sense refers to the total environment shared by workers (i.e., communication systems, desk space objects, etc.), the term is most often reserved for the shared object of work (e.g., a computer file or application, a model). In the present context, the shared workspace could include networked computers such that dispersed individuals could each manipulate a common file.
Face to face
The medium conveying the most information is face-to-face encounters. Although this may be the preferred method of communication for many tasks (i.e., negotiation, initial meetings), it is not always practical. For instance, by default, face-to-face encounters must occur synchronously and at the same location. This proves quite difficult for two individuals operating in different time zones across different continents (Armstrong & Cole, 1995). Barring face-to-face interchanges, the telephone offers a reliable and ubiquitous alternative.
High-quality interchanges are available via the telephone without additional equipment. High quality is imperative with audio transmissions, especially if the audio is not complemented with other media (e.g., video, shared workspace). Many studies of geographically dispersed collaboration have demonstrated the phone to be the preferred mode of communication (e.g., May & Carter, 2001). As such, users encountering barriers with other media will often resort to the telephone to clarify exchanges (Olson & Teasley, 1996).
In general, team process and performance will be affected by communication modality for teams performing a similar task. In particular, team members may not be able to monitor the behavior or performance of noncollocated team members. Monitoring is a prerequisite for other team behaviors in many instances. For example, providing feedback and/or backup assistance to other members often requires first observing (i.e., monitoring) the deficiencies. Face-to-face teamwork and performance are compared to the use of audio only in the present study as a baseline assessment. In distributed contexts, by definition, face-to-face encounters are not possible. It is hypothesized that there is a noticeable decrement in teamwork behaviors when the team members are geographically distributed.
H1: When working face-to-face, teams will exhibit teamwork behaviors (e.g., mutual performance monitoring) to a greater extent than when they are working via audio.
H2: When working face-to-face, teams will perform better (e.g., produce fewer errors) than when they are communicating via audio.
One potential alternative for team communication is video interchanges. However, there are a number of issues yet to be resolved with video-mediated communication before it is considered a viable option in enhancing teamwork. Poor bandwidth in sharing data across networks (Angiolillo, Blanchard, Israelski, & Mané, 1997), poor representation of reality within a two-dimensional space (Benford, Brown, Reynard, & Greenhalgh, 1996), and problems associated with deixis (Barnard et al., 1996), among others, have been noted as problems for the use of video in assisting distributed collaborators.
The shared workspace is an often-overlooked medium available to collaborators. Sharing the object of work, such as simultaneously working on a computer file (e.g., a budget spreadsheet, a new product design), can address the issues noted above with respect to verbal and nonverbal communication when team members cannot be collocated. Computer whiteboards are used by collaborators as sketchpads to enhance communication of ideas (Whittaker, Geelhoed, & Robinson, 1993). Engineers rely on CAD models in the design of their products (Mills, 1998). People sharing these CAD systems may be viewing the same monitor (i.e., collocated) or may be virtually connected (i.e., geographically distributed). By providing additional visual cues (e.g., sketching, a shared image), shared workspaces have increased the satisfaction of users and enhanced their communication. May and Carter (2001) found that engineers were able to substantially reduce the time a team took to get a product to market by collaborating with shared images (i.e., CAD and whiteboard images). Whittaker et al. (1993) reported that while performance did not improve significantly in all tasks, users preferred the shared image to collaborating via audio alone.
A shared workspace application could aid in the nonverbal exchanges shown to be important in research on collaboration. For instance, the two most common visual gestures are pointing and emphasizing speech (Bekker et al., 1995). Both of these functions could be, and have been, built into shared object applications. A well-designed application will also provide feedthrough and consequential communication. A collaborator “clicking on a button” communicates that the action has been completed. This can facilitate both communication and monitoring processes. By actually seeing the pointing mechanisms in the shared object (e.g., a mouse pointer in a shared spreadsheet), the problems associated with deixis may be minimized. Because geographically distributed team members utilizing a shared workspace can visually “see” the work being manipulated, teamwork and performance may be improved. Even though a lack of collocation may initially hamper teamwork, increasing the media richness via a shared workspace should improve teamwork by enabling team members to monitor each other’s performance and therefore provide feedback and backup when necessary.
H3: When using a shared workspace application in addition to audio, teams will exhibit teamwork behaviors (e.g., mutual performance monitoring) to a greater extent than when they are working via audio.
H4: Teams will perform better (e.g., produce fewer errors) when they are using a shared workspace application in addition to audio than when they are using audio only.
Eighteen dyads (36 individuals) from the undergraduate psychology participant pool at a mid-Atlantic university in the United States participated in the study. The participants in each dyad were the same gender to avoid potential cross-gender communication effects and thereby control one source of unwanted variance; three teams were male, and the remaining 15 were female.
A communication modality × team factorial design was used. Communication modality included face-to-face interaction, audio only, and audio plus a shared workspace application (i.e., the shared program component of Microsoft NetMeeting®) among the members. The order in which the teams used each communication modality was counterbalanced to control for carryover, order, and practice effects. In all, there were six possible order combinations given the three levels.
In the face-to-face condition two participants worked at the same station. The audio only condition consisted of the members separated by a room divider and working on different workstations. Sound was not diminished beyond that which might be expected from a speakerphone. The spreadsheet was completed only on the computer marked Member A. Member B had the data set on a computer, but did not use it. Participants were told only computer A would be assessed for performance. Finally, the members in the audio plus a shared workspace application condition were distributed exactly as in the audio only condition with the exception that they shared the work object via NetMeeting®. This allowed both members to view the spreadsheet as it was being manipulated and both members had access to it for formula entry. The shared video component was not utilized.
The experimental task required two participants to work together using a set of directions to develop a spreadsheet. Each participant was given half of the requisite instructions. The task could not be performed individually; it was highly interdependent. The members were asked to perform various calculations (e.g., computing the volume of an object given various dimensions and formulae). The dimensions and calculations were based on randomly generated data provided in the spreadsheet. The instructions consisted of 100 formulae; there were 13 distinct formulae, randomly distributed throughout. An example of such an instruction to be entered into row 1 using data from row 1 is:
Calculate the volume of a cylinder using the radius and height in columns A & C, respectively. The formula is πr²h, where π is a constant. Enter the formula into column L using the following syntax: pi()*[radius]^2*[height]
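As a check on the instruction above, the same cylinder-volume calculation can be expressed directly; the radius and height values below are illustrative assumptions, not values from the study materials.

```python
import math

# Cylinder volume, pi * r^2 * h, mirroring the spreadsheet syntax
# pi()*[radius]^2*[height]. The radius (column A) and height (column C)
# values here are made up for illustration.
radius, height = 2.0, 5.0
volume = math.pi * radius ** 2 * height
```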
The task involved complex entries, which could be characterized as a psychomotor task, but the task also had a high cognitive component, which could be characterized as an intellective task (McGrath, 1984). Therefore, the task type did not fit neatly into McGrath’s (1984) team task typology. As with all real-world situations, teams do not always perform tasks that fit neatly into categories.
Following a training period, participants performed the task for 15 minutes in each of the three conditions (i.e., communication modalities). Each condition was followed by a battery of questionnaires (see below). Because the same 13 distinct formulae were randomly distributed throughout the instructions, practice effects did not depend on where in the list a team started. Pilot work indicated that 15 minutes was an adequate amount of time to observe teamwork behaviors as well as allow the individual participants to become aware of their own team processes.
Team member A was given the odd-numbered rules to be used throughout the task conditions; team member B was given the even-numbered rules. This created distributed expertise between the two members (i.e., each had disparate information related to performing the task). Hollenbeck et al. (1995) manipulated information redundancy in a similar fashion for decision-making tasks.
All teams were given the same level of training. The participants were trained at the same computer working face-to-face with the experimenter. The experimenter described some basic concepts related to spreadsheet applications. The experimenter then performed an example calculation. Member A then performed another example, followed by member B performing a third. When it appeared that each could correctly perform the simple task of entering the problem, the experimenter described some strategy concerns. Pilot work indicated that participants in the distributed conditions were more likely to try to develop an individual-oriented strategy (e.g., divvy up the tasks), which, given the task interdependence, prevented these groups from completing the task; therefore, it was necessary to control the strategy used across conditions. The expected strategy (i.e., when feasible) was for one member to read or tell the other what to type and vice versa. This strategy was practiced for about 5 minutes, until the participants felt comfortable with the task and the experimenter agreed they were ready. Prior to each session (i.e., the 15-minute task for each condition), the experimenter described the effective strategy to use (e.g., one participant reads, the other types) and ensured participant understanding.
Team-level performance was assessed objectively by the degree of accuracy for each condition. The error rate was computed as the ratio of uncorrected errors to the total number of entries; a higher error rate indicates poorer performance. In addition to an objective measure of performance, a self-report measure (Rosenstein, 1994) was given to determine the members’ perceptions of performance. Rosenstein (1994) reported a reliability of .85 for the measure. An example item is: “Team members meet or exceed expectations of the team.” Evidence of construct validity for the performance scale was demonstrated. Coefficient alpha for the current study was .89.
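The objective performance index reduces to a simple ratio; the function name and the example counts below are illustrative, not taken from the study data.

```python
def error_rate(uncorrected_errors, total_entries):
    """Team error rate: uncorrected errors per entry made.

    A higher value indicates poorer performance; errors a team
    corrected before the session ended do not count against it.
    """
    return uncorrected_errors / total_entries

# e.g., a team leaving 4 uncorrected errors across 50 entries
rate = error_rate(4, 50)
```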
Self-report measures of team processes (i.e., communication, monitoring, feedback, and backup) developed by Rosenstein (1994) were given to the team members following completion of the task in each of the three modalities. Rosenstein demonstrated evidence for construct validity of the scales and reported internal consistency reliabilities of .91, .73, .81, and .83, respectively. The measures provided a definition of the construct (e.g., communication) and asked each team member to rate, on a scale of 1 (almost never) to 5 (almost always), how often team members engaged in each behavior. An example of a communication item is: “Team members acknowledge and repeat messages to ensure understanding.” An example of the monitoring scale is: “Team members recognize when a team member makes a mistake.” An example of the feedback scale is: “Team members use information provided by other members to improve behavior.” Finally, an example item from the backup behavior scale is: “Team members help another member correct a mistake.” Coefficient alphas for the current study are reported in Table 1. There is ample evidence to suggest that team member ratings can be useful in research on team processes, especially when the ratings are aggregated into a single team measure (see Brannick, Salas, & Prince, 1997).
Table 1. Means, standard deviations, and correlations of dyad level study variables collapsed across conditions
Notes: N= 54. Error rate is the ratio of the number of uncorrected errors to total entries made. Means are across all conditions. Coefficient Alpha is presented on the diagonal. *p < .05.
Aggregation of the individual-level data (i.e., perceptual measures) to the dyad level was justified by two statistics: rwg(j) and ICC(1). Within-group agreement (rwg(j)) was assessed using the method proposed by James, Demaree, and Wolf (1984), with 2.0 as the expected random variance. Essentially, rwg is 1 minus the ratio of the observed variance in scores to the variance expected if all responses were random rather than in agreement (i.e., a uniform distribution of responses, with an equal number of 1s, 2s, 3s, 4s, and 5s from a 5-point response scale). Values nearer to 1.0 reflect agreement, whereas values nearer to zero reflect lack of agreement. rwg(j) is the rwg equivalent for scales with j essentially parallel items. The mean rwg(j) for each construct ranged from .55 to .94. The mean value of all rwg(j) statistics is .76. The ICC(1) is an omnibus test of perceptual agreement based on a one-way ANOVA with group membership serving as the independent variable (James, 1982). The mean ICC(1) across constructs is .38, indicating that 38% of the total variance can be attributed to group membership. Together, these two statistics suggest that overall the team members were in agreement and assessed team-level variables (i.e., the perceptual measures of communication, monitoring, feedback, backup, and performance) in a consistent manner.
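These aggregation indices can be illustrated with a short sketch. This is a generic implementation of the rwg(j) formula (James, Demaree, & Wolf, 1984) and an ANOVA-based ICC(1), written here for illustration; it is not the authors’ analysis code, and the function and argument names are assumptions.

```python
import numpy as np

def rwg_j(item_scores, n_categories=5):
    """Within-group agreement index rwg(j) (James, Demaree, & Wolf, 1984).

    item_scores: 2-D array, shape (n_raters, n_items), one group's ratings.
    The expected random variance for a uniform null distribution is
    (A^2 - 1) / 12, which equals 2.0 for a 5-point scale.
    """
    sigma_e = (n_categories ** 2 - 1) / 12.0      # = 2.0 when A = 5
    mean_obs_var = np.mean(np.var(item_scores, axis=0, ddof=1))
    j = item_scores.shape[1]
    ratio = mean_obs_var / sigma_e
    return (j * (1 - ratio)) / (j * (1 - ratio) + ratio)

def icc1(groups):
    """ICC(1) from a one-way ANOVA with group membership as the factor.

    groups: list of 1-D arrays, one per group (e.g., each dyad's scores).
    """
    k = np.mean([len(g) for g in groups])          # group size (2 for dyads)
    grand = np.mean(np.concatenate(groups))
    ms_between = k * sum((np.mean(g) - grand) ** 2 for g in groups) / (len(groups) - 1)
    ms_within = sum(np.sum((g - np.mean(g)) ** 2) for g in groups) / sum(len(g) - 1 for g in groups)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect agreement within groups yields rwg(j) = 1.0, and an ICC(1) of .38, as reported, means 38% of total variance lies between groups.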
Means, standard deviations, and correlations are presented in Table 1 for the aggregated data. Normality assumptions for each of the constructs were met. The intercorrelations among the variables ranged from .29 to .85 for the teamwork measures. Error rate was statistically unrelated to all subjective ratings except performance. Internal consistency estimates are presented along the diagonal of Table 1 and range from .87 to .93.
To circumvent the assumption of sphericity, a multivariate approach to repeated measures was taken. The observation of the dependent variable for each level of the within-subjects factor (modality) was treated as three separate dependent variables (e.g., the observation of monitoring in the face-to-face, shared workspace application, and audio-only conditions, respectively). The dependent variables were then contrasted using a multivariate test (i.e., MANOVA). Several such dependent variables were repeatedly measured, creating a doubly multivariate design (Tabachnick & Fidell, 2001). This was done once for the teamwork behaviors and then again for the performance measures (i.e., subjective rating and error rate).
Means and standard deviations for each condition are presented in Table 2. The relationship between the self-report measures of team processes across communication modalities is depicted in Figure 1. Figures 2a and 2b show the relationship of the performance measures across communication modalities. It can be readily observed that participants rated the constructs (e.g., teamwork behaviors and performance) higher, and had lower error rates, in both the face-to-face and shared workspace application conditions than in the audio-only condition.
Table 2. Means and standard deviations for each condition
Notes: N= 18. Each team encountered all three conditions. The order in which the teams encountered the condition was counterbalanced. The standardized difference between Audio-only and other cell means is denoted by d; Values above |.2| are considered small, above |.5| are considered moderate, and above |.8| are considered large by convention (Cohen, 1988).
(*) Indicates mean is significantly different from Audio only p < .05.
A doubly multivariate analysis of variance was performed on each of the teamwork measures. The within-subjects independent variable treated multivariately was communication modality. Simple contrasts were planned for communication modality for each of the dependent variables to specifically compare the face-to-face and shared workspace application conditions with that of audio only.
Communication modality had a significant effect on teamwork, F (8,10) = 5.40, p= .01, η2= .81. Means and standard deviations are presented in Table 2 for comparisons. In addition, Cohen’s effect size (Cohen, 1988) for standardized differences is also presented. By convention, effect sizes of 0.2 are deemed small, 0.5 are moderate, and 0.8 are large. Simple contrast comparisons indicated that participants rated monitoring higher in both the face-to-face (F [1,17] = 25.32, p= .00, η2= .60) and shared workspace (F [1,17] = 41.31, p= .00, η2= .71) conditions in comparison to using audio only. Participants also rated backup higher in both the face-to-face (F [1,17] = 8.22, p= .01, η2= .33) and shared workspace (F [1,17] = 6.26, p= .02, η2= .27) conditions in comparison to using audio only. Participants rated feedback significantly higher in the shared workspace condition than in the audio-only condition, F [1,17] = 7.00, p= .02, η2= .29. No difference was noted between the face-to-face and audio conditions for feedback, F [1,17] = 2.23, p= .15, η2= .12. There was no difference found for either of the contrasts for communication, ps > .10.
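For readers unfamiliar with the analytic details, each simple contrast with (1, 17) df is equivalent to a paired t-test on the two conditions, with F = t². The sketch below illustrates that relation and a Cohen's d computation; the pooled-SD denominator for d is an assumption chosen for illustration, since the text does not state which convention was used.

```python
import numpy as np

def simple_contrast(cond_a, cond_b):
    """Within-teams simple contrast between two conditions.

    With one within-subjects factor, the contrast reduces to a paired
    t-test; the F statistic with (1, n - 1) df equals t squared.
    cond_a, cond_b: paired 1-D arrays of team-level scores.
    """
    diff = np.asarray(cond_a, dtype=float) - np.asarray(cond_b, dtype=float)
    n = len(diff)
    t = np.mean(diff) / (np.std(diff, ddof=1) / np.sqrt(n))
    return t ** 2, (1, n - 1)                     # F and its df

def cohens_d(cond_a, cond_b):
    """Standardized mean difference (Cohen, 1988), pooled-SD convention.

    With equal n per condition, the pooled SD is the root mean of the
    two sample variances; this denominator is an illustrative assumption.
    """
    va = np.var(cond_a, ddof=1)
    vb = np.var(cond_b, ddof=1)
    pooled_sd = np.sqrt((va + vb) / 2.0)
    return (np.mean(cond_a) - np.mean(cond_b)) / pooled_sd
```

With the study's 18 teams, `simple_contrast` would return df = (1, 17), matching the contrasts reported above.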
H1 predicted a significant difference between participants’ perceptions of teamwork in the face-to-face and audio-only conditions. In particular, teamwork would be perceived to be greater in the face-to-face condition. The above-mentioned analyses provide support for this hypothesis. While not all contrasts were significant (p < .05), all d values were positive (range = .1 to 1.6). The teamwork behaviors monitoring and backup were perceived to be greater when operating face-to-face, p < .05. However, for communication and feedback, the modalities were not significantly different from each other. Collectively, these results lend support for Hypothesis 1 in that team members experience greater teamwork when collocated rather than geographically dispersed.
H3 predicted that teamwork would be perceived to be greater in the shared workspace condition than in the audio-only condition. The above analyses provide support in that some aspects of teamwork (i.e., monitoring, feedback, and backup) were reported to a greater extent when working with a shared workspace application than when using audio only. In addition, all d values were positive (range = .2 to 1.9) indicating that the teams experienced greater teamwork in the shared workspace application condition than in the audio-only condition.
A similar data-analytic approach as that described above was taken with the performance measures (i.e., subjective performance rating and error rate). A doubly multivariate analysis of variance was performed on these dependent variables, followed by simple contrasts. Communication modality had a significant effect on performance, F [4,14] = 6.35, p= .00, η2= .65. Means, standard deviations, and standardized differences (d) are presented in Table 2 for comparisons.
Simple contrasts indicated that participants perceived their performance in the face-to-face condition to be greater than in the audio-only condition (F [1,17] = 5.75, p= .03, η2= .25). No statistically significant difference in uncorrected errors was found between the face-to-face and audio-only conditions (p > .10); however, the standardized difference was deemed moderate by convention (d=−.54). Perceived performance was not statistically significantly different in the shared workspace application and audio-only conditions (p > .10); however, the standardized difference is nonzero and deemed small by convention (d= .26). Participants left more uncorrected errors in the audio-only condition than in the shared workspace condition, F [1,17] = 9.02, p= .00, η2= .35. The standardized difference between the shared workspace and audio-only conditions is large (d=−1.01).
H2 predicted that participants would perform better in the face-to-face modality than in the audio only. The data, as described in the preceding paragraph, indicate that participants perceived that they performed better in the face-to-face condition than in the audio-only condition. With respect to the objective measure of performance, participants did not significantly differ in the number of uncorrected errors (i.e., error rate). However, the difference cannot be dismissed given the moderate standardized difference. H2 receives some support.
H4 predicted that participants would perform significantly better using the shared workspace application than when using audio alone. This hypothesis is supported in that teams produced fewer errors when using a shared workspace as opposed to using audio only.
Using the McGrath and Hollingshead (1993) adaptation of media richness theory and a model of team performance for collocated teams (i.e., the team components model; Dickinson & McIntyre, 1997), this research demonstrated that a specific technology (i.e., a shared workspace application) can be used to facilitate teamwork and improve performance for noncollocated teams. The results largely supported the study's hypotheses. Teams suffer diminished performance and perceived teamwork when collaborating via audio only (a medium low in richness) on a psychomotor/intellective task. Given that face-to-face work is not possible in all situations in today's economy, alternatives are sought. For distributed collaborators, a shared workspace application offers an improvement over audio alone in both teamwork (i.e., monitoring, feedback, and backup) and performance (i.e., reduced errors).
Three core teamwork behaviors (i.e., monitoring, feedback, and backup) were improved by using a shared workspace application. That is, team members rated themselves higher in teamwork when using the shared workspace application compared to using audio alone. However, participants rated communication highly in all three conditions. One plausible explanation for this is that the items in the communication scale reflect verbal communication between individuals; verbal communication would not be affected by modality. In fact, verbal communication is precisely the form of communication used in the audio-only condition. Communication modality would most likely affect nonverbal communication (e.g., pointing, gesturing).
Besides improving teamwork, the current study demonstrated that using a shared workspace application in addition to audio also improves performance by reducing the number of uncorrected errors. In addition, study participants rated their performance in the face-to-face condition better than in the audio-only condition. Participants' ratings of their performance were similar in the face-to-face and shared workspace application conditions, suggesting that participants believed they performed better when working face-to-face, and nearly as well when using a shared workspace application, as compared with working via audio only. Improvements in objective performance for geographically distributed collaborators are of obvious importance. However, self-perceptions of performance are also important; perceptions of high performance can lead to efficacy spirals that ultimately improve not only performance but also motivation.
This experiment has demonstrated how improvement in the technology used to collaborate can improve teamwork and performance when members are not collocated or are at least separated visually. When distributed collaborators share the object of work, they are better able to monitor each other's performance and therefore are more likely to provide feedback and backup when needed. In addition, the use of the shared workspace application leads to better performance, as demonstrated by the reduction in uncorrected errors. Improved perceptions of performance, in turn, are likely to enhance user satisfaction and motivation. When geographically distributed project teams work on a task with psychomotor and intellective components, their performance is likely to be enhanced by using a shared workspace application in addition to communicating via audio. Examples include project teams developing a budget, product development teams working on a product design, or a squad of infantrymen canvassing a large geographic area (e.g., several city blocks). The potential list is extensive. With proper training and technological equipment, teams or collaborating dyads need not rely solely on a phone when geographically dispersed.
Limitations and future research
The present study was conducted in a lab for practical reasons and to control for extraneous sources of variance. While lab studies have many benefits (e.g., control over the task and the strategy used), such designs have numerous limitations. The present study identified the effects of three communication modes on one complex task type, a psychomotor/intellective task, while acknowledging that teams rarely perform only one task type in organizational settings; in reality, teams perform multiple tasks simultaneously. The nature of the task performed by a team largely determines the relevance of team processes. For instance, task demands moderate member interaction and overall team effectiveness: the greater the demands of the subtasks (i.e., the individual-level contributions), the greater the need for member interaction. Further, varied expertise (i.e., different amounts of knowledge) among team members may also moderate member interaction. In the present study, task type was held constant (i.e., a psychomotor/intellective task) and member expertise was manipulated by distributing different information to each member. Further research should determine whether the performance gains found in the present study generalize across other task types (e.g., decision making, creativity, interpersonal exchanges) and to field settings (e.g., when teams are engaged in multiple task types). In addition, future research should determine which task types, if any, might benefit from other modes of communication (e.g., video) and how those modes compare to a shared workspace application. The present study also suffered from low power in some of the analyses: the difference in error rate between the face-to-face and audio-only conditions was moderate but not statistically significant. Future research should seek to replicate the effect of communication modality on performance with larger samples.
Another limitation of this study is that the teams were in fact dyads. Additional research is needed to determine whether a shared workspace application would improve teamwork in larger teams. For example, would the process loss from increasing team size compound the additional cognitive load of using a shared workspace, or would a larger team further facilitate teamwork? This study also relied heavily on team members' self-perceptions of their teamwork; we do not know from this study whether the shared workspace objectively increased teamwork. However, Brannick, Roach, and Salas (1993) found team member self-ratings to converge with trained observers' ratings in a multitrait–multimethod study of team performance, indicating that the self-ratings in the current study may correlate with more objective measures of teamwork.
A final limitation that should be addressed is the control of strategy. To enable comparison of the effects of communication modality, the strategy used needed to be held constant across sessions. Pilot work indicated that participants tried to change strategy depending on the communication modality being used: for example, even though the task required mutual collaboration to combine distributed information, participants tried to work independently when working via audio only. Therefore, for the current experiment, the required strategy was reinforced prior to each session. Allowing the strategy to change may have altered the results of the study entirely. The independent strategy that participants tended to drift toward in the audio-only condition during early pilot work reflects observations by Olson and Teasley (1996) in their qualitative study of virtual teams: when teams encounter barriers, members reformulate their strategy toward a more independent distribution of the work (i.e., the team task becomes multiple individual ones, despite management's intentions).
The results of the current study are consistent with both media richness theory as moderated by task type and the teamwork components model of team performance. The study has demonstrated that perceived teamwork and performance can be improved by a specific technology. That is, by using a simple technological advancement (i.e., a shared workspace application), teams performing tasks with a psychomotor and intellective component can improve their monitoring, feedback, and backup, which have previously been demonstrated to improve performance. In doing so, the same teams are able to minimize the errors that they commit. The present findings have promising implications for distributed collaboration. For some teams, utilizing a shared workspace may make distributed collaboration more feasible and travel to a common location less necessary without sacrificing teamwork or performance.
About the Authors
Thomas D. Fletcher is an Assistant Professor in the Department of Psychology at the University of Missouri, St. Louis, where he teaches courses in motivation theory, organizational psychology, multivariate statistics, and psychometric theory. His research interests involve the study of motivation broadly defined and research methods useful to I/O psychology. More information about Dr. Fletcher can be found at www.umsl.edu/~fletchert
Address: Department of Psychology, University of Missouri, St. Louis, One University Boulevard, St. Louis, MO 63121 USA
Debra A. Major is Professor of Industrial/Organizational Psychology at Old Dominion University. Her team effectiveness research interests include situation awareness, distributed teamwork, and multidisciplinary teams. She also researches career development issues, including barriers faced by women and minorities, developmental relationships at work, and work-family conflict.
Address: Old Dominion University, Psychology Department, Norfolk, VA 23529 USA