Abstract


This study examines the chronemics of response latencies in asynchronous computer-mediated communication (CMC) by analyzing three datasets comprising a total of more than 150,000 responses: email responses created by corporate employees, responses created by university students in course discussion groups, and responses to questions posted in a public, commercial online information market. Mathematical analysis of response latencies reveals a normative pattern common to all three datasets: The response latencies yielded a power-law distribution, such that most of the responses (at least 70%) were created within the average response latency of the responders, while very few (at most 4%) of the responses were created after a period longer than 10 times the average response latency. These patterns persist across diverse user populations, contexts, technologies, and average response latencies. Moreover, it is shown that the same pattern appears in traditional, spoken communication and in other forms of online media such as online surveys. The implications of this uniformity are discussed, and three normative chronemic zones are identified.


Introduction


Conversations are rhythmic in nature, and the rhythms of conversation have long attracted the attention of diverse communication researchers (Brady, 1965; Cappella, 1979; Jaffe & Feldstein, 1970; Sacks, Schegloff, & Jefferson, 1978). An on-off pattern determines the rhythm of a conversation, and the silences that constitute that pattern have been investigated in depth under various names, including pause, gap, and silence (McLaughlin, 1984). Temporal patterns of spoken conversation have been researched by digitizing the vocal patterns of monologues and dialogues and measuring the lengths of conversational categories such as vocalizations, pauses, switching pauses, and the length of time a speaker holds the floor.

An interesting mathematical generalization emerging from these analyses is that when the lengths of each of these classification categories are plotted on a semi-logarithmic graph, the points tend to fall along a straight line, a phenomenon that has “generally been found to be exponential” (Jaffe & Feldstein, 1970, p. 25). This exponential relationship is a manifestation of skewed distributions in which the majority of the durations are relatively brief, and only a minority are of average or above-average length. When the results of Jaffe and Feldstein (1970) are extracted from the original graphs and re-plotted using contemporary computerized statistical tools, however, the power law distribution seems to yield an even better fit than the exponential distribution, as will be demonstrated in this article.
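The distinction between the two models can be illustrated with a short sketch (the binned frequencies below are synthetic, not Jaffe and Feldstein’s data): an exponential distribution is linear on a semi-logarithmic plot, while a power law is linear on a log-log plot, so the R² of a linear regression in each coordinate system indicates which model fits better.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - np.sum(residuals ** 2) / np.sum((y - np.mean(y)) ** 2)

# Synthetic binned durations (x) with frequencies (y) that follow a power law.
x = np.arange(1.0, 11.0)
y = 1000.0 * x ** -2.0

# Exponential model: log(y) linear in x (a straight line on a semi-log graph).
r2_exponential = r_squared(x, np.log(y))
# Power-law model: log(y) linear in log(x) (a straight line on a log-log graph).
r2_power_law = r_squared(np.log(x), np.log(y))
```

On data generated by a power law, the log-log regression fits perfectly while the semi-log regression does not; on real, noisy latency data the comparison is of course less clear-cut.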

Inspired by the classic research on conversational rhythms, turns, and pauses in traditional conversation, our research sought to apply the same tools to the analysis of pauses and response latencies in CMC. Our work aims to demonstrate that despite the apparent differences between traditional conversation and CMC, some significant chronemic aspects of both types of communication are shared. Through this comparison, insights are obtained about some key aspects of asynchronous CMC, such as turn-taking, chronemic non-verbal cues, and the nature of interactivity and responsiveness.

Background


Turn-taking and conversational gap minimization

Sacks, et al. (1978) suggested a model to explain the relatively rapid turn-taking transitions, as well as many other aspects of turn-taking, in naturally-occurring spoken conversation, emphasizing that “the presence of ‘turns’ suggests an economy, with turns for something being valued…” (p. 7). The valued “good” in this economy might be attention, time, “floor time,” and so forth. The model of Sacks, et al. (1978) is based on a set of rules:

providing for the allocation of the next turn to one party, and coordinating transfer so as to minimize gap and overlap. For any turn:

  1. At initial turn-constructional unit’s initial transition-relevance place:
     (a) If the turn-so-far is so constructed as to involve the use of a ‘current speaker selects next’ technique, then the party so selected has rights, and is obliged, to take next turn to speak, and no others have such rights or obligations, transfer occurring at that place.
     (b) If the turn-so-far is so constructed as not to involve the use of a ‘current speaker selects next’ technique, self-selection for next speakership may, but need not, be instituted, with first starter acquiring rights to a turn, transfer occurring at that place.
     (c) If the turn-so-far is so constructed as not to involve the use of a ‘current speaker selects next’ technique, then current speaker may, but need not, continue, unless another self selects.
  2. If, at initial turn-constructional unit’s initial transition-relevance place, neither 1(a) nor 1(b) has operated, and, following the provision of 1(c), current speaker has continued, then the Rule-set (a)-(c) reapplies at next transition-relevance place, until transfer is effected. (Sacks, et al., 1978, p. 13)

These rules, which explain how gaps between speakers and overlap of speech are minimized, form the basis of our understanding of this key attribute of human conversation.

A fundamental element of naturally-occurring conversation is the presence of conversational gaps, explained by Sacks, et al. to reflect the optional nature of turn-taking under rules 1(b) and 1(c). These gaps are classified as (1) hesitation pauses, (2) switching pauses, and (3) initiative time latencies (McLaughlin, 1984). A hesitation pause is a within-turn pause by the speaker, while a switching pause is a pause at which the floor switches from one speaker to another. Initiative time latencies describe a gap in the speech of an individual speaker who realizes that, despite her or his expectation of a response, the other side remains silent. McLaughlin and Cody (1982) define a conversational lapse as “an extended silence (3 seconds or more) at a transition-relevance place, in a dyadic encounter the focus of which is conversation.” They go on to define some obvious exceptions to this definition, and to explain the three seconds of silence as an “awkwardness limen” (p. 331). Recent neurological work corroborates this finding, identifying the range of two to three seconds as a temporal integration time range that is a general principle of the neurocognitive machinery (Poppel, 2004; Vollrath, Kazenwadel, & Kruger, 1992).

Nonverbal communication: traditional and computer mediated

Nonverbal communication (NVC) is a key channel in traditional human communication (Burgoon, Buller, & Woodall, 1996b). Some of the nonverbal codes that have been identified as involved in NVC include kinesics, physical appearance, vocalics, haptics, proxemics, and chronemics (Guerrero, DeVito, & Hecht, 1999). Research on codes such as proxemics and chronemics reveals that cultural and social norms guide our nonverbal behavior, as well as our expectations about the behaviors of others. For example, Burgoon, et al. (1996b) define three proxemic distance ranges: a narrow intimate zone (0-12 inches), a personal and social zone (1-7 feet) that is the “normal contact” zone, and a public zone (more than 7 feet) that is used for more formal encounters (p. 92). These ranges describe normative behavior in a specific culture, and these norms also form the basis for people’s expectations of those with whom they communicate. The Expectancy Violations Theory (Burgoon & Walther, 1990) investigates the way people react to the violation of these expectations. The strong reactions engendered by seemingly small violations of this normative behavior emphasize the importance of defining the ranges of normative NVC.

Little research on nonverbal cues in CMC has been published to date (Walther, Loh, & Granka, 2005). Research on nonverbal chronemic cues in CMC was performed by Walther and Tidwell (1995), showing a significant interaction between time of day and response latency on the socioemotional orientation and task-orientation of email messages. The independent variables in that study included a comparison of a response latency of a few minutes with a response latency of 24 hours. Bays (1998) studied temporality in Internet Relay Chat, concluding that in this synchronous CMC mode “participants need to respond promptly in order that their contributions retain conversational relevance. Long pauses and lengthy responses also cause general delays which are unacceptable” (n.p.). However, Bays’ study did not quantify response latencies.

Interactivity

The organization of language is a result of an interactive process among the participants in linguistic interaction. “Rather than simply producing language and other semiotic structure, participants in interaction are attributing complex cognitive and inferential practices to their coparticipants and taking these into account in the detailed organization of ongoing social action” (Goodwin, 2002, p. S34). Interactivity refers to the extent to which communication reflects back on itself, feeds on and responds to the past. Interactivity is the degree of mutuality and reciprocation present in a communication setting. The term interactivity is widely used to refer to the way content expresses contact and communication evolves into community. Moreover, interactivity is a major option in governing the relation between humans and computers (Rafaeli, 1984, 1988, 2004). Interactivity is an essential characteristic of effective online communication, and plays an important role in keeping message threads and their authors together. Interactive communication (online as well as in more traditional settings) is engaging, and loss of interactivity results in a breakdown of the communicative process.

Research into interaction in synchronous and asynchronous CMC modes (Bays, 1998; Greenfield & Subrahmanyam, 2003; Lapadat, 2002; O’Neill & Martin, 2003) has resulted in claims that text-only CMC is interactionally incoherent: disjointed, without clear turns, and in general chaotic. However, as noted by Herring (1999), text-only CMC is extremely popular, despite obstacles such as disrupted turn adjacency and lack of simultaneous feedback. Online interaction is highly desired and can be addictive (Caplan, 2003; Morahan-Martin & Schumacher, 2000), despite its apparent incoherence. This led Herring (1999) to claim that the unique attributes of CMC are actually leveraged by users to intensify interactivity and extend the limits of traditional, spoken conversation. Walther (1996) has also argued that these apparent restrictions of CMC can allow users new and effective methods of communication not available in traditional modes.

Online responsiveness

Responsiveness and interactivity are closely linked. Failure to respond or to take the floor creates a breakdown of interactivity. Online interactivity and responsiveness have been studied in various contexts: responsiveness and response latencies to customers who email an organization or post an online inquiry (e.g., Customer-Respect-Group, 2004; Hirsh, 2002; Mattila & Mount, 2003; Stellin, 2003; Strauss & Hill, 2001); responses to online surveys (e.g., Lewis, Thompson, Wuensch, Grossnickle, & Cope, 2004; Sheehan & McMillan, 1999); responsiveness to business correspondence (e.g., Abbott, et al., 2002; Pitkin & Burmeister, 2002; Tyler & Tang, 2003); and work on response latencies in discussions on Usenet (Jones, Ravid, & Rafaeli, 2004) and to questions posted to the “Google Answers” website (Edelman, 2004; Rafaeli, Raban, & Ravid, 2005). These reports reveal a recurring pattern that closely resembles the findings on conversational pauses described above: Most of the responses were created within relatively short latencies, and only a minority of the response latencies are of average duration or above.

This chronemic distribution (Walther & Tidwell, 1995) was described in detail in research carried out on the responsiveness profile of email sent by Enron employees (Kalman & Rafaeli, 2005). In that work, the “windfall” of the confiscation and release of massive data files from the Enron Corporation made it possible to extract detailed behavioral information without privacy and other ethical limitations. The observed pattern in the Enron corpus was a concentration of most responses within a relatively short period of time, and a spread of ever-increasing response latencies at a relatively low frequency. When the frequency of responses is plotted against response latency, the resultant distribution is highly skewed, with a stretched-out and rapidly diminishing thin right tail. The results, which are quantitatively more robust than any previous work on online responsiveness, clearly reveal the distribution patterns of pauses previously observed in traditional conversation as well as in other online communication.

The present study was triggered by this apparent similarity between traditional spoken conversation and online, persistent conversation. Persistent conversation is computer-mediated interaction in which humans converse with one another, and which is either logged automatically by the system or can be logged by the user to create a non-ephemeral record, in contrast to more ephemeral spoken conversation (Erickson, 1999; Erickson & Herring, 2005). In this study, we set out to generalize the initial findings from the Enron Corpus by analyzing response latencies in additional datasets, and by looking for the chronemic properties that are common to these online conversations, as well as for properties shared by persistent textual conversation and ephemeral spoken conversation.

Power law distributions

The power law distribution is described by the relationship y = ax^b; when plotted on a log-log graph, it yields a straight line with slope b. This distribution has been observed in many fields and in diverse phenomena (Axtell, 2001; Comellas & Gago, 2005; Gabaix, Gopikrishnan, Plerou, & Stanley, 2004; Keeling & Grenfell, 1999; Qian, Luscombe, & Gerstein, 2001; Reed, 2001; Zipf, 1949). Two famous power law distributions are the Pareto distribution and Zipf’s law, and the power law can also closely resemble the lognormal distribution. In naturally occurring systems, the exponent b often falls between −2 and −3 (Goh, Oh, Jeong, Kahng, & Kim, 2002). The similarities across phenomena that are so diverse in nature are a source of confusion, as well as of innovative modeling efforts to identify common underlying mechanisms that lead to power law distributions or to distributions similar to them (Adamic, 2005; Goldberg, Franklin, & Roth, 2005; Mitzenmacher, 2003).
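As a concrete sketch of how such an exponent can be estimated, the following synthetic example (the exponent of 2.5 is an arbitrary choice within the range cited above) draws samples from a Pareto-type power law by inverse-transform sampling and recovers the exponent with the maximum-likelihood estimator discussed by Newman (2005):

```python
import math
import random

random.seed(42)

# Inverse-transform sampling from p(x) proportional to x^(-alpha), x >= x_min:
# if u is uniform on (0, 1], then x_min * u^(-1/(alpha - 1)) follows the power law.
alpha_true = 2.5
x_min = 1.0
samples = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(100_000)]

# Maximum-likelihood estimate of the exponent (Newman, 2005):
# alpha_hat = 1 + n / sum(ln(x_i / x_min))
n = len(samples)
alpha_hat = 1.0 + n / sum(math.log(x / x_min) for x in samples)
```

With a large sample, the estimate recovers the true exponent closely; estimating exponents from binned empirical data, as in the analyses below, is noisier.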

The research question

This study explores whether persistent conversation shares fundamental properties with traditional, spoken conversation. Specifically, our research question is:

RQ: Are chronemic distribution patterns similar for turn-taking pauses in spoken and persistent conversation? If so, what are the common patterns?

Methods


Characterizing aggregate and individual response latencies

Three distinct datasets of asynchronous computer-mediated communication were analyzed. The first dataset, “Enron emails,” includes the response latencies of corporate email users. Data were extracted from the correspondence of Enron employees, as described in detail in Kalman and Rafaeli (2005). The dataset included email responses created between 1998 and 2002 by 144 employees of the Enron Corporation whose email records were confiscated as a part of an investigation by the Federal Energy Regulatory Commission. These records were published on the Internet (FERC, 2004; iConect, 2003). The researchers identified emails that were responses to other emails and that included a timestamp of the original email. The timestamp of the quoted email was subtracted from the timestamp of the response, yielding a response latency for that particular email. It is important to point out that this study, as well as those described below, looked only at the response latencies of messages that received a response, and did not examine messages that received no response at all. Each response was counted only once. Since the response latencies in this dataset are based on subtractions between timestamps created by two separate computers, some negative response latencies were also recorded. In the present analysis, 15,815 registered response latencies (positive and negative) were used, with the exception of seven outliers assumed to be measurement errors: two extremely high, and five extremely low and negative results.
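The latency computation itself is a straightforward subtraction of timestamps. A minimal sketch (the timestamps and their format are invented for illustration; the actual Enron records are not reproduced here) also shows how clock skew between two machines can produce the negative latencies mentioned above:

```python
from datetime import datetime

# Invented (quoted email, response) timestamp pairs.
pairs = [
    ("2001-03-05 09:12:00", "2001-03-05 10:40:00"),
    ("2001-03-05 11:00:00", "2001-03-08 08:15:00"),
    ("2001-03-06 14:00:00", "2001-03-06 13:59:30"),  # negative: clock skew
]

FMT = "%Y-%m-%d %H:%M:%S"
# Response latency = response timestamp minus quoted-email timestamp, in hours.
latencies_hours = [
    (datetime.strptime(response, FMT)
     - datetime.strptime(quoted, FMT)).total_seconds() / 3600
    for quoted, response in pairs
]
```

The third pair yields a small negative latency, the kind of measurement artifact that motivated retaining negative values while discarding only extreme outliers.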

The second dataset, “University forum,” is described in Ravid and Rafaeli (2004). In that study, the researchers investigated discussion groups formed by users of a learning management system of a university. The university offers around 400 courses (undergraduate and graduate); the courses are supported by an Internet site that includes a discussion forum that is used for discussions among students registered in a particular course, as well as between those students and the faculty. Participation in the forums was voluntary. Data collected by the researchers included discussion groups that were active from the winter semester of 1999 to the end of the summer semester of 2002, totaling eight regular semesters and four summer semesters. Response latencies were calculated for 8,830 active members, resulting in 115,416 response latencies.

The third dataset, “Google Answers,” is described in Rafaeli, et al. (2005). It contains 40,072 response latencies of answers to questions posted to Google Answers (http://answers.google.com). Google Answers is a commercial website where designated and certified responders provide paid answers to posted questions, with users paying according to a price bid they place with the question. The response latencies refer to questions posted during a period of 29 months (June 2002-October 2004). This period excluded the first two months of activity of Google Answers, as well as the last month and one week of activity before data collection. Since the average response latency is less than two hours, this truncation would have a negligible effect on a chronemic profile spanning a period of 29 months.

Each of these three aggregate response latency datasets was analyzed separately by the same methods used for identifying power law distributions (Newman, 2005) and response latencies in traditional spoken communication (e.g., Jaffe & Feldstein, 1970). The response latencies were grouped into bins and plotted on a log-log graph; regression analysis was performed for the power law distribution and a coefficient of determination (R²) was calculated. Various binning methods and truncation possibilities were tried to refine the presentation, alterations that did not materially affect the R² of the regression analysis. The bins presented here were of one day for the Enron dataset, 100 minutes for the University forum dataset, and one hour for the Google Answers dataset. The percentile rank of the average response latency, as well as that of ten times (10×) the average response latency, was calculated for each dataset. The percentile analysis was then repeated for all individual users in the Enron emails dataset, as well as for 15 individuals from the Google Answers dataset (the five users with the largest number of responses and 10 users with 100-120 responses each). Finally, a small sample of Enron email responses that were created after a long delay was selected, and its contents inspected.
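The percentile-rank analysis can be sketched as follows. The latencies below are invented to mimic a heavily skewed distribution and are not drawn from any of the three datasets; the point is that in such a distribution the mean is pulled far to the right, so most observations lie below it:

```python
def percentile_rank(latencies, value):
    """Percentage of latencies at or below the given value."""
    return 100.0 * sum(1 for x in latencies if x <= value) / len(latencies)

# Invented, heavily skewed latencies (hours): many quick responses, a long tail.
latencies = [0.1] * 70 + [1.0] * 15 + [5.0] * 10 + [40.0] * 4 + [300.0]

mean_latency = sum(latencies) / len(latencies)
rank_of_mean = percentile_rank(latencies, mean_latency)
rank_of_10x = percentile_rank(latencies, 10 * mean_latency)
```

Here the mean is about 5.3 hours and sits at the 95th percentile, while ten times the mean sits at the 99th percentile, mirroring the structure of Table 1.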

Results


Distribution of response latencies

In total, 115,416 University forum responses, 15,815 Enron email responses, and 40,072 Google Answers responses were analyzed. Despite the diverse sources of these responsiveness profiles, when plotted on a log-log graph, all three datasets presented a power-law distribution (Figure 1 a-c) with similar slopes: −1.74, −1.76, and −2.04, respectively.


Figure 1. Power law plots of the cumulative response latencies of the three datasets


An analysis of the distribution of each dataset revealed that the average response latency falls at or above the 80th percentile, and that 10 times that average latency falls at or above the 97th percentile (Table 1).

Table 1. Average response latency in each dataset, and the percentile rank of that average and of ten times (10×) that average, for each dataset

Dataset | Average response latency | Percentile rank of average response latency | Percentile rank of 10× average response latency
Enron emails | 28.76 hours | 86% | 97%
University forum | 23.52 hours | 80% | 99%
Google Answers | 1.58 hours | 84% | 97%

This remarkable similarity across datasets comprising aggregate responses created under diverse circumstances, by diverse populations, and by many individuals raised the question of whether this generalization about percentiles is a result of the aggregation of many response latencies, or whether it is also reflected in the behavior of individual users. An analysis of the 74 Enron email users for whom more than 50 unique responses existed showed that only 65% of them (48) met the strict criterion that their average response latency was at or above the 80th percentile. However, a slight relaxation of the criterion revealed that 95% of them (70) created 70% or more of their responses within less than their average response latency. Moreover, of these 74 users, for only five did 10× the average response latency fall below the 97th percentile, and for none did it fall below the 96th percentile. The 15 users from the Google Answers dataset displayed similar behavior: 93% (14) created more than 70% of their responses within their average response latency or less, and all of them created at least 96% of their responses within less than 10× their average response latency. In summary, the vast majority of individual users created most (70% or more) of their responses within their average response latency, and almost all (96% or more) of their responses within 10 times their individual average response latency. This relaxed generalization also holds for the cumulative results.

Discussion


The distribution analyses show that all three user groups display, in aggregate, a similar mathematical distribution of response latencies. A closer inspection of the distributions shows that despite the significant differences among the types, purposes, and contexts of the asynchronous conversations taking place within each group, in all three of them at least 80% of the responses were sent within the average response latency of that group, and at least 97% of the responses were sent within 10 times that average response latency. In the cases where analysis was possible, even individual users show the same skew: At least 70% of almost every individual’s responses were made within that user’s average response latency (RL), and at least 96% within ten times his or her average response latency. These findings allow us to delineate three normative chronemic zones of response latencies in asynchronous CMC, based on the average response latency τ:

Zone I - quick to average (RL < τ). The majority of the responses fall in this zone.

Zone II - above average (τ < RL < 10τ). A minority of the responses fall in this zone.

Zone III - long silence (RL > 10τ). A negligible minority of the responses fall in this zone.
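These zones translate directly into a simple classification rule. The sketch below is our own illustration, using the Enron average latency from Table 1 as τ:

```python
def chronemic_zone(latency, tau):
    """Classify a response latency (RL) relative to the average latency tau."""
    if latency < tau:
        return "I"    # quick to average: RL < tau
    if latency < 10 * tau:
        return "II"   # above average: tau <= RL < 10 * tau
    return "III"      # long silence: RL >= 10 * tau

tau = 28.76  # average response latency of the Enron emails dataset, in hours
zones = [chronemic_zone(rl, tau) for rl in (2.0, 100.0, 400.0)]
```

Boundary cases (RL exactly at τ or 10τ) are assigned to the higher zone here; the zone definitions themselves leave those edge cases open.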

Generalizability of the findings

The findings point to common chronemic characteristics of asynchronous CMC. The three datasets described are very diverse in their characteristics: They represent different user populations (business people, students, and varied Internet users in a public arena), assorted asynchronous text-based CMC technologies (email, discussion forum, web pages), a variety of contexts (academic education, major corporation, competitive online bidding), a range of average response latencies (from 1.5 hours to a little over one day) and of cohort sizes (from more than 15,000 to more than 100,000 responses, a total of over 170,000), a period spanning at least seven years, and respondents from the U.S. as well as from other countries. Despite these differences, a recurring pattern surfaces when analyzing the aggregates: a power law distribution of the response latencies. The pattern can be described by the generalization that regardless of the average response latency (τ), most (at least 80%) of the responses are created within that average latency, and almost all (at least 97%) of the responses are created within ten times that average (10τ).

The strength of this generalization is further revealed when drilling down to the level of individual users. We have shown that the generalizations at the aggregate level need to be only slightly relaxed (from 80% to 70% and from 97% to 96%) in order to describe the vast majority of the dozens of individual users from the two datasets in which personal identification was possible, and for whom a sufficiently large sample of response latencies was available. This finding is an indication that users of asynchronous CMC, similar to the users of Internet Relay Chat observed by Bays (1998), tend to create responses within a relatively short time, on the order of the average response latency, and are unlikely to respond after a duration more than an order of magnitude longer than that average response latency.

The robustness of the generalization receives further substantiation when one looks at well-established rules describing latencies and response latencies in traditional forms of communication. For example, in Jaffe and Feldstein’s (1970) work on face-to-face contexts, the quantitative results for the duration of pauses by one speaker in a face-to-face dialogue (p. 76, figure IV-9) present the same characteristics as any of the three CMC datasets described here: 70-80% of the pauses are shorter than the average pause length (τ estimated at .97 seconds), and a pause of above 10τ (9.7 seconds) did not occur even once in that 50-minute dialogue. Moreover, when the plot is reconstructed (Figure 2) using modern statistical tools and regression analysis is performed, the power law distribution gives a high R² value of .82, better than the R² of .74 that our reconstruction of the data yields for the exponential distribution reported by the authors (who did not themselves perform the calculation). The reconstruction was carried out by scanning the graph from the original book and using graphical software to measure the pixel coordinates of each data point, as well as of the marks on the axes. Similar behavior apparently appears in telephone-based conversations such as those described by Brady (1968), although precise analysis is difficult due to the partial presentation of results in Brady’s study.


Figure 2. Re-plotting Jaffe and Feldstein (1970, p. 76) on a log-log scale reveals a statistically significant power-law distribution of response latencies in spoken conversation

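The pixel-measurement step of the reconstruction amounts to a linear mapping from pixel coordinates to axis coordinates, anchored on two tick marks whose values are known; for a logarithmic axis, the same mapping is applied to the logarithms of the tick values. The pixel positions below are invented for illustration:

```python
def pixel_to_data(pixel, pixel_lo, pixel_hi, data_lo, data_hi):
    """Map a pixel coordinate onto a linear data axis, given the pixel
    positions (pixel_lo, pixel_hi) of two ticks with known data values."""
    fraction = (pixel - pixel_lo) / (pixel_hi - pixel_lo)
    return data_lo + fraction * (data_hi - data_lo)

# Invented scan: the 0-second tick sits at pixel 40, the 5-second tick at pixel 540;
# a data point measured at pixel 240 then corresponds to a 2-second pause.
pause_seconds = pixel_to_data(240, 40, 540, 0.0, 5.0)
```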

Possible explanations for the findings

Why do people create most of their responses within a relatively short period? One of the promises of online communication was thought to be its asynchronicity: the ability to respond at one’s convenience, even after a relatively long wait (e.g., Lantz, 2003; Newhagen & Rafaeli, 1996). Why, then, do we see that in practice most responses are created quickly, and that if a response is not created within a short period of time, the probability of a response drops precipitously?

One possible answer is the well-documented phenomenon of online information overload (Davenport & Beck, 2001; Shenk, 1999): As messages flow in, people either respond to them at once, or put them aside and rarely return to them. Evidence for this behavior resulting from information overload was presented by Jones, et al. (2004). This possible explanation is further strengthened by a weak but positive correlation (0.19) that we find in the Enron dataset between the total number of responses created by users and the percentage of the responses created within one day. On the assumption that users who create a larger overall number of responses experience more information overload, we observe that the behavior of these busy users tends to be even more skewed than that of the average user. Further evidence to support this possible explanation is the much lower average response latency in the Google Answers dataset: When there is a financial incentive for a quick response, the average response latency drops by more than an order of magnitude. Given information overload, we could expect that activities that carry the potential of immediate financial gain will be less likely to be deferred to a later time than messages that do not carry this financial incentive.

Another possible explanation for the behavior pattern identified in this study is linked to the signaling power of a quick response: In asynchronous CMC, a quick response is one of the few nonverbal tools that can be used to signal immediacy, care, and presence.

Thus, there is a preference for quick replies (Aragon, 2003; Danchak, Walther, & Swan, 2001; Feldman & March, 1981; Goodwin, 2002; Walther & Tidwell, 1995). Anecdotal evidence for the positive signaling power of a quick response comes from the observation that late responses tend to include more apologies and/or explanations for the delay in responding. Initial findings reported elsewhere (Kalman, Ravid, Raban, & Rafaeli, 2006) show that responses created after a lengthy wait are more likely to mention the long response latency, to apologize for the delay, and/or to provide an explanation. In addition, some of these are not actual responses, even though they were created by replying to a previous email message: The user might send a reply asking about the progress of an issue mentioned in the original email, or even a message not connected at all to the text of the original email, possibly as a shortcut to typing an email address. Examples of these responses are reproduced in Table 2.

Table 2. Examples of texts from email responses created after a long latency (Source: Kalman, et al., 2006)

Response latency | Quoted text | Category
16 days | sorry for the delay | Apology
14 days | Sorry it has taken me so long to write | Apology
18 days | i got back from almost three weeks vacation yesterday and am back at work | Explanation
14 days | i just got back into town from almost 3 weeks vacation. sorry i didn’t get in touch over the holidays, but… | Explanation + apology
23 days | Only took me 3 weeks to respond. That’s pretty good for me. I think things started collapsing the day I got your original email | Humorous apology + explanation
16 days | Just following up to see if the recruiting season has started and to make sure everything is going okay. If you need anything, just say | Reference to subject in original email; not an answer to the question
51 days | D, how are we coming on this project in relation to the info E sent you? Do you need anything else from E? Thanks. | Reference to subject in original email; not an answer to the question
109 days | Hey Mom… thought i would give a call but don’t have your number at work. send it if you get a chance. love, | Email response as probable shortcut to typing an email address

A fuller explanation for the rapid answers probably lies in a combination of the two principles discussed above: Due to the practical constraints on online communication in an age of information overload and constant interruptions (Mark, Gonzalez, & Harris, 2005), a quick response is the best way to ensure that a response will be created at all. Moreover, by sending a quick response, one conveys rapport, immediacy, and presence. The practicality of interactive communication depends on immediate responses: It is difficult to imagine a world in which every message, even one delivered long ago, retains a high probability of receiving a response.

A third explanation could come from the logging-in habits of CMC users. A study by Dezso, et al. (2006) of an online news portal shows a visitation pattern that is similar to the chronemic pattern we identified in our datasets. Most importantly, the visitation pattern decays as a power law. Dezso, et al. show that a power law chronemic distribution pattern of the time between the posting of a news item and its reading can be explained by the power law distribution of the time intervals between consecutive visits by the same user. This interesting link between the distribution of intervals between user log-ins and the subsequent distribution of visitations to the individual news items might help explain, by analogy, the pattern we see when aggregating response latencies of many online communicative exchanges. We do not have chronemic logging-in information for any of our datasets, but it is reasonable to assume that the same power law distribution describing the frequencies of logging-in to a news portal would also describe (with different slopes) the dynamics of logging-in to check one’s email, online classroom forum, or the Google Answers website. Thus, by drawing a possible analogy between clicking on a news item and choosing to respond to an online message, we reach another possible explanation for the power law chronemic distributions revealed in our datasets.
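This log-in mechanism can be illustrated with a small simulation. The sketch below is our illustration only, not an analysis of any of the datasets: the tail index, minimum gap, and all names are arbitrary assumptions. It generates one simulated user whose inter-log-in intervals follow a Pareto distribution, posts messages at uniformly random moments, and measures the wait until the next log-in; the resulting waits are themselves heavy-tailed, mirroring the mechanism suggested by Dezso, et al. (2006).

```python
import bisect
import random

rng = random.Random(7)

def pareto_gap(tail_index, x_min, rng):
    # Inter-log-in gap with survival function (x_min / g) ** tail_index, g >= x_min
    return x_min * (1.0 - rng.random()) ** (-1.0 / tail_index)

# One long timeline of log-ins for a single simulated user
logins, t = [], 0.0
for _ in range(200_000):
    t += pareto_gap(tail_index=1.5, x_min=1.0, rng=rng)
    logins.append(t)

# Messages arrive at uniformly random moments; the reading latency is the
# wait until the first log-in after the message arrives
waits = []
for _ in range(50_000):
    m = rng.random() * logins[-1]
    i = bisect.bisect_right(logins, m)
    waits.append(logins[i] - m)

# With these illustrative parameters, roughly a third of the waits are shorter
# than the minimum gap, while a small but non-negligible share is very long
frac_short = sum(w <= 1.0 for w in waits) / len(waits)
frac_very_long = sum(w > 100.0 for w in waits) / len(waits)
```

Because messages are more likely to land inside long gaps between log-ins, the distribution of waits inherits the heavy tail of the gap distribution, which is the essence of the analogy drawn in the text.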

Another approach to explaining the results is to extrapolate from the similarity in the distribution of pauses in traditional conversation, and ask how the rules of traditional turn-taking apply to asynchronous CMC. The set of rules suggested by Sacks, et al. (1978) was structured to accommodate 14 facts about any traditional “mouth to ear” conversation (pp. 10-40). Most of these conditions also apply to asynchronous conversational CMC, for example the conditions that state that the sequence, content, distribution, and length of each turn are not specified in advance. There are, however, three important exceptions that result from the asynchronous nature of the conversation: In asynchronous CMC, conditions 2 and 3 (“overwhelmingly, one party talks at a time” and “occurrences of more than one speaker at a time are common, but brief”) do not apply, due to the strict linearity of message posting by most CMC systems (Herring, 1999). At the same time, due to the persistence (Erickson & Herring, 2005) of the conversation, in CMC the message remains available for further and repeated examination. Persistence of messages overcomes the aural and cognitive difficulty of synchronously processing more than one stream of talk, and allows a separation in time between the receipt of the words and their processing. In addition, condition 4 (“transitions from one turn to a next with no gap and no overlap between them are common. Together with transitions characterized by slight gap or slight overlap, they make up the vast majority of transitions”) needs to be restated in light of the findings reported in this article.

Our proposal for the restatement of conditions 2 and 3 is that “The words of each party are presented separately and linearly, and persist for a period of time.” For condition 4, the proposed restatement is “the vast majority of the transitions occur within a relatively short time.” The use of the word “relatively” is intentional. It alludes to the relativity reported in this article, that regardless of whether the average response latency in a specific conversation is a few hours or a few days, the majority of the responses are sent within that average latency, and the vast majority of the rest of the responses are sent shortly thereafter.

It is important to note here that we measured the response latency when there was a response. If there was no response, we considered that no conversation had taken place. In this article we adopt the inclusive definition of “persistent conversation” proposed by Erickson and Herring (2005), a definition that extends the notion of conversation from traditional face-to-face to computer-mediated contexts. In both cases, a conversation is no longer a conversation when silence takes over. The turn-taking rules restated below apply as long as the conversation continues.

Having restated three of the 14 conditions, we can now reformulate the rules for turn-allocation in asynchronous written CMC, at least for the types examined in this study, so that they serve to explain the chronemic distribution of asynchronous CMC:

  • 1
    At the moment a message is sent by one party (the sender) to one or more parties (the recipients):
  •  a.
    If the sender has selected the next speaker, the party so selected has the right, and is obliged, to send a response as soon as is practicable. Other recipients also have the right to send a response
  •  b.
    If the sender has not selected the next speaker, each recipient has the right but not the obligation to send a response
  •  c.
    The sender may continue with an additional message
  • 2
    If, after a message is sent by the sender, either 1(a), 1(b), or 1(c) has operated, each party who created a reply is assigned the role of sender, and rule-set 1(a)-(c) reapplies.

In summary, we have presented here four possible explanations for the highly skewed distribution patterns of response latencies found in asynchronous CMC. Two of the explanations are direct, and two are based on an analogy. The direct positive explanation suggests that a quick response is a way to signal immediacy, care, and closeness. The direct negative explanation suggests that due to overload, users tend either to reply immediately or not to reply at all. Of the two explanations by analogy, one analogy is to traditional face-to-face conversation, which shows a very similar chronemic distribution; we explore the relation between the rules governing traditional conversational exchanges and those that apply to asynchronous CMC. The other analogy is to online behavior, suggesting that the power law distribution of accumulated CMC response latencies might result from the power law distribution of log-ins. None of these four explanations is sufficient or complete on its own, however, and further work will need to be devoted to fully accounting for the empirical regularities revealed in this study.

Unresponsiveness and silence in asynchronous CMC

These findings on responsiveness, interactivity, and the maintaining of conversational threads in CMC provide tools to investigate instances when unresponsiveness and silence disrupt a conversation. Extensive research on silence has been conducted in traditional settings, exploring issues such as psychological and ethnographic perspectives on silence, silence as a nonverbal cue, silence in court, and silence in a cross-cultural perspective (Tannen & Saville-Troike, 1985). However, little research on this topic has been carried out in online settings, although a number of studies touch on related issues. Anecdotal evidence of the need to acknowledge silence as a factor in human-computer communication was described as early as 1978 by Negroponte (1994). Lurking, a special form of online silence, has been researched by Nonnecke and Preece (2000) and by Rafaeli, Ravid, and Soroka (2004). Unresponsiveness in a chat room in response to different strategies of turn allocation has been analyzed by Panyametheekul and Herring (2003). Cramton (2001) documented the disruptive effect silence can have on teams attempting to collaborate online; and there is clear evidence for the distressful effects of being ignored in online communication (Rintel & Pittam, 1997; Williams, Cheung, & Choi, 2000; Williams, et al., 2002). Williams and his colleagues coined the term “cyberostracism” to describe these distressful effects; they can occur when a person is being ignored in chat, online gaming, and even in phone text messaging (SMS) (Smith & Williams, 2004). One of the factors limiting research on online silence is the lack of a basis for defining the length of unresponsiveness that constitutes online silence, such as the “conversational lapse” of three seconds or more described above (McLaughlin & Cody, 1982).

The results reported here allow a quantitative definition of online silence. We can now confidently state that “no response after a period of ten times the average response latency” constitutes silence. This definition yields a better than 95% confidence level that a response will no longer occur. We base this on our finding that only 3-4% of the responses are created after that time. An inspection of Figure 1 suggests that this is a direct result of the behavior of the power law function at the slopes relevant for our datasets (−1.7 to −2.0). When the average response latency covers 70-85% of the responses, a move to the right on the x-axis of one order of magnitude translates to a move of roughly two orders of magnitude down the y-axis. Thus, responses that take longer than 10 times the average response latency (10τ) will number a few percentage points or less.
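This behavior of the power law tail can be checked numerically. The following sketch is our illustration, not an analysis of the actual datasets: the truncation bounds, sample size, and function names are arbitrary assumptions. It samples latencies from a truncated power law with density slope −2.0 (within the −1.7 to −2.0 range relevant for our datasets) and confirms that most samples fall within the sample average τ, while only a few percent exceed 10τ.

```python
import random

def sample_truncated_power_law(slope, x_min, x_max, n, seed=42):
    """Draw n latencies from a density proportional to t ** (-slope),
    truncated to [x_min, x_max], using inverse-CDF sampling."""
    a = slope - 1.0                      # tail index of the survival function
    c_min, c_max = x_min ** -a, x_max ** -a
    rng = random.Random(seed)
    return [(c_min - rng.random() * (c_min - c_max)) ** (-1.0 / a)
            for _ in range(n)]

# Density slope -2.0; the truncation bounds are illustrative choices only
latencies = sample_truncated_power_law(slope=2.0, x_min=1.0, x_max=1e4, n=100_000)
tau = sum(latencies) / len(latencies)    # the average response latency
within_tau = sum(t <= tau for t in latencies) / len(latencies)
beyond_10tau = sum(t > 10 * tau for t in latencies) / len(latencies)
```

Setting the slope to −1.7 yields the same qualitative picture: a large majority of latencies within τ, and only a small tail beyond 10τ.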

A key strength of this definition is its context sensitivity. We believe the “above 10τ” definition (i.e., more than ten times the average response latency) to be conservative, mainly because response rates are usually less than 100%: many messages never receive a response at all. Moreover, at least some of the very late responses that are created do not seem to include actual answers to the original message (Kalman, et al., 2006).

The strength of this definition of a “CMC lapse” is that it combines the rigor of a quantitative, statistical definition with the ability to adjust for qualitative differences among datasets through its context sensitivity. Thus, when researching online silence in a specific context, researchers will first identify an average response latency relevant to that context. Once that average response latency is identified (through the analysis of a large enough dataset, or through the use of a relevant published average), it can be assumed that if a response was not created within 10τ, there is a better than 95% confidence level that a response will no longer be created, sent, and received. Nevertheless, whenever possible, researchers should exercise diligence and verify that the dataset shows no hints of an unusual distribution, especially one that deviates from the power law. For example, the email responsiveness profile of an employee who has been away from email due to a two-week holiday will not show a power-law distribution in the first few days after the holiday, and in that case the above definition of online silence is not applicable.
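The resulting decision rule is simple enough to state in code. The minimal sketch below assumes the three-zone reading suggested by these findings (responses within τ, responses between τ and 10τ, and silence beyond 10τ); the function name and zone labels are ours, chosen for illustration only.

```python
def chronemic_zone(latency, tau):
    """Classify a response latency relative to the context's average latency tau.

    Zone I:   within tau (where the majority of responses fall)
    Zone II:  between tau and 10 * tau (most of the remaining responses)
    Zone III: beyond 10 * tau - a "CMC lapse," i.e., better than 95%
              confidence that a response will no longer be created
    """
    if latency <= tau:
        return "Zone I"
    if latency <= 10 * tau:
        return "Zone II"
    return "Zone III"

# Example with the workplace email average measured in the Enron dataset
tau_hours = 28.76
zone_quick = chronemic_zone(3, tau_hours)    # a three-hour reply falls in Zone I
zone_late = chronemic_zone(400, tau_hours)   # beyond 10 * tau: online silence
```

Because the zones are defined relative to τ, the same rule applies unchanged whether the context's average latency is measured in hours (workplace email) or minutes (an incentivized information market).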

Methodological implications

A key factor in human communication research has been the ability to obtain large amounts of naturally occurring conversational data. The work presented here highlights the potential of CMC to provide such data, processed and ready for analysis. We have shown that CMC conversations (“persistent conversations”) can be analyzed using the tried and proven tools used for the analysis of face-to-face conversation, and that they share important attributes with traditional “mouth-to-ear” communication. Moreover, since the raw data of CMC are already digitized, and thus require less human effort to transform from raw recordings into, for example, analyzable response latencies, significantly greater amounts of information can be processed and quantitatively more robust results can be obtained. Finally, the unobtrusively collected (Webb, Campbell, Schwartz, & Sechrest, 2000) datasets we describe represent natural conversations. The availability of large datasets containing digitized, analysis-ready natural conversations could revolutionize the methodology of studying human communication (Newhagen & Rafaeli, 1996).

Practical implications

The quantitative findings described here make it possible to quantify the probabilities of response events, based on estimated or measured average response latencies. The practical implications of these findings lie, for example, in the potential to increase social translucence in online communication. Social translucence describes a system that makes social information visible and enables participants both to be aware of what is happening and to be held accountable for their actions as a consequence of public knowledge of that awareness (Erickson & Kellogg, 2000). For example, it is relatively simple to construct a tool that uses these quantitative findings to analyze the responsiveness profiles of specific people one communicates with via email, and to estimate the probability of a response from each of them within a specified period of time. The same mechanism can also be applied by users who wish to analyze their own responsiveness profile and use the conclusions of this analysis to improve their responsiveness.
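A minimal version of such a tool can be sketched as follows. This is an illustration only: the function name is ours, the latency history is invented, and the estimator simply uses the empirical distribution of a correspondent's past latencies, on the assumption that future behavior resembles the recorded history.

```python
def response_probability(past_latencies_hours, horizon_hours):
    """Estimate the probability that a correspondent who does respond
    will do so within horizon_hours, using the empirical distribution
    of that correspondent's past response latencies."""
    if not past_latencies_hours:
        raise ValueError("need at least one observed latency")
    hits = sum(t <= horizon_hours for t in past_latencies_hours)
    return hits / len(past_latencies_hours)

# Hypothetical history of one correspondent's response latencies, in hours
history = [2, 5, 8, 30, 70, 300]
p_one_day = response_probability(history, 24)   # 0.5 for this toy history
```

A production version would need a larger history per correspondent and should condition on the response rate itself, since (as noted above) many messages never receive a response at all.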

Another application is for those leading CMC discussions, such as educators in asynchronous classes and moderators of online forums. Their challenge is to make people aware of the chronemic zones described above and to ensure that users attempt to create their responses within Zone I. For example, if in a forum τ ∼ 1 day and a specific posting does not start receiving responses within a few hours, it is very likely that this posting will not develop into a “healthy” and active thread. If an asynchronous online classroom has about 15 participants and its τ is about 20 hours, then the participants should be required to post at least three to four times a week to maintain a dynamic discussion consisting of a few threads. A student who logs in only once a week will find that most of the threads are no longer active.

Future directions

These mathematical properties of the chronemics of online and traditional communication appear to be a universal characteristic of typical human response latencies. This finding should be corroborated by further analysis of additional datasets originating in traditional as well as online communication. For example, an additional dataset that originates in an online report (Hamilton, 2005) summarizes response latencies in 199 online surveys in which 523,790 invitations were sent and almost 70,000 responses were received. Though we did not have direct access to the dataset, the report describes a pattern similar to the one observed here, where an estimated 70% of the responses were created within the average response latency (a little less than 3 days), and where over 99% of the responses were created within four weeks (10 times the average response latency). Additional published work in various disciplines suggests behavior that is in agreement with these generalizations (Jones, et al., 2004; Matzler, Pechlaner, Abfalter, & Wolf, 2005; Strauss & Hill, 2001). It would be interesting and instructive to find occasions in which the same rules apply, as well as exceptions to the rules. This can be achieved by further analysis of published data, as well as by dedicated original research that focuses on asynchronous CMC, including areas not mentioned here, such as response latencies within blogs. Furthermore, research should measure response latencies in synchronous CMC such as instant messaging, chatting, and text messaging (SMS). For a discussion of synchronous versus asynchronous CMC, see Newhagen and Rafaeli (1996).

It is now also possible to explore the implications of CMC chronemics as a nonverbal cue, in a manner similar to the way proxemics and other nonverbal cues affect interpersonal communication. For example, one could study the correlation between the normative zones described here and the expectations of users. An initial indication that these norms are reflected in the expectations of users is the often quoted (e.g., Tyler & Tang, 2003) expectation in workplace email correspondence of receiving an email reply within “24 hours.” Given the added delay caused by weekends and holidays, the average response latency measured in the Enron dataset (τ = 28.76 hours) is close enough to 24 hours, and it is at the point that separates Zone I from Zone II. Since Zone I defines the range where the majority of the responses actually occur, the 24-hour expectation is in line with the norms of workplace email chronemics revealed in this study. This relationship between chronemic norms and chronemic expectations should be further explored, possibly by leveraging the predictions of a central theory in nonverbal communication, Expectancy Violations Theory (EVT) (Burgoon, Buller, & Woodall, 1996a). Lastly, further analysis should explore the distribution of the shorter and most abundant response latencies, in the manner started by Kalman and Rafaeli (2005). In the present study most of these response latencies were bundled in the largest bins.

Conclusion


The analysis of response latencies in a range of CMC systems reveals a mathematical regularity. The significance of this regularity lies in the insights it offers into the underlying uniformity of human conversation, whether computer-mediated or traditional. Computer-mediated communication is further established as an organic extension of traditional human communication, influenced by the constraints of technology, but ultimately shaped by human nature.

References

  • Abbott, K., Mann, S., DeWitt, D., Sales, L., Kennedy, S., & Poropatich, R. (2002). Physician-to-physician consultation via electronic mail: The Walter Reed Army Medical Center Ask a Doc system. Military Medicine, 167(3), 200–204.
  • Adamic, L. A. (2005). Zipf, Power-laws, and Pareto - a ranking tutorial. Retrieved October 25, 2006 from http://www.hpl.hp.com/research/idl/papers/ranking/ranking.html
  • Aragon, S. R. (2003). Creating social presence in online environments. New Directions for Adult and Continuing Education, 2003(100), 57–68.
  • Axtell, R. L. (2001). Zipf distribution of U.S. firm sizes. Science, 293(5536), 1818–1820.
  • Bays, H. (1998). Framing and face in Internet exchanges: A socio-cognitive approach. Linguistik Online, 1. Retrieved October 25, 2006 from http://www.linguistik-online.de/bays.htm
  • Brady, P. T. (1965). The technique for investigating on-off patterns of speech. The Bell Systems Technical Journal, 44(1), 1–22.
  • Brady, P. T. (1968). A statistical analysis of on-off patterns in 16 conversations. The Bell Systems Technical Journal, 47, 73–91.
  • Burgoon, J. K., Buller, D. B., & Woodall, W. G. (1996a). Expectancy Violations Theory. In Nonverbal Communication: The Unspoken Dialogue (pp. 385–390). New York, NY: McGraw Hill.
  • Burgoon, J. K., Buller, D. B., & Woodall, W. G. (1996b). Nonverbal Communication: The Unspoken Dialogue. New York: McGraw Hill.
  • Burgoon, J. K., & Walther, J. B. (1990). Nonverbal expectancies and the evaluative consequences of violations. Human Communication Research, 17(2), 232–265.
  • Caplan, S. E. (2003). Preference for online social interaction - A theory of problematic Internet use and psychosocial well-being. Communication Research, 30(6), 625–648.
  • Cappella, J. N. (1979). Talk-silence sequences in informal conversations I. Human Communication Research, 6(1), 3–17.
  • Comellas, F., & Gago, S. (2005). A star-based model for the eigenvalue power law of internet graphs. Physica A-Statistical Mechanics and Its Applications, 351(2-4), 680–686.
  • Cramton, C. D. (2001). The mutual knowledge problem and its consequences for dispersed collaboration. Organization Science, 12(3), 346–371.
  • Customer-Respect-Group. (2004). The Customer Respect Group. Retrieved October 25, 2006 from http://www.customerrespect.com
  • Danchak, M. M., Walther, J. B., & Swan, K. P. (2001, November). Presence in mediated instruction: Bandwidth, behavior, and expectancy violations. Paper presented at the annual meeting on Asynchronous Learning Networks, Orlando, FL.
  • Davenport, T., & Beck, J. (2001). The Attention Economy: Understanding the New Currency of Business. Boston: Harvard Business School Press.
  • Dezso, Z., Almaas, E., Lukacs, A., Racz, B., Szakadat, I., & Barabási, A.-L. (2006). Dynamics of information access on the web. Physical Review E (Statistical, Nonlinear, and Soft Matter Physics), 73(6), 066132.
  • Edelman, B. (2004, January 14). Earnings and ratings at Google Answers (preliminary draft). Retrieved October 25, 2006 from http://cyber.law.harvard.edu/people/edelman/pubs/GoogleAnswers-011404.pdf
  • Erickson, T. (1999). Persistent conversation: An introduction. Journal of Computer-Mediated Communication, 4(4). Retrieved October 25, 2006 from http://jcmc.indiana.edu/vol4/issue4/ericksonintro.html
  • Erickson, T., & Herring, S. C. (2005). Persistent conversation: A dialog between research and design. Proceedings of the Thirty-Eighth Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Press.
  • Erickson, T., & Kellogg, W. A. (2000). Social translucence: An approach to designing systems that support social processes. ACM Transactions on Computer-Human Interaction, 7(1), 59–83.
  • Feldman, M. S., & March, J. G. (1981). Information in organizations as signal and symbol. Administrative Science Quarterly, 26(2), 171–186.
  • FERC. (2004, May 5). Western Energy Markets: Enron Investigation, PA02-2, Data Release. Retrieved May 31, 2004 from http://www.ferc.gov/industries/electric/indus-act/wem/pa02-2/data-release.asp
  • Gabaix, X., Gopikrishnan, P., Plerou, V., & Stanley, H. E. (2004). A theory of power-law distributions in financial market fluctuations. Nature, 423, 267–270.
  • Goh, K.-I., Oh, E., Jeong, H., Kahng, B., & Kim, D. (2002). Classification of scale-free networks. PNAS, 99(20), 12583–12588.
  • Goldberg, D. S., Franklin, G., & Roth, F. P. (2005). Breaking the power law: Improved model selection reveals increased network complexity. Paper presented at the 13th Annual International Conference on Intelligent Systems for Molecular Biology, Ann Arbor, MI. Retrieved October 23, 2006 from http://www.broad.mit.edu/events/recomb2005/posters/posters/87067c844f955e166c63b0a09e57537f_poster_abstract.pdf
  • Goodwin, C. (2002). Time in action. Current Anthropology, 43 (Supplement, August-October), S19–S35.
  • Greenfield, P., & Subrahmanyam, K. (2003). Online discourse in a teen chatroom: New codes and new modes of coherence in a visual medium. Journal of Applied Developmental Psychology, 24(6), 713–738.
  • Guerrero, L. K., DeVito, J. A., & Hecht, M. L. (Eds.). (1999). The Nonverbal Communication Reader: Classic and Contemporary Readings. Prospect Heights, IL: Waveland Press.
  • Hamilton, M. B. (2005). Online Survey Response Rates and Times. Lake Oswego, OR: Tercent, Inc. Retrieved October 23, 2006 from http://www.supersurvey.com/papers/supersurvey_white_paper_response_rates.pdf
  • Herring, S. C. (1999). Interactional coherence in CMC. Journal of Computer-Mediated Communication, 4(4). Retrieved October 25, 2006 from http://jcmc.indiana.edu/vol4/issue4/herring.html
  • Hirsh, L. (2002). E-tail customer service: Finally working? Retrieved October 25, 2006 from http://www.technewsworld.com/story/19353.html
  • iConect. (2003). iConect 3.7. Redondo Beach, CA: iConect LLC. Retrieved October 23, 2006 from http://fercic.aspensys.com/iconect247/iconect247.exe
  • Jaffe, J., & Feldstein, S. (1970). Rhythms of Dialogue. New York: Academic Press.
  • Jones, Q., Ravid, G., & Rafaeli, S. (2004). Information overload and the message dynamics of online interaction spaces: A theoretical model and empirical exploration. Information Systems Research, 15(2), 194–210.
  • Kalman, Y. M., & Rafaeli, S. (2005). Email chronemics: Unobtrusive profiling of response times. Proceedings of the Thirty-Eighth Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Press. Retrieved October 25, 2006 from http://csdl2.computer.org/comp/proceedings/hicss/2005/2268/04/22680108b.pdf
  • Kalman, Y. M., Ravid, G., Raban, D. R., & Rafaeli, S. (2006, June). Speak *now* or forever hold your peace: Power law chronemics of turn-taking and response in asynchronous CMC. Paper presented at the 56th Annual Conference of the International Communication Association, Dresden, Germany.
  • Keeling, M., & Grenfell, B. (1999). Stochastic dynamics and a power law for measles variability. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences, 354(1384), 769–776.
  • Lantz, A. (2003). Does the use of e-mail change over time? International Journal of Human-Computer Interaction, 15(3), 419–431.
  • Lapadat, J. (2002). Written interaction: A key component in online learning. Journal of Computer-Mediated Communication, 7(4). Retrieved October 25, 2006 from http://jcmc.indiana.edu/vol7/issue4/lapadat.html
  • Lewis, C. E., Thompson, L. F., Wuensch, K. L., Grossnickle, W. F., & Cope, J. G. (2004). The impact of recipient list size and priority signs on electronic helping behavior. Computers in Human Behavior, 20(5), 633–644.
  • Mark, G., Gonzalez, V. M., & Harris, J. (2005). No task left behind? Examining the nature of fragmented work. Proceedings of CHI 2005: Technology, Safety, Community: Conference Proceedings - Conference on Human Factors in Computing Systems. Retrieved October 25, 2006 from http://www.ics.uci.edu/~gmark/CHI2005.pdf
  • Mattila, A. S., & Mount, D. J. (2003). The impact of selected customer characteristics and response time on e-complaint satisfaction and return intent. International Journal of Hospitality Management, 22(2), 135–145.
  • Matzler, K., Pechlaner, H., Abfalter, D., & Wolf, M. (2005). Determinants of response to customer e-mail enquiries to hotels: Evidence from Austria. Tourism Management, 26(2), 249–259.
  • McLaughlin, M. L. (1984). Conversation: How Talk is Organized. Beverly Hills: Sage Publications.
  • McLaughlin, M. L., & Cody, M. J. (1982). Awkward silences: Behavioral antecedents and consequences of the conversational lapse. Human Communication Research, 8(1), 299–316.
  • Mitzenmacher, M. (2003). A brief history of generative models for power law and lognormal distributions. Internet Mathematics, 1(2), 226–251.
  • Morahan-Martin, J., & Schumacher, P. (2000). Incidence and correlates of pathological Internet use among college students. Computers in Human Behavior, 16(1), 13–29.
  • Negroponte, N. (1994). Talking with computers. Wired, 2(3). Retrieved October 25, 2006 from http://www.wired.com/wired/archive/2.03/negroponte.html
  • Newhagen, J. E., & Rafaeli, S. (1996). Why communication researchers should study the Internet: A dialogue. Journal of Communication, 46(1), 4–13.
  • Newman, M. E. J. (2005). Power laws, Pareto distributions and Zipf’s law. Contemporary Physics, 46(5), 323–351.
  • Nonnecke, B., & Preece, J. (2000). Lurker demographics. Proceedings of CHI’2000. The Hague, Netherlands. Retrieved October 25, 2006 from http://portal.acm.org/citation.cfm?id=332409&coll=portal&dl=ACM
  • O’Neill, J., & Martin, D. (2003). Text chat in action. Proceedings of the 2003 International ACM SIGGROUP Conference on Supporting Group Work, 40–49.
  • Panyametheekul, S., & Herring, S. (2003). Gender and turn allocation in a Thai chat room. Journal of Computer-Mediated Communication, 9(1). Retrieved October 25, 2006 from http://jcmc.indiana.edu/vol9/issue1/panya_herring.html
  • Pitkin, R. M., & Burmeister, L. F. (2002). Prodding tardy reviewers: A randomized comparison of telephone, fax, and e-mail. JAMA, 287(21), 2794–2795.
  • Poppel, E. (2004). Lost in time: A historical frame, elementary processing units and the 3-second window. Acta Neurobiologiae Experimentalis, 64, 295–301.
  • Qian, J., Luscombe, N., & Gerstein, M. (2001). Protein family and fold occurrence in genomes: Power-law behaviour and evolutionary model. Journal of Molecular Biology, 313(4), 673–681.
  • Rafaeli, S. (1985, May). If the computer is the medium, what is the message? Paper presented at the Annual Conference of the International Communication Association, Honolulu, Hawaii.
  • Rafaeli, S. (1988). Interactivity: From new media to communication. In Sage Annual Review of Communication Research: Advancing Communication Science (Vol. 16, pp. 110–134). Beverly Hills, CA: Sage.
  • Rafaeli, S. (2004). Constructs in the storm. In M. Consalvo, N. Baym, J. Hunsinger, K. B. Jensen, J. Logie, M. Murero & L. R. Shade (Eds.), Internet Research Annual, Volume 1 (pp. 55–64). New York, NY: Peter Lang.
  • Rafaeli, S., Raban, D. R., & Ravid, G. (2005). Social and economic incentives in Google Answers. Paper presented at the ACM Group 2005 conference, Sanibel Island, Florida. Retrieved October 25, 2006 from http://jellis.org/research/group2005/papers/RafaeliRabanRavidGoogleAnswersGroup05.pdf
  • Rafaeli, S., Ravid, G., & Soroka, V. (2004). De-lurking in virtual communities: A social communication network approach to measuring the effects of social and cultural capital. Proceedings of the Thirty-Seventh Hawaii International Conference on System Sciences. Los Alamitos, CA: IEEE Press. Retrieved October 25, 2006 from http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/proceedings/&toc=comp/proceedings/hicss/2004/2056/07/2056toc.xml&DOI=10.1109/HICSS.2004.1265478
  • Ravid, G., & Rafaeli, S. (2004). A-synchronous discussion groups as small world and scale free networks. First Monday, 9 (9). Retrieved October 25, 2006 from http://firstmonday.org/issues/issue9_9/ravid/index.html
  • Reed, W. J. (2001). The Pareto, Zipf and other power laws. Economics Letters, 74(1), 1519.
  • Rintel, E. S., & Pittam, J. (1997). Strangers in a strange land: Interaction management on Internet Relay Chat. Human Communication Research, 23(4), 507534.
  • Sacks, H., Schegloff, E. A., & Jefferson, G. (1978). A simplest systematics for the organization of turn taking for conversation. In J.Schenkein (Ed.), Studies in the Organization of Conversational Interaction (pp. 755). New York: Academic Press.
  • Sheehan, K. B., & McMillan, S. J. (1999). Response variation in e-mail surveys: An exploration. Journal of Advertising Research, 39(4), 4554.
  • Shenk, D. (1999). The End of Patience: Cautionary Notes on the Information Revolution. Bloomington, IN: Indiana University Press.
  • Smith, A., & Williams, K. D. (2004). R u there? Ostracism by cell phone text messages. Group Dynamics, 8(4), 291301.
  • Stellin, S. (2003, June 30). Most wanted: Drilling down/Company web sites; customer care, online. The New York Times, p. 8.
  • Strauss, J., & Hill, D. J. (2001). Consumer complaints by e-mail: An exploratory investigation of corporate responses and customer reactions. Journal of Interactive Marketing, 15(1), 63–73.
  • Tannen, D., & Saville-Troike, M. (1985). Perspectives on Silence. Westport, Connecticut: Greenwood Publishing Group.
  • Tyler, J. R., & Tang, J. C. (2003). When can I expect an email response? A study of rhythms in email usage. Paper presented at the ECSCW 2003. Retrieved on October 25, 2006 from http://www.hpl.hp.com/research/idl/papers/rhythms/index.html
  • Vollrath, M., Kazenwadel, J., & Krüger, H.-P. (1992). A universal constant in temporal segmentation of human speech. Naturwissenschaften, 79(10), 479–480.
  • Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43.
  • Walther, J. B., Loh, T., & Granka, L. (2005). Let me count the ways: The interchange of verbal and nonverbal cues in computer-mediated and face-to-face affinity. Journal of Language and Social Psychology, 24(1), 36–65.
  • Walther, J. B., & Tidwell, L. C. (1995). Nonverbal cues in computer-mediated communication, and the effect of chronemics on relational communication. Journal of Organizational Computing, 5, 355–378.
  • Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (2000). Unobtrusive Measures (Rev. ed.). Thousand Oaks, CA: Sage Publications.
  • Williams, K. D., Cheung, C. K. T., & Choi, W. (2000). Cyberostracism: Effects of being ignored over the Internet. Journal of Personality and Social Psychology, 79(5), 748–762.
  • Williams, K. D., Govan, C. L., Croker, V., Tynan, D., Cruickshank, M., & Lam, A. (2002). Investigations into differences between social- and cyberostracism. Group Dynamics, 6(1), 65–77.
  • Zipf, G. K. (1949). Human Behaviour and the Principle of Least-Effort. Cambridge, MA: Addison-Wesley.
About the Authors
  1. Yoram M. Kalman is a doctoral student at the University of Haifa’s Center for the Study of the Information Society – InfoSoc. His research topic is “online silence.” Other research interests include CMC, non-verbal cues in online communication, and e-learning.

    Address: InfoSoc, Jacobs Building, Haifa University, Mount Carmel, 31905, Israel

  2. Gilad Ravid is a postdoctoral fellow at the Annenberg Center for Communication, USC, and a lecturer in the Industrial Engineering Department, Ben Gurion University of the Negev, Israel. His research areas include computer-mediated communication, distance education, supply chain management simulations, social networks, and online group communication.

    Address: Annenberg Center for Communication, 734 W. Adams Blvd., Los Angeles, CA, 90089 USA

  3. Daphne R. Raban is a lecturer in the Graduate School of Management and a Fellow of CRI and InfoSoc, University of Haifa. Her research interests are in the value of information, information sharing, and games and simulations.

    Address: InfoSoc, Jacobs Building, Haifa University, Mount Carmel, 31905, Israel

  4. Sheizaf Rafaeli is director of InfoSoc - the Center for the Study of the Information Society, and head of the Graduate School of Management, at the University of Haifa. He is interested in computers and networks as media.

    Address: InfoSoc, Jacobs Building, Haifa University, Mount Carmel, 31905, Israel