Keywords:

  • Massively multiplayer online games;
  • playing time;
  • self-report errors;
  • unobtrusive measures;
  • large-scale data

Abstract


Using Cognitive Dissonance and Balance Theory, this study investigates factors that predict how and why MMO players inaccurately report their game playing time. It was hypothesized that players who fall outside the categories of the stereotypical game player (i.e., young, less educated, male) would be likely to underreport playing time. It was also hypothesized that players who held less positive attitudes toward the game would be more likely to underreport their playing time. A comparison of players' self-reported weekly usage of an MMO, EverQuest II, with their actual average weekly usage of the game showed that age, education, lack of enjoyment playing the game, and lack of an online sense of community predicted greater levels of underreporting.

Social scientific research often relies on self-report measures of behaviors. However, these measures are rarely externally validated because a direct measure is usually unavailable (Shoemaker & McCombs, 1989). Thus, self-report errors may affect results, but researchers are often unable to examine the underlying causes or incorporate an interpretation of such errors into their analyses. New communication technologies, such as virtual worlds and online games, facilitate the measurement of a variety of behaviors. They serve as ideal laboratories for social scientific research because behavior can be measured directly rather than through self-report (Blascovich et al., 2002). Further, these environments make it possible to study self-report errors by comparing them to direct measures of behavior.

The present research utilizes one such environment, a Massively Multiplayer Online game (MMO), to examine the potential causes of self-report errors in hours played. In a previous content analysis of nearly 600 articles from prominent journals in the field of communication that operationalized the term "media use," amount of time exposed to media was by far the most widely used variable and was most often operationalized through a self-report measure (Ratan, Kahn, & Lee, 2011). Studies of video game use are part of the larger media-use research paradigm and thus likely follow a similar pattern. In an informal citation analysis conducted to confirm this assertion and complement the present article, a Google Scholar search combining the phrases "hours played," "video game," and "online game" yielded 53 journal articles, with 28 including motivation as a key term and 20 including addiction as a key term (with seven overlapping), both of which are important and enduring topics within video game research. A closer reading of the 24 articles published from 2010 to 2012 showed that 16 presented results from an empirical study with a self-report measure of time played, while only two used a measure of actual playing time. These data suggest that research on video game use should consider the potential of self-report errors in hours played.

As noted in Williams, Consalvo, Caplan, and Yee (2009), the nature of self-report errors raises questions of Type I and Type II errors, i.e., studies that report findings that might not be truly significant and studies that do not report findings that are actually significant, respectively. Game researchers have largely taken players' self-reported playing time on faith. However, systematic inaccuracies in self-reported playing time would call marginal findings into question. Errors in self-report might produce a p = .04 that, if measured with true playing time, would move to a p > .05. Alternatively, a p = .09 might move to a p < .05 if measured with actual playing time. The latter produces a "file drawer" problem, in which researchers are unable to publish findings that would otherwise have been acceptable had playing time been measured accurately. Given the controversial nature of video game effects, existing findings damning or exonerating games may have been discounted or overestimated because of errors in the measurement of playing time as an independent variable.

An important example of this is the domain of aggressive effects of violent video games. Meta-analyses exist that both support (C. A. Anderson et al., 2010) and refute (Ferguson & Kilburn, 2010) the assertion that exposure to violent video games leads to aggressive thoughts and behaviors, and Ferguson and Kilburn (2010) argue that "file drawer" problems may be at the heart of the debate. Similarly, time of exposure has been identified as an important moderator in such research. A meta-analysis by Sherry (2001) found that studies with longer exposure times had smaller effect sizes, and Anderson et al.'s (2010) meta-analysis found that self-reported frequency of playing specific violent video game titles was associated with stronger effects on aggression than self-reported frequency of exposure to violent scenes within all video games, suggesting that the latter measurement type may be more susceptible to self-report error.

Building upon Cognitive Dissonance and Balance Theories, the present research provides insight into how and why such self-report errors occur. The findings highlight the usefulness of behavioral measures over self-reports and, furthermore, invite future researchers to incorporate this understanding into their interpretation of self-report measures of behavior.

Media Use and Self-Report

Self-reporting may require complex processing involving the encoding of a question, making a judgment about what the question means, recalling or estimating an answer, judging the answer, and finally reporting the answer (Jobe, 2003). People are "cognitive misers" when it comes to answering survey questions (Sudman, Bradburn, & Schwartz, 1996), and so a variety of factors may induce misestimation, especially for behaviors that are difficult to recall, such as time of media use. For example, one study found that when people were told at the beginning of the study that they would be asked to estimate the length of a play session during the study, they reported longer and less accurate play durations than when they reported playing time without receiving a preparatory prompt (Tobin, Bisson, & Grondin, 2010).

The validity of self-report measures of media use has previously been addressed by examining correlations between self-report measures and diary tracking, and the findings suggest that there is substantial error in nearly all self-report methods. Diaries provide the baseline because they have been found to be fairly accurate in measuring actual media use (D. R. Anderson, Field, Collins, Lorch, & Nathan, 1985). When the two measures are compared, self-report data have been shown to correlate with diary measures but to be much less accurate. van der Voort and Voojis (1990) found a correlation of .54 between self-report and diary data. For older children with higher education and greater family income, this correlation increased to .77. For Internet usage, LaRose, Eastin, and Gregg (2001) found a correlation of .65 between self-reported Internet usage and diary data. In a study comparing diary reports with an online survey, Greenberg et al. (2005) found statistically significant, though somewhat small, correlations for all media they surveyed, with r's ranging from .20 for offline music listening to .58 for number of e-mails sent. In the case of offline video games, the correlation was .21, and for online video games, the correlation was .29 (both significant at the .05 level). For all media examined, the self-reported time using an online survey was significantly greater (at the .05 level or less) than the time found in diary reports. The one exception was online video games, which showed nonsignificant underreporting. It should be noted that the sample population was college students and the study only looked at a single day's usage.

Although diary measures are thought to be the most accurate, actual behavioral measures would obviously provide a more valid benchmark for assessing self-report data. The absence of demand characteristics and Hawthorne effects has long been understood to increase the validity of behavioral data (Webb, Campbell, Schwartz, & Sechrest, 1966). Few studies have examined the validity of self-report measures of media use by comparing such measures with actual behavioral data. Williams et al. (2009) were able to compare self-reported playing time with actual playing time in EverQuest II (EQII), an MMO. Its owner, Sony Online Entertainment, provided access to unobtrusive player data and assisted in the collection of survey data from players. The present study relies on the same data set, which is further explained in the Methods section.

In a previous study using the same data, Williams, Yee, and Caplan (2008) found that among all EQII players, the mean hours played per week was 25.86, with females playing approximately 4 more hours per week than males. In a follow-up study of gender differences, they found that while all players reported playing fewer hours than they actually played, females underreported their playing time more than males (Williams et al., 2009). In examining this underreporting, Williams et al. only provided the data by gender. Males underreported by approximately 1 hour per week, whereas females underreported by approximately 4 hours per week. They attributed this underreporting to social-desirability bias in survey responses, given the historically slow acceptance of video games as a socially acceptable activity (Williams, 2006a). They also attributed the gender differences in underreporting to social-desirability bias, given that video games have been seen as a masculine activity despite empirical findings to the contrary (McQuivey, 2001; Royse, Lee, Undrahbuyan, Hopson, & Consalvo, 2007).

One minor shortcoming of Williams et al. (2009), in the context of the present study, is that while they published the data on underreporting by gender, they omitted the data for the entire population across genders. Thus, the following question remains open:

RQ1: How much do players underreport their playing time in EQII?

In addition, diary measures accurately reflect actual media use (D. R. Anderson et al., 1985) and correlate with survey-based self-report measures (Greenberg et al., 2005; LaRose et al., 2001; van der Voort & Voojis, 1990); thus, actual media use should correlate with self-report measures, though the size of this correlation may not be large. Indeed, the correlations between diary-tracked and self-reported survey measures of video game playing have been found to be small (Greenberg et al., 2005), suggesting that self-reported playing time may be an inaccurate measure of actual behavior. Thus, we hypothesize that a correlation exists and then examine the size of this correlation in a research question:

H1: There will be a positive correlation between actual playing time and self-reported playing time in EQII.

RQ2: Is the correlation in H1 large enough to indicate that self-reported playing time accurately reflects actual playing time?

Social Desirability and Self-Report Errors

Williams et al. (2009) postulated that social-desirability bias leads players to underreport their playing time. However, the mode of data collection can affect the level of social-desirability bias (Holbrook, Green, & Krosnick, 2003; Kreuter, Presser, & Tourangeau, 2008; Tourangeau & Smith, 1996; Tourangeau & Yan, 2007). Research on this topic has found that the further a mode is from face-to-face interaction, the smaller the extent of social-desirability bias. In a comparison of three electronic modes of data collection (computer-assisted telephone interviewing, interactive voice response, and web surveys), people were most likely to disclose sensitive information in web surveys (Kreuter et al., 2008). Given that Williams et al. (2009) collected data using web surveys, it is less likely that social desirability influenced the errors in self-reported hours of game play.

Cognitive Dissonance, Balance Theory, and Self-Report Errors

The present paper offers a related but alternative explanation for this systematic underreporting. Whereas social-desirability bias occurs because of the way respondents want to be perceived by others, the present explanation is based on the way respondents perceive their own playing habits in relation to a variety of factors that are likely orthogonal to their actual video game playing time. These factors include personal social categorization, enjoyment of the game, and the sense of community experienced with others online. The theoretical framework driving this explanation is drawn from Cognitive Dissonance (Festinger, 1957) and Balance Theory (Heider, 1946). Together, these theories explain how such factors influence an individual's attempt to estimate the difficult-to-recall behavior of video game playing time, resulting in systematic inaccuracies.

Festinger's (1957) classic theory of cognitive dissonance posits that when cognitive elements (beliefs, attitudes, or values) are inconsistent with one another, people feel uncomfortable, and so they attempt to resolve this dissonance. This act of resolving dissonance may influence self-reports about behavior. For example, if a person wants to think of herself as a voracious reader, she may overestimate her self-reported reading time to maintain consonance with the idea of herself as a voracious reader.

Balance Theory is an extension of this concept. Like Cognitive Dissonance Theory, Balance Theory is based on the premise that people prefer a state in which cognitive elements are consistent—"in balance"—with one another (Heider, 1946; Petty & Cacioppo, 1996), but the theory allows for the simultaneous consideration of the relationships among three cognitive elements, typically represented within a triangle. These cognitive elements are in balance when all three relations are positive or when one is positive and the other two are negative. With respect to attitudes about time spent playing video games, the three elements in the triangle are the person, the time spent playing, and a cognitive element that is perceived as relevant to both (see Figure 1).

Figure 1. General cases for balanced triads with negative attitudes toward hours played
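
Heider's balance rule reduces to a simple sign test: a triad is balanced when the product of its three relation signs is positive. The following sketch (illustrative only, not part of the original study) encodes that rule in Python for the person, playing-time, and third-element triangle described above:

    def is_balanced(person_element: int, person_playtime: int,
                    element_playtime: int) -> bool:
        """A Heider triad is balanced iff the product of its three relation
        signs (+1 = positive, -1 = negative) is positive."""
        return person_element * person_playtime * element_playtime > 0

    # A player views a non-stereotypical social category positively (+1), that
    # category is negatively associated with heavy play (-1), so balance is
    # restored by a negative attitude toward hours played: (+1)*(-1)*(-1) > 0.
    assert is_balanced(+1, -1, -1)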

Social Categorization and Self-Report Errors

The triad cases for explaining a negative attitude toward time spent playing an online game can be applied using numerous types of cognitive elements that are perceived as relevant to both the person and playing time. The first type relates to social categorization, the use of social attributes (behavioral norms) to categorize the self and others as in-group or out-group members (Hogg & Reid, 2006; Turner, Hogg, Oakes, Reicher, & Wetherell, 1987). Reid and Hogg (2005) found that social categorization theory explains how people perceive certain media to influence people within certain social categories. A similar use of social categorization may help explain errors in self-reported hours of video game play. The difference in underreporting between females and males in Williams et al. (2009) may relate to the salient social categories that people consider when estimating their hours of play. Although there are substantial numbers of both female and male gamers across a wide range of ages (Williams et al., 2008), the stereotypical online gamer is an adolescent male (Griffiths, Davies, & Chappell, 2003). The further a social category is from that of the stereotypical gamer – the adolescent male – the more negative the association between this social group and hours played (see Figure 2). Thus, assuming that people have a positive attitude toward their own social groups, people who belong to a social group that is more distant from adolescent males are more likely to have a negative attitude toward spending time playing online games, and thus to underreport their playing time. Returning to Williams et al.'s (2009) findings, males, as a whole, are likely to perceive themselves as more similar to adolescent males than females would. Thus, females are more likely to have a negative attitude toward spending time playing online games and therefore to underreport more than males.

Figure 2. Balanced triad: as personal social categories diverge from those of stereotypical gamers, attitudes toward hours played become more negative

This reasoning can be extended to other social categories. The stereotypical gamer is younger and less educated than the actual population of online gamers (Williams, et al., 2008; Yee, 2006). Thus, as people become older and more educated, they are likely to see themselves as less similar to the stereotypical gamer. When these people self-report the number of hours they play, this distinction between their own social identity and that of the stereotypical gamer affects their estimation. The more dissimilar from the stereotypical gamer, the more this distinction contributes to underreporting. Therefore, the following predictions are presented for age and education to accompany the gender differences previously found in Williams et al. (2009).

H2: Age is positively related to underreporting of hours played.

H3: Years of education is positively related to underreporting of hours played.

Enjoyment and Self-Report Errors

Balance Theory reasoning can also be used to explain how enjoyment of the game relates to an estimation of hours played. If someone does not enjoy playing an online game but plays it anyway, the person must then deflate how much he plays in order to maintain balance. Otherwise, he would have to more openly acknowledge that he is spending many hours doing something that he doesn't enjoy but is paying for with money and leisure time. It is easier (if somewhat self-delusional) to perceive that he is not playing as much as he is (see Figure 3).

Figure 3. Balanced triad: lack of enjoyment leads to negative attitudes toward hours played

Thus, the more negative the attitude toward the game and the less enjoyment derived from playing it, the greater the underreporting of time spent playing.

H4: People who report not enjoying the game will underreport more than people who enjoy the game.

Feelings of Community and Self-Report Errors

Given that belonging to a community is a significant and positive aspect of many MMOs (Steinkuehler & Williams, 2006; Williams, 2006b), feelings of community are likely to influence attitudes about spending time playing the game. Because these communities exist within the game, there is most likely a positive relationship between such communities and spending time playing the game. If someone experiences a strong feeling of community from other players in an online game, this person is likely to have a positive attitude toward that community. Alternatively, if someone does not gain a strong feeling of community from other players of an online game, this person is likely to have a negative attitude toward that community. According to Balance Theory, the person in this latter example is also likely to have a negative attitude toward spending time playing the game. An illustration of this relationship can be found in Figure 4. Thus, the less people experience a sense of community from the online game, the more they will underreport hours played.

H5: People who report experiencing a sense of community from others online will underreport less than people who do not experience a sense of community from others online.

The Virtual World Observatory


The Virtual World Observatory (VWO) is a multiuniversity research project that takes theories from psychology, sociology, economics, and other social sciences and tests whether they hold true for behaviors within MMOs. The VWO project is unique because, for the first time, researchers have access to unobtrusive, precise (in this case, second-by-second) behavioral data on game players. Until now, this has not been possible given the immense size, not to mention the proprietary nature, of such data. Previously, researchers had to rely solely on self-reported surveys (e.g., Griffiths, Davies, & Chappell, 2004; Yee, 2006). VWO obtained survey data for game players that could be associated with the behavioral data, allowing researchers to examine the relationship between real-world behaviors and in-world behaviors.

Currently, the VWO project's data come from Sony Online Entertainment's MMO, EverQuest II (EQII). EQII was chosen in part because it is representative of mainstream fantasy role-playing MMOs, and until the release of Blizzard's World of Warcraft, EQII and its predecessor, EverQuest, held a large portion of the MMO market share. In addition, EQII was chosen because Sony Online Entertainment cooperated in providing its proprietary data. It should be noted that all data utilized for this paper, as well as for the entire VWO project, are anonymized and privacy protected.

Sampling and Procedures

In EQII, players have the ability to create multiple characters. A previous VWO study found that the average player had more than six different characters (Williams, et al., 2008). Thus, the player was used as the unit of analysis and all characters from a given player were collapsed into aggregated values associated with individual user accounts.

All players who logged into EQII between January 13 and January 17, 2007, were invited to participate in the survey. If they agreed, they were directed to a separate website. The link was unique for each participant, allowing actual player data to be associated with survey answers. However, once associated, the survey answers and player data were recorded anonymously so that participants' identities remained confidential.

As compensation for participating in the study, players received the "Greatstaff of the Sun Serpent," an in-game item designed by Sony for the specific purposes of this study. The item's rarity and usefulness in all combat situations made it very desirable. As such, nearly all players who were invited to participate did so. In just 2 days, 7,129 players filled out the survey, representing a large majority of those who logged in during that time. It should be noted that while Williams et al. (2009) used the same dataset as the present study, they used a much smaller sample size of N = 2,400 because they eliminated participants who did not answer questions relevant to a larger set of independent variables.

Server-Side Measures

Sony Online provided four terabytes of player data from the beginning of 2006 through September 17, 2006. Among the information in the player data were the dates players created their accounts (including accounts created before the data window began), the players' last login before September 17, 2006, and the number of seconds a player had played EQII since account creation. From this information, the "account age" was calculated as the time between a player's first login and last login. Average hours played per day was calculated by dividing total hours played (seconds played converted to hours) by the "account age" in days. This was multiplied by seven to obtain the average hours per week played by a player. This weekly average served as the measure of actual playing time.
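
As a concrete illustration of this calculation, the following sketch (with hypothetical variable names) converts the server-side quantities into the weekly average used as the measure of actual playing time:

    def avg_hours_per_week(seconds_played: float, account_age_days: float) -> float:
        """Average weekly playing time: total hours played divided by account
        age in days, scaled to a 7-day week."""
        hours_per_day = (seconds_played / 3600.0) / account_age_days
        return hours_per_day * 7.0

    # e.g., 900,000 seconds (250 hours) over a 70-day-old account:
    print(avg_hours_per_week(900_000, 70))  # 25.0 hours per week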

Survey Measures

Playing time

The survey included the question “About how many hours per week do you usually play EQII?” Players were allowed to respond with an integer between 0 and 100.

Age

Players indicated their age in years. Responses were limited to integer values, and all players under the age of 13 were coded as 12 and all players 65 and older were coded as 65.

Education

Players were asked, "What is the highest grade of school or year of college you have completed?" with seven possible responses: less than high school; high school diploma (including GED); some college; associate's degree (2 years) or specialized technical training; bachelor's degree; some graduate training; and graduate or professional degree. In order to make this variable ordinal, "some college" and "associate's degree (2 years)" were collapsed into one category because "some college" might refer to more or less than 2 years.

Enjoyment

Players were asked, “How much would you say you've enjoyed playing this game?” with four possible responses: very much, somewhat, not much, and not at all.

Sense of online community

Players were asked if they agreed or disagreed with the following statement: “The people I have met online give me a sense of community” with three possible responses: no, they don't; it depends/I have no strong feelings; and yes, they do.

Calculated Measures

Underreporting of playing time was calculated by subtracting the number of hours reported played per week from the number of hours actually played per week. Positive values of this variable indicate underreporting, whereas negative values indicate overreporting.

Preanalysis Data Screening

Initially, 449 survey responses were eliminated, either because the average weekly playing time could not be calculated or because it was greater than 100 hours per week. The reason for eliminating the latter was that players were forced to choose an integer between 0 and 100 in reporting their playing time; those who actually played more than 100 hours per week would therefore be forced to underreport. This reduced the sample size to 6,680.

In a visual examination of the original distribution of self-reported hours played, the extreme value of 100 hours stood out in the tail. In addition, players who logged in during the sampling window could not actually have played 0 hours per week. Self-reported playing time was therefore separated into three categories: those who reported playing 0 hours per week, those who reported playing between 1 and 99 hours per week, and those who reported playing 100 hours per week. Those in the first category underreported on average by 20.01 hours per week, those in the second category underreported on average by 1.26 hours per week, and those in the third category overreported on average by 66.45 hours per week. A one-way ANOVA with post hoc LSD tests found these differences to be statistically significant, F(2, 6667) = 331.98, p < .001, partial η2 = .09. Therefore, those who reported 0 hours played per week (n = 29) and those who reported 100 hours played per week (n = 53) were eliminated from the sample, leaving a final sample size of N = 6,598.
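
A minimal sketch of these screening rules, assuming a merged table with hypothetical columns actual_hpw (server-side weekly average) and reported_hpw (survey response):

    import pandas as pd

    df = pd.read_csv("eqii_merged.csv")  # hypothetical merged survey/server file

    # Underreporting: positive = played more than reported.
    df["underreport"] = df["actual_hpw"] - df["reported_hpw"]

    # Drop cases with uncomputable weekly averages or averages above the
    # 100-hour response ceiling, then drop the extreme 0 and 100 self-reports.
    df = df[df["actual_hpw"].notna() & (df["actual_hpw"] <= 100)]
    df = df[(df["reported_hpw"] > 0) & (df["reported_hpw"] < 100)]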

A sample this large is likely to produce statistically significant results for most inferential statistics, raising the risk of Type I errors in parameter estimation. In addition, some variables were not normally distributed. Therefore, in addition to reporting estimates based on the full sample, 1,000 bootstrapped samples were drawn, and bias-corrected and accelerated 95% confidence intervals (BCa 95% CIs) were reported for the bootstrapped samples. BCa bootstrapping is a robust method of resampling (Efron, 1987; Efron & Tibshirani, 1993).
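
SciPy implements BCa intervals directly, so the resampling described above might look like the following sketch (run here on placeholder data rather than the actual EQII dataset):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    underreport = rng.normal(loc=1.26, scale=16.0, size=6598)  # placeholder data

    res = stats.bootstrap((underreport,), np.mean, n_resamples=1000,
                          confidence_level=0.95, method="BCa", random_state=rng)
    print(res.confidence_interval)  # BCa 95% CI for the mean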

Results


Players reported playing 23.94 hours per week (SD = 15.96, skew = 1.28, kurtosis = 2.12, first quartile = 12, median = 20, third quartile = 30) but actually played 25.20 hours per week (SD = 17.56, skew = 1.11, kurtosis = 1.27, first quartile = 11.93, median = 21.55, third quartile = 34.52). This difference of 1.26 hours, BCa 95% CI [0.768, 1.734], was significant, t(6597) = 5.42, p < .001, d = .07. Addressing RQ1, this difference of roughly 1 hour and 15 minutes is statistically significant, although the effect size is very small.

Supporting H1, the correlation between reported playing time and actual playing time was r(6596) = .365, BCa 95% CI [.342, .390], p < .001. This means that reported playing time accounts for only 13.32% of the variance in actual playing time, suggesting that the answer to RQ2 is no: self-reported playing time is not an accurate measure of actual playing time. If the previously excluded participants who self-reported playing times of 0 or 100 were included in the analysis, the correlation decreases only slightly, r(6678) = .351, BCa 95% CI [.324, .379], p < .001.
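
The shared-variance figure is simply the squared correlation coefficient:

    r = .365
    print(f"{r ** 2:.2%}")  # 13.32% of variance shared between the two measures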

To test the remaining hypotheses, bivariate correlations were computed between underreporting and each of the predictors of self-report error. Age was positively related to underreporting, r(6290) = .121, BCa 95% CI [.342, .390], p < .001. Education was positively related to underreporting, r(6290) = .100, BCa 95% CI [.074, .126], p < .001 (when controlling for age, the correlation was reduced to .062, BCa 95% CI [.037, .088]). Enjoyment was negatively related to underreporting, r(6290) = −.074, BCa 95% CI [−.101, −.046], p < .001. Experiencing a sense of community from others online was negatively related to underreporting, r(6290) = −.068, BCa 95% CI [−.084, −.046], p < .001. Thus, H2 through H5 were all supported.
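
The age-controlled education estimate is a first-order partial correlation, which can be computed by correlating the residuals of each variable after regressing out the control; a sketch under those assumptions:

    import numpy as np
    from scipy import stats

    def partial_corr(x, y, control):
        """First-order partial correlation: correlate the residuals of x and y
        after removing the linear effect of the control variable."""
        x_resid = x - np.polyval(np.polyfit(control, x, 1), control)
        y_resid = y - np.polyval(np.polyfit(control, y, 1), control)
        return stats.pearsonr(x_resid, y_resid)

    # e.g., partial_corr(education, underreport, control=age)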

Post Hoc Analysis

In screening the data, it was observed that although participants could report their playing time as any integer between 0 and 100, 5,124 of the original respondents (72%) answered with a multiple of five. Implications of this appear in the Discussion.
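
Heaping of this kind is straightforward to quantify; a sketch using the hypothetical reported_hpw column from the screening example above:

    # Proportion of respondents whose self-report is a multiple of five.
    heaped = (df["reported_hpw"] % 5 == 0).mean()
    print(f"{heaped:.0%} of responses fall on multiples of five")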

Discussion


This study examined self-report error in a measure of playing time from players of a popular MMO, EQII. Players were found to underreport by an average of 1.26 hours per week, and the correlation between reported and actual playing time was statistically significant but small. This reaffirms Williams et al.'s (2009) claim that EQII players underreport their playing time, which is not surprising given that the same dataset was used, but Williams et al. did not conduct this test for the entire population, as was done in this study. Further, Williams et al. (2009) explained the underreporting as an effect of social-desirability bias, but this explanation is unlikely given that playing time was reported in a web-based survey. Instead, the present article used Cognitive Dissonance and Balance Theory to explain how and why cognitive elements that are orthogonal to an individual's playing time influence the estimation of this behavior. Namely, players who belong to social categories furthest from the "adolescent male" gamer stereotype, who did not enjoy playing the game, or who did not experience a sense of community with others in the game were found to be more likely to underreport their playing time. These findings have implications for research that uses measures of game playing time.

The investigation of the relationship between reported and actual playing time found a significant, moderate correlation, which is consistent with Greenberg et al. (2005), although their study used a pool of college students (as opposed to regular gamers) and asked about a specific day (as opposed to using a longitudinal measure). It was still surprising to find such a weak correlation given how close the means of the two measures were. With r = .365, only 13.32% of the variance in reported hours can be predicted by actual hours played. This supports the assertion that amount of time played is difficult to estimate and thus self-reported data are subject to influences that are orthogonal to the actual amount of playing time, as explained by Cognitive Dissonance and Balance Theory. The predictions based on these theoretical foundations were supported: as predicted, older and more educated individuals, as well as those who reported not enjoying the game or not experiencing a sense of community from others online, were more likely to underestimate their playing time (although the effect sizes were quite small).

While the amount of underreporting is small, the variables associated with underreporting are consistent with the expectations provided by Cognitive Dissonance and Balance Theory. The association between underreporting and player age and education level in the present study, as well as with being female in Williams et al.'s (2009) study, supports the notion that the further people perceive themselves to be from the social category of the stereotypical gamer, the more difficult it is to reconcile the amount of time they spend playing the game. In other words, because the stereotypical gamer is associated with a high amount of playing time, an individual who perceives herself to be unlike a stereotypical gamer must underestimate the amount of time she spends playing the game in order to balance these cognitive elements. Similarly, people who do not enjoy playing the game or do not derive a sense of community from the people they know online balance their negative feelings toward these cognitive elements, which should be associated with more playing time, by underestimating their own playing time.

This understanding has notable implications for video game researchers, given that self-reported time played is such a widely used measure. First, these findings suggest that researchers should use actual time played instead of a self-reported measure when possible. Of course, this is often impossible. Research that relies on self-reported measures of playing time could also include measures that help account for systematic inaccuracies in self-report, informed by Cognitive Dissonance and Balance Theory. The social category measures included here could be used to identify people who fall into the stereotypical gamer category. If a test for a relationship between hours played and some variable of interest is significant for stereotypical gamers but only nearly significant for participants outside this social category, the researcher could argue that the latter finding is still worth interpreting because the hours-played measure is more likely to be inaccurate for that group. A similar approach could be used for measures of enjoyment, sense of community, or any other affective state that would likely be perceived as positively associated with higher playing time. Another approach could be to ask participants more directly about their associations with playing the game, e.g., "Do you see yourself as the type of person who plays video games frequently, i.e., a typical gamer?" This type of measure could be used to segment the population sample, as described above, or could even be included as a covariate in the analysis, as sketched below. And aside from implications for new research, such considerations could be used to reinterpret existing research that has been relegated to the file drawer because of nearly significant findings, assuming it includes measures that allow the data to be reanalyzed with the effects of Cognitive Dissonance and Balance Theory on self-report error of playing time in mind.
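
One way to implement the covariate suggestion is an ordinary regression that includes the stereotype-identification item alongside self-reported hours; a sketch using statsmodels, with illustrative variable names:

    import statsmodels.formula.api as smf

    # 'typical_gamer' is the hypothetical self-identification item suggested
    # above; its interaction with reported hours lets the effect of playing
    # time be interpreted conditional on gamer identification.
    model = smf.ols("outcome ~ reported_hpw * typical_gamer", data=df).fit()
    print(model.summary())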

One other implication worth attention stems from the finding that participants reported playing time in multiples of 5 hours. This pattern is consistent with the claim that playing time is difficult to recall, leading respondents to estimate to the nearest interval of five. One implication is that instead of using open-ended questions about playing time, researchers could offer categorical responses in multiples of five, thereby reducing the cognitive load on participants without sacrificing accuracy, with the caveat that such self-report data are likely to be somewhat inaccurate regardless.

Limitations

There are some limitations to the study and the results. First, given the small effect sizes, it would be imprudent to make strong claims about the data. From a methodological perspective, a major problem was that the server-side data were collected over a long span of time (the entire existence of a player's account), whereas the survey data were collected at a single point in time. Playing habits may change over time: players may increase or decrease their average weekly play as they continue to play the game. This could have contributed to the low correlation between actual and reported playing time.

Another problem is that all of the independent variables were single questions. In the case of enjoyment and sense of community, these were measured on 4- and 3-point scales, respectively, which raises questions about the reliability and validity of these measures. Further, some participants may not have been comfortable responding truthfully to the measures of age and education, and there was no way to directly validate their responses. Thus, somewhat ironically, these measures were also subject to self-report error. However, there is no reason to believe that there would be systematic inaccuracies in these data with respect to the variables of interest, i.e., actual or reported playing time. In other words, it does not seem likely that people with similar amounts of playing time would misreport these characteristics in similar, systematic ways. Therefore, such inaccuracies were more likely to create noise in the data and reduce the strength of effects than to influence the effects in a misleading way.

Finally, while Cognitive Dissonance and Balance Theory were used to form the hypotheses, the data did not directly show that these were the mechanisms at play. The data were simply consistent with these mechanisms. However, it is this consistency that suggests that the results are not Type I errors.

Conclusion

This study suggests that self-report measures in game research, and possibly in other media work, are influenced by systematic mechanisms predicted by Cognitive Dissonance and Balance Theory. Future game research should consider such social categories as age, education, and gender, though additional social categories, such as socioeconomic status, ethnicity, and in-media group (e.g., guild) membership, might also systematically influence self-report errors. This study also illustrates the relevance of attitudes, such as enjoyment and feelings of community, to self-report errors, but future research should consider additional psychological characteristics, such as extraversion, need for cognition, and attention. Regardless of the specific constructs that lead to self-report errors, this study has shown that Cognitive Dissonance and Balance Theory can contribute to the interpretation of self-report variables.

When behavioral data and other objective measures are unavailable, these theories may be especially useful when researchers interpret marginally significant results. P-values at the threshold of significance, such as .04 or .06, may be misleading because of self-report errors. Researchers can incorporate potential causes of self-report error into their interpretation of data, and thus be more skeptical of marginally significant results or more accepting of marginally nonsignificant results. In the latter case, this may reduce occurrences of the "file drawer" problem, allowing researchers to publish and theorize about findings that fall just short of conventional significance. This additional tool for data interpretation is especially important because of the controversial, policy-related nature of video game effects.

In addition to Cognitive Dissonance and Balance Theory, there may be other classic cognitive theories from psychology that can be used to predict systematic self-report errors. In addition to playing time, there may be other self-reported measures for which error can be systematically predicted. This study should also encourage researchers to utilize new technologies as tools to compare self-report with actual behavior, thereby contributing to our theoretical understanding of why such errors occur.

References

  • Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., et al. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological Bulletin, 136(2), 151–173.
  • Anderson, D. R., Field, D. E., Collins, P. A., Lorch, E. P., & Nathan, J. G. (1985). Estimates of young children's times with television: A methodological comparison of parent reports with time-lapse video home observation. Child Development, 56(5), 1345–1357.
  • Blascovich, J., Loomis, J., Beall, A. C., Swinth, K. R., Hoyt, C. L., & Bailenson, J. N. (2002). Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry, 13(2), 103–124.
  • Efron, B. (1987). Better bootstrap confidence intervals. Journal of the American Statistical Association, 82(397), 171–185.
  • Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. Boca Raton, FL: Chapman & Hall.
  • Ferguson, C. J., & Kilburn, J. (2010). Much ado about nothing: The misestimation and overinterpretation of violent video game effects in Eastern and Western nations: Comment on Anderson et al. (2010). Psychological Bulletin, 136(2), 174–178.
  • Festinger, L. (1957). A theory of cognitive dissonance. Stanford, CA: Stanford University Press.
  • Greenberg, B. S., Eastin, M. S., Skalski, P., Cooper, L., Levy, M., & Lachlan, K. (2005). Comparing survey and diary measures of Internet and traditional media use. Communication Reports, 18(1), 1–8.
  • Griffiths, M. D., Davies, M. N. O., & Chappell, D. (2003). Breaking the stereotype: The case of online gaming. CyberPsychology & Behavior, 6(1), 81–91.
  • Griffiths, M. D., Davies, M. N. O., & Chappell, D. (2004). Demographic factors and playing variables in online computer gaming. CyberPsychology & Behavior, 7(4), 479–487.
  • Heider, F. (1946). Attitudes and cognitive organization. Journal of Psychology, 21, 107–112.
  • Hogg, M. A., & Reid, S. A. (2006). Social identity, self-categorization, and the communication of group norms. Communication Theory, 16(1), 7–30.
  • Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). Telephone versus face-to-face interviewing of national probability samples with long questionnaires: Comparisons of respondent satisficing and social desirability response bias. Public Opinion Quarterly, 67(1), 79–125.
  • Jobe, J. B. (2003). Cognitive psychology and self-reports: Models and methods. Quality of Life Research, 12(3), 219–227.
  • Kreuter, F., Presser, S., & Tourangeau, R. (2008). Social desirability bias in CATI, IVR, and web surveys. Public Opinion Quarterly, 72(5), 847–865.
  • LaRose, R., Eastin, M. S., & Gregg, J. (2001). Reformulating the Internet paradox: Social cognitive explanations of internet use and depression. Journal of Online Behavior, 1(2).
  • McQuivey, J. (2001). The digital locker room: The young, white male as center of the video gaming universe. In E. L. Toth & L. Aldoory (Eds.), The gender challenge to media: diverse voices from the field (pp. 183–214).
  • Petty, R. E., & Cacioppo, J. T. (1996). Attitudes and persuasion: Classic and contemporary approaches. Boulder, CO: Westview Press.
  • Ratan, R. A., Kahn, A. S., & Lee, K. M. (2011, May). Media use, explicated. Paper presented at the 61st annual conference of the International Communication Association, Boston, MA.
  • Reid, S. A., & Hogg, M. A. (2005). A self-categorization explanation for the third-person effect. Human Communication Research, 31(1), 129–161.
  • Royse, P., Lee, J., Undrahbuyan, B., Hopson, M., & Consalvo, M. (2007). Women and games: Technologies of the gendered self. New Media & Society, 9(4), 555–576.
  • Sherry, J. L. (2001). The effects of violent video games on aggression. Human Communication Research, 27(3), 409–431.
  • Shoemaker, P., & McCombs, M. (1989). Survey research. In G. Stempel & B. Westley (Eds.), Research methods in mass communication (2nd ed., pp. 150–172). Englewood Cliffs, NJ: Prentice-Hall.
  • Steinkuehler, C., & Williams, D. (2006). Where everybody knows your (screen) name: Online games as “third places”. Journal of Computer-Mediated Communication, 11(4).
  • Sudman, S., Bradburn, N. M., & Schwartz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
  • Tobin, S., Bisson, N., & Grondin, S. (2010). An ecological approach to prospective and retrospective timing of long durations: A study involving gamers. PLoS ONE, 5(2), e9271.
  • Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60(2), 275–304.
  • Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883.
  • Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S. D., & Wetherell, M. S. (1987). Rediscovering the social group: A self-categorization theory. Oxford, UK: Blackwell.
  • van der Voort, T. H. A., & Voojis, M. W. (1990). Validity of children's direct estimate of time spent viewing television. Journal of Broadcasting & Electronic Media, 34(1), 93–99.
  • Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1966). Unobtrusive measures: Non-reactive research in the social sciences. Chicago, IL: Rand McNally and Company.
  • Williams, D. (2006a). A (brief) social history of gaming. In P. Vorderer & J. Bryant (Eds.), Playing video games: Motivations and consequences of use. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Williams, D. (2006b). Groups and goblins: The social and civic impact of an online game. Journal of Broadcasting & Electronic Media, 50(4), 651–670.
  • Williams, D., Consalvo, M., Caplan, S., & Yee, N. (2009). Looking for gender (LFG): Gender roles and behaviors among online gamers. Journal of Communication, 59(4), 733–758.
  • Williams, D., Yee, N., & Caplan, S. E. (2008). Who plays, how much, and why? Debunking the stereotypical gamer profile. Journal of Computer-Mediated Communication, 13(4), 993–1018.
  • Yee, N. (2006). The demographics, motivations and derived experiences of users of massively-multiuser online graphical environments. PRESENCE: Teleoperators and Virtual Environments, 15(3), 309–329.

Biographies

  • Adam S. Kahn is an Assistant Professor in the School of Communication at Western Michigan University. Adam earned his Ph.D. and M.A. in communication from USC's Annenberg School for Communication and Journalism and an M.A. in media studies, B.S. in computer science, and B.A. in history from Stanford. His research focuses on the psychology of human-computer interaction, computer-mediated communication, and video games. He is also interested in shared cognition, small-group processes, and the history of new media. Recently, he has been applying theories of computer-mediated communication to the study of transactive memory systems in video game teams.

    Address: School of Communication, Western Michigan University, 210 Sprau Tower, 1903 W. Michigan Ave., Kalamazoo, MI 49006-5318

  • Rabindra Ratan is an Assistant Professor and AT&T Scholar at Michigan State University's Department of Telecommunication, Information Studies and Media. He received his Ph.D. from USC's Annenberg School for Communication and Journalism. His research focuses on the social and psychological implications of media use, with an emphasis on interactive environments that facilitate the use of mediated self-representations (e.g., avatars). He is also interested in the topics of games for education, virtual world economies, gender in online games, online communities, and carmunication (communication between drivers). Recently, his work includes experiments that utilize video game-based stimuli with psychophysiological measures, as well as analyses of large-scale back-end databases provided by game publishers that are linked to survey responses from players.

    Address: Department of Telecommunication, Information Studies & Media, Michigan State University, 428 Communication Arts & Sciences Building, 404 Wilson Rd., East Lansing, MI 48824–1212

    E-mail: rar@msu.edu

  • Dmitri Williams is an associate professor at the USC Annenberg School for Communication & Journalism, where he is a part of the Annenberg Program on Online Communities (APOC). He received his Ph.D. from the University of Michigan in 2004. His research focuses on the social and economic impacts of new media, with a focus on online games. Williams was the first researcher to use online games for experiments, and to undertake longitudinal research on video games. He continues to study the psychology of online populations, with projects involving community, identity, sexuality, economics and neuroscience. His research often draws from collaborations with game publishers in order to examine real-world datasets.

    Address: Annenberg School for Communication and Journalism, University of Southern California, 312 Kerckhoff Hall, 734 W. Adams Boulevard, Los Angeles, CA, 90089