There is little doubt that human behavior is affected by genetic and biological characteristics (e.g., Bouchard & McGue, 2003; Dick & Rose, 2002; Plomin, DeFries, Craig, & McGuffin, 2003; Sherman et al., 1997). Heritable traits, attitudes, values, and interests influence a variety of behaviors, many of which are demonstrated in the workplace. To date, research in behavioral genetics has advanced our understanding of between-individual differences in a number of organizationally relevant domains such as leadership (Arvey, Zhang, Avolio, & Krueger, 2007; Zhang, Ilies, & Arvey, 2009), vocational interests (Lykken, Bouchard, McGue, & Tellegen, 1993), entrepreneurship (Zhang, Zyphur, et al., 2009), and job satisfaction (Arvey, Bouchard, Segal, & Abraham, 1989). Such knowledge is needed to guide the development of nomological models explaining attitudes and behaviors at work (Ilies, Arvey, & Bouchard, 2006).
Survey response and nonresponse are practically important work behaviors which could benefit from an expanded research agenda in general, and a behavioral genetic examination in particular. Research and practice in organizational behavior (OB) often hinges on self-reports of attitudes, beliefs, behaviors, and personal characteristics. New technologies (e.g., web-based surveys, machine scannable paper forms, and text mining software) have enabled notable advances in survey design, distribution, and analysis (Poncheri, Lindberg, Thompson, & Surface, 2008; Thompson, Surface, Martin, & Sanders, 2003). As a result, surveys are an increasingly popular data collection tool in organizations and elsewhere (Church & Waclawski, 1998; Kraut, 1996). Unfortunately, this growing reliance on surveys appears to be accompanied by declining response rates (Baruch, 1999; Dey, 1997; Schwartz, Groves, & Schuman, 1998). Poor response rates can lead to a variety of problems (Rogelberg, Luong, Sederburg, & Cristol, 2000). For example, large numbers of nonrespondents can produce small sample sizes, resulting in a lack of statistical power needed to perform needed analyses. More importantly, survey nonresponse raises concerns about nonresponse bias, which occurs when survey requests are ignored by people who differ from respondents on the study variables of interest. The result is data that paint an inaccurate picture of the overall population's standing on the variables studied (Luong & Rogelberg, 1998; Rogelberg et al., 2000). In short, the increasing reliance on surveys coupled with the serious consequences of nonresponse creates a pressing need to better understand the tendency to comply with or ignore requests for survey participation.
Research has uncovered several individual difference variables that help explain why some people fail to complete surveys when asked to do so. Work in the behavioral genetics domain suggests that many of these individual differences are heritable. The purpose of the present study is to test whether a genetic component underlies survey response behavior. Drawing from the behavioral genetics and survey nonresponse literatures, we synthesize two distinct research streams which have never been considered in tandem. As a result, this study contributes to both bodies of work by expanding what is known about the genetic bases of OB-relevant behavior while increasing our understanding of the underpinnings of survey response.
Heritable determinants of survey response
Beyond the situational determinants of survey response (e.g., Rogelberg & Stanton, 2007; Yammarino, Skinner, & Childers, 1991), several dispositional traits, attitudes, and perceptions have been shown to influence compliance with survey requests. Because these personal characteristics are to some extent genetically influenced, they may carry these influences through to survey response behavior. With regard to traits, research has suggested that high achievers are especially inclined to respond to voluntary surveys in an academic setting (Dey, 1997; Sax, Gilmartin, & Bryant, 2003). Other personality variables have been shown to account for both passive and active forms of nonresponse (Rogelberg et al., 2003). Passive nonresponse occurs due to happenstance, such as when survey recipients misplace or forget to complete surveys they may have otherwise intended to fill out. Rogelberg et al. (2003) have shown that passive nonrespondents are less conscientious than those who complete surveys upon request.
Active nonrespondents make an overt, conscious, a priori decision to withhold their participation at the time in which they receive a survey (Rogelberg et al., 2003). Compared to those who complete surveys, active nonrespondents are more “reciprocation wary”—that is, they are more likely to have a personality disposition which prompts them to feel exploited in social exchange relationships (Spitzmüller, Glenn, Barr, Rogelberg, & Daniel, 2006). Furthermore, active nonrespondents tend to be less conscientious than survey respondents, and some evidence suggests they are also less agreeable (Rogelberg et al., 2003; Rogelberg, Spitzmüller, Little, & Reeve, 2006). Personality characteristics, including conscientiousness and agreeableness, have been shown to be substantially heritable (Loehlin, 1992). A genetic component to active and passive survey nonresponse therefore appears likely.
Another factor driving active nonresponse is attitudes toward the survey sponsor. Active nonrespondents tend to be less satisfied than respondents with the institution or organization sponsoring the survey (Rogelberg et al., 2003). Such satisfaction may have genetic underpinnings because personality, which is heritable, is thought to predispose individuals to particular interpretations of events (Judge, Heller, & Mount, 2002). In effect, genetically influenced interpretations of previous encounters with the survey sponsor may impact satisfaction which can in turn affect survey participation. People also have a predisposition toward evaluating their environment (e.g., interactions with the survey sponsor) in ways that are consistent with their affective disposition (Hershberger, Lichtenstein, & Knox, 1994). Affectivity, which has been shown to be heritable (Finkel & McGue, 1997; Tellegen, Lykken, Bouchard, Wilcox, Segal, & Rich, 1988), may thus shape perceptions of the survey sponsor's shortcomings and subsequently discourage survey response behavior.
Similarly, job satisfaction has been linked to the willingness to respond to OB surveys. Research addressing this point has focused on the attitudes of employees who indicate they would refuse to complete a work-related survey if asked to do so. These "noncompliants" hold negative attitudes—not only toward their organizations, but also toward their jobs (Rogelberg et al., 2000). Meanwhile, data from several samples have indicated that genetic factors may explain as much as 30% of the variance in job satisfaction (Arvey et al., 1989; Arvey, McCall, Bouchard, Taubman & Cavanaugh, 1994).
In general, OB researchers have successfully argued that survey participation is a form of helping behavior (e.g., Rogelberg et al., 2006; Spitzmüller, Glenn, Sutton, Barr, & Rogelberg, 2007; Spitzmüller et al., 2006). Often, applied OB surveys are initiated by a prospective respondent's employer and specifically designed for the good of the organization. In such cases, survey response can be considered a form of organizational citizenship behavior (e.g., Youssefnia, 2000). Other times, surveys are initiated by OB researchers who are external to the organization for the purpose of scientific inquiry (e.g., Allen, 2003; Major, Fletcher, Davis, & Germano, 2008). Responses to such surveys may be considered a more general form of prosocial or helping behavior which contributes to the well-being of the researcher, science, and society. Behavioral genetics research has found that genetic factors influence the propensity of people to help (e.g., Knafo & Plomin, 2006; Matthews, Batson, Horn, & Rosenman, 1981; Rushton, Fulker, Neale, Nias, & Eysenck, 1986). As a form of helping behavior, responses to OB surveys conducted for research and/or practice should thus be genetically influenced.
In summary, many of the traits and attitudes that have been empirically linked to survey participation have been shown to be heritable. Genetics could influence survey response through factors such as personality (e.g., conscientiousness, agreeableness), affectivity, attitudes toward the sponsoring organization, and job satisfaction. Moreover, voluntary survey participation can be conceptualized as a helping/prosocial behavior, and prosocial behavioral tendencies have been shown to be heritable. Thus, genetics are expected to affect survey response. The present study tests the hypothesis that survey response behavior is genetically influenced.
Sample and procedure
The pool of potential participants for this study was obtained from the Minnesota Twin Registry (MTR), a birth-record-based registry of intact identical (i.e., monozygotic or MZ) and fraternal (i.e., dizygotic or DZ) twin pairs born within the state of Minnesota. The database from which our sample was drawn documented each participant's gender and indicated whether each twin pair was identical or fraternal. The twins in our study were reared together.
A 16-page paper-and-pencil survey of leadership activities was sent to 558 male twin pairs (half identical, half same-sex fraternal) and 500 female twin pairs (half identical, half same-sex fraternal). The survey instruments sent to the male and female samples were highly similar. They included identical demographic items as well as identical questions about the kinds and types of leadership positions each twin held at different times (e.g., leadership roles at work). In addition, the male survey asked about decisions to buy or sell stocks in several situations (i.e., financial risk-taking). The female survey included measures of transformational leadership and dispositional hope in lieu of the financial risk-taking items. Overall, 90% of the questions/items were the same, and the total workload for respondents across the two surveys was quite similar.
The cover letters accompanying the two surveys were identical. They were sent by the faculty members directing the leadership study and appeared on university letterhead. They promised confidentiality, explained that the survey was being conducted for research purposes, and indicated that participation was important for knowledge accumulation and beneficial to society. Each packet included a $5 bill and a pre-addressed, postage-paid return envelope along with the survey. Those who did not return the survey still kept the $5; as such, surveys were not completed for personal financial gain.
Like many people, the individuals examined in this study had prior experience answering paper-and-pencil surveys. However, they had not received any MTR-related surveys for at least 6 years before the leadership survey was administered. The leadership survey was administered to the male sample in 1999 and to the female sample in 2004. The 558 male twin pairs selected for this survey represented an entire cohort of twins born between 1961 and 1964. The 500 female twin pairs were randomly selected from a larger cohort of female twins born between 1936 and 1955. Table 1 reports the details of the pool and sample characteristics. The fraternal twins in a pair have the same gender. With regard to race, all sample members were Caucasian.
Table 1. Characteristics of the study sample
(Table body not reproduced. Columns include: cases excluded from analyses; sample size for individual-level analysis; number of individuals; number of pairs; individual-level response rate; and χ² tests between males and females and between MZ and DZ twins.)
Note: None of the χ² tests in this table were significant. All participants are Caucasian/White. MZ stands for monozygotic or identical twins and DZ stands for dizygotic or fraternal twins.
Although there were 5 years between the male and female leadership survey administrations, we did not find significant differences in the individual-level response rate between the two gender groups. As Table 1 shows, 646 men and 581 women completed and returned the survey, yielding individual-level response rates of 57.9% and 58.1%, respectively. Excluding individuals to whom surveys were undeliverable due to wrong mailing addresses, the effective pool size for the analyses of response behavior included 1100 men and 988 women. Table 1 reports the breakdown of cases omitted due to incorrect mailing addresses. Excluding undeliverable surveys, the individual-level response rate was 58.8% for the overall sample. This rather high participation rate by both twin types is not uncommon in twin research, where participants tend to be relatively cooperative—perhaps recognizing the uniqueness and value of their data. Table 1 also reports the sample sizes for the quantitative genetic analyses before controlling for potential confounds, i.e., 550 pairs of male twins and 494 pairs of female twins.
Survey response was coded as 0 if an individual did not complete and return the questionnaire (for reasons other than delivery failure). It was coded as 1 if he/she completed and returned the survey. This 0/1 variable was analyzed using an underlying latent variable approach described later in detail. An underlying continuous variable is assumed to account for the response or nonresponse to the survey. Previous research has used the same approach to study 0/1 variables using behavioral genetics models (e.g., entrepreneurial status, Nicolaou, Shane, Cherkas, Hunkin, & Spector, 2008).
The twins' zygosity was determined by their response to a background questionnaire administered 6 years prior to the male leadership survey and 20 years prior to the female leadership survey. Approximately 78% and 80% of the male and female twins who were contacted, respectively, completed this zygosity measure. This measure has demonstrated a 95% accuracy rate when compared with elaborate serological analysis (e.g., Lykken, Bouchard, McGue, & Tellegen, 1990; Sarna, Kaprio, Sistonen, & Koskenvuo, 1978). The zygosity measure was used as a grouping variable in the two-group structural equation modeling analysis discussed below.
Gender, age, and education
Participants' gender (female = 1, male = 0) and age (in years) were derived from their birth records. Educational level was measured in the background questionnaire as the number of years of education.
Twin closeness could be a potential confound in the estimation of genetic influences on survey response. As shown in the analysis section, genetic relatedness is assumed to be the only explanation for twins' concordance in survey response behavior. If MZ twins are closer to their co-twins (e.g., talk more frequently with each other) than DZ twins are, MZ twins may have answered the survey in a more coordinated manner for reasons other than genetic relatedness. Thus, there is a need to control for the potential confounding effects of twin closeness. Twin closeness was assessed in the background questionnaire used to measure zygosity by asking each individual to indicate his/her contact frequency with the twin partner. Participants were asked how often they talk to their twin using a seven-point Likert scale (1 = never, 2 = seldom, 3 = on holidays, 4 = monthly, 5 = weekly, 6 = daily, and 7 = we live together). The intraclass correlation (ICC) for this measure is 0.74, showing a high level of within-pair agreement. We averaged the two scores of a twin pair to represent the closeness of the pair.
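The within-pair agreement index and pair-level averaging described above can be sketched as follows. This is an illustrative implementation of a one-way random-effects ICC, and the function name and example ratings are hypothetical, not taken from the study's data:

```python
import numpy as np

def icc_pairs(ratings):
    """One-way random-effects ICC(1) for twin pairs (k = 2 ratings
    per pair), a standard index of within-pair agreement.
    'ratings' is an (n_pairs, 2) array of the two twins' scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    pair_means = ratings.mean(axis=1)
    # Between-pair and within-pair mean squares from one-way ANOVA
    ms_between = k * ((pair_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - pair_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical closeness ratings on the 1-7 contact-frequency scale:
closeness = np.array([[5, 5], [6, 7], [3, 3], [7, 7]])
icc = icc_pairs(closeness)             # within-pair agreement
pair_scores = closeness.mean(axis=1)   # pair-level closeness, as in the study
```

Averaging the two twins' ratings into a single pair score, as the study does, is defensible precisely when the ICC is high.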
We used behavioral genetics methodology to estimate the genetic influences on survey response behavior (Plomin, DeFries, McClearn, & McGuffin, 2008). This methodology utilizes the difference in genetic relatedness between MZ twins (who share all of their genetic material) and DZ twins (who share on average 50% of their genes) to estimate the relative genetic and environmental contributions to the observed variance of a phenotype (in this case, survey response behavior).
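The logic of exploiting the difference in genetic relatedness between MZ and DZ twins can be illustrated with Falconer's classic formulas, a simpler precursor to the SEM approach used in this study. The twin correlations in the example are illustrative placeholders, not the study's values:

```python
def falconer_ace(r_mz, r_dz):
    """Estimate ACE variance components from MZ and DZ twin correlations
    (Falconer's method). Assumes standardized phenotypes, so the three
    components sum to 1."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared environmental variance
    e2 = 1 - r_mz            # nonshared environment + measurement error
    return a2, c2, e2

# Illustrative twin correlations (not the study's actual values):
a2, c2, e2 = falconer_ace(r_mz=0.45, r_dz=0.225)
# -> a2 = 0.45, c2 = 0.0, e2 = 0.55
```

When the MZ correlation is roughly twice the DZ correlation, the shared-environment component drops to zero, which mirrors the AE-model result reported later in the paper.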
We estimated a series of two-group structural equation models (SEM) with a set of constraints on path coefficients and latent factor correlations. In both the MZ and DZ groups, the variance of survey response behavior is parsed into three components: additive genetic variance, shared environmental variance, and nonshared environmental variance plus measurement error. Additive genetic effects (i.e., latent variable A) refer to the effects of the summation of genes across loci, while shared (i.e., latent variable C) and nonshared (i.e., latent variable E) environmental effects refer to environmental effects that contribute to twin similarity and twin differences, respectively. Measurement error also contributes to nonshared environmental variance. The three latent variables (A, C, and E) are standardized so that their corresponding path coefficients represent the strength of their influences. Figure 1 shows the path diagram for one group in the two-group SEM analysis.
According to behavioral genetics theory, greater similarity between the two members of an MZ twin pair relative to those in a DZ twin pair is indicative of additive genetic contributions. In particular, the structural relationships represented by Figure 1 can be written as the following structural equation (control variables not shown):

Pij = a·Aij + c·Cij + e·Eij

where Pij is the measure of survey response behavior of the ith individual in the jth pair (i = 1, 2; j = 1…n), Aij, Cij, and Eij are standardized latent variables, and their coefficients represent the additive genetic influence (a), shared environmental influence (c), and nonshared environmental influence (e). Vp is the total variance of survey response behavior and is typically standardized to a value of 1. Heritability is estimated as h² = a²/Vp. Because A, C, and E are assumed to be independent of each other, Vp can be decomposed into the additive genetic variance (a²), the shared environmental variance (c²), and the nonshared environmental variance (e²):

Vp = a² + c² + e²
In the two-group SEM, the path coefficients a, c, and e in the MZ group were constrained to equal the corresponding path coefficients in the DZ group. As Figure 1 indicates, the cross-twin correlations of the genetic factors are fixed at 1.0 for the MZ group and 0.5 for the DZ group, because MZ twins share all of their genetic material and DZ twins share on average 50% of their segregating genes (Plomin et al., 2008). The cross-twin correlations of the shared-environmental factors are fixed at 1.0 for both the MZ and DZ groups because, by definition, shared environments are common to the two members of a twin pair. Based on the tracing rules for path diagrams (Kline, 1998), the predicted variance–covariance matrices for the full ACE models are as follows:

Σ_MZ = | a² + c² + e²    a² + c²       |
       | a² + c²         a² + c² + e²  |

Σ_DZ = | a² + c² + e²    0.5a² + c²    |
       | 0.5a² + c²      a² + c² + e²  |
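The model-implied cross-twin matrices can be constructed numerically from the standardized path coefficients. The values below are hypothetical placeholders, not the study's estimates:

```python
import numpy as np

def predicted_cov(a, c, e, r_genetic):
    """Expected 2x2 cross-twin covariance matrix under the ACE model.
    r_genetic is 1.0 for MZ pairs and 0.5 for DZ pairs; the
    shared-environment cross-twin correlation is 1.0 for both."""
    var = a**2 + c**2 + e**2        # within-twin variance
    cov = r_genetic * a**2 + c**2   # cross-twin covariance
    return np.array([[var, cov], [cov, var]])

# Hypothetical standardized path coefficients:
a, c, e = 0.67, 0.0, 0.74
sigma_mz = predicted_cov(a, c, e, r_genetic=1.0)
sigma_dz = predicted_cov(a, c, e, r_genetic=0.5)
```

Fitting the model amounts to choosing a, c, and e so that these predicted matrices match the observed MZ and DZ variance–covariance matrices as closely as possible.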
The path coefficients a, c, and e were estimated in the SEM models using a latent variable approach to represent survey response and nonresponse. In other words, an underlying continuous variable was assumed for response/nonresponse. When this variable exceeds a threshold value, survey response is manifested as 1. Models were fit to the cross-twin variance–covariance matrices using asymptotically distribution free weighted least squares (Browne, 1984; Neale, 2004).
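The threshold idea can be made concrete with the standard liability-threshold model, where the threshold is set so that the area above it equals the observed response rate. This is a sketch of the general approach, not the estimation routine the study actually used:

```python
from scipy.stats import norm

def liability_threshold(response_rate):
    """Threshold on a standard-normal latent liability scale above
    which the binary outcome (survey response) is observed as 1."""
    return norm.ppf(1 - response_rate)

# Using the study's overall effective response rate of 58.8%:
t = liability_threshold(0.588)
```

Because more than half of the pool responded, the threshold falls below zero: most of the latent liability distribution lies above it.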
We first conducted the analyses without controlling for potential confounds. We then re-ran the analyses after partialling out the influence of twin closeness, age, and education. The logic of partialling out control variables in twin models is the same as in regression analysis. In models without control variables, the total variance in the dependent variable was decomposed into the A, C, and E components. In models with control variables, the variance remaining after partialling out the control variables' contribution was decomposed into A, C, and E components. Previous research has utilized similar methods for partialling out potential confounds (e.g., Kohler & Rodgers, 1999; Nicolaou et al., 2008). Due to missing data in the control variables, the sample size was smaller for the second set of analyses with control variables. Model fit in all analyses was evaluated using the chi-squared (χ²) fit statistic and a variety of model fit indices. A series of nested models were compared and the best-fitting model was chosen to calculate the heritability of survey response. In the nested models, parameters (a or c or both) were dropped (i.e., fixed to zero) from the full ACE model to test whether their removal resulted in a significant decline in model fit. In addition, we examined gender as a potential moderator by conducting and comparing separate analyses on the female and male samples.
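The nested-model comparisons described above can be sketched as a χ² difference test. The fit statistics in the example are hypothetical, not the study's values:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_full, df_full):
    """Nested-model comparison: does dropping parameters (e.g., fixing
    c to zero in an ACE model to obtain an AE model) significantly
    worsen fit? Returns the change in chi-squared, the change in
    degrees of freedom, and the p-value."""
    delta_chi2 = chisq_restricted - chisq_full
    delta_df = df_restricted - df_full
    p = chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p

# Hypothetical fit statistics (not the study's actual values):
d_chi2, d_df, p = chi_square_difference(3.1, 5, 2.4, 4)
# Δχ² = 0.7 with Δdf = 1: nonsignificant, so the restricted model is retained
```

A nonsignificant Δχ² means the more parsimonious restricted model fits essentially as well as the full model, which is the criterion the study uses to favor the AE models.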
Table 2 provides the individual-level means, standard deviations, and correlations among the variables examined in this study for the total sample and for each gender group separately. In the total sample, MZ twins are slightly more likely to respond to the survey than DZ twins (r = 0.07, p < 0.05). Age and gender are almost perfectly correlated because, as indicated earlier, the female and male twins belong to two different age cohorts. The male and female samples showed similar patterns of relationships among the majority of variables.
Table 2. Means, standard deviations, and correlations of the variables
Note: For the whole sample, N varies from 1733 to 2088 due to missing data on education and twin closeness. For the male sample, N varies from 957 to 1100; for the female sample, N varies from 776 to 988. MZ refers to monozygotic twins and DZ refers to dizygotic twins. Tetrachoric or polyserial correlations are reported for dichotomous variables.
Our hypothesis predicted that response behavior is genetically influenced. Table 3 provides the model-fitting results before partialling out potential confounds. As shown in Table 3, the shared-environment components in the ACE model failed to exert significant influence. The estimated c² values were not significantly different from zero (0.00 for male twins, 0.10 for female twins, and 0.02 for the total sample; the 95% confidence intervals all include zero). After restricting the corresponding paths to zero, the AE models showed better fit than the CE models for the male and female samples as well as for the total sample. For example, the AE model for the male sample has satisfactory fit indices (CFI = 0.98, TLI = 0.99, and RMSEA = 0.02), whereas the CE model fits much worse (CFI = 0.67, TLI = 0.75, and RMSEA = 0.13). The AE models are more parsimonious than their corresponding ACE models and did not exhibit worse fit than the full ACE models (i.e., nonsignificant Δχ² with Δdf = 1). Consequently, the AE models were chosen as the best-fitting models.
Table 3. Model fit results for genetic influences on survey response behavior (without control variables)
(Table body not reproduced. Columns include: number of pairs; variance components with 95% confidence intervals; and model fit indices.)
Note: A, additive genetic; C, shared-environment; E, nonshared environment. MZ stands for monozygotic or identical twins and DZ stands for dizygotic or fraternal twins. 95% confidence intervals are reported in parentheses.
Table 4 provides the results after partialling out the influence of twin closeness, age, and education. The results are highly similar to those in Table 3, although the sample size is smaller due to missing values in the control variables. Based on χ² difference tests, the AE models were again the best-fitting models for the male, female, and total samples. The heritability estimates are similar to those in Table 3 before controlling for the confounds, and the males and females have similar heritability estimates (i.e., 0.46 and 0.49, respectively).
Table 4. Model fit results for genetic influences on survey response behavior (after controlling for twin closeness, age, and education)
(Table body not reproduced. Columns include: number of pairs; variance components with 95% confidence intervals; and coefficients for control variables.)
Note: Sample sizes (reported in number of twin pairs) are smaller than those in Table 3 due to missing values in the control variables. A, additive genetic; C, shared-environment; E, nonshared environment. MZ stands for monozygotic or identical twins and DZ stands for dizygotic or fraternal twins. 95% confidence intervals are reported in parentheses. None of the coefficients for control variables were significant at p < .05.
Because the two sets of analyses yielded similar results, and following the practice of previous research (e.g., Nicolaou et al., 2008), we rely on the model estimation results without control variables (the larger sample) for interpretation. As Table 3 shows, based on the whole sample, 45% of the variance in survey response behavior was explained by genetic influences, whereas 55% was explained by nonshared environmental factors and measurement error. Thus, the study hypothesis was supported. The estimate of genetic influence was similar for the male and female samples, indicating that gender does not moderate the strength of genetic influence on survey response. Because we can safely assume that the genetic influence on survey response behavior is exogenous (i.e., genetic factors influence survey response behavior but not vice versa), the results support a reasonably strong causal interpretation.
Surveys are a popular data collection tool for OB research and practice, yet relatively little is known about the factors driving compliance with requests for survey participation (Spitzmüller et al., 2007). Our knowledge of how to design, deliver, and analyze surveys has outpaced our understanding of the factors that encourage prospective respondents to complete questionnaires. There is a clear need for research of this nature due to its implications for sample sizes and nonresponse bias in surveys conducted for OB research and practice. Drawing upon the behavioral genetics and survey nonresponse literatures, this study demonstrates that survey response behavior is substantially heritable (h² = 0.45). The current study is the first to examine the genetic components of survey participation. As Ilies et al. (2006) suggested, findings from behavioral genetics research might have profound implications for examining constructs central to the study of behavior in organizations. This study not only helps illuminate the underpinnings of survey response tendencies, but it also expands what is known about the genetic influences driving helping behavior since survey response is considered one type of helping behavior (Rogelberg et al., 2006; Spitzmüller et al., 2007; Spitzmüller et al., 2006).
The magnitude of the genetic influence on survey response behavior is worth considering. Results showed that genetic influences explained 45% of the variance in survey response after partialling out potential confounds. Taking the square root of this value indicates a 0.67 correlation with survey response behavior. This is considerably larger than the effects that have been found for other antecedents of survey response (e.g., response facilitation techniques such as preliminary notification, incentives, and so forth). To put this in context, we could compare it to the correlations resulting from Yammarino et al.'s (1991) meta-analysis of survey response predictors. According to this meta-analysis, the two most powerful predictors of survey response (preliminary notification and $0.50 incentives) yielded average correlations of only 0.176 and 0.184, respectively. Admittedly, the studies comprising Yammarino et al.'s (1991) meta-analysis showed some variability in the individual effect sizes obtained. Even so, 95% of the 184 correlations meta-analyzed were at or below 0.30. Studies examining personality predictors of passive and active survey nonresponse offer additional points of comparison. Using the formulas provided by Becker (2000) to convert published means and standard deviations to correlation coefficients, results show correlations between conscientiousness and passive nonresponse ranging from r = −0.08 to −0.15 (Rogelberg et al., 2003). The influence of conscientiousness on active nonresponse is characterized by correlation coefficients from −0.22 to −0.27 (Rogelberg et al., 2003). By comparison, the genetic influence shown in the current study (r = 0.67) substantially exceeds the influence of other antecedents typically examined in the literature.
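For readers who want to reproduce this kind of conversion, one common equal-group-size formula (among the conversions Becker, 2000, covers) is r = d / √(d² + 4). The effect size in the example is illustrative, not a value from the studies cited:

```python
import math

def d_to_r(d):
    """Convert a standardized mean difference (Cohen's d) into a
    point-biserial correlation, assuming equal group sizes. This is a
    common simplification; Becker (2000) covers more general cases."""
    return d / math.sqrt(d**2 + 4)

# Illustrative: a modest mean difference of d = -0.3 between
# respondents and nonrespondents corresponds to r of roughly -0.15.
r = d_to_r(-0.3)
```

Unequal group sizes (common when response rates are far from 50%) call for a weighted variant of this formula, so the simple version above should be treated as an approximation.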
While the results of this research are noteworthy, they should be interpreted in the context of several limitations. It is important to acknowledge that this study was not conducted on a completely random sample. Instead, an initial survey, used to obtain zygosity information 6–20 years prior to the present study, determined the population whose response behaviors were investigated. As such, we essentially selected out those who had not previously participated in a survey. The impact of this limitation is likely minimized by the high rate of response to the initial zygosity measures administered prior to this study. Nevertheless, it is important to consider its implications. Using the zygosity measure as a “prescreen” may have caused us to achieve a higher response rate to the leadership survey than would have been obtained if we had sent the leadership survey to the entire population that received the initial zygosity measure. To help alleviate concerns about this, we should point out that the present study was not designed for the purpose of estimating survey response rates among populations of twins. Rather, it was designed to look at the relationships between genetics and survey response. Because the analyses used to test our hypothesis were based on the comparison of pair concordance between identical and fraternal twins, this prescreening issue should not result in serious bias on the heritability estimates obtained.
Looking at this limitation from another angle, the fact that everyone in our study population (whether they completed the leadership survey or not) had completed at least one earlier survey (i.e., the zygosity measure) limits our ability to confidently generalize to people with no prior survey experience. We presume that many members of the working population to which we wish to generalize have completed one or more surveys (e.g., organizational climate surveys re-administered annually) at various points in their lives. For this reason, we do not expect prior survey experience to pose a major threat to the external validity of our findings.
It should be noted that this study was restricted to Caucasian twins born in Minnesota between 1936 and 1955 as well as those born between 1961 and 1964. The degree to which our findings characterize people who are not twins as well as individuals from other races, regions, countries, and generations is simply unknown. Cross-cultural and cross-generational replication would help increase confidence in the external validity of the findings.
Twin closeness, which was examined as a control variable, was measured 6–20 years prior to the administration of the leadership survey. This is another limitation. To gauge the seriousness of this concern, we examined the stability of twin closeness via archival data. The twin registry included closeness ratings provided by 276 female twins whose initial rating was followed by a second rating provided 8 years later. The correlation between the two measures of closeness conducted 8 years apart was r = 0.82 (N = 276, p < 0.001). This high correlation may reduce some concerns regarding the accuracy of our closeness measure. The fact that the conclusions of this study remain the same regardless of whether the closeness control variable was included may also help alleviate concerns about potential inaccuracies in the closeness measure.
Research in the social and organizational sciences is often limited by cross-sectional designs, which do not examine behavior over time. For example, studies of the antecedents of prosocial behavior in general (e.g., Batson, Bolen, Cross, & Neuringer-Benefiel, 1986; Bierhoff & Rohmann, 2004) and survey response behavior in particular (e.g., Sax et al., 2003) commonly evaluate the relationship between participants' scores on an initial personality inventory and their subsequent responses to a single helping or survey opportunity. Our research suffers from this limitation as well. Although practical constraints precluded an examination of responses to multiple survey administrations, a longitudinal design would have strengthened this study by enabling us to examine research questions pertaining to patterns of survey response over time.
Finally, we should point out that this study did not include potential mediators (e.g., individual differences, affect) which might have helped clarify why survey response is heritable. Hopefully, the discovery reported in this study will serve as both a catalyst and a compass, stimulating and guiding follow-up research that digs deeper into this issue.
Despite its limitations, this study contributes to the emerging body of knowledge pertaining to individual differences in survey response behavior (e.g., Rogelberg et al., 2003; Rogelberg et al., 2006; Sax et al., 2003). If attitudes/traits and survey response are heritable (potentially due to common genetic influences), and surveys are used to assess employee attitudes/traits, researchers should be concerned about whether data obtained from a sample that opted to submit a given survey generalize to the broader population of interest. Using a survey on conscientiousness as an example, if the study's goal is to estimate the overall level of conscientiousness within a given population, the results obtained from those who volunteer to complete the measure may reveal an artificially high mean level of the trait due to the nonresponse of those low in conscientiousness. If the survey's goal is to examine the impact of conscientiousness on an outcome of interest, the predictor data obtained may suffer from a restriction of range, which can also bias study results. Depending on the nature of the data, range restriction can distort (i.e., decrease or increase) the correlation between a predictor and an outcome measured by a survey (Zimmerman & Williams, 2000). Beyond bivariate correlations, more complex statistics (e.g., regression, SEM) are also affected by the type of nonresponse bias suggested above, but in less straightforward ways (Dey, 1997).1
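The range-restriction point can be made concrete with a small simulation. In the sketch below, all values are hypothetical: a predictor (standing in for conscientiousness) correlates about 0.5 with an outcome in the full population, but if people below the predictor mean disproportionately ignore the survey, the correlation observed among respondents is attenuated.

```python
# Simulated illustration (hypothetical values) of nonresponse-driven range
# restriction: selecting respondents on the predictor shrinks its variance
# and attenuates the observed predictor-outcome correlation.
import random

random.seed(42)

n = 10_000
population = []
for _ in range(n):
    conscientiousness = random.gauss(0, 1)
    # Outcome built to correlate ~0.5 with the predictor in the population.
    outcome = 0.5 * conscientiousness + random.gauss(0, (1 - 0.5 ** 2) ** 0.5)
    population.append((conscientiousness, outcome))

def pearson_r(pairs):
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Suppose those low on the trait tend not to respond: keep only pairs
# whose predictor score falls above the population mean of zero.
respondents = [p for p in population if p[0] > 0.0]

print(pearson_r(population))   # close to 0.50 in the full population
print(pearson_r(respondents))  # noticeably smaller among respondents only
```

This simple direct-selection case produces attenuation; as Zimmerman and Williams (2000) note, other patterns of restriction can instead inflate a correlation.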
Thus, this study suggests that nonresponse has a biological basis and, left uncontrolled, can introduce systematic bias into certain studies. Nonresponse is therefore not a problem that scholars can easily dismiss, and the current study strongly reinforces the need for researchers to address it in a constructive fashion. Under certain circumstances, this may entail collecting data through means other than voluntary surveys. Recognizing that surveys are often the only viable data collection option in practice, this study also underscores the need to identify and implement creative methods for encouraging participation from those predisposed to nonresponse.
Future research directions
As the role and nature of surveys in OB research and practice continue to evolve, there is no reason to believe that we have conceived and tested all possible ideas for improving response rates. In all likelihood, a host of useful techniques awaits empirical discovery. Hopefully, this study will encourage future research aimed at developing and testing new response facilitation techniques. Studies designed to determine what kind of incentives or interventions best “overcome” the predisposition to not return surveys would be of particular value.
There is also a need for basic research designed to identify the specific components that make up the environmental influence on response behavior. By capturing and modeling these environmental influences, researchers can disentangle them from measurement error, identify their unique contribution in explaining survey response behavior, and then use this information to inform the development of response facilitation techniques.
As Ilies et al. (2006) point out, the field of OB would benefit from additional research on how genotype-environment interactions affect outcomes of interest. Future research should seek to identify environmental factors that moderate the extent to which survey response is genetically based. Perhaps organizational factors (e.g., the degree to which prospective respondents feel their employers have followed up on past survey results) reduce the strength of the genetic influence on survey response in applied settings.
One important aspect of this study is its potential to stimulate research designed to pinpoint the trait and attitudinal variables that mediate the effect of the genetic influence on survey response. It is possible that genetics predispose people to particular attitudes, cognitions, and affective states that inhibit survey response. Potential attitudinal mediators include attitudes toward surveys in general (Rogelberg et al., 2006), attitudes toward the survey sponsor (Rogelberg et al., 2003), and trust that the survey sponsor will act on the data provided (Thompson & Surface, 2007, 2008). Another possibility involves perceptions of oversurveying. Currently, it is not uncommon for individuals to receive requests for survey participation from a host of organizations (e.g., employers, churches, clubs, marketers, political organizations). This may cause people to feel oversurveyed. Perhaps certain personality traits (e.g., reciprocation wariness; Spitzmüller et al., 2006) lower the threshold (i.e., the number of surveys) required before an individual begins to feel oversurveyed. Advanced modeling techniques which allow for the examination of individual differences in growth trajectories over time would be particularly useful in research designed to examine this issue. Overall, studies investigating perceptions of surveys and oversurveying, as well as other attitudinal mediators of the effect uncovered in this study, could begin to inform the development of targeted interventions encouraging participation from those prone to nonresponse. The caveat, however, is that these potential mediators (e.g., traits, attitudes, and perceptions) may need to be measured using methods other than voluntary self-reported surveys. To this end, some personality instruments provide a format for other-ratings. Furthermore, independent observers in assessment centers could provide measures on focal individuals' attitudes.
Finally, research refining the outcome examined in this study would be informative. Rogelberg et al. (2003) maintain that nonresponse can be active or passive in nature. These two forms of nonresponse may exhibit different degrees of heritability because they may be influenced by distinct sets of traits and attitudes. This possibility awaits empirical investigation.
Surveys play a critical role in OB research and practice alike. Practitioners use them to accomplish a variety of objectives, such as diagnosing organizational problems, assessing climate, and measuring the impact of change initiatives. Meanwhile, researchers rely on surveys, which are often administered outside of the workplace, to gather data for studies designed to generate new knowledge in the field of OB. In both research and applied contexts, problems stemming from low response rates and nonresponse bias create a need to better understand the decision to comply with or ignore appeals for survey participation. The dearth of research addressing this need has provoked the criticism that "survey nonresponse is a rather neglected stepchild in OB research" (Spitzmüller et al., 2006: p. 19). However, it has also stimulated studies on the antecedents of response behavior, which have appeared in the OB literature in recent years (e.g., Rogelberg et al., 2003; Rogelberg et al., 2006; Spitzmüller et al., 2006; Spitzmüller et al., 2007). The current study is the first to consider the role genetics plays in survey response behavior. Hopefully, future studies will build on this one to increase what is known about the underpinnings of survey response. Ultimately, such work can be used to improve response rates as well as the accuracy of the conclusions drawn from the survey data collected from voluntary respondents.
While it is difficult to know the precise impact of nonresponse bias in practice, strategies for assessing the likely influence of nonresponse on survey results have been offered in the literature (Viswesvaran, Barrick, & Ones, 1993).
Lori Foster Thompson is an associate professor in the Industrial/Organizational Psychology program at North Carolina State University. Her research, teaching, and consulting pertain to employee reactions to emerging technologies, organizational surveys, and humanitarian work psychology. She has co-authored a book, book chapters, and various articles on these topics and currently serves on the editorial boards of The Industrial-Organizational Psychologist (TIP), the Journal of Organizational Behavior, and Ergometrika, where she is associate editor.
Zhen Zhang is an Assistant Professor of Management at Arizona State University. His research focuses on leadership process and development, the biological basis of organizational behavior, and research methods. His work has appeared in several journals, including the Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, the Leadership Quarterly, and Organizational Research Methods.
Richard Arvey is currently the Head of the Department of Management and Organization, National University of Singapore. He received his PhD from the University of Minnesota and has taught and conducted research at the Universities of Tennessee, Houston, and California-Berkeley. He conducts research on issues pertaining to job satisfaction, leadership, and motivation, as well as recruitment and staffing.