The relationship between cognitive ability and personality scores in selection situations: A meta‐analysis

Received: 26 February 2019 | Revised: 2 September 2020 | Accepted: 5 September 2020 | DOI: 10.1111/ijsa.12314


| INTRODUCTION
Faking, the intentional distortion of answers by applicants, is a frequently occurring phenomenon found when personality tests are used for personnel selection (e.g., Anglim et al., 2018; Birkeland et al., 2006; Galić et al., 2012; Griffin & Wilson, 2012). In this context, interindividual differences in faking behavior are particularly problematic, as they can affect the applicants' rank order and thus the validity of selection decisions (König et al., 2011; McFarland & Ryan, 2000; Mueller-Hanson et al., 2006; Raymark & Tafero, 2009). The majority of faking theories attribute these differences in applicants' faking to two factors in particular (Ellingson & McFarland, 2011; Marcus, 2009; McFarland & Ryan, 2006; Snell et al., 1999): (a) applicants' motivation to present themselves in a highly favorable way in order to improve their chances within the selection process, and (b) the abilities needed to manage the image they convey to the organization by distorting the answers in the required direction.
Regarding the abilities aspect, some authors (e.g., Johnson & Hogan, 2006; Kleinmann et al., 2011) even suggested that part of the criterion validity of personality tests may be attributed to the fact that such abilities necessary for faking are also of great relevance in today's working world. In line with this argument, several theoretical models have identified applicants' cognitive ability as a crucial determinant of the occurrence and magnitude of faking behavior (e.g., Marcus, 2009; Snell et al., 1999; Tett & Simonet, 2011).
However, previous empirical results were inconclusive: while a substantial proportion of studies found a corresponding effect (Grubb & McDaniel, 2007; Levashina et al., 2014; Pauls & Crost, 2005), others did not (Furnham et al., 2008; Levashina et al., 2009; Mudgett, 2000; Schilling et al., 2020). Not only are results inconclusive, it is also unclear why the results are so inconsistent. For example, faking of personality tests has been studied in the field and in the lab, and both research strategies have their advantages and disadvantages that could also matter for the relationship with cognitive ability (e.g., van Hooft & Born, 2012; Ryan et al., 1998). Similarly, some researchers have used Likert-scaled personality tests, whereas others have used forced-choice tests (e.g., Hausknecht, 2010; MacKenzie et al., 2010), and cognitive abilities have been measured with different kinds of operationalizations (e.g., verbal vs. nonverbal cognitive ability tests; e.g., Hale & Padgett, 2014; Klehe et al., 2012). Thus, it remains open how strongly, and which type of, cognitive ability is related to faking and which variables moderate this relationship. An answer to these questions will help to provide a better understanding of the phenomenon of faking and thus of the consequences of faking for the construct and criterion validity of personality tests.
The goal of the current study was, therefore, to provide aggregated results concerning the relationship between participants' cognitive ability and their faking in personality tests during selection situations. Given that only a minority of studies directly report a corresponding effect size, our meta-analysis focused on the correlation between cognitive ability and personality scores. We compared this meta-analytic correlation between selection samples and nonselection samples because this correlation can be found in different kinds of studies, irrespective of whether they were conducted in the laboratory or in the field or how they operationalized faking. In the following, we explain how the relationship between cognitive ability and faking should affect corresponding correlations, and we introduce possible moderating variables in this context.

| Personality tests in selection and cognitive ability
The majority of faking theories have agreed that successful faking requires not only the motivation to present oneself in a highly favorable way, but also the ability to behave and answer appropriately (Levashina et al., 2009; Marcus, 2009; Mueller-Hanson et al., 2006; Roulin et al., 2016; Snell et al., 1999). Applicants, for instance, have to identify the expectations of hiring organizations (i.e., the criteria on which they are being assessed) to subsequently show self-benefitting behavior (Kleinmann et al., 2011; König et al., 2006). For personality tests, this means that applicants could try to figure out which pole of a personality test item the hiring organizations consider the positive one so that they can move their response toward this pole (Klehe et al., 2012). Applicants with higher cognitive ability should handle this mainly analytical task more easily, which may, in turn, lead to more faking behavior among these applicants (Tett & Simonet, 2011).
However, when empirically analyzing the relationship between applicants' cognitive ability and their faking in personality tests in the form of a meta-analysis, the diversity of previous studies posed a particular challenge. Large differences in the study designs and in the operationalizations of faking prevented a direct aggregation of the previous study results. Part of the research in this area consisted of field studies which examined faking only at a group level, thus not allowing for the calculation of individual-level correlations between cognitive ability and faking (e.g., MacKenzie et al., 2010). The other part of the studies were laboratory studies, which predominantly used a within-person design, in which the participants took the personality test twice, once honestly and once with the instruction to respond as an applicant.
However, even within this group of laboratory studies, aggregating results proved difficult, as the operationalizations of faking differed substantially (for a detailed overview of different operationalizations of faking, see Burns & Christiansen, 2011).
Some studies operationalized faking as the difference between the participants' personality scores in the two conditions (e.g., Peterson et al., 2009), others measured faking as the within-person correlation at the item level (e.g., Mersman & Shultz, 1998) or modeled faking as a latent factor in a structural equation model, which loads on all personality dimensions under the selection condition but not under the honest condition (e.g., Wrensen & Biderman, 2005). As a result, only a small fraction of the laboratory studies reported the direct relationship between cognitive ability and faking required for meta-analytic calculations (for instance, less than 20 percent of the laboratory studies included in this meta-analysis reported a corresponding effect size). In addition, there were also a number of laboratory studies that did not use a within-person design and instead measured faking with social desirability or impression management scales (e.g., Robie et al., 2010).
In order to aggregate the results of as many of these studies as possible, our analysis is based on what we believe is the most basic indicator of the corresponding relationship: the correlation between applicants' cognitive ability and their personality scores in selection situations. If cognitive ability is a determinant of faking, it should also be a predictor of personality scores in selection situations. The necessary correlations were provided in almost all studies, whether they were field or laboratory studies, whether they used a within- or between-person design, and whether faking was operationalized as the difference between personality scores under two conditions, as a score on an impression management scale, or as a latent variable in a structural equation model. Finally, we will compare our results with those from samples in which the personality test was not completed under the pressure of a selection situation (nonselection samples) so that we can draw conclusions about the relationship between cognitive ability and faking. Previous meta-analyses without a focus on selection situations showed no, or rather low, correlations between cognitive ability and personality scores (Lange, 2013; Poropat, 2009). Assuming that applicants with higher cognitive ability are more successful at faking (e.g., Marcus, 2009; McFarland & Ryan, 2006; Snell et al., 1999) and that successful faking usually leads to higher scores on personality tests, we expect to find higher correlations between cognitive ability and personality scores in selection samples than in nonselection samples.
Hypothesis 1 Correlations between cognitive ability and personality scores are higher in selection samples than in nonselection samples.
Based on the diversity of faking research, there is much to suggest that the relationship between cognitive ability and personality also varies systematically. In the following sections, we introduce three meta-analytic moderators, which address the diversity of study designs, differences in the personality tests used, and differences in the type of cognitive ability tests employed.

| Study design (laboratory vs. field)
In general, most studies in the area of faking research can be assigned to two categories (Birkeland et al., 2006): (a) studies conducted in the field, with real applicants in actual selection situations and (b) studies conducted in the laboratory with participants who are put in a simulated selection situation or who are instructed to fake. Previous meta-analyses about faking in field and laboratory studies found significantly higher faking effects in the laboratory than in field studies (Birkeland et al., 2006; Hooper, 2007). Regarding this difference, some authors have argued that the processes underlying faking likely differ between the two types of studies (Ones et al., 1996): In field studies, the applicants' motivation to fake depends on many individual factors, including subjective considerations and situational circumstances.
In laboratory studies, the faking motivation arises rather from the concrete instruction or from the cover story that is used to induce the application situation; this should lead to a similarly high faking motivation for all participants, which, in turn, may make individual differences in the ability to convert this motivation into faking behavior more evident. In line with this reasoning, we expect higher correlations between cognitive ability and personality scores in laboratory studies than in field studies.
Hypothesis 2 Correlations between cognitive ability and personality scores are higher in laboratory studies than in field settings.

| Type of personality test
There are two main types of personality tests used in personnel selection (Vasilopoulos et al., 2006): (a) forced-choice personality tests, in which participants have to choose between statements representing different personality dimensions for each single item, and (b) single-stimulus personality tests, in which each item belongs to one personality dimension and participants express their degree of rejection or approval. Forced-choice tests can be considered fairly robust against faking (Cao & Drasgow, 2019; Martin et al., 2002), mainly because it should be more difficult to answer in a socially desirable manner if one item includes two equally desirable dimensions (Vasilopoulos et al., 2006). In this case, applicants who are motivated to fake are faced with the task of determining which of the corresponding dimensions is most relevant for a future employer.
This analytical task is difficult because, in contrast to single-stimulus personality tests, the social desirability of the items provides the applicants with fewer hints for successful faking. Given this increased difficulty with regard to faking in forced-choice personality tests, cognitive ability should be even more important when this type of test is used. Therefore, we expect higher correlations between cognitive ability and personality scores in samples completing forced-choice tests than in samples completing single-stimulus tests.

Hypothesis 3 Correlations between cognitive ability and personality scores are higher in studies employing forced-choice personality tests than in studies employing single-stimulus personality tests.

| Type of cognitive ability test
The type of cognitive ability test is also a potential moderator of the correlation between cognitive ability and personality scores.
Previous studies showed higher correlations between verbal cognitive ability and faking than between nonverbal cognitive ability and faking (Grieve & Mahar, 2010; MacCann, 2013). The authors of these studies argued that a deeper understanding of the items is beneficial for effective faking, which underlines the importance of verbal cognitive ability. Following MacCann (2013) as well as Grieve and Mahar (2010), we thus expect higher correlations in samples completing verbal cognitive ability tests than in samples completing nonverbal cognitive ability tests.

Hypothesis 4 Correlations between cognitive ability and personality scores are higher in samples completing verbal cognitive ability tests than in samples completing nonverbal cognitive ability tests.

Studies that cannot be classified into one of the aforementioned categories will be summarized in a mixed category. We have no further hypotheses regarding this mixed category.

| Literature search
Four strategies were used to identify studies for this meta-analysis (see Table 1).

| Inclusion and exclusion criteria
To be included in our current meta-analysis, studies had to meet the following criteria: (a) Studies had to include some kind of selection situation. This was the case for field studies with a real selection situation (e.g., applying for a job or to a university) or laboratory studies with a simulated selection situation (e.g., induced by an instruction such as 'Imagine you are applying for a job as a…'). This criterion was also applied during our search for control data from nonselection samples. Even though this approach drastically limited the number of corresponding samples, it was the only practicable way to follow a uniform and consistent search strategy. (b) There had to be some motivation for the participants to fake and present themselves in a favorable way. Accordingly, we excluded field studies in which it was clear to the participants that the personality test score would not be used for selection purposes (e.g., Merkulova et al., 2014) and studies in which it was unclear whether the participants would be motivated to present themselves favorably (e.g., in a compulsory military service examination; Boss et al., 2015). Furthermore, we excluded laboratory studies in which some tests were filled out under selection conditions but the personality test was not (e.g., Peeters & Lievens, 2005). Moreover, we excluded studies that measured faking solely as overclaiming (e.g., Ackerman & Ellingsen, 2014) or as fraud in objective tests (e.g., Wright et al., 2014). (c) Personality had to be measured by self-report, and the personality scales had to belong to (or at least be assignable to) the Five-Factor model of personality. (d) Studies had to include some objective measurement of cognitive ability in the form of an intelligence or ability test.
We also included studies reporting college admission test scores, for example from the Scholastic Aptitude Test (SAT) or American College Testing (ACT), as both have been found to be valid measures of cognitive ability (Frey & Detterman, 2004; Jackson & Rushton, 2006; Koenig et al., 2008). However, studies which only reported academic achievements, such as the grade point average (GPA), were excluded, as these measurements are considered to be the outcome of cognitive ability, personality, and other factors rather than a measure of cognitive ability itself (e.g., Komarraju et al., 2013; Poropat, 2009).
(e) Furthermore, studies had to report the correlation between personality test scores and cognitive ability, as we used this correlation as the effect size. If this precondition was not met, we contacted the author(s) and requested the corresponding data. (f) In a final step, we carried out a sensitivity analysis, comparing the results with and without the largest five percent of the samples. In this way, we identified seven individual samples (e.g., Arthur et al., 2014; De Fruyt et al., 2006; Levashina et al., 2014) that were at least three times larger than the largest remaining study and at least 40 times larger than the average sample size of the remaining studies. The seven samples consisted exclusively of field data from personnel selection providers or were directly taken from the personnel selection of large companies or public authorities. Thereby, the meta-

| Final data set
The final data set is summarized in Figure 1. Table S1 of the Supporting Information gives an overview of all studies included, and Table S2 gives an overview of the resulting independent samples.

| Coding of studies
Personality scales not based on the Five-Factor model of personality were grouped into the model based on the work of Salgado and Táuriz (2014). If a specific dimension was not mentioned in their overview, we used a strategy developed by Barrick and Mount (1991). Cognitive ability tests were categorized as verbal tests, nonverbal tests (e.g., Raven's Progressive Matrices; Raven, 1938), and mixed tests (e.g., Wonderlic Personnel Test; Wonderlic, 1996). Categorization occurred primarily according to the information provided in the corresponding article and was carried out independently by two raters (both with a Master's degree or equivalent). If the authors did not provide the relevant information in the article, the categorization was conducted with the help of the test manuals. To be able to compare effect sizes in studies that provided correlations for verbal and nonverbal cognitive ability tests, we calculated, as far as all required data were available, the composite scores and the corresponding reliabilities according to Schmidt and Hunter (2014, p. 442).
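The composite step can be illustrated with a short sketch. The paper relies on Schmidt and Hunter (2014, p. 442); the code below is an illustrative Python translation (the study itself worked in R), assuming unit-weighted, standardized subtests and using the Mosier formula for composite reliability; all numeric values are hypothetical.

```python
import math

def composite_correlation(r_y, r_xx):
    """Correlation of a unit-weighted composite of standardized subtests
    with an outside variable y.
    r_y  : list of subtest-y correlations
    r_xx : full intercorrelation matrix of the subtests (1s on diagonal)
    """
    k = len(r_y)
    denom = math.sqrt(sum(r_xx[i][j] for i in range(k) for j in range(k)))
    return sum(r_y) / denom

def composite_reliability(rel, r_xx):
    """Mosier reliability of the same unit-weighted composite.
    rel  : list of subtest reliabilities
    r_xx : intercorrelation matrix (1s on diagonal)
    """
    k = len(rel)
    off = sum(r_xx[i][j] for i in range(k) for j in range(k) if i != j)
    return (sum(rel) + off) / (k + off)

# Two cognitive ability subtests, e.g. verbal and nonverbal (illustrative values)
r_y = [0.20, 0.10]                 # each subtest's correlation with a personality score
r_xx = [[1.0, 0.5], [0.5, 1.0]]    # subtest intercorrelation
print(round(composite_correlation(r_y, r_xx), 3))         # 0.173
print(round(composite_reliability([0.8, 0.7], r_xx), 3))  # 0.833
```

For two subtests this reduces to the familiar form r = (r1 + r2) / sqrt(2 + 2 r12).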
If the required correlations between the variables that should be aggregated were not documented, we calculated the arithmetic mean using Fisher's Z-values.
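A minimal sketch of this averaging step (illustrative Python; the input values are hypothetical): correlations are transformed to Fisher's z, averaged arithmetically, and transformed back.

```python
import math

def mean_correlation_fisher(rs):
    """Average several correlations via Fisher's z-transformation:
    z = atanh(r), take the arithmetic mean, back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    return math.tanh(sum(zs) / len(zs))

# e.g., two scales coded as the same Big Five dimension within one sample
print(round(mean_correlation_fisher([0.10, 0.30]), 3))  # 0.202
```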
Some studies also reported several independent correlations between the variables of interest for a single sample (e.g., there were two or more correlations between personality scales that were categorized as the same personality dimension and the cognitive ability measurement, or studies provided only the correlation of two or more cognitive ability subtests that were both categorized as verbal/ nonverbal). In these cases, we also used the aggregation procedure laid out in the last paragraph.

| Meta-analytic procedures
We followed the procedures for psychometric meta-analysis described by Schmidt and Hunter (2014). Mean correlations between cognitive ability and personality dimensions were estimated by weighting the individual correlation coefficients by sample size (see equation 3.1 in Schmidt & Hunter, 2014, p. 95). These 'bare bones' correlations are comparable with the results from methods in the tradition of Hedges and colleagues (Hedges & Vevea, 1998). Furthermore, psychometric meta-analysis provides the option to correct for the unreliability of measurement scales and for range restriction, yielding the population correlation ρ. As not all studies reported the required information, we were unable to correct correlations individually and thus used artifact distribution meta-analysis (Schmidt & Hunter, 2014) instead.
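The bare-bones step can be sketched as follows (illustrative Python; the study itself used the metafor package in R and the Schmidt and Le program, and the sample values here are hypothetical): the mean correlation is the sample-size-weighted average, and the observed variance is the matching frequency-weighted variance of the effect sizes.

```python
def bare_bones(rs, ns):
    """Sample-size-weighted mean correlation and frequency-weighted
    observed variance of the effect sizes (a sketch of the Schmidt &
    Hunter bare-bones step)."""
    N = sum(ns)
    r_bar = sum(n * r for r, n in zip(rs, ns)) / N
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / N
    return r_bar, var_obs

# Three illustrative samples
rs, ns = [0.10, 0.20, 0.30], [100, 200, 100]
r_bar, var_obs = bare_bones(rs, ns)
print(round(r_bar, 3), round(var_obs, 5))  # 0.2 0.005
```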
Unreliability of measurement scales was corrected for cognitive ability (the predictor) and for the personality scales (the criterion). We also corrected for the indirect range restriction of cognitive ability, since many samples may have already been preselected on the basis of cognitive ability or related constructs (e.g., students in laboratory studies who were selected based on their Scholastic Assessment Test scores). If no information at all was available for a meta-analytical calculation regarding predictor reliability, criterion reliability, or range restriction, the corresponding artifact distribution was specified as a uniform distribution with a mean of 1.00 and no variance (Schmidt & Hunter, 2014). In such a case, the corresponding aspect could not be corrected in the meta-analytic calculation. For the meta-analytic calculations, we used the metafor package (Viechtbauer, 2010) in R 3.3.3 (R Core Team, 2017) and the Schmidt and Le meta-analysis program (Schmidt & Le, 2005).
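In the artifact-distribution approach, the bare-bones mean is divided by average attenuation factors rather than study-by-study corrections. A simplified sketch (illustrative Python; reliability values are hypothetical, and range restriction is omitted for brevity):

```python
import math

def correct_with_artifact_distributions(r_bar, rxx_dist, ryy_dist):
    """Correct a bare-bones mean correlation for unreliability in the
    predictor and the criterion using artifact distributions: divide by
    the mean square-root reliabilities. A sketch of the Schmidt-Hunter
    artifact-distribution approach (range restriction not shown)."""
    a = sum(math.sqrt(r) for r in rxx_dist) / len(rxx_dist)  # predictor attenuation
    b = sum(math.sqrt(r) for r in ryy_dist) / len(ryy_dist)  # criterion attenuation
    return r_bar / (a * b)

# Hypothetical reliabilities for cognitive ability (rxx) and personality scales (ryy)
rho = correct_with_artifact_distributions(0.15, [0.85, 0.90], [0.75, 0.80])
print(round(rho, 3))  # 0.182
```

When no reliability information is available, the text's default (a distribution fixed at 1.00) simply leaves the correlation uncorrected.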
We report 80% credibility intervals around ρ to provide an analysis of the homogeneity of the corrected effect sizes as well as the percentages of the variance in effect sizes explained by artifacts (Schmidt & Hunter, 2014). In this regard, based on the '75% rule' (Schmidt & Hunter, 2014), less than 75% variance reduction by artifact correction indicates the presence of additional moderators. For the moderator analyses, we calculated 95% confidence intervals around ρ to locate meaningful moderating effects (Schmidt & Hunter, 2014; Whitener, 1990), using the formula reported by Whitener (1990).
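These two quantities can be sketched as follows (illustrative Python; the ρ, SD, and variance inputs are hypothetical):

```python
def credibility_interval_80(rho, sd_rho):
    """80% credibility interval around the corrected mean correlation:
    rho +/- 1.28 * SD of the estimated true-score correlation distribution."""
    return rho - 1.28 * sd_rho, rho + 1.28 * sd_rho

def percent_variance_explained(var_observed, var_artifacts):
    """Percentage of observed variance in effect sizes attributable to
    statistical artifacts; values below 75% suggest additional
    moderators ('75% rule')."""
    return 100 * var_artifacts / var_observed

lo, hi = credibility_interval_80(0.20, 0.10)
print(round(lo, 3), round(hi, 3))                        # 0.072 0.328
print(round(percent_variance_explained(0.010, 0.004), 1))  # 40.0 -> moderators likely
```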
In order to test the robustness of our findings, we calculated fail-safe Ns as the number of null results that would have to be added to the studies in our data set to reduce the meta-analytic outcome to a trivial average effect size (Orwin, 1983).
Following the recommendations of Schmidt and Hunter (2014) as well as McNatt (2000), we regarded correlations of r = 0.05 and below as trivial. For additional analysis of file drawer bias (Light & Pillemer, 1984), we created funnel plots of the included effect sizes for all of our meta-analytic calculations using the R-package metafor (Viechtbauer, 2010). These funnel plots were adjusted for missing studies using the trim and fill method (Duval & Tweedie, 2000). All plots were relatively symmetrical, indicating that our meta-analysis did not seem to prioritize the consideration of statistically significant effects over nonsignificant effects. As an example, Figure 2 shows the funnel plots for the nonselection samples, field samples, and laboratory samples.
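Orwin's fail-safe N with the r = 0.05 triviality threshold described above can be computed as below (illustrative Python; the study count k and the mean correlation are hypothetical):

```python
def orwin_failsafe_n(k, r_mean, r_trivial=0.05):
    """Orwin's (1983) fail-safe N adapted to correlations: the number of
    zero-effect studies that must be added before the unweighted mean
    effect drops to the trivial threshold."""
    return k * (r_mean - r_trivial) / r_trivial

# e.g., 40 studies with a mean correlation of .20
print(round(orwin_failsafe_n(k=40, r_mean=0.20), 1))  # 120.0
```

As a check: adding 120 null results to 40 studies averaging .20 yields (40 × .20) / 160 = .05, exactly the trivial threshold.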

| Validation of the representativeness of our control data
First, we checked whether the meta-analytic outcome for our nonselection samples corresponds with the findings of other meta-analyses on the relationship between personality and cognitive ability (i.e., with the results of Lange, 2013, and Poropat, 2009). Table 2 shows the results of this comparison. As can be seen, the confidence intervals for all five dimensions show substantial overlap, indicating that our meta-analytic data for nonselection samples replicate the current state of research. Table 3 shows the meta-analytic results regarding the differences between selection and nonselection samples: Confidence intervals did not include zero for any of the dimensions, and for all dimensions, the confidence intervals did not overlap with the corresponding confidence intervals of the nonselection samples.

| Main analysis
As assumed in Hypothesis 1, significantly higher correlations were found in the selection samples for the remaining four dimensions. In the selection samples, only 29%-62% of the variance in effect size was explained by artifacts, which indicates that further moderator effects are likely (Schmidt & Hunter, 2014). Even within the moderator subgroups, the variance reduction still did not meet the 75% criterion, which hints at further moderator effects. Table 6 summarizes the meta-analytic results separately for the use of verbal, nonverbal, or mixed cognitive ability tests. With regard to Hypothesis 4, we did not find a higher effect of verbal than of nonverbal cognitive ability tests for any of the personality dimensions. However, Conscientiousness, Emotional Stability, and Agreeableness showed the reversed pattern. For Extraversion and Openness to Experience, there was no difference between the two types of ability tests. The results of the mixed category tended to lie between those for verbal and nonverbal cognitive ability tests.
In summary, these results do not provide any evidence for the moderation hypothesis specified under Hypothesis 4, but rather suggest that the relationships between different types of cognitive ability and faking may be more complex than hitherto assumed. Like the preceding moderation analysis, this analysis was not able to explain the majority of variance in the corresponding effect sizes: Only one of the 15 separate meta-analytic calculations fulfilled the 75% criterion for variance reduction.

| DISCUSSION
Our meta-analysis showed, for the first time, that the relationship between cognitive ability and personality scores differs between selection situations and nonselection situations. The correlations for selection situations were significantly positive for all dimensions of the Five-Factor model of personality, and we found significantly higher meta-analytical correlations for selection samples (ρ = 0.105-0.262) than for nonselection samples (ρ = 0.001-0.158). In other words, personality test scores share more variance with cognitive ability when measured under selection conditions.
Assuming that applicant faking is primarily responsible for this change at the construct level, our results provide evidence to support those faking theories which argue that cognitive ability is a determinant of the ability to fake (e.g., Marcus, 2009;Snell et al., 1999;Tett & Simonet, 2011).
This pattern becomes even clearer when the results are considered separately according to the study design. Our results revealed significantly higher correlations between cognitive ability and personality in laboratory studies than in field studies. The proportion of variance in personality that can be explained by cognitive ability is particularly high in laboratory studies. These results fit with the argument put forward by some authors (e.g., Ones et al., 1996) that the mental processes involved in answering personality tests in a real application situation and in a laboratory study are hardly comparable. Notably, even though the correlations in field studies differed from those in nonselection studies, the results from these two study designs showed more similarity with each other than with the results of laboratory studies.
Indeed, there may be major motivational differences between laboratory and field situations. According to most current faking models, the relationship between faking motivation and faking behavior is moderated by the ability aspect of faking (e.g., Ellingson & McFarland, 2011; Goffin & Boyd, 2009; McFarland & Ryan, 2006; Roulin et al., 2016). The individual faking motivation in real application situations varies greatly due to individual differences, concrete subjective considerations, and situational circumstances. In contrast, participants' motivation to draw an improved picture of themselves in a laboratory study results from a well-controlled indirect (or sometimes direct) instruction to fake. This may result in a more uniform faking motivation in laboratory studies than in field studies. In line with an assumed moderating effect of cognitive ability, these limited motivational differences between participants in laboratory studies may lead to a more pronounced link between cognitive ability and actual faking behavior. At the same time, such differences in motivation may also be a reason why differences between field and nonselection samples emerged solely for the personality dimensions of Conscientiousness, Emotional Stability, and Extraversion. These dimensions likely have particularly high face validity for the work context: applicants might consider them to be especially important for future employers (see Jansen et al., 2012). Accordingly, the motivation of most applicants to present themselves in a better light regarding these dimensions should be uniformly high, which, in turn, should increase the relevance of cognitive ability for successful faking behavior.
Our findings regarding different types of personality tests, in particular single-stimulus and forced-choice tests, were less clear, mainly due to the small number of studies that actually used forced-choice tests. However, it is noteworthy that the correlations between Conscientiousness and cognitive ability in samples utilizing forced-choice tests were among the highest of all meta-analytic calculations in this study (ρ = 0.310). This may also be attributable to the fact that applicants consider this dimension to be particularly important for a future employer (cf. Jansen et al., 2012). In forced-choice tests, applicants usually have to choose between several response options that belong to different personality dimensions. Applicants with high cognitive ability might excel in recognizing the importance of Conscientiousness for the world of work, and therefore be more likely to choose answers corresponding to this dimension than applicants with lower cognitive ability. As such, our findings support many authors' claims that forced-choice personality tests appear to be harder to fake than single-stimulus tests (e.g., Christiansen et al., 2005; Jackson et al., 2000), but for this very reason, they may also lead to a bias in favor of applicants with higher cognitive ability (Rothstein & Goffin, 2006; Vasilopoulos et al., 2006).
With regard to the type of cognitive ability test, our meta-analytic results contradicted the findings of previous research (Grieve & Mahar, 2010; MacCann, 2013). Our findings concerning this moderator analysis did not show a higher effect in the samples in which verbal cognitive ability was measured; rather, they indicated a stronger effect of nonverbal cognitive ability on faking in personality tests. Moreover, we even found a negative relationship between Conscientiousness and verbal cognitive ability in selection samples (ρ = −0.054). A possible explanation for this counterintuitive finding might be that merely understanding the items can be accomplished equally well by all applicants and is not the main hurdle for faking in personality tests. Instead, nonverbal abilities, such as the ability to see the patterns behind items (i.e., being able to detect the corresponding dimension) and to infer the characteristics required for a job (e.g., Kleinmann et al., 2011; König et al., 2007), might be more important for successful faking.

| Theoretical implications and future research directions
This study contributes to the theoretical understanding of faking in several main aspects. First, our results help to clarify the role of cognitive ability in the process of faking in personality tests. We were able to show that personality tests share a higher proportion of their variance with cognitive ability in selection situations than in nonselection situations. In contrast to the basic assumptions regarding the psychological construct of personality (Allport & Odbert, 1936; McCrae & Costa, 1985), our findings suggested that cognitive ability does play a role in personality assessment in selection situations. This supports the idea already put forward by previous researchers (Klehe et al., 2012; Wrensen & Biderman, 2005) that filling out a personality test in a selection situation is driven by a slightly different underlying process than filling out such a test in a nonselection context.
Second, our findings support an additional explanation of the criterion validity of personality tests in personnel selection, which has also been discussed in previous faking research (e.g., Johnson & Hogan, 2006; Kleinmann et al., 2011). In general, cognitive ability is one of the best predictors of work performance, which may also explain at least part of the criterion validity of personality tests through the relationship studied in this meta-analysis. Although the variance in personality test scores is likely dominated by personality constructs, it also seems to be influenced by variance in cognitive ability; the question arises as to what extent this is the case.
Third, the discrepancies we found between different study designs also indicate that the construct captured in laboratory studies does not fully correspond to the construct captured in real selection situations. Although Ones et al. (1996) had already pointed out that the mental processes underlying the filling out of a personality test may differ between laboratory and field situations, our results even indicate that this discrepancy may be greater than that between selection and nonselection situations. This, in turn, raises the question of to what extent results from laboratory studies can be generalized to real selection situations, and whether recommendations for personnel selection should be derived from such results at all.
For further research, we would, therefore, like to encourage a stronger focus on field studies wherever possible. We also call for a stronger verification of the construct validity of the personality tests used in the selection context, and above all, we recommend that this psychometric property be evaluated in the actual selection context.
Most importantly, in our opinion, faking research should focus more on the mental processes, strategies, and objectives of applicants in selection situations (cf. König et al., 2012; Ziegler, 2011). Only through a better understanding of what is going on in the mind of applicants when they fill out personality tests can we fully understand the phenomenon of faking.
Furthermore, we would like to encourage all researchers in the field of faking to publish more information in their papers to facilitate meta-analytical research. In this meta-analysis, we would also have liked to analyze the relationship between cognitive ability and faking more directly, but far too few studies reported the required correlations (e.g., between cognitive ability and the raw difference scores between the honest and the faking condition). Some of the primary studies included in our meta-analysis also lacked correlation tables for the study variables, and we thus had to request this very basic statistical information from the authors. In our opinion, it is, therefore, essential not only for the aggregability but also for the replicability of faking research that all further studies report the following information: (a) a detailed description of the faking instruction, ideally in the original wording, (b) reliabilities, means, and standard deviations for all study variables, individually for all groups and conditions, and the corresponding correlation tables, and (c) for within-person studies, the correlations of the raw as well as the regression-adjusted difference faking scores (see Burns & Christiansen, 2011) with all study variables.
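To make the distinction between raw and regression-adjusted difference scores concrete, the two score types can be computed as in the following minimal sketch (Python, standard library only; the honest and 'as applicant' scores are hypothetical and not taken from any primary study):

```python
# Hypothetical within-person data: the same six respondents answered a
# Conscientiousness scale honestly and under an 'as applicant' instruction.
honest = [3.1, 2.8, 4.0, 3.5, 2.2, 3.9]
faked = [4.2, 3.0, 4.5, 4.4, 3.1, 4.1]

n = len(honest)

# (1) Raw difference faking scores: faked minus honest score.
raw_diff = [f - h for f, h in zip(faked, honest)]

# (2) Regression-adjusted difference scores (cf. Burns & Christiansen, 2011):
# residuals from regressing the faked scores on the honest scores, which
# removes the dependence of the raw differences on the honest baseline.
mean_h = sum(honest) / n
mean_f = sum(faked) / n
slope = (sum((h - mean_h) * (f - mean_f) for h, f in zip(honest, faked))
         / sum((h - mean_h) ** 2 for h in honest))
intercept = mean_f - slope * mean_h
adj_diff = [f - (intercept + slope * h) for f, h in zip(faked, honest)]
```

Reporting the correlations of both score types with all study variables, as recommended in (c) above, would allow future meta-analyses to aggregate them directly.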

| Implications for personnel selection
In the real world of personnel selection, many organizations are concerned that applicants' faking behavior might seriously undermine the usefulness and validity of personality tests. Therefore, persons in charge of personnel selection may be greatly interested to know that cognitive ability plays a major role when applicants fill out a personality test and that more intelligent applicants also tend to have higher scores on such personality tests. In our opinion, these findings may inform the use of personality tests in personnel selection in at least two aspects. (a) Our findings raise the question of to what extent the intended personality constructs are being measured in selection situations and to what extent personality tests in assessments measure cognitive ability. At this point, a company may argue that as long as employees perform well, it does not matter whether they are doing so because they are truly conscientious or because they are conscientious and smart; nevertheless, the answer to this question may affect organizations' internal justification and selection of personality tests as a personnel selection tool. (b) As a further practical implication of our findings, we recommend caution when using forced-choice tests to measure personality. Forced-choice tests are considered harder to fake (e.g., Christiansen et al., 2005; Jackson et al., 2000) but also showed a fairly large proportion of shared variance with cognitive ability. Especially for the dimension of Conscientiousness, which has the highest predictive validity for work performance (Barrick & Mount, 1991), we found high correlations with cognitive ability. Organizations should, therefore, be aware that forced-choice tests likely have the advantage of being less prone to faking and simultaneously the disadvantage of measuring the actual personality construct to an even smaller extent than single-stimulus tests.

| Limitations
Three main limitations of the present meta-analysis need to be mentioned. First, we were unable to analyze the relationship between cognitive ability and faking in a direct manner: our approach only allowed us to compare correlations of personality and ability in selection and nonselection situations and to infer the effect on faking from the corresponding discrepancies. The main reason for this limitation is that an insufficient number of primary studies reported the correlations between cognitive ability and some direct measure of faking (e.g., the difference between an honest condition and an 'as applicant' condition). Hopefully, more researchers will report such information in the future, enabling such correlations to be summarized in future meta-analytic work.
Second, many of our analyses did not fulfill the 75% rule for variance reduction, suggesting room for other moderators (Schmidt & Hunter, 2014), which should be explored by future research. Third, it must be pointed out that the correlations found between cognitive ability and personality in selection situations were significantly higher than those in nonselection samples, but were rather small in effect size (Cohen, 1992; Hemphill, 2003; Paterson et al., 2016). In general, cognitive ability plays a meaningful role in the assessment of personality in selection situations, but not the most influential role.
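For readers less familiar with the 75% rule, it can be illustrated with a short bare-bones calculation in the sense of Schmidt and Hunter (2014): the variance expected from sampling error alone is compared with the observed variance of the correlations across studies. The correlations and sample sizes below are hypothetical:

```python
# Hypothetical primary studies: (observed correlation, sample size).
studies = [(0.05, 500), (0.40, 500), (0.10, 400), (0.35, 400)]

total_n = sum(n for _, n in studies)

# Sample-size-weighted mean correlation.
r_bar = sum(r * n for r, n in studies) / total_n

# Sample-size-weighted observed variance of the correlations.
var_obs = sum(n * (r - r_bar) ** 2 for r, n in studies) / total_n

# Variance expected from sampling error alone (bare-bones estimate).
var_err = sum(n * (1 - r_bar ** 2) ** 2 / (n - 1) for _, n in studies) / total_n

# 75% rule: if sampling error accounts for at least 75% of the observed
# variance, moderators are considered unlikely; otherwise a moderator
# search is warranted.
moderators_likely = var_err / var_obs < 0.75
```

In this toy example the observed correlations are far more heterogeneous than sampling error alone would produce, so the rule flags a likely moderator, which is the situation described for many of our analyses above.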

| CONCLUSION
Personality tests are considered to be a valid instrument for predicting work performance but are often criticized for their susceptibility to faking. In this context, the role played by applicants' cognitive ability in faking remains controversial. The results of this meta-analysis shed some light on this issue by revealing substantially higher correlations between cognitive ability and personality in selection situations than in nonselection situations. Thus, our findings suggest that other mental processes take place when filling out personality tests in selection situations and that accordingly, a somewhat different psychological construct might be captured compared to nonselection situations. Viewed as a whole, this also provides indirect evidence for a link between cognitive ability and faking. Moderator analyses showed that the correlations with cognitive ability are particularly high in laboratory studies, whereas the correlations in field studies differ from nonselection situations to a considerably lesser degree.
These findings suggest that the response behavior of participants in laboratory studies may be less representative of applicants in real selection situations than expected. Accordingly, the results obtained in the laboratory should only be generalized with the utmost caution.
To gain a more holistic view of faking, future research may also be well served by shifting the focus somewhat away from predictors of this phenomenon and moving toward mental processes, strategies, and objectives of applicants in selection situations.

ACKNOWLEDGMENT
We would like to thank all authors who responded to our e-mails and who pointed out further studies to us. In particular, we would like to thank the 22 authors who provided us with additional data and calculations for their studies, some of which had already been published. We are very grateful that Arthur Poropat provided us with the previously unpublished results on the relationship between personality and cognitive ability, which were part of his meta-analysis regarding the relationship between personality and academic performance (Poropat, 2009).