Learning Facts About Migration: Politically Motivated Learning of Polarizing Information About Refugees

Information processing during heated debates on asylum and immigration may often be influenced by prejudice rather than a desire to learn facts. In this article, we investigate how people process empirical evidence on the consequences of refugee arrivals through a novel survey experiment that disentangles politically motivated learning from other forms of learning and expressive responding. Specifically, we ask respondents to interpret a 2×2 table about the relationship between asylum seekers and crime rates. Crucially, respondents are randomly allocated to evaluate a conclusion that either triggers their identity-protective stakes or does not. In addition, we test for motivated responding as an alternative explanation by randomly providing some respondents with a response format that motivates them to report their inference truthfully. We find that information processing changes substantially when new information challenges existing asylum attitudes. Politically motivated learning is strongest among voters with strong negative prior attitudes towards asylum seekers. Our results also indicate that expressive responding can only partially account for this gap in correctly reported inferences. Our research has important implications for research on the consequences of refugee migration, theories of motivated reasoning, and survey methodology.

In recent years, Europe has experienced almost unprecedented numbers of refugee arrivals. Since the summer of 2015, more than three million people have applied for asylum in European countries, marking the highest level of refugee movement since the aftermath of World War II (Hangartner, Dinas, Marbach, Matakos, & Xefteris, 2019). Asylum, migration, and integration policies have profoundly impacted political conflict in Europe (Bansak, Hainmueller, & Hangartner, 2016; Hangartner et al., 2019; Mader & Schoen, 2019). In many European countries, hostility towards refugees and Muslim minorities as well as social tension and polarization between citizens have risen, followed by at times violent conflicts in Germany, France, and other countries (Czymara & Schmidt-Catran, 2017; Hangartner et al., 2019; Jäckle & König, 2018). Vote shares of anti-immigration parties have increased in most European countries, while mainstream parties openly debate how to approach asylum and migration in the future.
Importantly, the polarization of public opinion around migrants and refugees occurs on factual as well as attitudinal aspects. It is not surprising that people disagree about whether allowing migration and being generous towards refugees is desirable: These questions are related to fundamental political and ideological orientations (Davidov, Meuleman, Billiet, & Schmidt, 2008; Vecchione, Caprara, Schoen, Gonzalez Castro, & Schwartz, 2012). However, whether the consequences of immigration for the economy, crime, or schooling are objectively positive, neutral, or negative is a complex, yet essentially empirical, question. Disagreements about such factual questions are often just as stark as disagreements over normative questions. Hence, when we think about opinion polarization on migration, we also need to consider how individuals think and learn about facts and evidence.
In this study, we therefore ask how individuals interpret and process empirical evidence on migration and its impact on their societies. Ideally, we would want citizens to learn about empirical evidence in neutral, dispassionate ways, as this might provide a solid foundation for normative debates and thereby increase the quality of collective decisions. However, research in political psychology suggests that individuals often assess new evidence in a biased manner, interpreting facts based on political orientations, values, and identities (Gaines, Kuklinski, Quirk, Peyton, & Verkuilen, 2007; Kahan, 2016a; Lodge & Taber, 2013; Lord, Ross, & Lepper, 1979; Taber & Lodge, 2006). In particular, the politically motivated reasoning paradigm argues that individuals interpret new information selectively, depending on whether it matches existing political beliefs, a tendency termed politically motivated learning (Flynn, Nyhan, & Reifler, 2017; Kahan, Peters, Dawson, & Slovic, 2017). Politically motivated learning offers a powerful account of why the debate on refugee migration is so polarizing: Not only do citizens hold different values or opinions regarding refugees, they might also interpret challenging empirical evidence in a manner that fosters and reinforces polarizing and contrasting views.
We provide experimental evidence on how voters' views on asylum policies shape their information processing on these topics. We rely on a novel and powerful experimental design recently introduced by Kahan et al. (2017): the covariance detection task. This design allows us to study how voters process new information and to separate politically motivated learning from other types of behavior.
Our results are clear: We find substantively and statistically significant evidence of politically motivated learning. Politically motivated learning also varies considerably between respondent groups: It is higher among respondents with strong negative prior views towards asylum seekers than among other respondents, that is, those with weak negative or positive prior attitudes. We further show that our results are not produced by other "errors" in survey response such as satisficing, guessing, or simply reporting prior beliefs. Finally, an additional test shows that our findings are not solely due to politically motivated responding (Sood & Khanna, 2018).
Our research adds to a growing literature that aims to understand the short- and long-term implications of immigration and asylum for political conflict in Europe (Bornschier, 2010; Dennison & Geddes, 2019; Green-Pedersen & Otjes, 2019; Van Spanje, 2010). As commentators and politicians often point out, empirical evidence and unbiased information might play a central role in providing a sound basis for democratic deliberation and may help to reduce social tension and polarization between citizens. Motivated reasoning has been studied in several areas, including climate change (Kahan et al., 2017), gun control (Sood & Khanna, 2018; Taber & Lodge, 2006), and the Iraq war (Berinsky, 2018; Nyhan, Reifler, & Ubel, 2013). However, to our knowledge there has been no study that aims to understand the psychological drivers of fact polarization over immigration in general or refugee migration in particular. Our research shows that citizens are prone to arrive at an interpretation of new evidence on the consequences of migration that is consistent with views they already held.
Given the complexity of the experimental logic in the covariance detection task, we first present the study design before discussing prior research into politically motivated learning. We then present the data and results before addressing alternative explanations for our findings.

Study Design
In our study, respondents were asked to perform a covariance detection task (Kahan et al., 2017), as shown in Figure 1. This experimental design is based on insights into how people interpret numerical information in a cross-table. As anyone who has taught introductory statistics knows, understanding such a table can be a challenging task, as many people interpret the table in a fast-and-frugal, heuristic-driven way. It is variation in this tendency that the experiment exploits.

The Covariance Detection Task
In our version of this task, respondents were told that a study recently examined whether the local presence of refugee housing leads to an increase in crime and that the study compared crime rates in municipalities with and without refugee housing. The results of this fictional study were presented in a 2×2 table (Figure 1). The instructions read as follows:

As you probably know, there is a lot of debate about taking in refugees in Germany. A study recently examined whether accepting and housing refugees increases crime or not. Researchers studied crime rates in German municipalities with refugee housing and municipalities without refugee housing (these municipalities were chosen so that they were similar in other ways). In the following table you will see the result of the study. We are interested in how you interpret the results of this study. We would ask you to look at this table carefully and consider which conclusions can be drawn from the study: Does crime on average increase more in municipalities with or more in municipalities without refugee housing?

Figure 1. Covariance detection task. Group A: The correct conclusion is that crime increases in municipalities with refugee housing, and the fast-and-frugal conclusion is that crime increases in municipalities without refugee housing. Group B: The correct conclusion is that crime increases in municipalities without refugee housing, and the fast-and-frugal conclusion is that crime increases in municipalities with refugee housing.

We did not show column percentages, as this would give away which cells need to be compared; however, we kept the calculations simple by choosing easy-to-spot ratios. We told respondents that we were interested in how they interpret the results. On the next screen, respondents were asked whether the study showed that municipalities with or without refugee housing were more likely to experience a rise in crime. Turning first to Experimental Group A, how might respondents go about interpreting the table? The correct answer is that crime increased in municipalities with refugee housing: The proportion of municipalities without refugee housing where crime increased is 0.75 (or 3/4), while it is 0.83 (or 5/6) in municipalities with refugee housing.

However, respondents answering using fast-and-frugal techniques might simply look at the largest number in the table (240). They could also simply compare the entries in the first row (240 vs. 100 municipalities). Or, they could compare entries in the first column (240 vs. 80 municipalities). Each technique will lead respondents to reach the incorrect conclusion, namely that crime increases in municipalities without refugee housing. Research on cognitive abilities suggests that individuals are prone to this kind of information processing (Kahan et al., 2017; Stanovich & West, 1998). This mode of reasoning reflects so-called System I reasoning, which is prone to cognitive biases despite being the predominant mode of human reasoning (Lodge & Taber, 2000; Taber & Lodge, 2006).
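The contrast between the careful reading and the heuristics can be sketched numerically. The counts 240, 100, and 80 are taken from the text above; the fourth cell of 20 is inferred from the reported 5/6 ratio and is our assumption:

```python
# 2x2 table for Experimental Group A (counts of municipalities).
# 240, 100, and 80 appear in the text; the fourth cell (20) is inferred
# from the reported 5/6 ratio and is an assumption on our part.
table = {
    ("crime up", "without housing"): 240,
    ("crime up", "with housing"): 100,
    ("crime down", "without housing"): 80,
    ("crime down", "with housing"): 20,
}

# Careful (System II) reading: compare the column-wise shares of "crime up".
p_without = 240 / (240 + 80)  # 0.75, i.e., 3/4
p_with = 100 / (100 + 20)     # ~0.83, i.e., 5/6
correct = "with housing" if p_with > p_without else "without housing"

# Fast-and-frugal heuristics, each pointing to the opposite conclusion:
largest_cell = max(table, key=table.get)[1]                     # largest number
first_row = "without housing" if 240 > 100 else "with housing"  # first row
first_col = "without housing" if 240 > 80 else "with housing"   # first column

print(correct)                             # "with housing"
print(largest_cell, first_row, first_col)  # all "without housing"
```

The sketch simply makes explicit that every heuristic comparison lands on the opposite side of the column-percentage comparison, which is the design property the task exploits.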
The crucial characteristic of the covariance detection task is that the information in the table is designed so that fast-and-frugal techniques such as those described above always lead respondents to the wrong conclusion. By design, the fast-and-frugal conclusion is thus always the opposite of the correct conclusion. In the following, we refer to the wrong conclusion as the "fast-and-frugal" conclusion, that is, the interpretation that is based on heuristic, System I reasoning (Krosnick, 1991; Zaller, 1992).
Note that our study design therefore focuses only on the interpretation of evidence presented to respondents. It does not examine whether this evidence is seen as credible, nor does it examine whether the evidence then changes overall perceptions of the association between migration and crime rates or on attitudes towards immigration. Hence, the aim is not to study the credibility or the effects of corrective information but to examine how new evidence is understood (Flynn et al., 2017;Gaines et al., 2007).

Politically Motivated Learning: Existing Research
The key prediction in Kahan et al. (2017) is that the extent to which people only engage in fast-and-frugal thinking in the above task is related to their prior political beliefs and identities. In the motivated-reasoning paradigm, individuals are generally motivated by accuracy as well as directional goals: They want to arrive at the correct answer, but they also want that answer to fit with their political beliefs and identities (Taber & Lodge, 2006). 1 The strength of both goals may vary depending on the context (Flynn et al., 2017). In recent research, the role of social identities, political affinities, and prior opinions has been highlighted as a particular driver of directional goals (Flynn et al., 2017). Individuals selectively give credence to new information based on its consistency with their social identities or prior views. This tendency can be termed politically motivated learning. 2 Directional motivations are expected to dominate information-processing tasks (Kunda, 1990; Lodge & Taber, 2000). The drivers of directional goals are various. One influential account of politically motivated learning has focused on social identities, including partisanship (Bolsen, Druckman, & Cook, 2014; Gaines et al., 2007). Hence, selective information processing will occur "where positions on some policy-relevant fact have assumed widespread recognition as a badge of membership within identity-defining affinity groups" (Kahan, 2016a, p. 2). The debate about immigration may relate particularly strongly to such social identities. Refugees and asylum seekers are outgroups towards whom people may hold particularly strong positive or negative affect, and such affect has been found to be a strong shaper of directional goals (Lodge & Taber, 2013). Moreover, the debate about immigration may also have defined new, or at least reinforced existing, identity groups, also relating to partisanship.
Hence, partisanship in many countries might now be tied strongly to stances on immigration and asylum. Positions on related factual questions, such as whether refugee migration increases crime, might be prone to be interpreted in reference to these in- and outgroups.
A related, somewhat simpler account has focused more on prior attitudes, values, or issue opinions (Taber & Lodge, 2006), with people willing to find arguments against evidence that goes against their prior beliefs or stances. Again, the debate on refugees and asylum seekers fits the type of issue that should generate strong directional reasoning. Debates on immigration are often centered on essential cultural and human values and beliefs (Davidov & Meuleman, 2012; Davidov et al., 2008; Hainmueller & Hiscox, 2007; Vecchione et al., 2012). Information that relates to such fundamental tenets may be more likely to elicit directional reasoning. In addition, immigration was a particularly important political topic in the period after 2015. The high salience of the immigration debate may have established, solidified, and clarified issue attitudes among many voters (Dennison & Geddes, 2019). Overall, immigration as a topic should lead to strong directional motivations due to the strong social identities and the fundamental beliefs to which this topic relates.
Politically motivated learning also makes predictions about when individuals should engage carefully with new information (Flynn et al., 2017). When new information affirms prior views and identities, individuals will spend little time examining it.
Hence, fast-and-frugal System I reasoning may dominate. However, when new information goes against prior views and identities, then individuals will be motivated to examine the new information carefully, for instance in order to find counterarguments or to discredit the information. This pattern of behavior is termed "disconfirmation bias" (Taber & Lodge, 2006) and means that more careful System II reasoning might be activated.

Politically Motivated Learning in the Covariance Detection Task
How people study the table in our task will therefore likely be affected by prior beliefs and identities. To see why this occurs, take as an example a person who holds positive sentiments towards asylum seekers. In Experimental Group A, she might quickly browse the table and reach the fast-and-frugal conclusion that crime increases in municipalities without refugee housing. Her prior beliefs appear to be confirmed, so she might not engage in a more thorough information search and might therefore report the wrong conclusion.
In contrast, a person who holds negative sentiments towards asylum seekers will browse the table and note that the fast-and-frugal conclusion goes against her prior beliefs. This conclusion motivates her to examine the table more closely, perhaps in order to disconfirm this conclusion (Taber & Lodge, 2006). She might then figure out how to correctly interpret the data and reach the correct conclusion, which in Experimental Group A is that crime increased more where there was refugee housing.
The key expectation in the task is that those presented with a threatening fast-and-frugal conclusion are more likely to engage in careful System II thinking and thereby reach the correct conclusion than those presented with an affirmative fast-and-frugal conclusion. This is what Kahan et al. (2017) treat as politically motivated learning.
Of course, the fast-and-frugal conclusion in Experimental Group A is only threatening for those who hold negative attitudes towards asylum seekers. If we only had this group, we could only gain insights into politically motivated learning among one subset of the population, those who view asylum seekers negatively.
The solution to this, and the centerpiece of the covariance detection task, is random variation in the correct conclusion of the fictional study. This is achieved simply by changing the heading of the columns in the table (Kahan et al., 2017). In our study, there were thus two experimental conditions, labeled A and B in Figure 1. The correct conclusion in condition A is that crime increases in municipalities with refugee housing. In condition B, the correct conclusion is that crime increases in municipalities without refugee housing. The fast-and-frugal conclusion also changes between the two experimental groups.
The key design feature is thus that we have two types of experimental groups: one where the fast-and-frugal conclusion is threatening and the correct conclusion affirmative and one where the fast-and-frugal conclusion is affirmative and the correct conclusion threatening. Whether the fastand-frugal conclusion is threatening or affirmative depends on one's prior attitudes towards refugees, which is why two experimental groups are needed.
In short, the percentage of correct responses should be higher when the fast-and-frugal conclusion is threatening than when it is affirmative, as the former will trigger respondents' attitude- or identity-protective stakes. Given the symmetry of our treatments, respondents with anti-asylum-seeker priors are threatened in Group A, so they should be more likely to find the correct answer in this group. Pro-asylum-seeker respondents are threatened in condition B, so correct answers should be higher in this group for these respondents. Hypothesis 1 summarizes this expectation.

H1 (Politically Motivated Learning):
The proportion of correct answers will be higher if the fast-and-frugal conclusion is threatening to the prior beliefs of a respondent than if the fast-and-frugal conclusion is affirmative.

Description of the Survey
We implemented the covariance detection task in an online survey fielded in Germany in October 2018. Germany was one of the countries most impacted by the refugee crisis in 2015, taking in almost 900,000 asylum seekers in that year alone. Immigration remained a key topic of political debate in the subsequent years (Czymara & Schmidt-Catran, 2017; Dennison & Geddes, 2019), although its overall salience naturally declined somewhat after 2015. The topic was given additional political relevance by the rise of the Alternative for Germany (AfD), the first radical-right party to become established in the German party system after World War II. We see Germany as a country where immigration is a salient issue that may be strongly linked to overall attitudes and identities. Yet, Germany should not be overly distinct from other European countries such as Austria, Denmark, the Netherlands, or Sweden, whose politics and parties have been similarly shaped by debates about immigration in recent years.
Respondents were quota-sampled from a large German online-access panel. Quotas were based on gender, age, and education and resembled distributions of the German population aged between 18 and 69 derived from the last census. Of 5,563 panelists who followed the survey invitation, 824 were screened out because of full quotas, and 4,609 started the survey. The break-off rate was 5% (Callegaro & DiSogra, 2008). The final sample consists of 4,371 respondents who completed the survey, yielding a participation rate of 92% (AAPOR, 2016). All respondents received a small monetary incentive by the panel provider.
The questionnaire asked about political attitudes and behavior (see Appendices S2-S6 in the online supporting information for question wording and details on the questions used in this article). Respondents were not forced to answer these questions and were free to refuse answers. Our task was located in the last quarter of the survey. After a final section of questions regarding survey evaluation and enjoyment, respondents were debriefed and told that they had been presented with a fictional study and had participated in an experiment. On average, it took respondents 21.7 minutes to complete the questionnaire (Median = 18.7). In Table S1.1 in the online supporting information, we present the sample composition with respect to relevant variables.

Key Variables
Dependent variable. After the screen with the covariance detection task (see Figure 1), respondents were asked to indicate what they think the correct conclusion is. The response categories were "Crime increases in municipalities with refugee housing" and "Crime increases in municipalities without refugee housing." To avoid primacy or recency effects, we randomized the ordering of the two answers. Our dependent variable is coded as 1 if a respondent reported the correct conclusion and 0 otherwise.
Prior beliefs. Respondents' prior beliefs are measured with the following item: "Because of asylum seekers crime is increasing in Germany," which respondents could answer on a 4-point scale. We did not provide respondents with a neutral middle category because we wanted to be able to make clear group distinctions. Table 1 shows the distribution of this item: Roughly 70% of respondents agree or agree strongly with the statement, indicating that the majority of respondents held pessimistic views about the consequences of refugee migration on crime. We recoded respondents into three groups: (1) respondents with pro-asylum-seeker priors (strongly disagree and disagree), (2) respondents with weak anti-asylum-seeker priors (agree), and (3) respondents with strong anti-asylum-seeker priors (strongly agree). We chose this coding to (a) preserve statistical power among respondents with positive beliefs and (b) study potentially interesting variation within respondents with negative beliefs. In Appendix S2 (Table S2.4 in the online supporting information) we provide results when the categories strongly disagree and disagree are not combined. We also provide results for different measurements of these prior attitudes (e.g., a combination of various asylum-seeker items or of political preferences).
Fast-and-frugal conclusion. Our main independent variable is whether the fast-and-frugal conclusion is threatening or affirmative to the respondent. This variable is a combination of the experimental condition (A or B) to which a respondent was randomly allocated and the respondent's prior beliefs. For respondents who reported that they agree or strongly agree with the above statement, this variable was coded as "0 = affirmative" if they were allocated to Table B and as "1 = threatening" if they were allocated to Table A. For respondents who indicated that they disagree or strongly disagree, this variable was coded as "0 = affirmative" for Table A and as "1 = threatening" for Table B.
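This coding rule can be written out as a short function. This is a sketch; the function name, value labels, and response-category strings are ours, not taken from the survey instrument:

```python
def fast_frugal_type(prior: str, condition: str) -> str:
    """Return whether the fast-and-frugal conclusion is "threatening" or
    "affirmative" for a respondent.

    prior     -- agreement with "Because of asylum seekers crime is
                 increasing in Germany": "strongly disagree", "disagree",
                 "agree", or "strongly agree".
    condition -- "A" (fast-and-frugal conclusion: crime rises WITHOUT
                 refugee housing) or "B" (fast-and-frugal conclusion:
                 crime rises WITH refugee housing).
    """
    anti = prior in ("agree", "strongly agree")
    if anti:
        # Anti-asylum-seeker priors: Table A's fast-and-frugal conclusion
        # (crime rises without refugee housing) contradicts the prior.
        return "threatening" if condition == "A" else "affirmative"
    # Pro-asylum-seeker priors: the reverse mapping applies.
    return "affirmative" if condition == "A" else "threatening"

print(fast_frugal_type("strongly agree", "A"))  # threatening
print(fast_frugal_type("disagree", "A"))        # affirmative
```

The symmetry of the design is visible in the function: flipping either the prior group or the condition flips the value of the treatment variable.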

Results
Do voters process information differently when it threatens prior beliefs than when it affirms them? Figure 2 shows the results from our covariance detection task. On the x-axis we display the percentage of correct conclusions; the y-axis shows either the full sample (N = 1,431) or the results decomposed into three subgroups: pro-asylum-seeker respondents (disagree partly or strongly) (N = 399), weak anti-asylum-seeker respondents (agree partly) (N = 512), and strong anti-asylum-seeker respondents (agree strongly) (N = 520). Gray dots indicate the proportion of correct responses in the experimental condition where the fast-and-frugal conclusion is threatening to prior beliefs (i.e., Table A if a respondent has anti-asylum-seeker beliefs or Table B if he or she is pro-asylum-seeker); white dots indicate the proportion of correct responses if the fast-and-frugal conclusion affirms prior beliefs. Horizontal bars are 95% confidence intervals. Furthermore, the amount of politically motivated learning, defined as the difference between the percentages in each group, is indicated above the dots.

Figure 2. Empirical results for politically motivated learning (PML) in the baseline condition. Percentage of correct answers on the x-axis and prior attitudes on the y-axis. The full sample combines the three subgroups. The treatment is coded as "affirmative" if a respondent partly or strongly agrees with the statement that "asylum seekers increase crime in Germany" and was assigned to Table B, or if a respondent partly or strongly disagrees with that statement and was assigned to Table A. It is coded as "threatening" in the remaining cases. Horizontal bars are 95% confidence intervals. Politically motivated learning is defined as the difference between the percentage of correct answers in the threatening group relative to the affirmative group and is indicated above for each group.
In Table S2.1 in the online supporting information, we show these results as a 2×2 table.
For the full sample, we estimate the effect of politically motivated learning at 21 percentage points. That is, respondents are 21 percentage points more likely to report the correct conclusion if the fast-and-frugal conclusion is threatening (gray) than if it is affirmative (white). Given the complexity of the covariance detection task, this is an effect of substantial magnitude, comparable to or even larger than the effects reported in Kahan et al. (2017) and Sood and Khanna (2018), providing strong evidence that respondents process information on refugee migration differently depending on its content.
We also observe substantial differences between subgroups: Respondents with positive priors towards refugees are the least likely to engage in politically motivated learning. The difference between being challenged or confirmed by new evidence leads to an increase of roughly 10 percentage points of correct answers. Politically motivated learning increases among respondents with negative priors: Respondents who agree with our initial statement are 16 percentage points more likely to report the correct conclusion when challenged by the information they received. Politically motivated learning is particularly prevalent among respondents with strong negative priors. These respondents are 34 percentage points more likely to figure out the correct answer if they are challenged. This effect remains statistically significant even when controlling for pretreatment variables in a binary logistic regression (see Table S2.2 in the online supporting information). We also find that the differences between respondents with strong negative priors (reference category) and each of the remaining two respondent groups are statistically significant (see Table S2.3 in the online supporting information). The difference between the remaining two groups is not statistically significant, indicating that respondents with strong negative beliefs process information substantially differently.
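The reported effects are simple differences in the share of correct answers between the threatening and the affirmative condition. A sketch of this estimator with a Wald-type confidence interval follows; the cell counts below are hypothetical and chosen only to illustrate a 21-point gap (the actual counts are in Table S2.1 of the supporting information):

```python
import math

def pml_effect(correct_threat, n_threat, correct_affirm, n_affirm):
    """Politically motivated learning: difference in the proportion of
    correct answers (threatening minus affirmative condition), with a
    Wald-type 95% confidence interval."""
    p_t = correct_threat / n_threat
    p_a = correct_affirm / n_affirm
    diff = p_t - p_a
    se = math.sqrt(p_t * (1 - p_t) / n_threat + p_a * (1 - p_a) / n_affirm)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical counts, not the study's data:
diff, ci = pml_effect(correct_threat=110, n_threat=260,
                      correct_affirm=55, n_affirm=260)
print(round(diff, 2), [round(x, 2) for x in ci])
```

The paper's subgroup comparisons and the covariate-adjusted estimates rely on binary logistic regression instead (Tables S2.2 and S2.3), but the raw quantity displayed in Figure 2 is this difference in proportions.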
How do these group differences arise? One important factor is that respondents with strong negative priors are only slightly more likely than the other groups to give the correct response when challenged by new information (see the gray dots in Figure 2). However, when the fast-and-frugal conclusion appears affirmative, as indicated by the white dots, they are roughly 20 percentage points less likely to give the correct answer. Responses of the other two groups are almost identical when receiving an affirmative fast-and-frugal conclusion. Thus, what sets citizens with strong negative priors apart is that they are particularly likely to accept an affirmative fast-and-frugal conclusion at face value.

Accounting for Alternative Response Behaviors
So far, we have presented one way in which respondents might engage with the information they are presented with. However, there are of course other ways in which respondents could tackle our task. The power of the covariance detection task is that almost all of these response patterns lead to different empirical expectations than politically motivated learning. It is thus straightforward to empirically separate politically motivated learning from these other approaches.

Alternative Explanations Excluded by Design
Specifically, there are four other plausible modes of reasoning that might explain respondents' information processing but that can be excluded as explanations due to the task's experimental design:

1. Unbiased learning (consistent System II processing). Respondents might actually be both willing and capable of processing the information and might be solely motivated to reach an accurate interpretation. This mode of reasoning requires System II reasoning, which is a deliberate and reflective, yet demanding, way of performing mental tasks (Lodge & Taber, 2000; Taber & Lodge, 2006). If respondents consistently processed information this way, there would be no significant differences between experimental conditions A and B, as respondents would be motivated to figure out the correct conclusion in both groups. Yet, this is not what we find above.

2. Satisficing (consistent System I processing). It may also be that respondents always engage in System I processing (satisficing), no matter which conclusions this leads them to report. If respondents engaged in such satisficing, we should also observe no meaningful differences between treatments A and B, because respondents would be led to the wrong conclusion in both treatment versions. Yet in our survey, respondents do not always report the fast-and-frugal conclusion. Generally, respondents tend to be misled by a heuristic-driven, fast-and-frugal interpretation: On average, 67% of respondents report the fast-and-frugal conclusion; the overall percentage of correct responses is thus comparatively low. However, the main message from Figure 2 is that a substantial portion of respondents appears to study the data more carefully if the fast-and-frugal conclusion is threatening. We argue that these respondents engage in politically motivated learning.

3. Random guessing. An even more pessimistic view is that respondents are either unable or unwilling to process the information and simply guess their answer randomly. In this mode of reasoning, respondents never engage with the data, and the percentage of correct answers should be lower than under unbiased learning. However, in this case we would observe no meaningful differences between the experimental conditions because responses are determined randomly; this is not the pattern we find. Respondents might also simply select the first answer category without considering its content. Because we randomize the order of response categories, this is equivalent to random guessing and is excluded by design.

4. Reporting prior beliefs. One further possibility is that respondents disregard the information presented and simply choose the conclusion they want to be true without even consulting the table. These responses would lead us to expect significant differences in the proportion of correct responses by treatment group, as the correct conclusion varies in its direction. However, we should not expect differences in the "raw" response given between the experimental conditions, since this would simply reflect prior attitudes, with respondents selecting their preferred choice. Our results show that respondents do not blindly choose the category they want to be true: Looking at the "raw" responses, Table S2.1 in the online supporting information shows that 47% of respondents with strong negative priors chose their preferred conclusion in experimental condition A, but 87% did so in condition B. The other two respondent groups show similarly substantial differences in the raw responses. This implies that respondents do not always report their preferred conclusion.
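A toy simulation can make these design-based exclusions concrete. All parameters are hypothetical; the simulated respondents are assumed to hold anti-asylum-seeker priors, so their preferred conclusion is that crime rises in municipalities with refugee housing:

```python
import random

random.seed(42)  # reproducible toy simulation

def share_correct(mode, condition, n=10_000):
    """Simulated share of correct answers under a stylized response mode.
    Condition "A": correct answer is "with", fast-and-frugal is "without";
    condition "B": the reverse. All simulated respondents hold
    anti-asylum-seeker priors, i.e., their preferred conclusion is "with".
    """
    correct_ans = "with" if condition == "A" else "without"
    frugal_ans = "without" if condition == "A" else "with"
    hits = 0
    for _ in range(n):
        if mode == "unbiased":       # consistent System II: always correct
            answer = correct_ans
        elif mode == "satisficing":  # consistent System I: always heuristic
            answer = frugal_ans
        elif mode == "guessing":     # random choice between the two options
            answer = random.choice(["with", "without"])
        else:                        # "prior": report the preferred belief
            answer = "with"
        hits += answer == correct_ans
    return hits / n

# Only prior-reporting produces a between-condition gap in correct answers,
# and it does so while the raw responses stay constant across conditions.
for mode in ["unbiased", "satisficing", "guessing", "prior"]:
    gap = share_correct(mode, "A") - share_correct(mode, "B")
    print(mode, round(gap, 2))
```

As the simulation illustrates, the first three modes predict no gap between conditions, while prior-reporting predicts a gap in correct answers but identical raw responses, which is exactly the pattern the raw-response comparison in Table S2.1 rules out.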
The covariance detection task thus very clearly separates politically motivated learning from several other likely forms of reasoning. There is, however, one explanation that yields the same observable pattern as politically motivated learning: politically motivated responding.

Politically Motivated Responding
Finally, some of the pattern in the results above could be explained by respondents who answer the survey question in a way that fits their prior beliefs despite thinking that the cross-table shows a different pattern. This politically motivated responding (Sood & Khanna, 2018) refers to the tendency to avoid reporting a conclusion or statement that is threatening to prior beliefs. This behavior has also been labeled expressive responding or partisan cheerleading (Berinsky, 2018;Bullock, Gerber, Hill, & Huber, 2015;Prior, Sood, & Khanna, 2015).
Distinguishing politically motivated learning from politically motivated responding is important because the two patterns of reasoning lead to fundamentally different conclusions concerning information processing and the resulting polarization. Politically motivated learning postulates that prior beliefs affect how deeply new information is processed and thus whether it is interpreted correctly and objectively. In contrast, politically motivated responding suggests that respondents purposefully answer in line with their prior beliefs, even if they know that this answer is not correct. However, politically motivated responding is observationally equivalent to politically motivated learning, so it is important to find ways to distinguish between the two.
Two types of politically motivated responding are important here. First, respondents may learn the correct implication of the table but, if the correct conclusion is threatening, choose to report the affirmative conclusion instead. To illustrate this, take a prorefugee respondent assigned to the experimental group where the correct conclusion is threatening to her, that is, crime increases in municipalities with refugee housing. If she reports instead that crime increases in municipalities without refugee housing, there are two interpretations. Up to now, we argued that she did not learn the correct conclusion and reported the fast-and-frugal (affirmative) conclusion in good faith. However, she may in fact have learned the correct conclusion, that is, that the table shows that crime increases more in municipalities with refugee housing. Because this conclusion is threatening to her priors, she might instead choose to respond in line with her existing beliefs. This behavior would increase the number of incorrect (and decrease the number of correct) responses in the affirmative experimental group.
Second, respondents may not learn the correct implication of the table but, if the fast-and-frugal conclusion is threatening, choose to report the affirmative conclusion instead, which is inadvertently the correct conclusion. To illustrate this, take a prorefugee respondent assigned to the experimental group where the correct conclusion is affirmative to her, that is, crime increases in municipalities without refugee housing. If she reports that answer, there are two interpretations. Up to now, we argued that she did learn the correct conclusion and reported the (affirmative) conclusion. However, she may not in fact have learned the correct conclusion. Instead, she may simply have relied on fast-and-frugal heuristics, and, because the fast-and-frugal conclusion is threatening to her priors, she chose to respond in line with her existing beliefs. By "accident," she gave the correct answer. This behavior would thus increase the number of correct responses in the threatening experimental group.
In sum, politically motivated responding would increase the number of correct responses in the threatening experimental group and decrease the number of correct responses in the affirmative experimental group. In other words, this type of response behavior would widen the gap between the two groups. Observationally, we do not know whether the gap we observed in our initial analysis is due to politically motivated learning, politically motivated responding, or both. It is therefore important to assess the consequences of politically motivated responding. Sood and Khanna (2018) address the problem of politically motivated responding in a different context by providing financial incentives, which are likely to increase the accuracy motivation of respondents (Bullock et al., 2015; Prior et al., 2015). Following the reasoning of previous research (Berinsky, 2018; Kahan, 2016b), we see the use of financial incentives as problematic for two reasons: (1) financial incentives have no real-world equivalent and thus little external validity (Kahan, 2016b), and (2) incentives might induce strategic, incentive-seeking behavior among respondents in general (Berinsky, 2018). In particular, a financial incentive might motivate some respondents who would otherwise report their inference truthfully to adjust their responses simply to obtain the incentive. Differences between the baseline and incentive conditions are then not solely attributable to a decrease in politically motivated responding because they could also result from incentive-seeking behavior of other respondents. This kind of strategic behavior seems particularly likely among participants of online-access panels, who generally are at least partly motivated by financial reward. One solution to this problem is to provide a two-sided incentive structure (Berinsky, 2018).
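To make the direction of this bias concrete, the following toy simulation holds actual learning constant across conditions and varies only misreporting. All probabilities are hypothetical illustrations of our own, not parameters estimated from the data:

```python
import random

random.seed(42)

def simulate(n, p_learn, p_misreport):
    """Share of *reported* correct answers per condition.

    p_learn: probability of working out the correct conclusion
             (held constant, i.e., no politically motivated learning).
    p_misreport: probability that a respondent whose conclusion threatens
                 her priors reports the affirmative conclusion instead.
    """
    shares = {}
    for condition in ("affirmative", "threatening"):
        correct = 0
        for _ in range(n):
            learned = random.random() < p_learn
            if condition == "affirmative":
                # Correct conclusion threatens priors: learners may hide it.
                if learned and random.random() >= p_misreport:
                    correct += 1
            else:
                # Fast-and-frugal conclusion threatens priors: non-learners
                # who dodge it give the correct answer "by accident".
                if learned or random.random() < p_misreport:
                    correct += 1
        shares[condition] = correct / n
    return shares

honest = simulate(100_000, p_learn=0.4, p_misreport=0.0)
biased = simulate(100_000, p_learn=0.4, p_misreport=0.3)
print(honest["threatening"] - honest["affirmative"])  # near 0: no spurious gap
print(biased["threatening"] - biased["affirmative"])  # clearly positive gap
```

With misreporting switched off, the two conditions coincide; with it switched on, a gap appears even though no respondent learns more in one condition than the other, which is exactly why responding can masquerade as learning.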
We implemented such a design and motivated a random subset of respondents by reducing the survey duration if they reported a certain conclusion. Respondents, however, appear to have been overwhelmed by this design. We describe this approach and the results in Appendix S4 in the online supporting information.
Given our concerns about using financial incentives to increase accuracy motivation, we also propose a different approach. We provided a random subset of respondents with an additional "don't know" option on the response screen. Our identification strategy thus relies on the idea that respondents who would otherwise engage in politically motivated responding use the "don't know" option as a more socially desirable alternative to giving a response they do not think is correct (Krosnick et al., 2002;Krosnick & Presser, 2010). We thus extend the covariance detection task by including an additional experimental group beyond the baseline response format with two response categories, the results of which were described above. Specifically, we add the "don't know" condition, where we provide a third response option. Importantly, respondents had to report their conclusion on a screen after the cross-table, which means they had no way to go back to the task and restudy the information. Differences between the "don't know" and baseline condition can thus be attributed to how people decide to respond and not to differences while processing the table (Sood & Khanna, 2018).
What empirical patterns should we expect when we provide respondents with a face-saving "don't know" option? First, the percentage of correct answers should increase when the fast-and-frugal conclusion is affirmative (and the correct conclusion thus threatening): Politically motivated responding would have led to incorrect responses but is rerouted to the "don't know" option. Second, the percentage of correct answers should decrease when the fast-and-frugal conclusion is threatening: Politically motivated responding would have led to correct responses but is rerouted to the "don't know" option. As a result, we expect the estimate of politically motivated learning to be lower when a "don't know" option is present than in the baseline condition.
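Under the assumption that "don't know" answers count as neither correct nor incorrect (i.e., they drop out of the denominator of substantive responses), these expectations follow from simple arithmetic. The parameter values below are hypothetical and serve only to show the direction of each shift:

```python
p_learn = 0.4  # hypothetical share who work out the correct conclusion
p_mis = 0.3    # hypothetical share of threatened conclusions that get rerouted

# Baseline condition (two response options): rerouted answers are misreports.
affirm_base = p_learn * (1 - p_mis)             # truthful learners only
threat_base = p_learn + (1 - p_learn) * p_mis   # learners + "accidental" corrects

# "Don't know" condition: would-be misreports move to the DK category and
# drop out of the denominator of substantive answers.
affirm_dk = p_learn * (1 - p_mis) / (1 - p_learn * p_mis)
threat_dk = p_learn / (1 - (1 - p_learn) * p_mis)

print(affirm_dk - affirm_base)  # positive: correct share rises (affirmative)
print(threat_dk - threat_base)  # negative: correct share falls (threatening)
print((threat_base - affirm_base) - (threat_dk - affirm_dk))  # gap shrinks
```

If politically motivated responding were the whole story, the gap would shrink to the true learning effect of zero once rerouted answers are removed; any gap that remains is evidence of politically motivated learning.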
Overall, we find that 17.41% of 1,439 respondents in the "don't know" condition used this category. This is not surprising given the complexity of the task. Figure 3 shows the percentage of correct answers for the full sample and the three respondent groups in the "don't know" condition. In Table S3.1 in the online supporting information, we show these results as 2×2 tables. The amount of politically motivated learning, the difference between the gray and white dots, is again indicated for each group.
We find that our estimate of politically motivated learning for the full sample drops by about 5 percentage points, from 21.40 percentage points in the baseline condition to 16.03 percentage points in the "don't know" condition (although the difference between these two estimates is not statistically significant). This drop in politically motivated learning is accounted for by the decline in politically motivated responding. The main decline is in the proportion of correct responses in the threatening fast-and-frugal condition. This means that the main politically motivated responders saw the threatening fast-and-frugal conclusion and decided to give the (inadvertently correct) affirmative conclusion instead. Again, additional models that control for pretreatment variables support our interpretation (see Tables S3.2 and S3.3 in the online supporting information).
However, it is clear that a substantial effect for politically motivated learning remains even once we account for politically motivated responding. While the gap between the two groups narrows somewhat, it is still substantial.
Focusing on the full sample masks important differences between respondents with different prior beliefs. Among respondents with pro-asylum-seeker beliefs, evidence for politically motivated learning vanishes once we account for politically motivated responding: The effect declines from 10.19 percentage points in the baseline condition to 0 percentage points. Among respondents with weak antirefugee priors, our politically-motivated-learning estimate drops from 16.29 to 8.47 percentage points, which is still statistically significant. Among respondents with strong negative priors, the presence of a "don't know" option has no effect: Politically motivated learning is almost identical in the baseline and "don't know" conditions. In general, it appears that politically motivated learning is particularly prevalent among those who hold strong anti-asylum attitudes. This implies that careful political engagement is especially one-sided for this group, as they are particularly likely to think carefully about complex topics when identity-threatening aspects are at stake.

Summary of Principal Results
[Displaced figure caption: Respondent groups are shown on the x-axis and prior attitudes on the y-axis; the full sample combines the three subgroups. The treatment is coded as "affirmative" if a respondent partly or strongly agrees with the statement that "asylum seekers increase crime in Germany" and was assigned to Table A, or partly or strongly disagrees with that statement and was assigned to Table B; it is coded as "threatening" in the remaining cases. Horizontal bars are 95% confidence intervals. Politically motivated learning is defined as the difference between the percentage of correct answers in the threatening group relative to the affirmative group and is indicated above for each group.]

Our results show that how individuals process new information on migration is characterized by political biases: How individuals engage with evidence is strongly determined by their prior beliefs on the topic. This type of information-processing behavior is termed "politically motivated learning" and implies that individuals are much more likely to study evidence carefully and try to find counterarguments when this evidence challenges their predispositions. Evidence that at first glance seems to support their individual predispositions remains underexamined.
We also find that politically motivated learning is stronger among those most opposed to migration. We do not want to overinterpret this result. On the one hand, it may indeed indicate a particularly strong identity among people with such beliefs. If so, this finding arguably relates directly to classic studies on the "authoritarian personality" (Adorno, 1950) and recent research on asymmetric polarization (Morisi, Jost, & Singh, 2019). Our results could thus indicate that conservative individuals differ in systematic ways from more liberal ones. On the other hand, there are other explanations for the differences in the strength of politically motivated learning. For example, we cannot fully exclude that other characteristics associated with holding strong antimigration views, such as education levels or personality traits, may explain differences in politically motivated learning. In addition, there may be specific characteristics of the current migration debate that make a study concluding that refugees do not increase crime more surprising to many people than the opposite conclusion; this could also explain the patterns we find. More work is clearly needed here, and we remain reluctant to place too much weight on these differences for now.
Importantly, the covariance detection task excludes some key rival explanations for how individuals process information and respond to surveys. In an additional test, we also showed that the patterns of responses we find are not solely due to politically motivated responding. However, we do find that motivated responding is one type of behavior exhibited by survey respondents, providing additional support for the results described in Sood and Khanna (2018).

Avenues for Future Research
Based on our findings, there are several potential paths future research could explore. First, future work could examine the effects of modifications to the design. The 2×2 tables we presented allow researchers to manipulate whether information threatens or affirms prior beliefs of respondents. However, it would be useful if future research advanced the covariance detection task by covering different modes of presenting information to respondents. It would be especially insightful to develop a version of the task that includes audiovisual information that resembles information provided by television programs. Such videos could easily be implemented in a web-based survey.
The 2×2 tables are also artificial in that they are constructed so that the correct conclusion always contradicts the fast-and-frugal conclusion. This setup allows us to examine when learning takes place, but it is of course rare in real life. Future research should think of ways of modifying this design, for instance by varying whether the fast-and-frugal and correct conclusions are consistent or inconsistent, in order to examine differences in learning in such settings.
Finally, further modifications could examine how politically motivated learning varies. For instance, one could test whether elite cues and party polarization increase the tendency to engage in such behavior, for example, by priming these considerations before the experiment (Druckman, Peterson, & Slothuus, 2013;Flynn et al., 2017). Similarly, our focus here was on the effect of prior beliefs on politically motivated learning, but future work could examine the moderating effect of other factors such as political sophistication or trust in science. This would be possible even while keeping the design as it is. Last but not least, future work may want to further consider downstream effects on broader political perceptions and on political attitudes, which should both also be influenced by politically motivated learning.
Having said that, these modifications and extensions do come at a cost. For instance, applying our proposed approach of controlling for politically motivated responding to a large sample requires having enough statistical power to be able to detect effects.

Broader Implications for Debates on Migration
In finding evidence for politically motivated learning, our study provides novel insights on information processing and opinion polarization in the current public debate on migration. The overall experimental setup of our task resembles situations in which individuals are confronted with empirical evidence in everyday life. For instance, the media quite frequently report on crime statistics, especially regarding migration. Our results suggest that such statistics may often be read and processed in ways that affirm readers' prior views. Accordingly, our results show how hard it is for evidence-based information about migration to receive fair consideration among the public. Moreover, this way of reasoning likely not only fosters political disagreement and polarization but also reinforces these phenomena. Pessimistically, we could conclude that empirical evidence alone will thus not suffice to tackle the challenges related to attitude polarization on the issue of migration.
More optimistically, our findings mean that researchers need to think about which contexts foster such politically motivated learning and, perhaps more importantly, how such behavior can be minimized. To achieve fair consideration of information and to facilitate a discussion based on empirical facts rather than solely on prior views, it will be necessary to motivate individuals to engage in careful assessments of the evidence. Our results show that identities and attitudes affect individual engagement with factual information. In our study, reasoning was deeper for information that seemed threatening to prior views, but this offers no general solution: Framing all new evidence as threatening is neither feasible nor desirable.
We thus follow Kahan et al. (2017) in arguing that our results show that it is important to construct political environments where political beliefs, values, and identities are deemphasized. Hence, outlets that communicate empirical evidence need first of all to be aware that the processing of information depends on characteristics such as identities and attitudes. Even in the neutral setting of our survey, there was considerable space for bias in learning. It seems reasonable to expect that in the context of a public debate biases will be even more pronounced. To ensure that empirical evidence is considered equally by all involved groups, individuals need to be motivated to engage in cognitively more demanding reasoning. Future research needs to explore such remedies, and this research will have important implications for how political and media actors engage with the public. In general, politicians and the media need to think responsibly about how strongly held identities and heated political debates may generate a problematic basis for fact-based discussions.
In addition, we also found evidence for politically motivated responding, not just politically motivated learning. To be clear, such behavior is not normatively better than motivated learning. If people are willing to hide or publicly misrepresent learned facts, then this also does little to further evidence-based debates. However, the diagnosis of the challenge is somewhat different, in that motivated responding at least takes place in the context of actual learning. Future research needs also to be aware of these different types of behavior in the context of general survey responses.

Supporting Information
Additional supporting information may be found in the online version of this article at the publisher's web site:

Figure S1.1. Introduction to covariance detection task: Original German version.
Figure S4.1. Baseline and incentive condition if affirmative conclusion is listed first.
Figure S4.2. Baseline and incentive condition if threatening conclusion is listed first.