Abstract

In an analysis of the 2012 presidential election, we sought to optimize two key desiderata in capturing campaign effects: establishing causality and measuring dynamic (i.e., intraindividual) change over time. We first report the results of three survey experiments embedded within a three-wave survey panel design. Each experiment focused on a substantive area of electoral concern. Our results suggest, among other findings, that retrospective evaluations exerted a stronger influence on vote choice in the referendum (vs. the choice) frame; that among White respondents, racial animosity strongly predicted economic evaluations for knowledgeable Republicans who were led to believe that positive economic developments were the result of actions taken by the Obama administration; and that information-seeking bias is a contingent phenomenon, one depending jointly on the opportunity and motivation to selectively tune in to congenial information. Lastly, we demonstrate how the panel design also allowed us to (1) examine the reliability and stability of a variety of election-related implicit attitudes, and to assess their impact on candidate evaluation; and (2) determine the causal impact of perceptions of candidates’ traits and respondents’ policy preferences on electoral preferences, and vice versa, an area of research long plagued by concerns about endogeneity.

Whether and how political campaigns influence electoral judgment are among the most important questions in the study of mass politics and democracy. The earliest systematic studies of U.S. presidential campaigns, conducted by Lazarsfeld et al. (1948), led to the conclusion that they typically exert only minimal effects on persuasion. Most voters, they argued, possess stable political predispositions that, once activated or reinforced by the campaign, determine the choices that citizens make on Election Day. Together, these predispositions (e.g., partisanship), along with the performance of the economy, largely account for the outcomes of American presidential elections (Campbell, Converse, Miller, & Stokes, 1960; Gelman & King, 1993; Hibbs, Rivers, & Vasilatos, 1982; Sides & Vavreck, 2013). Although subsequent work has substantiated the primacy of these “fundamentals” (e.g., Erikson & Wlezien, 2012), recent studies—employing more powerful designs and statistical models—indicate that campaigns can exert a wide variety of “effects.” For example, campaigns can influence the issue focus of the electorate (priming), alter voters’ conceptualization of an issue (framing), provide information about the candidates’ personalities, performance records, and stands on salient policy issues (learning), change voters’ beliefs and opinions about candidates, issues, and events (persuasion), and inspire citizens to participate in politics (mobilization; for a review, see Brady & Johnston, 2006).

A variety of research designs have been used to study these kinds of campaign effects, including the cross-sectional time-series design (in which a separate sample of respondents is asked the same set of questions in successive elections; e.g., the American National Election Study); the rolling cross-section (in which fresh daily samples are drawn from a population throughout a campaign; e.g., the Annenberg National Election Study); field and laboratory experiments (in which the causal impact of treatments is isolated through random assignment; e.g., Ansolabehere & Iyengar, 1995; Lavine & Snyder, 1996); and the panel design (in which the same individuals are reinterviewed in several “waves” over the course of an election; e.g., Patterson, 1980).

In our analysis of the 2012 presidential election, we sought to optimize two key desiderata in capturing campaign effects: establishing causality and measuring dynamic (i.e., intraindividual) change over time. Therefore, we embedded five survey experiments—three of which we present here—within a three-wave panel design (Appleby et al., 2013; Chen, Housholder, Ksiazkiewicz, & Sheagley, 2013; Chen & Mohanty, 2013; Ksiazkiewicz, Vitriol, & Farhart, 2013; Luttig & Callaghan, 2013). Each experiment was dedicated to a substantive area of electoral concern: framing the presidential race as a referendum on the performance of President Obama versus a choice between Obama and Mitt Romney; racial bias in judgments of the economy; and selective exposure to campaign information. The three-wave panel design allowed our multi-investigator team to examine change over time, as a function of both campaign events (presidential debates) and, as it turned out, important exogenous events (Hurricane Sandy), to separate instability from unreliability, and to study the causal impact of perceptions of candidates’ traits and respondents’ policy preferences on electoral preferences, and vice versa.

In the next section, we describe the three-wave panel design, highlighting the three embedded experiments. We then discuss how the design allowed us to (1) examine the reliability and stability of a variety of election-related implicit attitudes, and to assess their impact on candidate evaluation; and (2) determine the causal impact of perceptions of candidates’ traits and respondents’ policy preferences on electoral preferences, and vice versa.

Overview of the Panel Design

The study employed a three-wave panel design (baseline, pre-election, post-election) administered via online surveys, with participants recruited from Amazon's Mechanical Turk platform (hereafter, MTurk). The experimental manipulations were embedded in the baseline (Wave 1) and pre-election (Wave 2) waves of the survey. The Wave 1 sample (N = 1,800) consisted of 927 women and 788 men (7 did not specify a gender). Party identification in the sample skewed Democratic, with 951 Democrats, 450 Republicans, and 220 Independents. The sample was also whiter than the general population, with 84% identifying as White, 7% as Black, and 5% as Hispanic. Finally, the sample was fairly well educated: 11% did not attend college, 30% attended some college, and 59% had a college degree.

The MTurk sample was restricted to individuals located in the United States (based on the IP addresses logged by MTurk). Respondents self-selected into the sample by responding to our posted request on MTurk. Although this produces a sample that is not representative of the U.S. population, our primary concern was to examine the impact of our experimental manipulations. For experimental research, MTurk provides quick and high-quality data (Buhrmester, Kwang, & Gosling, 2011; Mason & Suri, 2012). Furthermore, MTurk samples approximate a representative sample more closely than many other sources of experimental data (Berinsky, Huber, & Lenz, 2012).

Unfortunately, due to several factors associated with the length and complexity of the study, we experienced relatively high attrition between the first and second waves. While we started with a sample of 1,800 in Wave 1, we received only 1,062 responses in Wave 2, an attrition rate of roughly 40%. Examining demographic predictors of attrition, we found that older and better-educated individuals were more likely to return to our survey, and that Democrats were less likely to return than Independents (but not Republicans). No other demographic factors were significantly associated with attrition. Fortunately, our experimental designs are primarily restricted to a single wave, and random assignment always occurred at the beginning of the wave, thus minimizing the influence of systematic attrition on our experimental results. Confident in the quality of the data collected through MTurk, we turn now to a discussion of the individual study waves and our findings.
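For readers who wish to run a similar check on their own panels, the sketch below shows one way to test for demographically selective attrition: a logistic regression of a wave-2 return indicator on baseline covariates. The file name and column names (wave1.csv, returned_w2, and so on) are purely illustrative assumptions, not the actual study files or variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Wave 1 data frame; 'returned_w2' = 1 if the respondent
# completed Wave 2, 0 otherwise. All column names are illustrative.
wave1 = pd.read_csv("wave1.csv")

# Logit of returning on baseline demographics; significant coefficients
# flag demographically selective attrition.
attrition = smf.logit(
    "returned_w2 ~ age + education + C(party_id) + C(race) + C(gender)",
    data=wave1,
).fit()
print(attrition.summary())
```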

Wave 1 (Baseline; October 16–22, 2012)

The first wave of the study—which occurred between the first and second presidential debates—assessed respondents’ attitudes and beliefs toward a wide range of objects (candidates, policies, racial groups, the economy). It also assessed a variety of personality traits (including need for cognition [Cacioppo & Petty, 1982], need for closure [Kruglanski & Webster, 1996], and need to evaluate [Jarvis & Petty, 1996]), facets of political engagement (political knowledge and interest, media exposure, past political participation), external political efficacy, perceived partisan polarization, social networks, political predispositions (party identification, ideology), and a variety of demographic variables (including employment and homeowner status, age, race, sex, education, income, zip code, and church attendance). This wave of the panel study also included two experiments (framing and selective exposure; both described later). Additionally, it contained several Implicit Association Tests (IATs) dealing with race and candidate evaluations, both of which are also described later. All items included in this wave (and waves 2 and 3) of the survey are available in the online Appendix.

Wave 2 (Pre-Election; October 31–November 5, 2012)

In addition to reassessing attitudes towards the candidates, policies, and the economy, this wave of the survey (N = 1,026) contained two separate experiments (racialized economic evaluations and candidate traits; the former is described later). We also assessed economic knowledge with a new battery of questions designed to tap familiarity with the U.S. economic system. Wave 2 also employed a unique control design, assigning a third of the sample to the control condition, in which respondents received no manipulations associated with the two experiments. This design allowed us to assess whether changes in attitudes from Wave 1 to Wave 2 were a result of Wave 2 manipulations or exogenous shocks to their attitudes. Like Wave 1, Wave 2 also included several IATs.

Wave 3 (Post-Election; November 14–24, 2012)

This wave of the study (N = 986) assessed respondents’ self-reported electoral choices, as well as third wave measures of their candidate and policy attitudes. We also included measures of authoritarianism, the Big Five personality factors, system justification, support for the Tea Party, evaluations of FEMA's response to Hurricane Sandy, and assessments of the incidents in Benghazi, Libya. Finally, we included IATs in Wave 3, allowing us to assess the stability and reliability of implicit attitudes.

Survey Experiments

Experiment 1: Choice or Referendum? Framing the 2012 Election

One of the dominant narratives of the 2012 campaign, both in the media and among the candidates, was whether the public perceived the election as a referendum on the performance of President Barack Obama, or as a choice between Obama and his challenger, Mitt Romney. For example, in his acceptance speech at the Democratic National Convention, President Obama stated that “when all is said and done, when you pick up that ballot to vote, you will face the clearest choice of any time in a generation” (Obama, 2012). In contrast, challenger Mitt Romney routinely tried to frame the election as a referendum on President Obama, as in his Florida primary victory speech where he stated that “three years ago this week, a newly elected President Obama faced the American people and he said, ‘Look, if I can't turn this economy around in three years, I'll be looking at a one-term proposition,’ and we're here to collect!” (Romney, 2012). In the media, this distinction between a choice and referendum election was also taken quite seriously. Former New York Times columnist Nate Silver, for example, argued that these frames would shape the considerations that enter voters’ electoral decisions, with important consequences for overall vote choice (Silver, 2012). In short, the choice versus referendum framing narrative was widely discussed during the 2012 campaign, with candidates and media organizations alike assuming that this distinction would have important consequences for voters’ electoral calculations.

While the choice and referendum frames correspond closely to paradigmatic models of electoral choice—that is, the selection (Downs, 1957) and retrospective (Fiorina, 1981; Key, 1966) models—no experimental work has examined whether campaigns can shape vote decisions by framing elections as either a choice (as the selection model holds) or a referendum (as the retrospective model holds). Thus, the question we address in this experiment is whether exposure to these competing frames has important consequences for the manner in which voters form their candidate appraisals.

The distinction we draw between choice and referendum campaigns is conceptually similar to Vavreck's (2009) typology of “clarifying” and “insurgent” campaigns. Like clarifying campaigns, referendum frames make the election about the past. However, unlike Vavreck (2009), who focuses predominantly on clarifying campaigns’ strategic emphases vis-à-vis the economy, we believe that the referendum frame should activate a broader set of retrospective considerations in voters’ minds. Thus, in addition to retrospective economic evaluations, the referendum frame should also make additional evaluations of Obama and his presidency more salient. Therefore, we hypothesize that a broad set of criteria regarding Obama should be given greater emphasis among voters exposed to the referendum (vs. the choice) frame. These criteria include evaluations of President Obama's presidential performance, evaluations of domestic and foreign policies undertaken during Obama's first term (e.g., the Affordable Care Act, the Benghazi attack in Libya), perceptions of Obama's character, and racial predispositions—which recent work has shown are activated by Obama (Kinder & Dale-Riddle, 2012; Tesler, 2012; Tesler & Sears, 2010). In short, because the referendum frame draws attention to both the past and to Obama the individual, evaluations and predispositions central to evaluating either should be activated upon exposure to the referendum narrative that frames the election as a judgment of Obama and his performance.

Like insurgent campaigns, the choice frame draws attention away from retrospective—and toward prospective—evaluations about who will make the better president over the next four years. Consequently, we hypothesize that the choice frame should activate prospective evaluations about the economy and salient domestic and foreign policy attitudes. In sum, we hypothesize that the referendum frame, by making the election about the past and about Obama, will activate a broad set of retrospective evaluations as well as predispositions central to voters’ evaluations of Obama. In contrast, we hypothesize that the choice frame, by providing a forward-looking distinction between the candidates, should activate prospective evaluations and general policy orientations that will be important in the future.

To assess these hypotheses, we conducted an experiment in the first wave of the study. At the beginning of the baseline survey, we randomly assigned respondents to receive one of two frames, the referendum frame or the choice frame. The referendum frame—which held that the election was a judgment on President Obama and his performance as president—was stated as follows:

“The 2012 presidential election is a judgment of incumbent President Barack Obama. The President's policies have significantly impacted the country over the past four years. In this election, Americans will have the opportunity to express their satisfaction with the president's performance in office through their vote.”

The choice frame—which held that the election presented a choice between President Obama and Governor Mitt Romney—was stated as follows:

“The 2012 presidential election is a clear choice between President Barack Obama and Governor Mitt Romney. The candidates have presented different visions for how the country should be governed over the next four years. In this election, Americans will have the opportunity to decide between these options through their vote.”

To estimate the effect of the frames on the considerations activated in vote choice, we ran several logistic regression models. In each model we included a dummy variable capturing the framing manipulation (referendum frame = 0; choice frame = 1). We also included variables hypothesized to exert differential effects across the two frames, as well as an interaction term between the choice frame and each variable of interest. In each model we controlled for a number of stable predispositions (party identification and ideology) and demographic characteristics (age, race, gender, education) common to models of vote choice in presidential elections. In this estimation, the relevant term for assessing our hypotheses is the interaction term. If the frames work by altering the considerations that influence the vote decision, the interaction between the framing manipulation and a given consideration should be statistically significant, signifying that the consideration was given more (or less) emphasis by voters exposed to the choice frame. As we did not include a control group in our experiment,1 we can only assess whether a given variable exerted a stronger effect on vote choice in one or the other framing condition. Therefore, we cannot be certain that, for example, the referendum frame activated retrospective considerations as opposed to the choice frame de-activating those considerations. Despite this limitation, we find statistically significant and theoretically interpretable differences in the effect of several considerations across the two frames, as hypothesized.
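As a rough illustration of this kind of specification (a sketch under assumed variable names, not the authors' actual code), the snippet below estimates one such model; the coefficient on the frame-by-consideration interaction is the quantity of interest, and a separate model is estimated for each consideration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Wave 1 data frame; all column names are illustrative.
wave1 = pd.read_csv("wave1.csv")

# obama_vote: 1 = intends to vote for Obama, 0 = Romney.
# choice_frame: 1 = choice frame, 0 = referendum frame.
# 'consideration' stands in for one variable of interest (e.g., ACA
# approval, Benghazi handling, racial resentment). Covariates are coded
# numerically here for simplicity.
frame_model = smf.logit(
    "obama_vote ~ choice_frame * consideration"
    " + party_id + ideology + age + education + white + female",
    data=wave1,
).fit()

# Tests whether the consideration is weighted differently under the
# choice frame than under the referendum frame.
print(frame_model.params["choice_frame:consideration"])
```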

Our most consistent finding is that retrospective evaluations of Obama's first term presidential performance, as well as evaluations of Obama's character and related predispositions, are weighted more heavily among voters exposed to the referendum frame than among those exposed to the choice frame. Specifically, we find that voters’ evaluations of the Affordable Care Act, Obama's handling of events in Benghazi, feeling thermometer evaluations of the candidates, and racial resentment, are all significantly more impactful among voters exposed to the referendum frame than among those exposed to the choice frame. In other words, the interaction term between the choice frame and each of these variables is statistically (or marginally) significant (p < .10), and opposite in direction to the main effect of the variable, signifying that the impact of the variable is diminished in the choice frame compared to the referendum frame. While we cannot rule out the possibility that these effects occurred because the choice frame de-activated these considerations, these results are nevertheless consistent with the hypothesis that the referendum frame activated retrospective evaluations and considerations central to evaluations of Obama.

To interpret the substantive significance of these effects, we calculate the change in predicted probability of an Obama vote across the range of these considerations within the two experimental conditions. We find that in the choice frame, moving from least supportive to most supportive of Obama's handling of Benghazi increases the probability of voting for Obama by .50, while in the referendum frame this same shift increases the probability of voting for Obama by .56. Moving from least supportive to most supportive of the Affordable Care Act increases the probability of voting for Obama by .56 in the choice frame, but by a more substantial shift of .65 in the referendum frame. Similarly, moving from the most racially conservative to the most racially liberal increases the probability of voting for Obama by .17 in the choice frame, with the same shift in racial attitudes increasing support for Obama by .34 in the referendum frame.2 While these shifts are not dramatically large, their potential impact in a close election is noteworthy. In short, subtle changes in campaign narratives have the potential to significantly shape the weight of considerations in voters’ electoral decisions.
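The predicted-probability contrasts reported above can be computed along the following lines, continuing the previous sketch (variable names remain hypothetical): sweep a consideration from its minimum to its maximum within each framing condition, hold the remaining covariates at their means, and difference the fitted probabilities.

```python
import pandas as pd

def prob_shift(model, data: pd.DataFrame, frame_value: int,
               var: str = "consideration") -> float:
    """Change in Pr(Obama vote) when `var` moves from its minimum to its
    maximum within one framing condition, other covariates held at their
    sample means (uses the fitted logit model from the previous sketch)."""
    profile = data.mean(numeric_only=True).to_frame().T
    lo, hi = profile.copy(), profile.copy()
    lo[var], hi[var] = data[var].min(), data[var].max()
    lo["choice_frame"] = hi["choice_frame"] = frame_value
    return float(model.predict(hi).iloc[0] - model.predict(lo).iloc[0])

# e.g., prob_shift(frame_model, wave1, frame_value=0)  # referendum frame
# e.g., prob_shift(frame_model, wave1, frame_value=1)  # choice frame
```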

Although both retrospective evaluations and evaluations of Obama were frame dependent, we find no evidence that the effect of prospective, forward-looking evaluations differed across the frames. This runs counter to our expectation that these types of evaluations should be activated by the choice frame. These null findings are, however, consistent with a range of research suggesting that voters largely do not vote on the basis of prospective evaluations. For example, Woon (2012) finds that voters are simply more inclined to sanction politicians—to vote retrospectively—than to choose on the basis of future (i.e., prospective) utility (see also Lenz, 2012). Our findings provide converging evidence for this perspective on electoral cognition.

In sum, these experimental findings provide new insight into campaign effects by documenting how the strategic decision to frame the election as either a referendum on the incumbent or a choice between the two candidates shapes voter behavior. While the choice and referendum narratives are prominent in many political campaigns,3 the 2012 presidential election provided a unique opportunity to experimentally assess their impact in a context in which these narratives were used with regularity. In this first experimental exploration of the topic, we found clear evidence that, compared to the choice frame, the referendum frame led voters to place greater weight on past evaluations. These findings dovetail with those stressing the limits of voters’ ability (or motivation) to think prospectively.

Experiment 2: Racial Spillover and Economic Evaluations

The influence of race in American politics is inescapable. Scholars have long maintained that certain issues are “race coded” (e.g., welfare and poverty programs; Gilens, 1996, 1999), and that attitudes toward such issues are strongly driven by racial sympathy or animosity (e.g., Kinder & Sanders, 1996). Recent work suggests that racial resentment may play an even more important role in attitude formation when the policy being evaluated is associated with a racially charged public figure, a so-called “spillover effect” (Tesler, 2012; Tesler & Sears, 2010). In particular, Tesler (2012) shows that whites’ attitudes about health care reform during 2009–2010 were strongly driven by racial attitudes following health care's association with President Obama. By contrast, such attitudes were considerably less racialized during the health care debate in 1993–1994 under President Clinton.

In this research, we investigated whether these racial spillover effects could be generalized to other prominent aspects of the political landscape, namely, to evaluations of the macro economy. To test this possibility, we embedded an experimental manipulation in the second wave of the panel study in which we altered the basis of responsibility for economic outcomes. Specifically, we randomly assigned respondents to either a control condition (with no information about the economy), or to one of four experimental conditions (two dealing with the job market and two with the stock market). These four experimental conditions were collapsed to form two comparison conditions, a High and a Low Responsibility condition.

In the High Responsibility condition, respondents were informed that positive economic developments were the result of actions taken by the White House and President Obama. In the Low Responsibility condition, respondents were informed that positive economic developments were the result of actions taken by the European Central Bank (ECB). Respondents were then asked to evaluate the national economy.4

To analyze these data, we estimated ordinary least-squares (OLS) regression models in which the dependent variable asked respondents to evaluate whether the economy had improved or deteriorated relative to four years ago. Because of the complexity of our hypotheses, we constructed three-way interactions between party identification, experimental condition, and racial resentment. Additionally, we controlled for ideology and a short, five-item economic knowledge scale.5 If, as past work on racial spillover suggests (Tesler, 2012), Obama acts as a natural agent of racialization, then those who were randomly assigned to the High Responsibility condition—in which Obama is directly implicated—should rely more on racial animosity/sympathy in forming their perceptions of economic performance than those assigned to the Low Responsibility or control conditions.
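A compact sketch of this specification follows; it uses hypothetical column names and a generic treatment coding, and is meant only to make the three-way interaction concrete (the formula interface expands all lower-order terms automatically, per footnote 5).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Wave 2 data frame; column names are illustrative.
wave2 = pd.read_csv("wave2.csv")

# retro_econ: four-year retrospective economic evaluation (higher = better).
# condition: 'control', 'low_resp' (ECB attribution), or 'high_resp'
# (Obama attribution). Analysis restricted to White respondents,
# following Tesler (2012).
whites = wave2[wave2["race"] == "white"]

spillover = smf.ols(
    "retro_econ ~ C(condition, Treatment(reference='control'))"
    " * racial_resentment * party_id + ideology + econ_knowledge",
    data=whites,
).fit()
print(spillover.summary())
```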

Our findings bear out this hypothesis, but only for one group of partisans. Looking only at white respondents (as Tesler does), racial animosity strongly predicts four-year retrospective economic evaluations in the High Responsibility condition (but not in the Low Responsibility or control conditions)—but only among Republicans. These effects are both statistically and substantively significant. For Republicans in the High Responsibility condition, a move from one standard deviation below the mean to one standard deviation above the mean on racial resentment leads to a 26% decrease in retrospective economic evaluations (p < .001). A similar move for Democrats is both statistically and substantively insignificant, with effects around 2% (p < .63).

Interestingly, it is not simply that Republicans rely on racial resentment more than Democrats. When faced with information that does not explicitly tie Obama to economic performance, Republicans and Democrats look strikingly similar. While Republicans exhibit more negative economic evaluations overall, racial resentment exerts no effect on those evaluations. The effects are substantively small (7% for Republicans, −3% for Democrats) and insignificant (p < .12 for Republicans, p < .22 for Democrats). These results show that while racial resentment can influence evaluations, its impact may be conditional on racial priming from the environment (see also Mendelberg, 2001).

Experiment 3: Selective Exposure and Media

In this survey experiment, we asked three questions. First, how do voters react to information that challenges their perceptions of economic performance? Second, after receiving such information, how does access to pro- and counter-attitudinal information affect their subsequent economic perceptions? Third, what role might psychological traits—in particular, strong epistemic needs for certainty—play in this process? For example, if a voter who holds negative perceptions is exposed to information that the economy is improving, how is this likely to affect subsequent information-seeking behavior? Based on prior work, we expect that individuals high in need for closure—a trait that captures this epistemic motivation (Kruglanski, 1989; Webster & Kruglanski, 1994)—should be particularly likely to exhibit motivated reasoning when updating their political beliefs.

According to the motivated reasoning perspective, voters are often driven more by directional goals (i.e., to reach desired conclusions) than by accuracy goals (i.e., to reach judgments in accord with the evidence; Kunda, 1990; Lavine, Johnston, & Steenbergen, 2012; Taber & Lodge, 2006). The result is that voters are often motivated to hold beliefs that dovetail with their political predispositions (e.g., party ID), without regard to whether they are factually accurate. Past research indicates that while voters spend little time and effort critically analyzing pro-attitudinal information, they are apt to critically analyze and discredit information that conflicts with their standing opinions (Ditto & Lopez, 1992; Lord, Ross, & Lepper, 1979; Taber & Lodge, 2006).

Selective exposure, the tendency for voters to selectively seek out information with which they anticipate agreement (and to avoid information with which they disagree), serves as one mechanism underlying motivated reasoning (Stroud, 2008). Past research has used source cues (e.g., partisan cues, Fox News) to manipulate anticipated agreement (e.g., Iyengar, Han, Krosnick, & Walker, 2008; Lavine et al., 2012). In our experiment, we sought to determine whether access to information with known implications polarizes economic perceptions, such that those with positive beliefs about the economy would selectively seek out information suggesting that the economy is improving, and those with negative beliefs about the economy would selectively seek out information suggesting that the economy is deteriorating.

In Wave 1 of the panel study, we embedded an experiment in which respondents read one of two randomly assigned vignettes about the economy,6 one arguing that the economy was improving and one arguing that the economy was deteriorating. Based on respondents’ prior economic beliefs (i.e., before exposure to a vignette), we coded each respondent as having received either pro- or counter-attitudinal information. For example, an individual with positive (negative) perceptions of the economy, who was assigned to the positive (negative) news condition, was coded as receiving pro-attitudinal information, and an individual with positive (negative) perceptions of the economy, who was assigned to the negative (positive) news condition, was coded as receiving counter-attitudinal information. This manipulation was crossed with a three-level between-participants information board manipulation, such that respondents were shown (1) economic headlines only; (2) headlines with attributions to well-known ideological media sources (whose conclusions about economic performance may well be anticipated); or (3) a control condition in which no information board was provided.7

The information board contained 18 economic headlines (6 positive, 6 negative, 6 neutral), and participants were asked to select all stories they would be interested in reading. This allowed us to track both the number of stories and the types of sources selected. In line with Iyengar and Hahn (2009), in one condition we attributed these headlines to different news media sources across the ideological spectrum without explicitly measuring perceived ideological bias.8 The distinction between the two information board conditions is critical, as only the condition with known sources attached to the headlines allows respondents to engage in selective exposure. The information board closely mimics the search tactics used by individuals reading online newspapers, as they are given the opportunity to select related stories based on headline and source. In sum, we created a 2 (exposure to pro- vs. counter-attitudinal economic news) × 3 (information board condition) between-groups factorial design. Following exposure to a given vignette, we assessed respondents’ updated economic perceptions. Experimental cell sizes ranged from n = 282 to n = 296.

To examine our hypotheses, we constructed OLS regression models with prospective economic evaluations as the dependent variable. Our key independent variable was the three-way interaction between prior economic beliefs, condition assignment, and information board. While our concern is with the effects of pro- and counter-attitudinal information, respondents were assigned to the positive and negative information conditions regardless of their prior attitudes. Therefore, we use the interaction of prior economic beliefs and condition assignment to capture whether a respondent received pro- or counter-attitudinal information. These models also included controls for party identification and need for closure, as well as all lower-order terms from the three-way interaction. Full question wording and order can be found in the online appendix.
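A minimal sketch of this estimation follows, with hypothetical variable names; it is intended only to show how the prior-beliefs-by-condition-by-board interaction can be specified, not to reproduce the exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Wave 1 data frame; column names are illustrative.
wave1 = pd.read_csv("wave1.csv")

# prospective_econ: post-vignette prospective economic evaluation.
# prior_econ: pre-vignette economic beliefs.
# positive_vignette: 1 = assigned the "economy improving" vignette.
# board: 'none' (control), 'headlines' (no sources), or 'sourced'.
# The prior_econ x positive_vignette interaction captures whether the
# assigned vignette was pro- or counter-attitudinal for that respondent.
exposure = smf.ols(
    "prospective_econ ~ prior_econ * positive_vignette * C(board)"
    " + party_id + need_for_closure",
    data=wave1,
).fit()
print(exposure.summary())
```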

Using this design, we examined the influence of (pro- vs. counter-attitudinal) information about the economy on search patterns and updated perceptions, contingent on prior perceptions and political predispositions. We found, unsurprisingly, that information influenced attitudes (e.g., receiving positive news about the economy led people to adjust their attitudes in a positive direction), but that its effect was often conditioned by party identification and prior beliefs. Most importantly, we found that giving participants access to an information board changed the effect of both pro- and counter-attitudinal information on attitudes by increasing the degree of motivated reasoning. Moreover, as we expected, this biased reasoning effect was limited to those scoring high in the need for closure. When those with positive views of the economy were exposed to positive economic news (i.e., pro-attitudinal information), those high in need for closure became significantly more optimistic about the economy—by eight percentage points—when they were exposed to the information board with known ideological sources (promoting selective exposure) than when exposed to either the nonsourced information board or the control condition. Similarly, individuals high in need for closure who held a negative prior attitude and were exposed to positive economic news (i.e., counter-attitudinal information) were more than 17 percentage points more negative about the economy in the sourced board condition, compared to the other conditions. These effects are statistically significant at the p < .05 level.

Our results demonstrate that the tendency to seek out information that validates one's initial perceptions of economic performance depends on both opportunity (e.g., an information board that presents headlines with known sources) and individual differences in the desire to reduce uncertainty (e.g., those high in need for closure). Information-seeking bias was absent among those assigned to either the control condition or the information board condition without known ideological sources (e.g., Fox News vs. MSNBC), as well as in any condition among those low in need for closure. These findings build on those suggesting that information-seeking bias is a contingent phenomenon, one depending jointly on the opportunity and motivation to selectively tune in to congenial information. Moreover, our results demonstrate that such biases are not limited to opinion-related variables such as policy attitudes or electoral preferences, but pertain also to factual aspects of the political landscape such as the performance of the economy.

Nonexperimental Analyses

Beyond serving as a platform for several survey experiments, the three-wave panel design allowed us to examine additional issues related to causal inference, including the reliability and stability of our implicit measures, as well as their downstream consequences for candidate evaluation and turnout. In addition, the panel study provided an opportunity to examine whether policy preferences and perceptions of candidate traits are endogenous to overall candidate evaluations (see Ladd & Lenz, 2008; Lenz, 2009, 2012). Below, we briefly examine each of these issues.

Implicit Attitudes

A major challenge to understanding how citizens assess presidential candidates, and how these assessments translate into political behavior, is the extent to which the relevant political cognitions are consciously accessible (e.g., Borgida & Miller, 2013; Friese, Bluemke, & Wanke, 2007; Friese et al., 2012; Glaser & Finn, 2013). Cognitive processes and constructs, among them attitudinal preferences, can be represented at both the implicit and explicit levels (see Evans, 2008, for a general account of dual-process theories). Explicit attitudes are consciously accessible, and can be retrieved and either reported with accuracy or misrepresented by the respondent (see Wittenbrink & Schwarz, 2007). In contrast, implicit attitudes involve unconscious, automatic associations between attitudinal objects, which cannot be measured using traditional survey techniques but are much less susceptible to respondent misrepresentation (see Ksiazkiewicz & Hedrick, 2013, for a discussion of political science applications of implicit research). In this section, we investigate the stability, reliability, and unique effects of implicit and explicit political attitudes on vote choice and evaluations of the presidential candidates during the campaign.

In the panel study, we measured and compared four types of implicit/explicit political associations: racial preferences (Whites relative to Blacks), candidate preferences (Obama vs. Romney), and the association of each candidate with two distinct traits (warmth and competence; see Fiske, Cuddy, Glick, & Xu, 2002; Hayes, 2005). In the baseline wave of the panel study (Wave 1), each participant was randomly assigned to complete two IATs (Greenwald, McGhee, & Schwartz, 1998). Participants completed measures of either (1) their implicit attitudes toward the candidates and their implicit attitudes toward Whites and Blacks, or (2) their implicit associations between each presidential candidate (Obama/Romney) and the traits of warmth and competence (separately).9 Each participant was assigned to complete the same two IATs at each of the three panel waves.10
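For readers unfamiliar with IAT scoring, the sketch below computes a simplified D-score for one respondent: the difference in mean response latency between the incompatible and compatible pairings, scaled by the pooled standard deviation. It is a stripped-down illustration with assumed column names, not the full scoring algorithm (which adds error penalties and latency trimming) used in practice.

```python
import pandas as pd

def iat_d_score(trials: pd.DataFrame) -> float:
    """Simplified IAT D-score for a single respondent.

    `trials` needs a 'block' column ('compatible' or 'incompatible') and a
    'latency_ms' column of response times. Positive scores mean faster
    responding in the compatible pairing.
    """
    compatible = trials.loc[trials["block"] == "compatible", "latency_ms"]
    incompatible = trials.loc[trials["block"] == "incompatible", "latency_ms"]
    pooled_sd = pd.concat([compatible, incompatible]).std()
    return float((incompatible.mean() - compatible.mean()) / pooled_sd)
```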

Because the IAT procedure was completed in a different format than the remainder of the survey (i.e., a Java applet rather than HTML), not all participants were able to complete the IATs, for technical reasons. However, there are no systematic differences between those who completed the IATs and those who did not in terms of party identification, extremity of party identification, interest in politics, or ideology. As with the remainder of the study, attrition was primarily an issue between Wave 1 and Wave 2. Some individuals (N = 97) were unable to complete an IAT at Wave 1 but were able to complete one or both sets of IATs at later waves (likely because they were on a different computer or had updated their Java platform in the intervening days). Analyses were conducted on all participants for whom data were available.

We address several questions with these data. First, because some participants completed the same IATs across all three waves, we can distinguish the reliability of the IAT procedure from the stability of the underlying implicit constructs over the course of the campaign using a structural equation model (Wiley & Wiley, 1970).11 Reliability indicates the degree to which the measure captures true variance in the construct rather than measurement error. Stability indicates the degree to which the construct remains the same or changes from one panel wave to another. Results, presented in Table 1, indicate moderate levels of measurement reliability for all four IATs, with somewhat lower reliability for the candidate IAT. All of these reliabilities are in the normal range reported for IATs in the literature (Lane, Banaji, Nosek, & Greenwald, 2007). High levels of construct stability are found for all four IATs, suggesting that there was little to no change in the underlying implicit associations from Wave 1 to Wave 3.12 For purposes of comparison, one study finds that implicit racial attitudes have a stability of 0.68, based on a latent variable analysis of four IATs administered over a time frame similar to that of the present study (i.e., biweekly; see Cunningham, Preacher, & Banaji, 2001). Our results suggest that, in a campaign context, politically relevant implicit associations, including implicit racial attitudes, may exhibit greater stability than implicit racial attitudes in a noncampaign context.
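For readers unfamiliar with this approach, the three-wave quasi-simplex model of Wiley and Wiley (1970) can be sketched as follows; this is a generic statement of the model, with notation of our own choosing, rather than the exact parameterization estimated here.

```latex
% Quasi-simplex model for a single IAT measured at waves t = 1, 2, 3.
% x_t = observed D-score; \eta_t = latent implicit association.
\begin{align}
  x_t    &= \eta_t + \varepsilon_t, \qquad t = 1, 2, 3, \\
  \eta_2 &= \beta_{21}\,\eta_1 + \zeta_2, \\
  \eta_3 &= \beta_{32}\,\eta_2 + \zeta_3,
\end{align}
% Identification: error variances constrained equal across waves,
% Var(\varepsilon_1) = Var(\varepsilon_2) = Var(\varepsilon_3).
% Reliability at wave t is Var(\eta_t)/Var(x_t); stability is the
% standardized autoregressive path linking the latent constructs.
```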

Table 1. Measures of Implicit Associations

                   Mean D-scores                  Stability       Reliability
IAT type           Wave 1    Wave 2    Wave 3     Wave 1 to 3     Wave 1
Race               −0.42     −0.39     −0.39      0.80*           0.70
Candidate           0.08      0.06     −0.01      1.18*           0.44
N                  (198)     (89)      (85)       (63)            (63)
Warmth              0.16      0.18      0.12      0.89*           0.75
Competence          0.10      0.14      0.08      1.02*           0.68
N                  (316)     (204)     (192)      (79)            (79)

Note.
1. Negative race scores indicate implicit preference for Whites over Blacks. Positive scores for candidate, warmth, and competence indicate positive associations with Obama.
2. Stability and reliability are calculated using methods from Wiley and Wiley (1970). For stability and reliability, * indicates that the bootstrapped percentile confidence interval overlaps 1.
3. Estimated reliabilities were significantly greater than 0 at all three waves, except the race IAT at Wave 3. Stability estimates from Wave 1 to Wave 2 and from Wave 2 to Wave 3 were similar to those presented earlier.

Second, prior research suggests that implicit attitudes toward presidential candidates and racial groups can be significant predictors of vote choice (e.g., Greenwald et al., 2009; Roccato & Zogmaister, 2010), even among undecided voters and political independents (Arcuri et al., 2008; Friese, Bluemke, & Wanke, 2007; Friese et al., 2012; Galdi, Arcuri, & Gawronski, 2008). These data will allow us to replicate and extend these findings by examining how each of the four implicit measures compares to its explicit counterpart in predicting vote choice among decided and undecided voters.

To maximize our sample size, we test these relationships with data from Wave 1. Using OLS regression, we regress the difference score between the candidate feeling thermometers (capturing candidate preference) on each of the implicit measures. We do not find an effect of implicit racial attitudes, regardless of the presence of any controls. Implicit candidate preference is a significant predictor (p < .05), however, even when controlling for symbolic racism, party identification, differenced candidate warmth scores, and differenced candidate competence scores. The model predicts a 19.9 point (10.0%) shift in the differenced feeling thermometer when moving across the observed range of implicit candidate preference. With these same controls, both implicit associations of warmth and implicit associations of competence are significant predictors of candidate preference as well (p < .05 and p < .01, respectively). When both implicit warmth and implicit competence are included in the model, warmth is marginally significant (p < .074) and competence remains significant (p < .05). This latter model predicts a 14.2 point (7.1%) and a 17.2 point (8.6%) shift in the differenced feeling thermometer when moving across the observed ranges of implicit warmth and implicit competence, respectively. These results suggest that, even when controlling for relevant explicit factors, implicit candidate preference, implicit associations of warmth, and implicit associations of competence all contribute to explicit candidate preferences.
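A sketch of one of these models appears below, with hypothetical variable names; parallel models swap in the warmth and competence IAT scores, since each respondent completed only one pair of IATs.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Wave 1 data frame; column names are illustrative.
wave1 = pd.read_csv("wave1.csv")

# ft_diff: Obama minus Romney feeling thermometer.
# iat_candidate: candidate IAT D-score (positive = pro-Obama association).
# warmth_diff / competence_diff: differenced explicit trait ratings.
candidate_model = smf.ols(
    "ft_diff ~ iat_candidate + symbolic_racism + party_id"
    " + warmth_diff + competence_diff",
    data=wave1,
).fit()
print(candidate_model.summary())
```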

We model vote choice among those who expressed an intention to vote for either Romney or Obama at Wave 1. Using a logit model, the results follow a similar pattern to those outlined earlier. The vote choice results are somewhat weaker, likely due to the binary nature of the dependent variable. Contrary to the literature on the 2008 U.S. election (e.g., Finn & Glaser, 2010), we find no evidence that implicit racial bias directly affected vote choice in the 2012 U.S. presidential election. We do find marginal support for the Payne et al. (2009) finding that those higher in implicit racial prejudice are more likely to vote for a third party candidate or to abstain altogether, controlling for symbolic racism (p < .082). Using the same controls as with the differenced feeling thermometer models above, we find a significant effect of implicit candidate preference (p < .05) only when differenced warmth scores are excluded from the model (p < .738 when differenced warmth is included). Implicit measures of warmth do not contribute to vote choice above the effect of symbolic racism, differenced competence scores, or differenced explicit warmth scores. Implicitly associating competence with a candidate predicts vote choice, however, even when all controls are included (p < .05). For strong partisans of both parties, vote choice is nearly perfectly predicted by their explicit assessments of candidate competence and warmth with no significant change in the predicted probability of voting for Obama across the range of implicit competence. For independents who see no difference between Romney and Obama in terms of warmth and competence explicitly, the predicted probability of voting for Obama is significantly lower for those who strongly associate competence with Romney at the implicit level than those who strongly associate competence with Obama at the implicit level (25.9% and 99.2%, respectively). This finding is consistent with previous work that suggests that indirect measures like the implicit association test are most useful in predicting voting among undecided voters (e.g., Arcuri et al., 2008). In sum, we find incremental predictive validity for vote choice only for the implicit competence variable when controlling for explicit attitudes. This finding suggests that during the weeks leading up to the presidential election, voters may have formed trait inferences and preferences about presidential candidates that operated outside of their conscious awareness, but in ways that nevertheless influenced voting behavior.

In sum, this project advances our understanding of the stability of implicit and explicit attitudinal constructs, and their relative importance in explaining actual political behavior and judgments, over the course of the final month of the 2012 presidential campaign.

Policy Preferences, Character Traits, and Candidate Evaluations

Outside of experimental settings, scholars have long struggled with questions about the causal nature of observed effects. For example, do people evaluate candidates based on the issue stances of those candidates, or do they adjust their own issue positions to match those of their preferred candidate? Our three-wave panel study provides traction on causal questions of this nature, as we measure respondents’ attitudes at multiple points in time. This allows us to explore how changes in independent variables map onto subsequent changes in outcomes of interest.

While cross-sectional studies often theorize about the structure of causal relationships, they struggle with what Lenz (2012) refers to as “observational equivalence.” That is, it is often difficult to disentangle whether independent variables are serving as causes of dependent variables or vice versa, as all we observe is a correlation at a single point in time. Fortunately, Lenz (2012) offers a general approach to identifying causal relationships in panel studies. Following his logic, we predict outcomes at one point in time based on responses observed at a previous point in time. In doing so, we include a lagged measure of our dependent variable and a measure of change from time 1 to time 2 for our independent variable. This allows us to control for the effect of prior attitudes as well as the effect of any change in such attitudes on the final outcome. This approach helps ensure that problems of reverse causation do not plague our conclusions. Additionally, controlling for lagged values of our dependent measure means that our models are predicting change in our outcomes of interest.

To demonstrate the utility of our panel design, we apply what Lenz (2012) refers to as the “conventional test” of the antecedents of candidate evaluations. Specifically, restricting our analysis to Wave 1 only, we predict candidate evaluations, measured with a 100-point feeling thermometer (technically a difference score between Obama and Romney), as a function of trait evaluations (warmth and competence), preferences for government services, party identification, and ideology. All of these measures are contemporaneous (measured at a single point in time in Wave 1). In this test, we find strong, significant effects for all of our independent variables, which, were we to rely only on the conventional test, would lead us to conclude that respondents’ policy preferences (qua government services and spending) and candidate trait perceptions determine—that is, act as a cause of—overall candidate evaluations. Holding all other variables at their means, movement from the minimum to the maximum value on these independent variables leads to large percentage changes in feeling thermometer ratings (from a low of 4.1% for the government services item to a high of 49.9% for the warmth measure). All of the variables are significant at the p < .01 level. However, this cross-sectional analysis leaves open the possibility of reverse causation—that is, that candidate evaluations are driving respondents’ policy preferences (via persuasion) and trait perceptions (via cognitive consistency pressures). That is, it may be that respondents’ overall attitudes toward the candidates lead voters to adopt their favored candidates’ positions on policy issues (in some cases, after they come to learn what those positions are; see Lenz, 2009).

To assess the possibility of reverse causation, we specify a statistical model based on Lenz's (2012) approach to studying relationships with panel data. Our model includes a lag and a change variable for our dependent variable, which we predict at Wave 3 based on independent variables measured at Wave 1. Using this procedure, we see that warmth evaluations continue to predict feeling thermometer evaluations (though less strongly than before), but competence judgments and preferences for government services are no longer statistically significant. Warmth evaluations now lead to a 17.0% change in the dependent variable (p < .01), while competence evaluations and government service preferences are insignificant (p < .67 and p < .45, respectively). When we reverse the test and predict support for government services based on candidate evaluations, we find that candidate evaluations at Wave 1 are a strong predictor of government services preferences at Wave 3, even when controlling for Wave 1 service preferences. Here, a minimum-to-maximum movement on candidate evaluations leads to a 19.1% change in preferences for government services (p < .01). This suggests that voters, instead of choosing the candidate who best matches their policy preferences, pick the candidate they feel warmly toward and then align their policy preferences with those of the candidate.
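One way to implement this pair of tests is sketched below, under assumed variable names and a simple lagged-dependent-variable specification; it illustrates the logic rather than reproducing the exact model reported above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical merged panel data frame (Waves 1 and 3); names illustrative.
panel = pd.read_csv("panel.csv")

# Forward test: do Wave 1 trait perceptions and service preferences predict
# change in candidate evaluations by Wave 3 (lagged DV included)?
forward = smf.ols(
    "ft_diff_w3 ~ ft_diff_w1 + warmth_w1 + competence_w1"
    " + services_w1 + party_id_w1 + ideology_w1",
    data=panel,
).fit()

# Reverse test: do Wave 1 candidate evaluations predict change in service
# preferences by Wave 3, controlling for Wave 1 preferences?
reverse = smf.ols(
    "services_w3 ~ services_w1 + ft_diff_w1 + party_id_w1 + ideology_w1",
    data=panel,
).fit()

print(forward.summary())
print(reverse.summary())
```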

Conclusions

In our analysis of campaign effects in the 2012 presidential election, we zeroed in on two key considerations: establishing causality and measuring dynamic change over a relatively brief period prior to and just after the election. We reported the results of three survey experiments that were embedded in a three-wave panel design. Our first experiment examined whether the public perceived the election as a referendum on the performance of President Barack Obama, or as a choice between Obama and his challenger, Governor Mitt Romney. As we discussed, given his relatively weak approval numbers, the Obama campaign worked to frame the election as a choice, whereas the Romney campaign worked to frame the election as a referendum on the President's performance over his first four years in office. Our first hypothesis was that a referendum frame would increase the salience of evaluations of Obama's performance as president and, as a result, strengthen the impact of retrospective evaluations on vote choice. Our second hypothesis was that the choice frame—by inducing a comparative perspective—would make policy preferences more salient, and thereby enhance their impact on vote choice. We found strong support for our first (but not our second) hypothesis: compared to the choice frame, the referendum frame enhanced the effect of retrospective evaluations on vote choice.13 These interactive effects were not only statistically significant, but the effect sizes were meaningful in terms of their potential impact on the election.

We also found strong support for the hypotheses associated with our other two survey experiments. In Experiment 2, we investigated whether racial spillover effects (Tesler, 2012; Tesler & Sears, 2010) could be generalized to evaluations of the macro economy. Our findings provided support for this hypothesis: among White respondents, racial animosity strongly predicted economic evaluations among Republicans who were led to believe that positive economic developments were the result of actions taken by the Obama administration (i.e., the High Responsibility condition). As expected, no racial spillover was evident in the Low Responsibility condition, in which responsibility for economic outcomes was attributed to the ECB. As in Experiment 1, these findings enhance our understanding of the causal role of racial spillover in a presidential campaign by extending its purview to judgments of the economy.

In Experiment 3, we asked three questions. First, how do voters react to information that challenges their perceptions of economic performance? Second, after receiving such information, how does access to pro- and counter-attitudinal information affect their subsequent economic perceptions? And third, what role do individual differences in the strength of needs for epistemic certainty play in the information-seeking process? Most importantly, we found that when participants were given access to information with known ideological biases (e.g., Fox News vs. MSNBC), they selectively sought out stories that bolstered the validity of their prior economic beliefs. Moreover, as predicted, this biased reasoning effect was limited to those scoring high in the need for closure (i.e., those with strong epistemic needs).

The second major focus of our panel study was on measuring dynamic change over time. We chose two questions to illustrate how we gauged intraindividual change over a truncated portion of the presidential campaign. The first question involved an empirical test of the reliability and stability of a variety of election-related implicit attitudes, along with an assessment of their impact on candidate evaluation. The second question entailed an empirical test of the causal impact of perceptions of candidates’ traits and survey respondents’ policy preferences on electoral preferences, as well as a test of the reverse causal order. We found unambiguous support for the reliability and stability of implicit political and racial associations. Specifically, moderate levels of measurement reliability and high levels of construct stability were found for all four types of implicit associations, with change evident only in implicit racial attitudes across waves (implicit candidate associations in the campaign context demonstrated greater stability than implicit racial attitudes over a comparable period of time). In addition, we replicated and extended past research on implicit versus explicit political attitudes and electoral choice by showing that (1) Wave 1 implicit candidate preferences were highly predictive of actual vote choice at Wave 3, and (2) Wave 1 implicit trait judgments were also predictive of actual vote choice.

Finally, we also examined intraindividual change over time by addressing the following question: do voters evaluate candidates based on the issue stances of those candidates, or do they adjust their own issue positions to match those of their preferred candidate? Following Lenz (2012), we were able to test these kinds of causal questions directly. By controlling for lagged values of our dependent measure, we were able to ensure that our models were indeed predicting change in our outcomes of interest. At the cross-sectional level, we found strong effects of both policy preferences and trait perceptions on candidate evaluations, suggesting that voters make their electoral decisions based on issue agreement (with the candidates) and on which candidate is perceived to possess more favorable personal qualities. However, when we employed the cross-lagged approach using panel data, we found little evidence that policy preferences and competence perceptions have any causal impact on candidate evaluations. Rather, we found that candidate evaluations drive respondents’ policy preferences (via persuasion) and trait perceptions (via cognitive consistency pressures). Using the Lenz (2012) panel-based approach, we are able to establish more confidently a particular causal order—voters appear to select the candidate they feel warmly toward and then align their policy preferences (and competence perceptions) with those of the candidate.

In sum, our study of the 2012 presidential election uncovered evidence of a variety of electoral dynamics, including the effects of framing, racial spillover, information-seeking bias, and persuasion on policy issues. These effects were captured using two methods designed specifically to examine causal impact and over-time change: randomized experiments and the panel design. At a general level, our findings indicate that electoral judgments in 2012 were jointly driven by dispositional variables (e.g., racial attitudes, need for closure) and contextual factors (e.g., retrospective vs. choice framing, variation in the attribution of responsibility for economic outcomes, and access to pro- and counter-attitudinal information about the economy). We believe that our multimethodological design, in which substantive questions of electoral interest were assessed through embedded experiments and careful measurement of change over the last crucial weeks of the election, provides a useful paradigm for understanding the complexity of the cognitive, motivational, and political processes that ultimately shape how voters choose their leaders.

Notes

1. The primary comparison of interest is between the two experimental conditions; thus, to maximize statistical power, we did not include a control condition.

2. There is little evidence to suggest that our frames worked through “political learning” (Lenz, 2012), whereby voters change their attitudes to align with the campaign communications to which they are exposed. Indeed, our frames had no direct impact on voters’ evaluations of any consideration. Rather, the frames altered the weight given to those considerations in voters’ electoral calculus.

3. See, for example, Jacobson's (2009) referendum model of congressional elections.

4. We chose to focus on positive economic news for several reasons. First, at the time of the 2012 election, most signs pointed to an improving U.S. economy and, while unemployment numbers could arguably be framed as positive, neutral, or negative, the stock market was unequivocally strong. More importantly, we were interested in whether racial attitudes could prompt individuals to discount positive information about a political actor (Obama) in the face of reliable evidence of economic improvement. While it makes sense that racially intolerant individuals might discount positive information, the evidence is less clear that tolerant individuals would discount negative information. The effects of racial attitudes are therefore likely asymmetrical, and we chose to focus on the discounting behavior exhibited by racially intolerant participants.

5. All lower-order (two-way) interactions and first-order terms are also included in the model (see the illustrative specification following these notes).

6. We focused on the economy for two key reasons: first, it represented the most relevant issue of the 2012 presidential campaign and, second, subjective evaluations of economic performance are more likely to change than are attitudes on positional issues such as abortion rights or gun control.

7. Both information board conditions included positive and negative headlines.

8. We used the following media agencies as sources: Fox News, The National Review, PBS, MSNBC, Wall Street Journal, NPR, New York Times, The Weekly Standard, Time Magazine, ABC, BBC, and CNN.

9. The IATs utilized the following items. Race: photos of Black and White faces and standard valence terms. Candidate: headshots of Barack Obama and Mitt Romney and standard valence terms. Warmth: headshots of Barack Obama and Mitt Romney and warmth words from the SCM scale. Competence: headshots of Barack Obama and Mitt Romney and competence words from the SCM scale.

10. Due to technical difficulties, more participants were assigned to complete the warmth and competence IATs at Wave 1 than the race and candidate IATs. The response rate on the IATs was also lower than we had expected. Because of these circumstances, we opted to maximize the number of participants completing the warmth and competence IATs at Waves 2 and 3. As a result, only participants who completed the race and candidate IATs at Wave 1 completed them in the later waves. All other participants who completed IATs at Waves 2 and 3 (including those who completed no IAT at Wave 1) completed the warmth and competence IATs.

11. We estimate reliability and stability algebraically and calculate confidence intervals using the bootstrap method (e.g., Lee, Hornik, & Hennessy, 2008).

12. Although the estimates of stability are above one for the competence IAT and the candidate IAT, we interpret these as unity because the confidence intervals include one (Prior, 2010). These stability estimates may be inflated if some of the model assumptions are violated. For example, inflated estimates of stability may occur if error variance in the IAT is correlated across panel waves, if the amount of measurement error decreases across panel waves (e.g., if participants become more proficient at IATs over time), or if implicit associations become polarized over the course of the campaign (see Prior, 2010). By collecting more than three waves of IAT data in future studies, researchers would be able to test whether estimates of the stability of implicit associations in a campaign environment are being inflated by one or more of these factors in a three-wave model.

13. An alternative interpretation of this finding is that the choice frame deactivated retrospective evaluations. Because we did not include a control group in the design, we cannot distinguish between these two mediating processes.
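Note 5 refers to the full factorial specification used in the racial-spillover analysis. As a purely illustrative sketch (with simulated data and hypothetical variable names, not the study's code), a formula interface such as patsy/statsmodels expands a three-way product into all first-order terms, all two-way interactions, and the three-way interaction, which is the structure described in that note.

    # Illustrative three-way interaction specification (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 300
    df = pd.DataFrame({
        "racial_animus": rng.normal(0, 1, n),    # e.g., a racial resentment scale
        "knowledge": rng.normal(0, 1, n),        # political knowledge scale
        "obama_attrib": rng.integers(0, 2, n),   # 1 = economic gains attributed to Obama
    })
    df["econ_eval"] = (-0.3 * df["racial_animus"] * df["knowledge"] * df["obama_attrib"]
                       + rng.normal(0, 1, n))

    # "a * b * c" includes the three main effects, all three two-way interactions,
    # and the three-way product term in a single specification.
    fit = smf.ols("econ_eval ~ racial_animus * knowledge * obama_attrib", data=df).fit()
    print(fit.params)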

References

  • Ansolabehere, S., & Iyengar, S. (1995). Going negative: How political advertising shrinks and polarizes the electorate. New York, NY: Free Press.
  • Appleby, J., Ekstrom, P., Farhart, C., Kim, H., Rosenthal, A., Sheagley, G., Smith, B., & Williams, A. (2013). Candidate trait evaluations in the 2012 presidential election. Paper presented at the CSPP Multi-Investigator Panel Study Symposium, April 26, 2013.
  • Arcuri, L., Castelli, L., Galdi, S., Zogmaister, C., & Amadori, A. (2008). Predicting the vote: Implicit attitudes as predictors of the future behavior of decided and undecided voters. Political Psychology, 29(3), 369–387.
  • Berinsky, A., Huber, G., & Lenz, G. (2012). Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis, 20(3), 351–368.
  • Borgida, E., & Miller, A. (2013). Implicit and explicit measurement approaches to research on policy implementation: The case of race-based disparities in criminal justice. PS: Political Science & Politics, 46(3), 532–536.
  • Brady, H. E., & Johnston, R. (2006). Capturing campaign effects. Ann Arbor, MI: University of Michigan Press.
  • Buhrmester, M., Kwang, T., & Gosling, S. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3–5.
  • Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42, 116–131.
  • Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American voter. New York, NY: Wiley.
  • Chen, P., Housholder, E., Ksiazkiewicz, A., & Sheagley, G. (2013). Motivated reasoning, selective exposure, and cognitive style in information search and attitude formation. Paper presented at the CSPP Multi-Investigator Panel Study Symposium, April 26, 2013.
  • Chen, P., & Mohanty, R. (2013). Racial spillover effects on evaluations of the economy. Paper presented at the CSPP Multi-Investigator Panel Study Symposium, April 26, 2013.
  • Cunningham, W. A., Preacher, K. J., & Banaji, M. R. (2001). Implicit attitude measures: Consistency, stability, and convergent validity. Psychological Science, 12(2), 163–170.
  • Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and non-preferred conclusions. Journal of Personality and Social Psychology, 63(4), 568–584.
  • Downs, A. (1957). An economic theory of democracy. New York, NY: Harper & Row.
  • Erikson, R. S., & Wlezien, C. (2012). The timeline of presidential elections: How campaigns do (and do not) matter. Chicago, IL: University of Chicago Press.
  • Evans, J. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annual Review of Psychology, 59(1), 255–278.
  • Finn, C., & Glaser, J. (2010). Voter affect and the 2008 U.S. presidential election: Hope and race mattered. Analyses of Social Issues and Public Policy, 10, 262–275.
  • Fiorina, M. (1981). Retrospective voting in American national elections. New Haven, CT: Yale University Press.
  • Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878–902.
  • Friese, M., Bluemke, M., & Wänke, M. (2007). Predicting voting behavior with implicit attitude measures. Experimental Psychology, 54(4), 247–255.
  • Friese, M., Smith, C. T., Plischke, T., Bluemke, M., & Nosek, B. A. (2012). Do implicit attitudes predict actual voting behavior particularly for undecided voters? PLoS ONE, 7(8), e44130. doi:10.1371/journal.pone.0044130
  • Galdi, S., Arcuri, L., & Gawronski, B. (2008). Automatic mental associations predict future choices of undecided decision-makers. Science, 321(5892), 1100–1102.
  • Gelman, A., & King, G. (1993). Why are American presidential election campaign polls so variable when votes are so predictable? British Journal of Political Science, 23(4), 409–451.
  • Gilens, M. (1996). “Race coding” and White opposition to welfare. American Political Science Review, 90(3), 593–604.
  • Gilens, M. (1999). Why Americans hate welfare. Chicago, IL: University of Chicago Press.
  • Glaser, J., & Finn, C. (2013). How and why implicit attitudes should affect voting. PS: Political Science & Politics, 46(3), 537–544.
  • Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74(6), 1464–1480.
  • Greenwald, A. G., Smith, C. T., Sriram, N., Bar-Anan, Y., & Nosek, B. A. (2009). Implicit race attitudes predicted vote in the 2008 U.S. presidential election. Analyses of Social Issues and Public Policy, 9(1), 241–253.
  • Hayes, D. (2005). Candidate qualities through a partisan lens: A theory of trait ownership. American Journal of Political Science, 49(4), 908–923.
  • Hibbs, D. A., Jr., Rivers, R. D., & Vasilatos, N. (1982). The dynamics of political support for American presidents among occupational and partisan groups. American Journal of Political Science, 26(2), 312–332.
  • Iyengar, S., & Hahn, K. S. (2009). Red media, blue media: Evidence of ideological selectivity in media use. Journal of Communication, 59(1), 19–39.
  • Iyengar, S., Hahn, K. S., Krosnick, J. A., & Walker, J. (2008). Selective exposure to campaign communication: The role of anticipated agreement and issue public membership. The Journal of Politics, 70(1), 186–200.
  • Jacobson, G. C. (2009). The 2008 presidential and congressional elections: Anti-Bush referendum and prospects for the Democratic majority. Political Science Quarterly, 124(1), 1–30.
  • Jarvis, W. B. G., & Petty, R. E. (1996). The need to evaluate. Journal of Personality and Social Psychology, 70, 172–194.
  • Key, V. O. (1966). The responsible electorate: Rationality in presidential voting, 1936–1960. Cambridge, MA: Harvard University Press.
  • Kinder, D., & Dale-Riddle, A. (2012). The end of race? Obama, 2008, and racial politics in America. New Haven, CT: Yale University Press.
  • Kinder, D., & Sanders, L. (1996). Divided by color: Racial politics and democratic ideals. Chicago, IL: University of Chicago Press.
  • Kruglanski, A. (1989). Lay epistemics and human knowledge: Cognitive and motivational bases. New York, NY: Plenum Press.
  • Kruglanski, A., & Webster, D. (1996). Motivated closing of the mind: “Seizing” and “freezing.” Psychological Review, 103(2), 263–283.
  • Ksiazkiewicz, A., & Hedrick, J. (2013). An introduction to implicit attitudes in political science research. PS: Political Science & Politics, 46(3), 525–531.
  • Ksiazkiewicz, A., Vitriol, J., & Farhart, C. (2013). Stability, reliability, and unique effects of implicit and explicit political attitudes in the 2012 presidential election. Paper presented at the CSPP Multi-Investigator Panel Study Symposium, April 26, 2013.
  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
  • Ladd, J. M., & Lenz, G. S. (2008). Reassessing the role of anxiety in vote choice. Political Psychology, 29(2), 275–296.
  • Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the implicit association test: IV. What we know (so far) about the method. In B. Wittenbrink & N. Schwarz (Eds.), Implicit measures of attitudes (pp. 59–102). New York, NY: The Guilford Press.
  • Lavine, H. G., Johnston, C. D., & Steenbergen, M. R. (2012). The ambivalent partisan: How critical loyalty promotes democracy. New York, NY: Oxford University Press.
  • Lavine, H., & Snyder, M. (1996). Cognitive processing and the functional matching effect in persuasion: The mediating role of subjective perceptions of message quality. Journal of Experimental Social Psychology, 32(6), 580–604.
  • Lazarsfeld, P., Berelson, B., & Gaudet, H. (1948). The people's choice: How the voter makes up his mind in a presidential campaign. New York, NY: Columbia University Press.
  • Lee, C., Hornik, R., & Hennessy, M. (2008). The reliability and stability of general media exposure measures. Communication Methods and Measures, 2(1–2), 6–22.
  • Lenz, G. S. (2009). Learning and opinion change, not priming: Reconsidering the priming hypothesis. American Journal of Political Science, 53(4), 821–837.
  • Lenz, G. S. (2012). Follow the leader? How voters respond to politicians’ performance and policies. Chicago, IL: University of Chicago Press.
  • Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.
  • Luttig, M., & Callaghan, T. H. (2013). Choice or referendum? Framing the 2012 presidential election. Paper presented at the CSPP Multi-Investigator Panel Study Symposium, April 26, 2013.
  • Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44, 1–23.
  • Mendelberg, T. (2001). The race card: Campaign strategy, implicit messages, and the norm of equality. Princeton, NJ: Princeton University Press.
  • Obama, B. (2012). Democratic nomination acceptance address. 2012 Democratic National Convention, Charlotte, NC. Retrieved September 6, 2012, from http://www.presidency.ucsb.edu/ws/index.php?pid=101968
  • Patterson, T. E. (1980). The mass media election: How Americans choose their president. New York, NY: Praeger.
  • Payne, B. K., Krosnick, J. A., Pasek, J., Lelkes, Y., Akhtar, O., & Tompson, T. (2009). Implicit and explicit prejudice in the 2008 American presidential election. Journal of Experimental Social Psychology, 46(2), 367–374.
  • Prior, M. (2010). You've either got it or you don't? The stability of political interest over the life cycle. The Journal of Politics, 72(3), 747–766.
  • Roccato, M., & Zogmaister, C. (2010). Predicting the vote through implicit and explicit attitudes: A field research. Political Psychology, 31(2), 249–274.
  • Romney, M. (2012). Florida Republican primary victory address. Tampa, FL. Retrieved January 31, 2012, from http://www.presidency.ucsb.edu/ws/index.php?pid=99159
  • Sides, J., & Vavreck, L. (2013). The gamble: High rollers. Princeton, NJ: Princeton University Press.
  • Silver, N. (2012, September 7). Sept. 6: A referendum or a choice? FiveThirtyEight. Retrieved November 11, 2013, from http://fivethirtyeight.blogs.nytimes.com/2012/09/07/sept-6-a-referendum-or-a-choice/
  • Stroud, N. J. (2008). Media use and political predispositions: Revisiting the concept of selective exposure. Political Behavior, 30(3), 341–366.
  • Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.
  • Tesler, M. (2012). The spillover of racialization into health care: How President Obama polarized public opinion by racial attitudes and race. American Journal of Political Science, 56(3), 690–704.
  • Tesler, M., & Sears, D. O. (2010). Obama's race: The 2008 election and the dream of a post-racial America. Chicago, IL: University of Chicago Press.
  • Vavreck, L. (2009). The message matters: The economy and presidential campaigns. Princeton, NJ: Princeton University Press.
  • Webster, D. M., & Kruglanski, A. W. (1994). Individual differences in need for cognitive closure. Journal of Personality and Social Psychology, 67(6), 1049–1062.
  • Wiley, D., & Wiley, J. (1970). The estimation of measurement error in panel data. American Sociological Review, 35(1), 112–117.
  • Wittenbrink, B., & Schwarz, N. (Eds.). (2007). Implicit measures of attitudes. New York, NY: The Guilford Press.
  • Woon, J. (2012). Democratic accountability and retrospective voting: A laboratory experiment. American Journal of Political Science, 56(4), 913–930.

Biographies

  • PHILIP G. CHEN is a Ph.D. candidate in the Department of Political Science at the University of Minnesota and an Interdisciplinary Doctoral Fellow for the 2013–2014 academic year with the Center for the Study of Political Psychology. His research interests include public opinion, American political campaigns, and personality influences on political communication receptiveness.

  • JACOB APPLEBY is a Ph.D. student in social psychology at the University of Minnesota. His current research projects concern how social cognitive processes influence intergroup perception and discourse across the ideological spectrum and the role of racial attitudes and predispositions in political and legal contexts.

  • EUGENE BORGIDA is a Professor of Psychology and Law, and Morse-Alumni Distinguished Professor of Psychology at the University of Minnesota. He is also Adjunct Professor of Political Science. Borgida received his B.A. with High Honors in Psychology and Sociology from Wesleyan University, and his Ph.D. in Psychology from the University of Michigan. He is a fellow of the APS and APA, and Past President of the Society for the Psychological Study of Social Issues. Borgida's research interests include social cognition, attitudes and persuasion, psychology and law, and political psychology.

  • TIMOTHY CALLAGHAN is a Ph.D. candidate in the Department of Political Science at the University of Minnesota.

  • PIERCE EKSTROM is a Ph.D. student in the University of Minnesota's Psychology Department. He studies the causes and consequences of political conflict and the implications of conflict for how members of the public evaluate their leaders.

  • CHRISTINA E. FARHART is a Ph.D. student in political science at the University of Minnesota studying American politics, political methodology, and political psychology. Her research focuses on political attitudes and decision making and on how they are shaped by situational factors.

  • ELIZABETH HOUSHOLDER is a Ph.D. candidate in the School of Journalism and Mass Communication at the University of Minnesota. Her research interests include political advertising effects and strategic uses of social media for political campaigns.

  • HANNAH KIM is a Ph.D. student in the Department of Political Science at the University of Minnesota. Her research interests include voting behavior, public opinion on the policy making process, and media effects on polarization. She has received a B.A. from Sookmyung Women's University and a M.A. from Korea University in Seoul, South Korea.

  • ALEKSANDER KSIAZKIEWICZ is a Ph.D. candidate in political science at Rice University. He studies the role of genetic and implicit processes in political attitudes and behaviors.

  • HOWARD LAVINE is the Arleen Carlson Professor of Political Science and Psychology at the University of Minnesota. He is the former editor of Political Psychology, the current editor of Advances in Political Psychology, the 2004 recipient of the Erik H. Erikson Award for Early Career Research Achievement in Political Psychology, and the author of The Ambivalent Partisan: How Partisan Loyalty Promotes Democracy (Oxford University Press, 2012). He has published in the American Political Science Review, the American Journal of Political Science, the Journal of Personality and Social Psychology, The New York Times, and elsewhere.

  • MATTHEW D. LUTTIG is a Ph.D. candidate in political science at the University of Minnesota.

  • AARON ROSENTHAL is a Ph.D. student in political science at the University of Minnesota. His research interests include state and local politics and political participation.

  • GEOFF SHEAGLEY is an Assistant Professor of Political Science at the University of Minnesota, Duluth.

  • BRIANNA A. SMITH is a Ph.D. student in political science at the University of Minnesota. Smith's research interests include decision making processes, political networks, and the political influence of small businesses.

  • JOSEPH VITRIOL is a Ph.D. student in social psychology at the University of Minnesota, Twin Cities, and holds an M.A. in Forensic Psychology from John Jay College of Criminal Justice, CUNY. His research interests include psychology and law; the structure, function, and use of political ideology; implicit and explicit attitudes in legal and political domains; processes of attitude change and resistance to persuasive communication; conspiratorial thinking and political gnosticism; and motivations of prosocial behavior and social activism. Vitriol was awarded the interdepartmental Eva O. Miller Fellowship in 2012.

  • ALLISON WILLIAMS is a doctoral student at the University of Minnesota studying social psychology with an interdisciplinary focus in political psychology. Her current research interests include intergroup relations and attitudes and beliefs about morality, particularly in an electoral context.

Supporting Information


Disclaimer: Supplementary materials have been peer-reviewed but not copyedited.

Filename: asap12041-sup-0001-SupMat.docx
Format: Microsoft Word document (.docx)
Size: 198K
Description: Online Appendix: WAVES ONE, TWO, AND THREE (items appear in the order that respondents viewed them).

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.