University of Illinois at Urbana–Champaign. We thank Scott Asay, Robert Bloomfield, Tim Brown, Jeffrey Hales, Jim Hunton, Lisa Koonce, Mark Nelson, Nicholas Seybert, Michael Williamson, Holly Yang, and workshop participants at Cornell University for helpful comments on earlier versions of this paper. We thank Jim Hunton for arranging access to the executives who participated in our survey, and the executives who volunteered their participation.
Prior studies document that managers consider a variety of costs and benefits when deciding whether to issue earnings forecasts. Using an abstract experiment and a survey of experienced financial managers, we provide evidence that managerial overconfidence may also contribute to this decision. Our experiment shows that participants engage in self-serving attribution, giving greater weight to internal than to external factors as explanations for good performance. This increases their confidence in improved future performance, which in turn increases their willingness to issue forecasts. Two facets of overconfidence as a stable individual trait, dispositional optimism and miscalibration, also contribute to confidence in improved future performance and to willingness to issue forecasts. Consistent with these results, the experienced financial managers in our survey believe that other managers are likely to overestimate the extent to which they contribute to positive firm performance, and that both overoptimism about firm performance and overconfidence in the ability to predict future firm performance contribute to the issuance of earnings forecasts.
While prior research has focused exclusively on the rational tradeoff between costs and benefits, we propose that managerial overconfidence may also contribute to managers’ decisions to provide earnings forecasts. Prior research in psychology and finance suggests that senior managers as a group overestimate their ability. The psychology literature also suggests that managers’ tendency to attribute positive outcomes to their own internal characteristics and negative outcomes to external factors (called “self-serving attribution”) may exacerbate this phenomenon over time. We specifically examine whether managers engage in self-serving attribution (overestimating the extent to which their internal characteristics contribute to better performance) that increases overconfidence. Since overconfidence can also be a stable individual trait, we extend our analysis to ask whether managers who are generally overconfident, as identified by standard psychometric tests, are more likely to be overconfident in our setting. Finally, we investigate whether overconfidence, regardless of its source, increases managers’ willingness to provide earnings forecasts for future periods. This causal chain is illustrated in figure 1.
Testing the causal path described above with archival data or with a highly realistic experiment would be very difficult. Prior archival studies show that cross-firm differences in the forecasting environment and characteristics of the forecasting firm affect the likelihood of issuing earnings forecasts (see Hirst, Koonce, and Venkataraman  for a detailed discussion). Many of these variables could also affect managerial overconfidence. As a consequence, potential problems with omitted variables make it difficult to conclude that overconfidence affects the issuance of earnings forecasts. More importantly, it is difficult to measure the variables in our causal path from archival data. As we discuss in more detail in section 2, self-serving attributions inferred from archived public statements may reflect both actual beliefs and strategic choices. Similarly, overconfidence cannot be directly measured as a mental state or a stable individual trait in archival studies, where it is instead often inferred from the managerial choices that overconfidence is claimed to produce.
We take a dual approach in this study that complements the archival literature. We start with an abstract experiment showing that participants make self-serving attributions to explain their performance, which increases overconfidence. We also show that future forecasting decisions are influenced by both overconfidence stemming from self-serving attribution in our experiment and overconfidence as a stable individual trait. This provides direct tests of our proposed causal relationships. We conclude with a survey confirming that our results from the abstract experiment align with experienced financial and accounting managers’ beliefs about how real-world forecast disclosure decisions are made.
The experimental test of our hypotheses uses a between-subjects design where our participants complete two rounds of a computerized trivia task with one of two levels of difficulty. Our two levels of task difficulty make it either easier or harder to realize good outcomes, leading to either relatively strong or poor performance among participants. This captures the variation in the difficulty of the operating environment that managers face in the real world while controlling for firm and environmental characteristics that are unrelated to our research questions. Our experimental task is also ambiguous enough in its complexity that participants are able to engage in self-serving attribution to explain their performance.
After the initial round of the trivia task is completed, participants learn their scores, provide attributions for their performance, rate the extent to which they believe they did well, and provide estimates of the mix of question difficulty. They then learn the true mix of question difficulty for the upcoming second round, make a private forecast of their second-round performance, and rate how confident they are that their second-round performance will exceed their performance in the first round.
When managers go out on a limb and provide earnings forecasts, the benefits and costs of positive versus negative outcomes are amplified (Tan, Libby, and Hunton , Hutton and Stocken ). Payments in the second round of our experiment are designed to capture this element of the forecasting environment. Participants’ final decision at the end of the first round is whether to commit to improving their performance in the second round. Like forecasting managers, those who commit to improving are paid a higher rate per correct trivia answer if their second-round performance does in fact exceed their first-round performance, but have their payments reduced if they match or fall short of their first-round performance.
Self-serving attribution bias involves overestimating the extent to which internal characteristics (skill and effort) versus external characteristics (luck and task difficulty) are contributing factors to better performance. It does not involve misestimating absolute levels of skill, effort, luck, or task difficulty. As a consequence, if our participants’ attributions are unbiased, then first-round performance should have no effect on the relative weighting that they give to internal versus external factors as explanations for their performance. However, we find that favorable perceptions of first-round performance increase self-serving (internal) attributions. More self-serving attributions, as well as greater overconfidence as a stable individual trait, make participants more confident about improving their future performance. This greater confidence increases the likelihood that they commit to improving in the second round, which amplifies the benefits and costs of positive versus negative performance in the same manner as forecast issuance.
Our survey examines whether our experimental findings match experienced financial and accounting managers’ beliefs about the issuance of earnings forecasts in the real world. We find that our experienced manager participants strongly agree that managers are, in general, overconfident. They further agree that managers are likely to overestimate the extent to which they contribute to positive firm performance, and that both optimism about firm performance and confidence in their ability to predict future performance contribute to the provision of forecasts. They also agree that providing earnings forecasts amplifies the benefits and costs of positive versus negative outcomes, supporting our experimental assumption.
Our results have potentially important implications for managers. Graham, Harvey, and Rajgopal  find that more than two-thirds of executives agree that they are concerned with setting a disclosure precedent that they will be unable to maintain. The experienced manager participants in our survey strongly agree that providing earnings forecasts sets a disclosure precedent that leads to more negative market reactions when actual earnings fall below forecasts. It is therefore problematic for managers if overconfidence during periods of strong performance leads them to provide voluntary disclosures at a higher rate than they would have done otherwise. Even if the forecast itself is not overoptimistic, if the provision of earnings forecasts is sticky, then issuance increases the likelihood that future forecasts will be provided and potentially missed.
Our results also have implications for market participants and regulators. Schrand and Zechman  demonstrate that overconfident managers are more likely to engage in fraud. Thus, overconfident managers who provide voluntary forecasts thinking that they can continue to deliver improved performance in the future may be more likely to resort to earnings management and possibly fraud if at some point they realize they can no longer meet those expectations. This may be particularly problematic during periods of strong macroeconomic or industry performance (i.e., when it is relatively easy to achieve good outcomes), if overconfident managers fail to anticipate that performance will eventually revert back to more normal levels.
Our results may also have implications for other disclosures that involve forward-looking statements. For example, managers’ qualitative discussions in the CEO letter regarding the firms’ future prospects might also be overly optimistic, even if they are difficult to quantify. Similarly, miscalibration may affect managers’ confidence in their ability to estimate the range of future cash flows that might be generated by assets when performing impairment tests that determine ending balance sheet values.
Finally, our results have implications for the general literature on overconfidence in managerial decision making. Billett and Qian  raise the question of whether overconfidence is learned from prior successes or is an endowed, general trait. We attempt to separately assess the general trait and further discriminate between two facets of the trait “overconfidence” (miscalibration and dispositional optimism) often used interchangeably in the behavioral finance literature.
The major limitation of our study is our inability to assess the magnitude of the effects we document in the real world, where managers may provide earnings forecasts for a number of reasons as part of either a short- or long-term guidance policy. The incentive function in our experiment captures only management's incentive to amplify the payoff consequences of future performance. The magnitude of the other effects, as well as the potential for interactions, must be assessed in future research.
The rest of this paper proceeds as follows. Section 2 of the paper further develops our hypotheses. Section 3 describes the experimental design and results. Section 4 discusses our survey of experienced financial managers, and section 5 concludes.
In this section, we develop four separate hypotheses that can be tested individually, but combine to form the causal path that we predict will lead to greater provision of earnings forecasts when a firm is performing well (see figure 1). Individuals begin a task with some set of prior beliefs about their own internal characteristics as well as the external characteristics of the task environment. If they perform well on the task, it is rational for them to consider that they are somewhat skilled at the task or that they exerted a high degree of effort. However, it is also rational to consider that the task may be easier than they had originally thought or that part of the outcome was the result of luck. In other words, individuals who do well should conclude that both internal and external factors played a role in their performance. Similarly, those who do poorly may adjust their beliefs downward about their skill, effort, and luck, and increase their assessment of the task difficulty, but they should be equally likely as those who do well on the task to rate both internal and external factors as contributors to their performance.
However, attribution is biased and self-serving if those who do well (poorly) give greater weight to their own internal characteristics (environmental factors beyond their control) in explaining their performance. That is, it is self-serving if those who do well attribute their performance to their own skill and effort while those who do poorly attribute their failure to task difficulty or bad luck (Miller and Ross ). The bias has been found to be unusually large (compared to other cognitive biases) and pervasive among most age groups and cultures (Mezulis et al. ).
Complementary studies (see Bettman and Weitz , Baginski, Hassell, and Hillison , Baginski, Hassell, and Kimbrough , Barton and Mercer , and Elliott, Hodge, and Sedor ) have investigated the use of internal versus external attributions in public statements, which may reflect management beliefs or strategic choices by management, as well as by counsel and investor relations personnel who review the public statements (Larcker and Zakolyukina ). Sociolinguistics research suggests that personal pronouns may be used to shift attentional focus, where the presence of personal pronouns allows the provider of a message to associate themselves with positive news, but the absence of personal pronouns distances the messenger from negative news (Gunsch et al. , Tausczik and Pennebaker ). In accounting, Li  demonstrates that public self-attributions, measured by the greater use of first-person pronouns relative to second- and third-person pronouns in the Management's Discussion and Analysis (MD&A), are positively associated with performance and a wide variety of financial policies and disclosure choices, including forecast issuance. However, Li's  study cannot determine whether these findings are the result of a cognitive bias or strategic choice. Our study complements studies of public statements by directly assessing private self-serving attributions, as well as overconfidence as a mental state and a stable individual trait, to test our proposed causal path.
Prior evidence in the management forecasts literature is also consistent with our theory. Using archival data, Miller  finds that voluntary disclosures increase when a firm experiences improving earnings after periods of flat earnings performance, and do not decrease until earnings begin to decline (see also Houston, Lev, and Tucker ). Bergman and Roychowdhury  find that past returns are positively associated with the frequency of long-horizon forecasts in the current quarter, also consistent with past returns increasing managers’ optimism about future performance. However, Miller  and Bergman and Roychowdhury  do not suggest or address managerial overconfidence as a contributor to their findings. Bhojraj, Libby, and Yang  find that guidance issued by inexperienced guiders is optimistically biased. Their findings appear to be consistent with the idea that managers may be overconfident when they first choose to initiate voluntary disclosures. Also, consistent with this notion, Hribar and Yang  find that generally more confident CEOs (identified by a press-based measure) issue more optimistic forecasts. While Hribar and Yang  demonstrate that greater managerial confidence is associated with providing earnings guidance and that this guidance is overoptimistic, they do not test the causal links between past success, self-serving attribution, the general trait of overconfidence, and the provision of voluntary disclosures. Our paper tests these causal relationships. Our first hypothesis is therefore
H1: Higher ratings of first-round performance will be associated with self-serving attribution (participants giving greater weight to internal factors than external factors as explanations for their first-round performance).
In addition, we expect that participants who provide more self-serving (i.e., internal) attributions for favorable first-round performance are also more likely to believe that they will do better in the second round, compared to the first round, than are participants who provide less-biased attributions for first-round performance. When participants perform well during the first round, the fact that they attribute their performance primarily to internal factors should increase their confidence in their second-round performance. For example, those who do well and attribute their performance disproportionately to skill should have enhanced perceptions of their self-efficacy in the task. Similarly, those who do well and attribute their performance disproportionately to their level of effort should assume that they can maintain a high level of effort and achievement. As a result, our second hypothesis is that
H2: Self-serving attribution resulting from favorable perceptions of first-round performance increases confidence that second-round performance will exceed first-round performance.
We predict that self-serving attribution increases confidence that second-round performance will exceed (rather than simply meet) first-round performance because in the second round there is less ambiguity in the difficulty of the task. As discussed in more detail in section 3, both rounds of the trivia task involve a mix of easy, medium, and hard questions. In the first round of the task, those in the low-difficulty (high-difficulty) condition are not told that the majority of their questions are easy (hard). As a consequence of this uncertainty, we expect, and find, that participants’ own estimates of question difficulty in the first round are insufficiently extreme on average (see table 1). In the second round, when participants learn the true mix of question difficulty for the second round, we expect those in the low-difficulty condition (who are also more likely to have done well and engaged in self-serving attribution) to infer that the second round will contain more easy questions than did the first round, and consequently believe that they will do better in the second round than the first. This setup is analogous to the real-world ambiguity that managers face where, as time passes, they learn more about the true ease of the operating environment, after they have already engaged in self-serving attribution in the past. In the high-difficulty condition, we expect a similar pattern, but in the opposite direction, so that participants infer that the second round will contain more hard questions than did the first round and will therefore expect to do worse in the second round than the first.
Table 1. Descriptive Statistics

This table provides descriptive statistics for various measures captured in our experiment, and separately presents variable means for both our low- and high-difficulty conditions.

Mean by Condition:
First-Round Score (out of 25)
Forecasted Second-Round Performance (out of 25)
Beliefs about whether they Did Well in the First Round*
Confidence that Second-Round Score will exceed First-Round Score†
Second-Round Score (out of 25)
Estimate of the # of Hard Questions in Round 1
Estimate of the # of Medium Questions in Round 1
Estimate of the # of Easy Questions in Round 1

*Response is measured on a 9-point scale, where 1 = “Strongly Disagree” and 9 = “Strongly Agree” with the statement “I did well in the first round.”
†Response is measured on an 11-point scale, where 1 = “I am 100% certain that I will do worse in the second round than I did in the first round” and 11 = “I am 100% certain that I will do better in the second round than I did in the first round.”
ATTRIBUTION = Sum of four forced-choice attribution judgments between internal and external explanations for performance (skill vs. luck, difficulty vs. effort, skill vs. difficulty, and effort vs. luck). Higher values indicate that an individual puts relatively more weight on internal factors to explain his or her performance.
MISCALIBRATION = Score on a measure of stable individual miscalibration. Higher values indicate greater confidence.
LOT-R = Score on the LOT-R measure of dispositional optimism. Higher values indicate greater dispositional optimism.
Since overconfidence may be a stable trait as well as a mental state influenced by environmental factors, we also examine behavior among participants who are generally more confident about positive future outcomes. Specifically, we use psychometric measures to capture two facets of stable individual overconfidence: dispositional optimism and miscalibration. Dispositional optimism and miscalibration are often simply labeled as overconfidence in the behavioral finance literature (Skała ). There are reasons to expect that both may be related to participants’ confidence about future performance in our task. Dispositional optimism is more likely to be associated with overestimation of the mean of an uncertain outcome. Miscalibration, on the other hand, captures underestimation of the variance of uncertain outcomes (i.e., confidence intervals that are too narrow). We expect that generally more confident participants are more likely to think they will do better in the second round than in the first round. Our third hypothesis is therefore
H3: Participants who are higher in stable individual measures of overconfidence will also be more confident that their second-round performance will exceed first-round performance.
We focus on measuring miscalibration and dispositional optimism given that recent research has investigated how these two traits affect corporate decisions, suggesting that they are of particular interest in economic settings (Ben-David, Graham, and Harvey , Hackbarth ). However, two other biases have been discussed in prior literature and treated as indicators of overconfidence: the Better-Than-Average (BTA) effect and the Illusion of Control (see Skała  for a review). It is not clear whether these two facets reflect stable individual traits or are only context-specific. Furthermore, we are not aware of any psychometric measures for testing the existence of BTA and the Illusion of Control as individual traits, which precludes us from investigating them in our setting.
Finally, we expect that participants who are more confident about their second-round performance will be more likely to commit to doing better in the second round, where commitment results in higher payouts so long as second-round performance exceeds first-round performance, but penalizes participants if their performance does not improve. This decision in our task is analogous to real-world forecast decisions, where those who are more confident in their ability to deliver stronger performance should be more likely to provide forecasts as a signal of expected improved performance. Thus, our final hypothesis is that
H4: Participants who are more confident in their second-round performance will be more likely to commit to doing better in the second round than in the first round.
While we expect H4 to hold, there is some uncertainty surrounding our prediction. Prior literature offers inconclusive evidence as to whether individuals actually believe and act on their self-serving attributions when there are economic consequences. In other words, individuals may provide self-serving attributions due to self-presentation concerns, or the desire to present a certain image of themselves to others (Shepperd, Malone, and Sweeny ). If that is the case, participants may provide self-serving attributions for their performance and state that they are confident in future performance (because these statements are costless and enhance the image they present to others), but choose not to commit to better second-round performance since it involves actual economic costs should they fall short.
3. Experiment and Results
Fifty-seven participants are recruited from an MBA course at Cornell University. Libby, Bloomfield, and Nelson  suggest that MBA students can be used as participants when their knowledge is sufficient for the task. MBA students have sufficient knowledge to complete our experiment and to understand the incentives associated with the forecasting task that we give them. Furthermore, given that we are interested in a pervasive psychological phenomenon, we believe that our results would be unlikely to differ if we were to use experienced financial managers as participants. Of the 57 participants recruited, 4 participants’ responses after the first round of trivia are lost due to computer malfunctions. An additional 6 participants failed to provide responses on all of the measures necessary for our analysis, so we are left with a final sample of 47 participants. On average, participants are 27 years old and have 4.71 years of full-time work experience. The average number of accounting and finance courses that they have completed is 1.70 and 1.74, respectively. Approximately 43% of participants are female.
The experiment consists of a 1 × 2 between-subjects design carried out over two sessions. We begin by manipulating the mix of question difficulty in a trivia task that participants face (throughout section 3 of the paper, refer to figure 2 for a depiction of the experimental timeline). In the low-difficulty condition, each of the two rounds of 25 trivia questions consists of 15 easy, 5 medium, and 5 hard questions. In the high-difficulty condition, the mix is reversed, so that 15 of the questions are hard, 5 are of medium difficulty, and 5 are easy. As discussed above, this captures the variation in the difficulty of the operating environment that managers face in the real world and produces natural variation in performance. The low-difficulty (high-difficulty) condition should lead to relatively strong (weak) performance among participants in our experiment, which, in turn, increases the likelihood that they will engage in self-serving attribution to explain their performance.
Trivia questions are chosen from the board game Trivial Pursuit (Hasbro ), which has three levels of question difficulty, labeled “easy,” “medium,” or “hard.” Although the game has six categories of trivia, we use only the Science & Nature, Arts & Literature, Geography, and History categories (excluding Sports & Leisure and Entertainment) to increase the likelihood that questions are equally challenging for domestic and international participants, as well as participants of both genders. For each question, participants are presented with the correct answer along with three incorrect answers to choose from. We use this multiple-choice format rather than allowing free responses in order to facilitate computerized scoring of performance.
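The two difficulty conditions amount to a simple configuration. The sketch below shows one hypothetical way a 25-question round could be assembled from pools of pre-rated questions; only the 15/5/5 mixes come from the design described above, while the pool structure and function name are our own illustration rather than the experiment's actual software.

```python
import random

# The 15/5/5 mixes are taken from the design above; everything else
# (pool structure, function name) is illustrative.
MIX = {
    "low_difficulty":  {"easy": 15, "medium": 5, "hard": 5},
    "high_difficulty": {"easy": 5,  "medium": 5, "hard": 15},
}

def build_round(condition, pools, rng=random):
    """Draw and shuffle one 25-question round for the given condition.

    `pools` maps each difficulty level to a list of available questions.
    """
    questions = []
    for level, n in MIX[condition].items():
        questions.extend(rng.sample(pools[level], n))  # draw without replacement
    rng.shuffle(questions)  # participants never see the ordering by difficulty
    return questions
```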
Our experimental task is designed to be straightforward, yet capture key attributes of managers’ forecasting environment. As in a normal business setting, participants perform a task where the difficulty of the task and their ability to perform the task are uncertain. Higher performance in the task increases compensation. Participants receive feedback on their performance in the first period, which is relevant to assessing task difficulty and their own ability, but the difficulty of the task and their ability level is still uncertain.
Participants begin by answering the first round of 25 trivia questions. For each correct answer in the first round, participants are paid $2. Again, in the low-difficulty (high-difficulty) condition, 15 of the questions are easy (hard), 5 are medium, and 5 are hard (easy). Participants are never informed of the actual mix of question difficulty in the first round. This design choice, along with the mix of question difficulty itself, captures some of the ambiguity that managers face in their real-world forecasting environment. Increased ambiguity gives managers more flexibility to engage in the self-serving attribution bias that affects confidence in future performance. Upon completing the first round, participants learn the raw number of questions answered correctly (but not the mix of question difficulty). They then answer the following question: “The questions you received were a mix of easy, medium, and hard questions. How many of the questions that you answered do you think came from the easy, medium, or hard category?” Responses are given by filling in numbers for each difficulty level so that they sum to 25, the total number of questions in the first round. This measure allows us to gauge participants’ beliefs about the difficulty of the task that they faced in the first round.
3.3.1. Measuring Self-Serving Attribution Bias.
Participants then answer questions designed to measure self-serving attribution bias for their first-round performance. Similar to Scapinello , we elicit four responses from each participant designed to measure relative weighting on internal attributions (i.e., skill and effort) versus external attributions (i.e., luck and difficulty) for first-round performance. Participants provide ratings on four 9-point bipolar scales, where each scale pits an internal attribution against an external attribution as an explanation for their performance (i.e., Skill vs. Luck, Difficulty vs. Effort, Skill vs. Difficulty, and Effort vs. Luck). For example, in the Difficulty versus Effort scale, participants rate to what extent their performance was the result of whether they “tried hard” (effort) versus “the difficulty of the task” (difficulty). Endpoints of the scales are balanced so that internal attributions are at the bottom of the scale for two judgments and at the top of the scale for the other two judgments. We code responses such that a higher value indicates greater internal attribution, and combine the scaled responses to measure whether participants place more weight on internal versus external factors as explanations for their performance. This measure represents our ATTRIBUTION variable. If they are biased, those who believe that they performed relatively well (poorly) should be more likely to attribute their performance to internal (external) factors. Furthermore, those who perform well and attribute the performance to internal factors should be more confident about their future performance.
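As a concrete illustration, combining the four bipolar ratings into the ATTRIBUTION variable might be coded as follows. This is a minimal sketch: which two scales are reverse-scored, and the exact 1–9 anchors, are assumptions for illustration rather than details taken from the instrument.

```python
def attribution_score(skill_vs_luck, difficulty_vs_effort,
                      skill_vs_difficulty, effort_vs_luck):
    """Combine four 9-point bipolar ratings into one ATTRIBUTION score.

    Assumes (for illustration only) that the internal pole sits at the
    top (9) of the first and third scales, and at the bottom (1) of the
    second and fourth, so those two are reverse-coded as 10 - rating.
    """
    for r in (skill_vs_luck, difficulty_vs_effort,
              skill_vs_difficulty, effort_vs_luck):
        if not 1 <= r <= 9:
            raise ValueError("each rating must be on a 1-9 scale")
    internal = [
        skill_vs_luck,              # internal pole assumed at top
        10 - difficulty_vs_effort,  # reverse-coded
        skill_vs_difficulty,        # internal pole assumed at top
        10 - effort_vs_luck,        # reverse-coded
    ]
    # Sum ranges from 4 (fully external) to 36 (fully internal).
    return sum(internal)
```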
We also ask participants to respond to a 9-point scale asking to what extent they agree with the statement “I did well in the first round of trivia” (1 =“Strongly Disagree,” 9 =“Strongly Agree”). This represents our DIDWELL variable, and we expect that those who are in the low-difficulty condition are likely to achieve higher scores, and thus are more likely to agree with the statement indicating that they did well in the first round.
3.3.2. Measuring Confidence in Future Performance.
Participants are next told that they will answer another round of 25 questions, but that this time they know in advance the mix of question difficulty. For this round, the mix of difficulty is the same as in the first round, but participants are not aware that the difficulty of the two rounds is equivalent because they never learn what the mix of difficulty was in the first round. They are told that, as in the first round, they will be paid $2 for each correct response in the second round. During this part of the experiment, we measure truthful private forecasts to gauge individuals’ beliefs about their future performance. Asking them to provide truthful private forecasts encourages participants to think specifically about how they expect to perform before making the decision about whether to commit to improving, as described below.
To capture our CONFIDENCE variable, we ask participants to indicate where they fall on a 9-point scale with endpoints 1 = “I am 100% certain that I will do worse in the second round than I did in the first round” and 9 = “I am 100% certain that I will do better in the second round than I did in the first round.” This scaled response allows us to more clearly see differences between the low- and high-difficulty conditions than does the binary decision (discussed below) of whether or not to commit to improving performance. As discussed previously, we expect that both self-serving attributions for favorable performance in the task and measures of stable individual overconfidence will be positively related to our measure of confidence in second-round performance.
3.3.3. The Decision to Commit to Improving.
Finally, we tell participants that they have an additional opportunity to increase or decrease their payout if they choose to commit to doing better in the second round than in the first round. Those who choose not to commit keep the payout discussed above. Those who do commit to doing better have a slightly altered version of the payout function that amplifies both the benefits as well as the costs associated with actual outcomes. Specifically, those who commit to doing better earn $2.50 ($1.50) per correct trivia question in the second round if they do (do not) beat their first-round performance.15 In other words, choosing to commit increases compensation if second-round performance is higher than first-round performance, but reduces compensation otherwise.
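The incentive scheme just described can be summarized in a short function (our own sketch; following the “do (do not) beat” wording, we treat a tie with the first-round score as failing to beat it):

```python
def round2_payout(r1_score, r2_score, committed):
    """Payout for the second round of 25 trivia questions.

    Without a commitment, each correct answer pays $2.00. Committing
    to improve raises the rate to $2.50 per correct answer if the
    second-round score beats the first round, and lowers it to $1.50
    otherwise (a tie counts as not beating the first round).
    """
    if not committed:
        return 2.00 * r2_score
    rate = 2.50 if r2_score > r1_score else 1.50
    return rate * r2_score
```

For example, a participant who scored 17 in round one and commits earns $50.00 on a second-round score of 20 but only $22.50 on a score of 15, versus $40.00 and $30.00 without the commitment.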
To aid them in their decision, participants are shown what their payouts will be under various levels of performance should they choose to commit versus not commit (see appendix B for an example). As mentioned previously, we expect that the overconfidence stemming from both self-serving attribution bias as well as stable individual characteristics will make participants more likely to commit to this higher level of performance. Once the forecasts or beliefs of all participants have been measured, they complete their second round of 25 trivia questions.
3.3.4. Measuring the Individual Trait “Overconfidence.”
After completing the main experimental task, participants are asked to return one week later for their payment. They are (truthfully) told that the delay is necessary in order for their payment to be calculated and prepared. Upon returning, participants complete a measure of stable individual miscalibration and a measure of dispositional optimism.
Participants are asked to provide responses to 10 numerical questions designed to capture their general level of miscalibration (i.e., as a stable trait, as opposed to a result of our experimental manipulations). Questions are taken from the miscalibration quiz in Nofsinger. Participants are asked to provide a lower and an upper limit for each question such that their subjective confidence that the interval contains the true value is equal to 90%. The exact wording of the instructions is similar to other studies measuring overconfidence (e.g., Russo and Schoemaker) but includes some additional clarifying comments (Cesarini, Sandewall, and Johannesson). See appendix C for the list of questions used to capture our MISCALIBRATION variable. To measure optimism, we use Scheier, Carver, and Bridges's LOT-R. The LOT-R measure captures dispositional optimism and asks participants to what extent they agree with statements like, “I’m always optimistic about my future.” See appendix D for the task used to measure optimism.16
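Scoring such an interval quiz can be sketched as follows. The paper does not spell out its scoring rule, so the count-of-misses convention below (more missed intervals = narrower intervals = greater overconfidence) is our assumption, as is the function name.

```python
def miscalibration(intervals, true_values):
    """Score a 90%-confidence-interval quiz.

    `intervals` is a list of (low, high) bounds, one per question. A
    well-calibrated respondent's intervals should contain the true
    value about 9 times out of 10; we count the misses, so higher
    scores indicate overly narrow intervals, i.e., greater
    overconfidence. (Assumed rule; the paper does not specify one.)
    """
    return sum(1 for (low, high), truth in zip(intervals, true_values)
               if not low <= truth <= high)
```

Under this rule, a respondent who misses three of ten questions scores 3, whereas a calibrated respondent would typically score around 1.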
3.4 Results of the Experiment
Table 1 reports descriptive statistics for scores in each round, participants’ beliefs about how well they did, question-difficulty estimates by condition, and self-serving attribution scores. It also confirms that there were no significant differences between conditions on our measures of the stable traits, miscalibration and dispositional optimism.
Results of manipulation checks confirm that participants understood the task that they faced. Specifically, 98% correctly indicated that they would answer a mix of easy, medium, and hard trivia questions, and 87% correctly indicated that they would receive $2 per correct response in the first round.
3.4.1. Tests of Hypotheses.
We first confirm that our manipulated low-difficulty and high-difficulty conditions had their expected effect on participants’ performance and beliefs about their performance. We find that participants in the low-difficulty condition scored significantly higher in both rounds of trivia than did those in the high-difficulty condition, and were also more likely to think that they did well in the first round (p < 0.001, one-tailed, for all three comparisons).
Our first hypothesis is that those who believe they did better in the first round will attribute their performance more to internal factors than those who do relatively poorly, as a result of self-serving attribution bias. Panel A of table 2 shows evidence that is consistent with H1. Specifically, a higher value for ATTRIBUTION indicates that a participant places relatively more weight on internal explanations for first-round performance, and ratings of the extent to which they did well (DIDWELL) are significantly positively associated with ATTRIBUTION (p= 0.019, one-tailed).17 We also include CONDITION as a control variable in case it has unintended effects on participants’ attributions above and beyond its influence through the beliefs about performance captured by DIDWELL (although the CONDITION variable is not significant in the analysis).
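The form of this test is an ordinary-least-squares regression of ATTRIBUTION on DIDWELL and CONDITION. The sketch below shows that structure on synthetic, purely illustrative data (the paper's data are not reproduced here); the helper and variable names are ours.

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept column prepended;
    returns the fitted coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic, purely illustrative data (NOT the paper's data):
# ATTRIBUTION rises with DIDWELL; CONDITION enters as a control.
rng = np.random.default_rng(0)
n = 47                                      # matches the reported sample size
condition = rng.integers(0, 2, n)           # 1 = low-difficulty condition
didwell = 4 + 2 * condition + rng.normal(0, 1, n)
attribution = 15 + 1.5 * didwell + rng.normal(0, 2, n)

b0, b_didwell, b_cond = ols(attribution, np.column_stack([didwell, condition]))
```

A positive fitted coefficient on DIDWELL in this setup corresponds to the H1 pattern: participants who believe they did well place relatively more weight on internal attributions. The tests of H2 and H3 reported below have the same regression structure with additional predictors.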
Table 2. Tests of Hypotheses
Panel A: Test of H1
Test of the effect of first-round performance on biased self-serving ATTRIBUTION
R2 = 11.10%; R2(adj) = 7.05%
Panel B: Test of H2
Test of the effect of biased self-serving attributions on CONFIDENCE in future performance
R2 = 29.66%; R2(adj) = 24.75%
Panel C: Test of H3
Correlation between CONFIDENCE (with respect to our experiment) and Individual Measures of Confidence and Attribution Style (p-values in italics)
Panel D: Test of H3, Continued
Test of the effect of stable individual overconfidence on CONFIDENCE in future performance in our experimental setting
This table summarizes results of our four hypotheses. The variable(s) of interest for a given hypothesis are indicated with directional predictions shown under the “expectation” heading in each panel.
*One-tailed test of directional hypothesis, n= 47.
R2 = 38.73%; R2(adj) = 31.26%
Panel E: Test of H4
Test of the effect of CONFIDENCE on the decision to COMMIT to improving second-round performance in our experimental setting
R2 (U) = 28.68%
DIDWELL= Response on a 9-point scale asking participants to what extent they agree with the statement “I did well in the first round.” Higher scores indicate that they felt they did well in the first round.
CONDITION= 1 if a manager was in the low-difficulty condition, and 0 otherwise.
ATTRIBUTION= Sum of four forced-choice attribution judgments between internal and external explanations for performance (skill vs. luck, difficulty vs. effort, skill vs. difficulty, and effort vs. luck). Higher values indicate that an individual puts relatively more weight on internal factors to explain their performance.
MISCALIBRATION= Score on a measure of individual stable miscalibration. Higher values = greater confidence.
LOT-R= Score on the LOT-R measure of dispositional optimism. Higher values = greater dispositional optimism.
CONFIDENCE= Response to the question, “Please indicate where you fall on the following scale: 1 = I am 100% confident that I will do worse in the second round than in the first round vs. 9 = I am 100% confident that I will do better in the second round than in the first round.”
COMMIT= 1 if they are willing to commit to doing better in the second round than in the first round, and 0 if they are not.
Our second hypothesis is that participants’ self-serving attributions for favorable performance should be significantly positively associated with confidence that second-round performance will exceed first-round performance. Panel B of table 2 shows that this hypothesis is supported. We use a 9-point scale with endpoints 1 = “I am 100% certain that I will do worse in the second round than I did in the first round” and 9 = “I am 100% certain that I will do better in the second round than I did in the first round” to capture confidence (CONFIDENCE). As indicated in the table, ATTRIBUTION is significantly positively associated with our CONFIDENCE measure (p= 0.006, one-tailed), even after controlling for participants’ perceptions about their first-round performance (DIDWELL) and experimental condition (CONDITION). As shown in panel B of table 2, the coefficient on DIDWELL is negative and significant (p= 0.007) in our test of H2. This is consistent with the idea that believing that they did well has two effects on participants. On the one hand, believing that they did well should increase self-serving attribution. On the other hand, believing that they did well makes it less likely that future performance will be even better. In our test of H2, our ATTRIBUTION measure captures the former effect, whereas the latter (which has a negative effect on confidence in improved future performance) is captured by the DIDWELL variable.
Our third hypothesis is that participants who are higher in stable individual measures of confidence in positive future outcomes should also be more confident that their second-round performance will exceed first-round performance. Panel C of table 2 shows the correlation (and significance) between our CONFIDENCE measure, our measure of miscalibration (MISCALIBRATION), and our measure of dispositional optimism (LOT-R). Panel C of table 2 indicates that the two personal trait measures are not significantly correlated with each other (p= 0.952, two-tailed), consistent with the idea that they represent separate constructs. Thus, we include both in our regression analysis testing H3. We expect MISCALIBRATION to be significantly associated with CONFIDENCE as it captures underestimation of the variance of uncertain outcomes (i.e., confidence intervals that are too narrow). We expect LOT-R to be significantly associated with CONFIDENCE because it captures overplacement of a mean (i.e., an expectation that is too optimistic). Panel D of table 2 shows that both the MISCALIBRATION and LOT-R measures have significantly positive incremental effects on CONFIDENCE in a regression including the DIDWELL, CONDITION, and ATTRIBUTION variables previously considered (p= 0.033 and p= 0.064, respectively, one-tailed).18 As discussed in our test of H2, as expected, the coefficient on DIDWELL is again negative and significant in our test of H3 (p= 0.004).
Our last hypothesis is that, regardless of its source, participants who are more confident that second-round performance will exceed first-round performance should be more likely to commit to improving their performance, which alters their payout function. Panel E of table 2 shows that this hypothesis is strongly supported, as the coefficient on CONFIDENCE is positive and significant (p= 0.001, one-tailed). In our test of H4, we omit variables included in our tests of H1 through H3. We do this because these variables represent earlier steps in our causal path (figure 1), and their effect on the decision to commit to improving should operate through their influence on CONFIDENCE. Nevertheless, when we include CONDITION, DIDWELL, ATTRIBUTION, MISCALIBRATION, and LOT-R in a regression of COMMIT on CONFIDENCE, the coefficient on CONFIDENCE is significant (p= 0.004, one-tailed, untabulated), whereas the coefficients on all other variables remain insignificant (p-value = 0.0913 for ATTRIBUTION, all other p-values >0.10, untabulated).
Combined, the results presented in table 2 confirm that self-serving attribution in our task, and overconfidence stemming from stable traits, both increase the likelihood that participants commit to improving future performance, through their effect on confidence in future performance (Kenny, Kashy, and Bolger, p. 260). Consistent with the idea that those who choose to commit are more likely to be overoptimistic, we also find that they are not significantly more likely to actually improve their performance in the second round, as compared to those who choose not to commit to improving (p= 0.126, one-tailed, untabulated). Furthermore, when we compare the forecasted performance to actual second-round performance, we find that those who choose to commit have significantly more optimistic forecast errors than those who do not commit (p= 0.059, one-tailed, untabulated).
3.4.2. Additional Analyses.
The literature on self-serving attribution bias suggests that positive (negative) performance will be associated with internal (external) attributions, although our study is focused primarily on internal attributions for positive performance. To further support that our measured variable ATTRIBUTION is a proxy for self-serving attribution, we confirm that DIDWELL and ATTRIBUTION are strongly positively correlated (ρ= 0.324, p= 0.013, untabulated). We also conduct additional tests after removing participants whose responses do not match our definition of self-serving attribution (i.e., those who appear to provide internal attribution for poor performance or external attribution for good performance). We separately test H1 through H4 after: (1) removing one participant who provides a rating of his own performance in the bottom 10th percentile but an internal attribution in the top 10th percentile, and (2) removing three participants who provide ratings of their own performance in the bottom 25th percentile but an internal attribution in the top 25th percentile. Given that these participants’ responses are inconsistent with our definition of self-serving attribution and work against finding our predicted results, as expected, their exclusion strengthens the significance of our findings.
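The pairwise associations reported here are ordinary correlation coefficients. For concreteness, a minimal Pearson correlation looks like the following (our illustration; the paper does not state whether its ρ is Pearson or Spearman, so treating it as Pearson is an assumption):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences of observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)
```

The coefficient ranges from -1 (perfect negative association) to +1 (perfect positive association), with the reported ρ = 0.324 indicating a moderate positive association between DIDWELL and ATTRIBUTION.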
We also examine whether individuals’ beliefs about performance (DIDWELL) or actual performance is the stronger driver of ATTRIBUTION. We find that participants’ beliefs about how well they did (DIDWELL) are significantly positively correlated with their actual scores in the first round of the trivia task (ρ= 0.688, p < 0.001, untabulated). However, further supporting the idea that individuals’ beliefs are what drive their attributions, we do not find that actual first-round scores are significantly positively correlated with our ATTRIBUTION measure (p= 0.186, untabulated). Thus, it appears that actual performance must increase participants’ beliefs that they did well for them to explain their good performance with a self-serving internal attribution. Although we do not find that actual first-round scores are significantly positively correlated with the ATTRIBUTION measure in our experiment, we believe that the link between actual firm performance and attribution bias is likely to be stronger in the real world. In our setting, the meaning of a given raw score is relatively ambiguous and can be interpreted idiosyncratically by different participants. For instance, some may believe a raw score of 17 indicates that they did very well, while others may be disappointed by the very same score. Firm managers in the real world have a much clearer understanding of what a given level of performance means (e.g., relative to analysts’ expectations or their own internal expectations). As a result, a given level of firm performance in a real-world setting is likely to be more strongly associated with both beliefs about doing well and self-serving attributions than in our relatively ambiguous experimental setting.
Finally, we conduct two untabulated analyses to help rule out that our DIDWELL measure is simply capturing participants’ high opinions of themselves, independent of actual performance. We find that our DIDWELL measure is not significantly correlated with either our MISCALIBRATION measure of stable individual overconfidence (p= 0.866) or our LOT-R measure of dispositional optimism (p= 0.599). Similarly, we find that our ATTRIBUTION measure is not significantly correlated with either our MISCALIBRATION measure of stable individual overconfidence (p= 0.781) or our LOT-R measure of dispositional optimism (p= 0.338). Combined, we believe these results support our claim that actual performance can lead to learned overconfidence, and that the overconfidence associated with our measure of self-serving attribution stems from our experimental manipulations.
4. Survey of Experienced Financial Managers
Participants in our survey are 109 experienced financial and accounting managers enrolled in a professional training course held by an international consulting firm. On average, experienced manager participants have 14.71 years of work experience. In terms of job title, 76 participants classify themselves as financial managers, while 33 participants classify themselves as accounting managers. On average, experienced manager participants report that approximately 41% of their job responsibilities involve some aspect of financial reporting, and approximately 11% of their job responsibilities involve some aspect of developing earnings forecasts. Forty-three percent indicate that they have direct involvement, and 57% indicate that they have indirect involvement in choices, judgments, and estimates required to prepare financial statements. Twenty-five percent indicate that they have indirect involvement in preparing projections of future results and explanations for past performance for analysts and investors.19 The experienced manager participants work in a broad set of industries, including wholesale trade, retail trade, services, and finance, insurance, and real estate. Relative to other companies in their industry, 57% of participants classify their company as “very large”; 23% classify their company as “large”; and 10%, 8%, and 2% classify their company as “medium,”“small,” and “very small,” respectively. Approximately 78% of participants are male.20
The financial and accounting manager participants are recruited from a professional training course on new and revised securitization and accounting rules and laws. The survey takes place in seven sessions, each held in a different major city across the United States. Before each session begins, a facilitator asks attendees whether they would volunteer to complete a survey questionnaire. All participants agreed and responded to the survey. The survey begins by asking participants to respond to an initial set of questions aimed at capturing the causal path described in figure 1. These questions are couched in terms of their experiences with other managers; we do not ask directly about their own behavior, in order to alleviate self-presentation concerns and social-desirability bias in responses (Sapsford). Our next set of questions asks for their beliefs about market reactions to earnings forecasts, to verify that the assumptions underlying the design of our experiment are representative of experienced financial managers’ beliefs.
4.3 Questions and Results
Table 3 reports responses to the survey questions aimed at testing whether experienced financial managers’ beliefs are consistent with the causal path described in figure 1. These responses are elicited with 9-point Likert-type scales.21 We ask participants to indicate whether they believe that other managers, in their own minds, generally underestimate or overestimate the extent to which they contributed to the improved performance of the firm, and also to indicate whether they believe other managers are generally underconfident or overconfident. As expected, our experienced manager participants strongly agree that managers generally overestimate their contribution to performance (mean = 7.835, where 1 = “generally underestimate,” 9 = “generally overestimate,” p < 0.001). Our experienced manager participants also agree that other managers are generally overconfident (mean = 8.211, where 1 = “generally underconfident,” 9 = “generally overconfident,” p < 0.001). We next ask participants to indicate whether they believe managers are more or less likely to issue public earnings forecasts when they are either more optimistic about future earnings or more confident in their ability to predict future earnings. They indicate that forecasts are much more likely to be issued when managers are either more optimistic about future earnings (mean = 8.495, p < 0.001) or more confident in their ability to predict the firm's future (mean = 8.422, p < 0.001, where 1 = “much less likely” and 9 = “much more likely”). Combined, these responses support the ideas represented in figure 1: managers overestimate the extent to which they contribute to firm performance when the economy is doing well, and the associated optimism and confidence in predicting future performance increase the likelihood that managers issue earnings forecasts.
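The significance tests here compare each mean response to the neutral midpoint of 5 with a one-sample t-test. A minimal version of such a test (our illustration, using the usual n-1 sample-variance formula) is:

```python
import math

def t_vs_neutral(responses, neutral=5.0):
    """One-sample t statistic for whether the mean response on a
    9-point scale differs from the neutral midpoint."""
    n = len(responses)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                                  # standard error
    return (mean - neutral) / se
```

With 109 respondents and means above 7.8, the resulting t-statistics are far out in the tail, which is why every reported p-value is below 0.001.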
Table 3. Results from Survey of Experienced Managers; Managers’ Disclosure Decisions
Scale Value (1)
Scale Value (9)
This table presents responses from 109 experienced managers who participated in our survey. Respondents indicated their beliefs in response to a variety of statements regarding specific circumstances surrounding earnings guidance. A response value of 5 indicates a neutral response. The t-statistics marked with a * indicate a p-value of <0.001, two-tailed, when testing whether the mean response is significantly different from a neutral value of 5.
When the economy is doing well, it is easier for firms to report improved earnings. Do you believe that other managers, in their own minds, generally overestimate or underestimate the extent to which they contributed to the improved performance?
On average, do you believe that other managers are generally overconfident or underconfident in their managerial abilities?
When other managers are generally more optimistic about future earnings, do you believe that it makes them more or less likely to begin to issue public earnings guidance?
Much less likely.
Much more likely.
When other managers are generally more confident in their ability to predict their firm's future, do you believe that it makes them more or less likely to begin to issue public earnings guidance?
Much less likely.
Much more likely.
To capture their beliefs about market reactions to earnings forecasts, we present experienced manager participants with two sets of scenarios. For each scenario, experienced manager participants indicate which of two firms they believe will have the higher stock price following the actual earnings announcement. Table 4 presents results from our two scenario-based questions. In the first scenario, Company A forecasts earnings-per-share (EPS) of $0.48, Company B provides no forecast, and both companies announce actual earnings of $0.46 (i.e., only Company A falls short of its own forecasted EPS). A total of 95.4% of experienced manager participants believe that Company B will have the higher valuation after the actual earnings announcement, while only 4.6% believe that Company A will have the higher valuation. They are significantly more likely than chance to indicate that Company B will have the higher valuation (p-value < 0.001). In the second scenario, Company A provides a forecast of $0.44, Company B provides no forecast, and both companies announce actual earnings of $0.46 (i.e., only Company A exceeds its own forecasted EPS). A total of 97.2% of experienced manager participants believe Company A will have the higher valuation after the actual earnings announcement, while only 2.8% believe that Company B will have the higher valuation. For this scenario, they are significantly more likely than chance to indicate that Company A will have the higher valuation (p-value <0.001).22
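The “more likely than chance” comparisons are binomial tests of an observed split against 50%. An exact two-sided version, doubling the smaller tail (one common convention; the paper does not state which test it uses), can be written as:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value against chance (p = 0.5),
    computed by doubling the probability of the smaller tail."""
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) * p ** n
    return min(1.0, 2 * tail)
```

With splits as lopsided as 5 versus 104 of 109 respondents, the resulting p-values are effectively zero, consistent with the reported p < 0.001.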
Table 4. Results from Survey of Experienced Managers; Scenario Questions
% Choosing Company A
% Choosing Company B
This table presents responses from 109 experienced managers who participated in our survey. Respondents read two different scenarios and, for each scenario, indicated which of two companies they expected to have a higher valuation following the actual earnings announcement. Values marked with a * indicate a p-value of <0.001, two-tailed, for the test that the proportion of participants choosing each company is significantly different from chance (i.e., 50%).
Imagine that two companies, A and B, are identical in all ways except the following:
• Company A forecasted earnings per share of $.48 and announced actual earnings of $.46.
• Company B did not forecast earnings per share and announced actual earnings of $.46.
Which company do you believe will have a higher current stock price after the actual earnings announcement?
Imagine that two companies, A and B, are identical in all ways except the following:
• Company A forecasted earnings per share of $.44 and announced actual earnings of $.46.
• Company B did not forecast earnings per share and announced actual earnings of $.46.
Which company do you believe will have a higher current stock price after the actual earnings announcement?
Table 5 presents responses to additional questions capturing beliefs about market reactions to managers’ earnings guidance decisions. We ask our experienced manager participants to what extent they agree with the statement that issuing forecasts makes reactions to both positive and negative news more extreme. Consistent with our expectations, they strongly agree that issuing earnings forecasts makes reactions to both positive and negative earnings news more extreme (mean = 8.000, where 1 = “strongly disagree,” 9 = “strongly agree,” p < 0.001). We also ask them to what extent they agree with the statements that issuing forecasts sets a disclosure precedent, which produces a larger negative effect when: (1) earnings fall below those forecasts or (2) earnings forecasts cease. Our experienced manager participants agree that issuing earnings forecasts sets a disclosure precedent that will lead to a larger negative reaction when actual reported earnings fall below a previously issued forecast (mean = 8.349, where 1 = “strongly disagree,” 9 = “strongly agree,” p < 0.001). However, counter to our expectations and to arguments in prior literature, they slightly disagree that issuing forecasts sets a disclosure precedent that will lead to a larger negative market reaction when firms cease to issue earnings forecasts (mean = 4.524, where 1 = “strongly disagree,” 9 = “strongly agree,” p < 0.001). This result may be due to the fact that our question does not provide a context for stopping earnings forecasts; a better question might have asked whether they agree that the market will view ceasing forecasts as a signal of bad earnings news. Taken together, these results support the assumption in our experiment that the issuance of forecasts amplifies the benefits and costs of positive and negative outcomes.
Table 5. Results from Survey of Experienced Managers; Market Reactions to Managers’ Disclosure Decisions
Scale Value (1)
Scale Value (9)
This table presents responses from 109 experienced managers who participated in our survey. Respondents indicated their agreement with a variety of statements regarding market reactions to specific circumstances surrounding earnings guidance. A response value of 5 indicates a neutral response. The t-statistics marked with a * indicate a p-value of <0.001, two-tailed, when testing whether the mean response is significantly different from a neutral value of 5.
Please indicate the extent to which you agree with the following statement: “Issuing earnings guidance sets a disclosure precedent which produces a large negative effect when guidance ceases.”
Please indicate the extent to which you agree with the following statement: “Issuing earnings guidance sets a disclosure precedent which produces a larger negative effect when actual reported earnings are below the previously issued guidance.”
Please indicate the extent to which you agree with the following statement: “Issuing earnings guidance makes reactions to both positive and negative earnings news more extreme.”
One question that arises is why managers would be more likely to issue forecasts when they are overconfident, given that our survey evidence suggests they are clearly aware that the bias exists. Research in psychology suggests that individuals often exhibit a “bias blind spot,” whereby they believe themselves to be less susceptible to cognitive and motivational biases than others (Pronin, Lin, and Ross; Pronin, Gilovich, and Ross). In our setting, managers believe that others are likely to be overconfident, but may not believe that they personally exhibit the tendency.
Prior research has established that managers may provide earnings forecasts for a variety of reasons. Through the use of an abstract experiment and a survey of experienced financial managers, this paper provides evidence that at least one additional factor in the decision to provide forecasts may be the managerial overconfidence that results when positive past performance leads managers to engage in self-serving attribution to explain their positive performance. In turn, this self-serving attribution can lead managers to be too confident that future performance will improve. While managerial overconfidence might be influenced by environmental factors, it can similarly be a stable individual trait. As a result, our experiment uses established psychometric measures to explore two facets of stable individual overconfidence, miscalibration and dispositional optimism, and shows that both are associated with confidence in improved performance. In our experiment, we also find that confidence in improved future performance increases the likelihood that participants commit to improving, regardless of whether this confidence stems from self-serving attribution in our task or from stable traits.
Our use of an abstract experiment allows us to isolate our variables of interest while controlling for firm and environmental characteristics that are unrelated to our research questions. More importantly, an experiment allows us to capture self-serving attributions and psychometric measures of stable individual overconfidence that are not available in archival data. However, to confirm that the results of our abstract experiment are likely to generalize to real-world earnings forecast decisions, we also conduct a survey of experienced financial managers.
Responses to our survey confirm that our results from the abstract experiment align with experienced financial and accounting managers’ beliefs about how real-world earnings forecast decisions are made. Experienced managers who participate in our survey believe that other managers overestimate the extent to which they contribute to strong firm performance, that other managers are overconfident, and that both optimism about firm performance and confidence in their ability to predict future firm performance may also contribute to provision of forecasts.
Altogether, our results suggest that managers may provide forecasts at a higher rate when a firm is performing well, regardless of whether the performance is driven by internal factors or external factors such as strong macroeconomic or industry performance. Furthermore, our results support the idea that the decision to provide earnings forecasts as a result of overconfidence is potentially costly. Experienced manager participants in our survey believe that forecasts exacerbate market reactions to both positive and negative earnings news, and that providing an earnings forecast sets a disclosure precedent that leads to more negative market reactions when actual earnings fall below forecasts. Future research could investigate methods of reducing this potentially costly behavior among managers. Kadous, Krische, and Sedor find that generating counterexplanations for management's performance expectations can reduce forecast optimism among analysts. This suggests that managers’ overconfidence may also be reduced if explicit consideration of counterexplanations reduces self-serving attributions for firm performance.
Our findings contribute to the broader literature on overconfidence in managerial decision-making, where evidence of managerial overconfidence is widespread. Some have argued that managerial overconfidence is beneficial in that it reduces agency costs by reducing conservatism and underinvestment. Larkin and Leider show that nonlinear compensation schemes are potentially useful for attracting overconfident employees; the prevalence of such schemes thus suggests that overconfidence is a desirable trait in managers (otherwise the schemes would not be so common). Others have argued that managerial overconfidence is detrimental, in that it leads managers to believe their firms are undervalued, such that they prefer internal over external financing sources (see Baker, Ruback, and Wurgler for arguments on both sides of the debate). Although managerial overconfidence is widespread, little is known about whether it is learned from prior success or whether it is a more stable behavioral trait (Billett and Qian). Our study not only investigates a specific setting where managerial overconfidence may be detrimental, but also suggests that overconfidence can stem both from environmental sources and from stable individual traits.
Our findings also have potentially important implications for managers, market participants, and regulators. If managers are more likely to provide earnings forecasts during periods of overconfidence, they are also particularly likely to miss their own forecasts during these periods, consistent with Hribar and Yang. At best, this may result in a more negative market reaction than they would otherwise have experienced had they not issued a forecast. At worst, managers may engage in earnings manipulations to meet the expectations set by their own forecasts, particularly if overconfidence makes them more likely to believe that they can cover their actions by delivering improved future performance (Schrand and Zechman).
Finally, our findings also suggest that overconfidence induced by attribution bias, as well as by individual traits, may explain part of the overall leveraging up of the real economy. Consistent with this idea, Ben-David, Graham, and Harvey find that managerial overconfidence is associated with greater use of leverage in a firm. Like increases in financial leverage and decisions to provide public forecasts, the choice to commit in our experiment amplifies both the benefits of positive outcomes and the negative consequences of negative outcomes. Combined, these results suggest that overconfidence induced during the good years of the financial bubble may have contributed to the excessive leverage taken on by households, businesses, universities, and other institutions during that period. Future research might investigate this possibility and examine how strong macroeconomic or industry conditions affect investors' attributions for the performance of their portfolios, their confidence in future investment performance, and their actual future investment decisions.
Sample Questions from the Trivia Task, by Level of Difficulty
Easy: What are the smallest particles of a computerized image known as?

Medium: If your new best friend is a limnologist, what does he study?
Answer: Lakes and Rivers

Hard: Found in the constellation Draco, what was the North Star 5,000 years ago, and will be again 21,000 years from now?
For each of the following questions, provide a low and high estimate such that you are 90 percent certain that the correct answer will fall within these limits. You should aim to have 90 percent hits and 10 percent misses. In other words, you should expect that the interval you provide contains the correct answer in 9 out of the 10 questions. Pay close attention to the units! For example, distinguish between millions and billions.

1) What is the average weight of the adult blue whale, in pounds? (Answer: 250,000 lbs.)
2) In what year was the Mona Lisa painted by Leonardo da Vinci?
3) How many independent countries were there at the end of the year 2000?
4) How many bones are in the human body? (Answer: 206 bones)
5) How many total combatants were killed in World War I?
6) What is the air distance, in miles, between Paris, France and Sydney, Australia?
7) How many books were in the Library of Congress at the end of the year 2000?
8) How long, in miles, is the Amazon river? (Answer: 4,000 miles)
9) How fast does the earth spin (in miles per hour) at the equator? (Answer: 1,044 miles per hour)
10) How many transistors are in the Pentium III computer processor?

The questions above allow us to capture a relatively stable measure of individual miscalibration. Because the task asks managers to provide a 90% confidence interval for each question, a well-calibrated respondent should expect roughly one interval (out of 10) to miss the true answer. As miscalibration (overconfidence) increases, individuals provide more intervals that are too narrow (i.e., that do not include the true answer).
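The miscalibration score described above simply counts how many of the ten 90% confidence intervals fail to contain the true answer. A minimal sketch, using hypothetical interval responses for two of the items whose answers are given above:

```python
def miscalibration_score(intervals, answers):
    """intervals: list of (low, high) pairs; answers: list of true values.
    Returns the number of intervals that miss the true answer.
    A well-calibrated respondent should miss about 1 of 10."""
    return sum(1 for (lo, hi), truth in zip(intervals, answers)
               if not (lo <= truth <= hi))

# Hypothetical intervals for two items from the task above:
intervals = [(150_000, 300_000),  # blue whale weight: contains 250,000
             (180, 200)]          # bones in the body: misses 206
answers = [250_000, 206]
print(miscalibration_score(intervals, answers))  # -> 1
```

Higher scores indicate intervals that are systematically too narrow, i.e., greater overconfidence.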
LOT-R Optimism Measure
In response to the following 10 questions, please be as honest and accurate as you can throughout. Try not to let your response to one statement influence your responses to other statements. There are no “correct” or “incorrect” answers. Answer according to your own feelings rather than how you think “most people” would answer. Enter the number corresponding to your answer in the blanks, where

1 = I agree a lot
2 = I agree a little
3 = I neither agree nor disagree
4 = I disagree a little
5 = I disagree a lot

1) In uncertain times, I usually expect the best ____
2) It's easy for me to relax ____
3) If something can go wrong for me, it will ____
4) I’m always optimistic about my future ____
5) I enjoy my friends a lot ____
6) It's important for me to keep busy ____
7) I hardly ever expect things to go my way ____
8) I don't get upset too easily ____
9) I rarely count on good things happening to me ____
10) Overall, I expect more good things to happen to me than bad ____

The task above is Scheier, Carver, and Bridges's LOT-R test of generalized dispositional optimism. Items 2, 5, 6, and 8 are fillers. Responses to “scored” items are coded with a number from 0 to 4 so that high values imply optimism. Thus, the highest possible score on this optimism measure is 24.
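The scoring rule described above (six scored items, each recoded to 0 to 4 so that high values imply optimism, for a maximum of 24) can be sketched as follows. The item keying assumed here follows the standard LOT-R convention: items 1, 4, and 10 are positively worded, items 3, 7, and 9 are negatively worded and reverse-scored, and items 2, 5, 6, and 8 are fillers.

```python
# Assumed standard LOT-R keying; responses use the 1-5 scale shown above
# (1 = I agree a lot ... 5 = I disagree a lot).
POSITIVE = {1, 4, 10}   # agreement indicates optimism
NEGATIVE = {3, 7, 9}    # disagreement indicates optimism (reverse-scored)
FILLERS = {2, 5, 6, 8}  # not scored

def lot_r_score(responses):
    """responses: dict mapping item number (1-10) to a 1-5 response.
    Returns the total optimism score (0-24)."""
    total = 0
    for item, r in responses.items():
        if item in FILLERS:
            continue
        if item in POSITIVE:
            total += 5 - r   # 1 (agree a lot) -> 4 points
        else:
            total += r - 1   # 5 (disagree a lot) -> 4 points
    return total

# A maximally optimistic respondent scores 24:
answers = {1: 1, 2: 3, 3: 5, 4: 1, 5: 3, 6: 3, 7: 5, 8: 3, 9: 5, 10: 1}
print(lot_r_score(answers))  # -> 24
```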
See Koonce, Seybert, and Smith  for a review of biases in causal reasoning that could affect both diagnosis of past outcomes and prediction of future outcomes in voluntary disclosure decisions.
This idea is further supported by experienced manager participants in our follow-up survey, who strongly agree with this claim.
An alternative would be to ask participants to provide their own forecast for second-round performance, and then base their pay on whether the private forecast is met. However, tying compensation to a private forecast would be more consistent with a study focused on forecasting decisions with a strategic meet-or-beat component, which is not our intention.
It is common in the psychology literature to test the effect of or control for variables related to individuals’ knowledge, attitudes, traits, or abilities. In recent accounting studies, Seybert  finds that high self-monitors are more likely to overinvest in projects that they initiated than low self-monitors, and Luft and Shields  offer an excellent discussion of prior work showing how individual characteristics affect managers’ consideration of relevant costs in resource allocation decisions.
For example, Thompson, Armstrong, and Thomas  review the Illusion of Control literature and describe how it has been shown to vary in different settings (e.g., with mood, task familiarity, task involvement), suggesting that it may be context-specific. Similarly, Moore  reviews the Better-Than-Average literature and discusses the fact that the effects are generally isolated to common behaviors and activities.
While it would be possible to use only easy and hard questions, the addition of medium-difficulty questions captures some of the ambiguity that managers face in their real-world forecasting environment (see appendix A for sample questions from each level of difficulty). Increased ambiguity about their environment gives managers more flexibility in terms of how they interpret the difficulty of a task and more opportunity to exhibit both misestimation of task difficulty as well as the self-serving attribution bias that we argue drives confidence in future performance.
Rather than manipulating task difficulty, Hales and Kachelmeier  use a sports trivia task to show that widespread variation in performance can simultaneously cause both better-than-average and worse-than-average bias among participants within a task.
To choose appropriate questions for the experiment, we first compile a bank of 180 potential questions and then ask four independent raters to make their best determination as to whether each question is “easy,” “medium,” or “hard.” These raters are able to view all trivia questions simultaneously. Questions for which a majority of raters cannot agree on the difficulty are discarded, and the final set of questions is randomly selected from those that remain.
Untabulated analyses confirm that there were no significant differences in either task performance or self-serving attributions among participants of differing nationality or gender.
We do not give participants any instructions on how to interpret the labels “easy,” “medium,” and “hard.” However, they are aware that there are three distinct difficulty levels in the task, and that they should therefore be able to make relative assessments about the questions they are answering. Furthermore, they should be able to infer that most participants are likely to find “easy” questions to be easier to answer than “hard” questions.
All dollar amounts described here are denoted in laboratory dollars unless stated otherwise. Laboratory dollar earnings are converted to U.S. dollar winnings upon completion of the experiment. Participants do not know in advance the exchange rate between the two currencies but do know that earning more laboratory dollars will always lead to a greater U.S. dollar payout. The conversion rate was set separately for the low-difficulty and high-difficulty conditions so that average compensation would be roughly equal in both conditions (approximately US$15), otherwise those randomly assigned to the low-difficulty condition would be at an advantage in terms of experimental earnings as a result of their relatively easier task.
Specifically, we give them the opportunity to win an additional bonus for accurately forecasting their second-round performance. The bonus for forecasting accuracy is $10 − ($1 × |forecast − actual performance|). In other words, the bonus is $10 for a completely accurate forecast (i.e., when forecast − actual performance = 0) and drops to zero if the forecast over- or underestimates actual performance by 10 or more questions. Note that participants are always better off accurately forecasting their second-round performance than underestimating it (i.e., issuing a “beatable” forecast). This fact is emphasized on the computer screen when participants make their private forecast. Once a given forecast has been provided, they are always better off doing as well as they can.
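The payoff rule in this footnote (a $10 bonus reduced by $1 per question of forecast error, reaching zero at an error of 10 or more questions) can be written as a simple function:

```python
def forecast_bonus(forecast, actual):
    """Accuracy bonus in laboratory dollars: $10 minus $1 per question
    of forecast error, floored at zero per the rule above."""
    return max(0, 10 - abs(forecast - actual))

print(forecast_bonus(20, 20))  # -> 10  (perfectly accurate)
print(forecast_bonus(18, 25))  # -> 3   (off by 7 questions)
print(forecast_bonus(30, 15))  # -> 0   (off by 15; bonus exhausted)
```

Because the bonus depends only on absolute error, a participant who has already forecast can do no better than to maximize actual performance, consistent with the incentive described above.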
As noted earlier, our payout function does not capture other incentives for initiation of forecasts that have been discussed in the literature. It is important to emphasize that we make the compensation associated with committing contingent on performing better in the second round than in the first round, rather than performing at or better than the level of the truthful private forecast of performance. Tying compensation to whether actual second round performance exceeds a truthful private forecast would be more consistent with a study focused on forecasting decisions with a strategic meet-or-beat component, which is not our intention.
We also ask participants to complete an Attributional Style Questionnaire (ASQ) to assess “habitual tendencies in the attribution of causes” (Peterson et al. ). The ASQ asks individuals to consider both good and bad hypothetical events, provide a potential major cause for the event, and then rate the cause along attributional dimensions. Collapsed ratings allow us to develop measures of composite style for attributing the drivers of both good and bad events. We use this measure of attributional style to confirm that responses in our experiment are driven by our manipulations, rather than global attribution tendencies. Results show that responses on the ASQ are not associated with self-serving attribution in our experiment, confidence in future performance, or with commitment decisions. Thus, we do not discuss this measure any further in the paper.
See the “Additional Analysis” section below for more discussion on the validity of our measure for self-serving attribution.
Note that CONDITION is highly significant in both CONFIDENCE regressions (as seen in panels B and D of table 2). This is unsurprising if we believe that, after the first round of trivia, participants are likely to assume some “average” mix of question difficulty (e.g., 8 easy/9 medium/8 hard). When they hear that the second round is going to be 15 easy/5 medium/5 hard (5 easy/5 medium/15 hard) in the low-difficulty (high-difficulty) condition, this is likely to have a direct and positive (negative) impact on their expectations for future performance. In the real world, the effect of the ease of operating environment on future confidence may be weaker since managers do not typically receive such explicit information on the difficulty of the operating environment that they face.
Responses to our survey questions do not differ between those who indicate no involvement and those who indicate direct or indirect involvement in preparing projections of future performance and explanations for past performance for analysts and investors.
Two demographic characteristics (industry membership and involvement in the preparation of financial statements) are significantly correlated with responses to one of our survey questions. We discuss these associations in our results section, but they do not change the substance of our conclusions.
For the survey questions with 9-point Likert-type scale responses, we conduct t-tests to determine whether the mean response to each question is significantly different from the mid-point of “5.”
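The test described in this footnote is a one-sample t-test of the mean response against the scale midpoint of 5. A minimal sketch of the test statistic, using a hypothetical set of Likert responses (the data below are illustrative, not from the survey):

```python
import math
import statistics

def t_vs_midpoint(responses, midpoint=5):
    """One-sample t statistic testing whether the mean Likert response
    differs from the scale midpoint."""
    n = len(responses)
    se = statistics.stdev(responses) / math.sqrt(n)
    return (statistics.fmean(responses) - midpoint) / se

# Hypothetical 9-point Likert responses leaning above the midpoint:
print(round(t_vs_midpoint([4, 6, 6, 8]), 3))  # -> 1.225
```

The resulting statistic would be compared against a t distribution with n − 1 degrees of freedom to obtain the significance level.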
This is the only question where managers’ responses were significantly correlated with any demographic variables. Specifically, managers were relatively more likely to indicate that Company B would have the higher valuation if they either: (1) worked in the wholesale trade industry or (2) had direct rather than indirect involvement in the preparation of financial statements. However, within each of these subgroups, managers were still significantly more likely to respond that Company A would have the higher valuation than Company B.