### Abstract


Contests over the scope and strength of regulation and governance are commonplace – and commonly repeated. The same players vie for the same government prize year after year: for example, environmental standards, government contracts, research grants, and public good provision. The open question is whether more rents are dissipated in repeated regulatory contests than in one-time competitions. This question matters for regulation and governance because societies should design policies to waste the fewest scarce resources. According to some, the answer is “no”, but others say “yes” – more resources are wasted when people compete repeatedly for the same government prize. Herein, we use two game theoretic equilibrium concepts to help untangle the answer. Our results suggest non-myopic contestants are more likely to behave as partners than rivals – provided the context is relatively sterile. Several common complications help break up the tacit partnership, including a disparity in relative ability, a shrinking prize, and additional players.

### 1. Introduction


Political influence activities and other processes of regulation and governance span a wide range of situations that social scientists have modeled as contests. One example is campaign expenditures, in which rival candidates for political office expend valuable resources to compete for votes (Congleton 1986; Konrad 2007). Another example is lobbying by corporations, trade groups, or consumer groups for particular legislation or regulatory treatment (Tullock 1967; Krueger 1974; Posner 1975; Nitzan 1994; Konrad 2007). Here the rivalry can be between a constituent and the government, or between constituents. Contest models have also been applied to governmental regime changes (Schnytzer 1994). Rival firms compete for government contracts in the defense industry, academic researchers develop competing proposals for government research funding, and contractors compete to be selected for public works projects. A government, if not benign, can also compete with its citizens to appropriate their output (Konrad 2002; Dixit 2007).

Many of these activities are ongoing or recurrent, driven by periodic elections, annual government budgets, regularly scheduled legislative sessions, or other exogenous cycles. In addition, the value of the “prize” in such contests will change systematically over time as, for example, the aggregate economy exhibits trend growth, new technology creates an enhanced prize or renders some industry obsolete, or a non-renewable resource is depleted. The wide range of these contests and the substantial aggregate resources affected by them make this aspect of regulation and governance an important issue to study. Even applications that are relatively minor at the individual level can be cumulatively significant if sufficiently numerous (Hahn & Cecot 2007).

While the potential importance of repeated contests for issues of governance has been acknowledged,^{1} insight has been limited and mixed regarding how these dynamic interactions affect economic efficiency, as measured by individual and aggregate effort expended relative to the size of the reward people are competing over – defined as *rent dissipation*: how much of the rent do people waste fighting over who gets it? The open question is whether repeated contests waste more resources than one-time competition. Tullock (1988), for instance, conjectured that repeated-play contests might induce more wasteful effort than their static counterparts, based on notions of entry deterrence and incentives for cost-reducing innovation. He did not, however, explicitly examine the direct impact of repeated play on contestants’ decisions.^{2}

This paper reconsiders behavior in a repeated contest, and discusses the efficiency consequences on regulation and governance.^{3} We define and compare how much effort is expended by players under two key equilibrium concepts, Friedman’s (1971) classic trigger strategy and the Rotemberg and Saloner (1986) equilibrium. These equilibrium concepts, while technical, provide insight into strategic interactions and governance, allowing us to capture the efficiency consequences of moving from a utopian baseline of no wasteful competition for government rents to the more realistic cases in which period games are linked by a shrinking or expanding prize, and players have a continuous response to the threat of punishment. In some applications, effort is productive rather than wasteful, a detail not addressed in previous contest studies.

We begin with a baseline case, a repeated contest over a fixed prize with no wasteful competition, an outcome consistent with the folk theorem and Linster (1994). Within this benchmark model, we find that greater productivity of effort reinforces this cooperation. Also, a more able player will cooperate under a wider range of parameter values than his opponent. Adding more players decreases the likelihood that cooperation will be sustainable.

Repeated contests, however, often involve a prize that is growing, shrinking, or exhibiting some other trend or cyclical behavior over time.^{4} We show that an expanding prize encourages tacit cooperation. A shrinking prize undermines this cooperation – which lends support for Tullock’s original conjecture of more rent-seeking, and contrasts with prior studies of repeated contests with fixed prizes.

In the more sophisticated Rotemberg–Saloner equilibrium, aggregate effort increases with more players, eventually approaching complete dissipation of the prize. But changes in the discount rate and the number of players can either increase or decrease individual effort, depending on parameter values. These results, interpreted within applications to regulation and governance, enrich our understanding of challenges to better, fairer, and more efficient systems of governance in the pervasive case of ongoing or recurrent interactions between lawmakers and interest groups, politicians and voters, regulators and regulated entities, or multiple regulators. While the central analysis is explanatory, we conclude with a normative discussion of mechanisms by which benevolent governments might use our findings to improve contest outcomes.

Natural applications of our analysis to regulation and governance include several that are regulatory in the sense of steering events and behavior, while some are more broadly governance related in the sense of providing, distributing, or running for public office. Importantly, the regulatory power to enforce rules can be exercised with some discretion, creating scope for contest-like behavior by regulated entities. This short-run opportunity comes in addition to the long-run opportunity for regulated parties to influence future changes in the rules themselves.^{5}

The next section presents a benchmark contest with full cooperation, showing that greater productivity of effort makes cooperation more likely while unequal ability across players makes cooperation by the less able player less likely. Section 3 then shows that a growing prize makes cooperation more likely, while Section 4 shows that more players make cooperation less likely and have opposing effects on individual and aggregate effort when cooperation breaks down. Section 5 presents the more sophisticated Rotemberg–Saloner equilibrium, showing that varying discount rates and numbers of players have more complex effects on the outcome than in the simple trigger equilibrium, and Section 6 concludes.

### 2. A benchmark contest


We model an infinitely repeated contest between two players, 1 and 2, who expend effort, *x*_{1} and *x*_{2}, to earn a share of a divisible prize, *G*.^{6} Following Tullock (1980) and others, we assume a logit payoff function, which we interpret as a share function rather than the traditional probability of success function, though the results admit either interpretation.^{7} Let player 1’s share of the prize be

- (1) *p*(*x*_{1}; *x*_{2}, *α*, *γ*) = *αx*_{1}^{γ}/(*αx*_{1}^{γ} + *x*_{2}^{γ})

where *α* is the ability parameter reflecting the relative strength of the players, and *γ* parameterizes the productivity of effort. Without loss of generality, assume player 1 is the contest favorite – the person whose share of the prize exceeds one half at the static Nash equilibrium (*α* > 1) – and player 2 is the underdog (Dixit 1987). Although symmetric contests do exist – and we later explore some properties of symmetric contests – many contests arising in regulation and governance exhibit asymmetry in either productivity of the contestants, or underdog status of the contestants, or both. In political campaigns, conventional wisdom and historical experience suggest that an incumbent has a natural advantage over a challenger (i.e. the incumbent is the favorite). When a policy issue such as environmental regulation pits large corporate interests, such as logging companies or petroleum companies, against consumer groups that may be more locally limited and less organized, conventional wisdom may perceive the consumer groups as the underdog – inherently disadvantaged in their ability to provide long-term or decisive influence in the collective policy-making process – and such groups may also be less productive in their efforts due to inferior experience or organization. In some judicial settings such as antitrust litigation, this bias may be reversed in the case of a judge with a personal animus toward large corporations. In bidding for government procurement contracts, a new entrant may function as an underdog relative to established firms with a history of prior contracts. While the distinction between lower productivity and underdog status may seem murky in general discussions of such examples, the analytical model is able to quantify that distinction sharply.
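The static equilibrium of this contest can be illustrated numerically. The sketch below assumes the standard Tullock logit share *p*_{1} = *αx*_{1}^{γ}/(*αx*_{1}^{γ} + *x*_{2}^{γ}) and iterates best responses to the static Nash equilibrium; the parameter values and search routine are illustrative, not taken from the text.

```python
# Numerical check of the static Nash equilibrium of the two-player contest,
# assuming the standard Tullock logit share p1 = a*x1**g / (a*x1**g + x2**g)
# (a = ability alpha, g = productivity gamma; parameter values illustrative).

def share1(x1, x2, a, g):
    return a * x1**g / (a * x1**g + x2**g)

def best_response(payoff, lo=1e-9, hi=10.0, iters=200):
    """Golden-section search for the maximizer of a single-peaked payoff."""
    phi = (5 ** 0.5 - 1) / 2
    b, c = hi - phi * (hi - lo), lo + phi * (hi - lo)
    for _ in range(iters):
        if payoff(b) > payoff(c):
            hi, c = c, b
            b = hi - phi * (hi - lo)
        else:
            lo, b = b, c
            c = lo + phi * (hi - lo)
    return (lo + hi) / 2

G, a, g = 1.0, 2.0, 1.0        # prize, favorite's ability, productivity
x1 = x2 = 0.1                  # starting guess
for _ in range(100):           # iterate best responses to a fixed point
    x1 = best_response(lambda x: G * share1(x, x2, a, g) - x)
    x2 = best_response(lambda x: G * (1 - share1(x1, x, a, g)) - x)

x_star = a * g * G / (1 + a) ** 2   # closed-form symmetric effort
```

Both players' efforts converge to the same symmetric level even though the favorite claims the larger share.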

We assume *G* is the same for both players and is known with certainty by both. This assumption is common in previous contest models, including the dynamic analyses of Leininger and Yang (1994), Linster (1994), Itaya and Sano (2003), and others, and is realistic for some applications – especially those involving a pre-announced monetary payoff, such as some research grants. Other important contests involve prizes that are valued unequally by the various contestants, or prizes about which the contestants have incomplete and asymmetric information. Static analyses of information contests have been provided by Hurley and Shogren (1998a,b), Feess *et al.* (2002) for the all-pay auction application, Wärneryd (2003), and Fu (2006). Because of these prior studies, and because of our focus on repeated contests, we defer additional research on governance contests with asymmetric information to future work.

Net payoffs to players 1 and 2 are π_{1} = *G*p(*x*_{1}; *x*_{2}, *α*, *γ*) −*x*_{1} and π_{2} = *G*[1 − p(*x*_{2}; x_{1}, *α*, *γ*)] −*x*_{2}. If the players behave according to the static Nash equilibrium, standard calculations give the following symmetric effort levels (see for example Shogren & Baik 1992):

- (2) *x*_{1} = *x*_{2} = *αγG*/(1 + *α*)^{2}

Player 1 and 2’s net payoffs are *αG*(1 + *α* − *γ*)/(1 + *α*)^{2} and *G*(1 + *α* − *αγ*)/(1 + *α*)^{2}, respectively. Equation 2 is an increasing function of the exponent *γ*, so players devote more effort to rent-seeking when that effort is more productive.^{8} Similarly, Equation 2 is a decreasing function of *α* whenever *α* > 1, indicating that greater ability of the favorite (i.e. greater disparity between favorite and underdog) reduces both players’ effort.^{9} The Appendix derives the parameter restrictions implied by the second-order conditions.
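These two comparative statics can be verified directly from the closed form in Equation 2 (a minimal sketch; parameter values are illustrative):

```python
# Comparative statics of the static Nash effort x = alpha*gamma*G/(1+alpha)**2
# (a minimal sketch; parameter values are illustrative).

def nash_effort(G, alpha, gamma):
    return alpha * gamma * G / (1 + alpha) ** 2

G = 1.0
# Effort rises with the productivity of effort, gamma...
by_gamma = [nash_effort(G, 2.0, g) for g in (0.5, 1.0, 1.5)]
assert by_gamma == sorted(by_gamma)
# ...and falls as the favorite's ability advantage grows (alpha > 1).
by_alpha = [nash_effort(G, a, 1.0) for a in (1.0, 2.0, 4.0)]
assert by_alpha == sorted(by_alpha, reverse=True)
```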

Now consider the infinitely repeated contest in which the payoff in each period is discounted at the fixed, exogenous rate *r*. The discount rate, a standard feature of multiperiod decision models, reflects the empirical fact that people and institutions behave as though they accord less weight to consequences that occur in the distant future. Such behavior may reflect some combination of time preference (a payoff today is preferred to the same payoff tomorrow) and risk aversion (a delayed payoff might not materialize as expected). A positive discount rate is also a technical requirement to permit solutions to decision problems involving the infinitely distant future. As noted below, *r* may vary across players. In practice, the discount rate may vary over time as well, but we adopt here the standard assumption that *r* is fixed through time for analytical convenience, as also assumed in Cairns (1989), Linster (1994), Wirl (1994), Konrad (2001), Itaya and Sano (2003), and others.^{10} Relaxing this assumption would not qualitatively alter the results. The cooperative outcome is *x*_{1} = *x*_{2} = 0, leading to a payoff for players 1 and 2 in each period of *αG*/(1 + *α*) and *G*/(1 + *α*).^{11} The discounted present values of these payoffs for players 1 and 2 over the infinite horizon are:

- (3) PV_{1}(C) = [*αG*/(1 + *α*)](1 + *r*)/*r*

- (4) PV_{2}(C) = [*G*/(1 + *α*)](1 + *r*)/*r*

Initially, we use Friedman’s (1971) trigger strategy to characterize the contest, as in Linster (1994) – each forward-looking player plays the cooperative strategy unless some other player deviates, in which case all players revert to the static Nash strategy forever.^{12} The trigger equilibrium is appealing for analytical purposes because it is subgame perfect.

We apply the trigger concept by calculating the payoff over the infinite horizon to a deviation by either player. Starting from the cooperative outcome, either player receives the entire prize if he alone invests an infinitesimal amount of effort, ε. The present value of deviation for either player is the one-period payoff followed by a discounted infinite sequence of static Nash punishment phase payoffs:

- (5) PV_{1}(D) = *G* − ε + *αG*(1 + *α* − *γ*)/[(1 + *α*)^{2}*r*]

- (6) PV_{2}(D) = *G* − ε + *G*(1 + *α* − *αγ*)/[(1 + *α*)^{2}*r*]

Equations 3 and 5 imply that the favorite (player 1) cooperates if PV_{1}(C) ≥ PV_{1}(D), or if:

- (7) *r* ≤ *αγ*/(1 + *α*)

Similarly, the underdog (player 2) cooperates as long as PV_{2}(C) ≥ PV_{2}(D), or if:

- (8) *r* ≤ *γ*/(1 + *α*)

Equations 7 and 8 express the familiar result that, in a trigger strategy with the threat of perpetual Nash punishment, cooperative actions will be sustained as a non-cooperative equilibrium as long as the future is not too steeply discounted – a larger discount rate makes the future punishment phase relatively less severe without diminishing the immediate payoff to a one-period deviation.^{13} Table 1 summarizes the payoffs of these various strategies.

For *α* > 1, the threshold discount rate is smaller for the underdog (Eqn 8) than for the favorite (Eqn 7).^{14} This means that the underdog will defect first if discount rates rise from an initially low level. The intuition behind this result is that the single-period defection payoff is the same for whichever player defects unilaterally, but the subsequent loss (i.e. the difference between the cooperative outcome and the Nash outcome) is proportional to *α*. The underdog has less to lose by defecting. This result can be summarized as follows.

**Proposition 1. ***An underdog is more likely to defect because he has relatively less to lose for any given discount rate than the favorite.*

Moreover, the threshold discount rate is a decreasing function of *α* in Equation 8 but an increasing function of *α* in Equation 7. This means that greater disparity of ability between the favorite and the underdog will reduce the overall range of discount rates under which cooperation can be sustained. The right-hand sides of Equations 7 and 8 are also increasing functions of the productivity parameter, *γ*. With higher productivity, effort is greater in the static Nash equilibrium so the punishment phase of the trigger strategy is more severe. Greater productivity provides a more effective deterrent to defection, and cooperation is sustainable over a wider range of discount rates. Proposition 2 summarizes these benchmark results.

**Proposition 2. ***In the trigger strategy framework, as long as the future is not too steeply discounted, a favorite and an underdog expend no effort in the infinitely repeated contest. Greater productivity of effort expands the range of parameter values under which cooperation will be sustained, while greater disparity of ability shrinks the range of parameter values under which cooperation will be sustained.*

As long as both restrictions, Equations 7 and 8, are satisfied, the infinitely repeated contest can be played with no wasteful competition in the trigger strategy framework, unlike the static Nash outcome which entails some degree of wasteful competition. This result is consistent with the findings of Cairns (1989), Wirl (1994), and Linster (1994), and demonstrates that Tullock’s (1988) dynamic conjecture abstracts from significant offsetting factors. In addition, unlike these earlier studies, our result formally characterizes the impact of the productivity of effort and of players’ relative ability on the sustainability of cooperation.
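The trigger thresholds for favorite and underdog can be recovered numerically from the present-value comparisons above. The sketch assumes the payoffs stated in the text (cooperation pays *αG*/(1 + *α*) and *G*/(1 + *α*) per period; a deviator takes the whole prize once and then faces static Nash forever); the bisection routine and parameter values are illustrative.

```python
# Recovering the trigger-strategy cooperation thresholds by bisection.
# Cooperation pays alpha*G/(1+alpha) (favorite) or G/(1+alpha) (underdog)
# each period; a deviator takes the whole prize once, then both play the
# static Nash punishment forever. Parameter values are illustrative.

def cooperates(is_favorite, G, alpha, gamma, r):
    coop = alpha * G / (1 + alpha) if is_favorite else G / (1 + alpha)
    effort = alpha * gamma * G / (1 + alpha) ** 2      # static Nash effort
    nash = coop - effort                               # punishment-phase payoff
    pv_coop = coop * (1 + r) / r                       # payoffs from period 0 on
    pv_defect = G + nash / r                           # prize now, punishment later
    return pv_coop >= pv_defect

def threshold(is_favorite, G=1.0, alpha=2.0, gamma=1.0):
    lo, hi = 1e-6, 10.0
    for _ in range(60):                                # bisect on the critical r
        mid = (lo + hi) / 2
        if cooperates(is_favorite, G, alpha, gamma, mid):
            lo = mid
        else:
            hi = mid
    return lo

r_fav, r_dog = threshold(True), threshold(False)
assert r_dog < r_fav    # the underdog defects first (Proposition 1)
```

With *α* = 2 and *γ* = 1, the underdog’s threshold is half the favorite’s, so a rising discount rate breaks the underdog’s cooperation first.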

### 3. The effect of a growing prize


Consider now a repeated contest in which the prize either grows or shrinks (see also Konrad’s [2001] contest model of repeated expropriation risk in transition economies). Most contests arising in regulation and governance feature a prize with some systematic pattern of change over time, due to trend growth, resource depletion, technological change, business cycles, or other factors exogenous to the contest. First, we explore how a growing prize affects the behavior of the favorite and the underdog. Define *G* as the prize in the current period, and *θ* < *r* as the per-period growth rate of the prize, which can be either positive or negative. The prize in period *t* equals *G*(1 + *θ*)^{t}. The static Nash equilibrium effort level in period *t* is *x*_{1}(*t*) = *x*_{2}(*t*) = *αγG*(1 + *θ*)^{t}/(1 + *α*)^{2}, with net payoffs to players 1 and 2 of *αG*(1 + *α* − *γ*)(1 + *θ*)^{t}/(1 + *α*)^{2} and *G*(1 + *α* − *αγ*)(1 + *θ*)^{t}/(1 + *α*)^{2}.

For now, we retain the trigger strategy as our dynamic equilibrium concept and solve for the associated levels of effort under the growth assumption introduced above. Proceeding as before, we find:

- (9) PV_{1}(C; *θ*) = [*αG*/(1 + *α*)](1 + *r*)/(*r* − *θ*)

- (10) PV_{2}(C; *θ*) = [*G*/(1 + *α*)](1 + *r*)/(*r* − *θ*)

- (11) PV_{1}(D; *θ*) = *G* − ε + *αG*(1 + *α* − *γ*)(1 + *θ*)/[(1 + *α*)^{2}(*r* − *θ*)]

- (12) PV_{2}(D; *θ*) = *G* − ε + *G*(1 + *α* − *αγ*)(1 + *θ*)/[(1 + *α*)^{2}(*r* − *θ*)]

Player 1 cooperates if and only if PV_{1}(C; *θ*) ≥ PV_{1}(D; *θ*); player 2 cooperates if and only if PV_{2}(C; *θ*) ≥ PV_{2}(D; *θ*). In the limit as ε → 0, these restrictions imply limiting discount rates of:

- (13) *r* ≤ *θ* + *αγ*(1 + *θ*)/(1 + *α*)

- (14) *r* ≤ *θ* + *γ*(1 + *θ*)/(1 + *α*)

Comparing conditions 13 and 14, we again find that the underdog will cooperate under a narrower range of discount rates than the favorite. As in the no-growth prize case, the underdog is more likely to defect than the favorite.

To see how growth in the prize affects effort, subtract the right-hand side of Equation 8 from the right-hand side of Equation 14, yielding *θ*[1 + *γ*/(1 + *α*)], which takes the sign of *θ*. This says that cooperation is sustained under a wider range of discount rates in the trigger equilibrium when the prize is expanding, that is, *θ* > 0. An expanding prize makes the short-term gains from defection relatively less important than future payoffs; this promotes cooperation and reduces wasteful competition. In the limit as *θ* → *r*, expression 14 approaches the condition *γ*(1 + *r*)/(1 + *α*) ≥ 0, which is satisfied whenever *γ*, *r*, and *α* are all nonnegative. This says that sufficiently rapid growth of the prize will always suffice to sustain cooperation as a trigger equilibrium.

Conversely, a shrinking prize (*θ* < 0) narrows the range of discount rates capable of sustaining cooperation in the trigger equilibrium. A shrinking prize reduces future payoffs and the potential loss from punishment; this encourages defection and more rent dissipation. Previous studies have noted that any positive level of rent seeking will reduce growth;^{15} combined with the result here, the implication is that rent seeking can precipitate a vicious circle in which reduced growth induces unproductive effort, which further reduces growth, and so on. For example, trade policy, education, and R&D are three activities promoted and subsidized by government funding and are empirically associated with greater productivity and faster economic growth. To the extent that resources allocated to these activities are diverted into lobbying for a greater local share of government funding, the productive uses of those funds are undermined and subsequent growth suffers (see for example Bhagwati 1982 on trade; Krueger 1990 on development; and Panagariya 2002 on protectionism). According to the results here, fiercer competition over the resultant shrinking pie may ensue.

For all *θ* < −*γ*/(1 + *α* + *γ*), the right-hand side of Equation 14 is negative, so that cooperation never occurs for nonnegative discount rates. This says that cooperation never occurs in a trigger equilibrium if the prize is shrinking too fast. Proposition 3 summarizes these results.

**Proposition 3**. *Suppose the prize changes as the contest is repeated. Then in the trigger equilibrium: (i) the underdog is more likely to defect than the favorite; (ii) a growing prize *(θ > 0) *increases the range of discount rates capable of sustaining cooperation, and sufficiently rapid growth guarantees cooperation; and (iii) a shrinking prize *(θ < 0) *narrows the range of discount rates capable of sustaining cooperation, while sufficiently rapid shrinkage guarantees defection.*

Though many studies have explored the adverse impact of unproductive effort on growth (see footnote 6), only Cairns (1989) has explored how growth affects effort. Our findings in Proposition 3 support the robustness of Cairns’s basic result.
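The effect of prize growth on the underdog’s threshold can be illustrated with geometric-series present values. The sketch assumes *θ* < *r* so the sums converge; the bisection routine and parameter values are illustrative.

```python
# How prize growth theta shifts the underdog's cooperation threshold.
# Present values are geometric sums, assuming theta < r so they converge
# (parameter values are illustrative).

def cooperates(G, alpha, gamma, r, theta):
    coop = G / (1 + alpha)                                     # cooperative payoff
    nash = G * (1 + alpha - alpha * gamma) / (1 + alpha) ** 2  # punishment payoff
    pv_coop = coop * (1 + r) / (r - theta)
    pv_defect = G + nash * (1 + theta) / (r - theta)
    return pv_coop >= pv_defect

def threshold(theta, G=1.0, alpha=2.0, gamma=1.0):
    lo, hi = max(theta, 0.0) + 1e-6, 10.0   # search above theta: finite PVs
    for _ in range(60):
        mid = (lo + hi) / 2
        if cooperates(G, alpha, gamma, mid, theta):
            lo = mid
        else:
            hi = mid
    return lo

ts = [threshold(th) for th in (-0.05, 0.0, 0.05)]
assert ts == sorted(ts)   # a growing prize widens the cooperative range
```

The threshold discount rate rises monotonically with *θ*: a shrinking prize narrows the cooperative range, a growing prize widens it.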

### 4. The effect of more players


Repeated contests frequently involve more than two players, especially if the prize is large. Examples include political campaigns involving more than two parties, government contracts facing more than two bidders, international coordination of banking regulation among more than two nations, and any other matter of regulation or governance with more than two independent stakeholders. We now consider more players in the contest and, for simplicity, assume they all have equal ability, that is *α* = 1.^{16} The share of the prize accruing to player *i *is:

- (15) *p*_{i} = *x*_{i}^{γ}/(*x*_{1}^{γ} + *x*_{2}^{γ} + … + *x*_{n}^{γ})

with net payoff *p*_{i}*G* − *x*_{i} and a static Nash effort level of *x*_{i} = *γ*(*n* − 1)*G*/*n*^{2}. Given a constant prize, the iterated cooperative solution yields *x*_{i} = 0 for all *i*, with a present value net payoff of:

- (16) PV_{i}(C) = (*G*/*n*)(1 + *r*)/*r*

for all *i*. An epsilon deviation followed by the perpetual Nash punishment phase (the trigger strategy) yields a present value payoff of:

- (17) PV_{i}(D) = *G* − ε + *G*[*n* − *γ*(*n* − 1)]/(*n*^{2}*r*)

By the same reasoning as above, cooperation is sustainable against deviation if and only if:

- (18) *r* ≤ *γ*/*n*

Again, more productivity, *γ*, sustains cooperation under a wider range of discount rates; that is, the right-hand side of Equation 18 is an increasing function of *γ*. Condition 18 also implies that the maximum discount rate consistent with cooperation in the iterated contest with a trigger strategy is a *decreasing* function of *n*. With more players, each player must value the future more to sustain cooperation. Increasing the number of players undermines cooperation in the repeated contest because – with more players – each player receives a smaller share of the prize in the cooperative outcome, but not a smaller one-period payoff to deviation.^{17} Although each player’s payoff in the static Nash punishment phase also declines when there are more players, non-cooperative deviation becomes more tempting. And as the number of players grows without bound, *n* → ∞, the discount rate consistent with sustained cooperation drops to zero and the static Nash punishment phase dominates.
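A short numerical sketch of the *n*-player trigger threshold, using the cooperative payoff *G*/*n* and the static Nash effort *γ*(*n* − 1)*G*/*n*^{2} stated above (parameter values are illustrative):

```python
# Trigger threshold with n symmetric players: cooperation pays G/n per period;
# a deviator takes G once, then all revert to static Nash with individual
# effort gamma*(n-1)*G/n**2 (parameter values are illustrative).

def cooperates(n, G, gamma, r):
    nash = G / n - gamma * (n - 1) * G / n ** 2   # punishment-phase payoff
    return (G / n) * (1 + r) / r >= G + nash / r

def threshold(n, G=1.0, gamma=1.0):
    lo, hi = 1e-6, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if cooperates(n, G, gamma, mid):
            lo = mid
        else:
            hi = mid
    return lo

ts = [threshold(n) for n in (2, 3, 5, 10)]
assert ts == sorted(ts, reverse=True)   # more players undermine cooperation
```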

While these results characterizing the effect of *n* may be understood in a comparative statics sense, comparing one contest with *n*_{1} players versus another with *n*_{2} players, there are also important situations in which the number of players in a given contest can change over time. For example, a third political party may enter or withdraw from a particular election race; bidders may enter or withdraw from competition for a government contract or grant; new firms or consumer groups may lobby for or against environmental regulation. In general, all the examples we consider can experience changes in the number of contestants. It is useful to understand the outcomes of repeated contests when the number of players changes.

The effect of *n *on the static Nash equilibrium effort is:^{18}

- (19) ∂*x*_{i}/∂*n* = *γG*(2 − *n*)/*n*^{3}

which is negative for all *n* > 2. More players *decrease* individual effort in the static Nash contest; in the limit as *n* → ∞, individual effort falls to zero.

Aggregate effort, however, increases with more players. Aggregate effort equals *γG*(1 − 1/*n*), and ∂(*nx*_{i})/∂*n* = *γG*/*n*^{2} > 0. More players lead to more wasted resources in the static Nash equilibrium, approaching *γG* in the limit with infinitely many players. Wasteful competition among many players can more than exhaust the entire prize if the productivity coefficient exceeds unity, *γ* > 1. This result lends formal support to the potential validity of a concern about “ruinous competition” that businessmen sometimes voice. For example, bankers frequently complain that competitors’ lower rates prevent them from charging interest rates on certain loans high enough to offset expected losses due to default; undercutting a rival’s rates can attract new borrowers in the short run, but may leave banks unable to cover their loan losses if a recession occurs. The bankers’ argument implicitly assumes competition is harmful when the prize stays the same or decreases. The ruinous competition argument, however, only holds here given the common property nature of the prize in this contest. With more contestants, each gets a smaller slice of a fixed economic prize, which induces greater effort to achieve a smaller share. In other situations, however, more competition through entry may trigger greater innovation and productivity so that the prize could increase with greater competition.

These findings also suggest a definition of a free-entry static Nash equilibrium structure in contests in which the players earn no returns other than their net share of the prize. The *number of players that totally exhausts the prize in aggregate* equals *n** = *γ*/(*γ* − 1). For example, if *γ* = 1.2, six players will exhaust the prize (*n** = 6); for *γ* = 1.5, only three players are needed.
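This free-entry arithmetic is easy to verify (a minimal sketch):

```python
# Free-entry arithmetic: aggregate static Nash effort gamma*G*(1 - 1/n)
# and the exhausting number of players n* = gamma/(gamma - 1) for gamma > 1.

def aggregate_effort(n, G, gamma):
    return gamma * G * (1 - 1 / n)

def n_star(gamma):
    return gamma / (gamma - 1)

G = 1.0
assert abs(n_star(1.2) - 6) < 1e-9    # six players exhaust the prize
assert abs(n_star(1.5) - 3) < 1e-9    # three suffice at higher productivity
assert abs(aggregate_effort(6, G, 1.2) - G) < 1e-9   # full dissipation at n*
```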

These results contrast with the spirit of conventional economic theory, which identifies important benefits of strong competition in the absence of rent-seeking activity. The key distinction is whether competitors can divert resources from productive uses to a purely reallocative contest. In such cases, the neoclassical economic paradigm does not address key aspects of incentives and behavior, and the old adage “too many cooks spoil the broth” applies – provided the broth is all in one pot. One lesson for public policy is the classic economic principle that one must distinguish collective actions that create wealth from those that merely transfer it. Reallocative competition can offset the benefits of productive competition, so optimal pro-competitive policy needs to recognize the distinction between these two dimensions of competition.

Now suppose the prize grows at a rate *θ* < *r* per period. The static Nash effort level then equals *γG*(*n* − 1)(1 + *θ*)^{t}/*n*^{2}, and the present values of cooperation and defection are:

- (20) PV_{i}(C; *θ*) = (*G*/*n*)(1 + *r*)/(*r* − *θ*)

- (21) PV_{i}(D; *θ*) = *G* − ε + *G*[*n* − *γ*(*n* − 1)](1 + *θ*)/[*n*^{2}(*r* − *θ*)]

In the limit as ε → 0, cooperation is sustained if and only if:

- (22) *r* ≤ *θ* + *γ*(1 + *θ*)/*n*

which exceeds the right-hand side of the no-growth condition, Equation 18, by *θ*(*n* + *γ*)/*n*, an amount that takes the sign of *θ*. As in the two-player asymmetric case, growth of the prize facilitates cooperation, whereas a shrinking prize is more prone to wasteful competition. As *θ* → *r*, Equation 22 becomes the condition *γ*(1 + *r*)/*n* ≥ 0, which is satisfied whenever *γ* and *n* are positive and *r* is nonnegative. Again, sufficiently rapid growth will always sustain cooperation as a trigger equilibrium. Proposition 4 summarizes the *n*-player results.

**Proposition 4**. *In an *n*-player infinitely repeated contest, more players: (i) make cooperation less sustainable; (ii) decrease individual effort in the Nash punishment phase; but (iii) increase aggregate effort in the Nash punishment phase. Again, an expanding prize facilitates cooperation and a shrinking prize does not.*

### 5. What if players are more sophisticated?


The trigger equilibrium has a shortcoming – new parameter values can induce behavioral responses at odds with our intuitive understanding of market reactions. For example, a small increase in the discount rate, *r*, beyond the ceiling in conditions 8 or 18 triggers an all-or-nothing response in players’ willingness to cooperate. Rotemberg and Saloner (1986) overcome this problem with an alternative solution concept that does not exhibit such discontinuous behavior: players adopt the most cooperative outcome that is sustainable against the threat of a permanent static Nash punishment phase. We now address how this alternative supergame strategy affects effort in our infinitely repeated contest.

Using the same payoff structure as before, and assuming equal ability across players, we seek the minimum nonnegative value of *x*_{i}, call it *x*_{i-min}, yielding a present value payoff PV_{i}(C) at least as great as the present value of defection PV_{i}(D):

- (23) (*G*/*n* − *x*_{i-min})(1 + *r*)/*r* ≥ PV_{i}(D)

If condition 18 is not met, condition 23 implies *x*_{i-min} > 0, and optimal defection then goes beyond the simple ε-defection strategy – the choice to defect is determined as an explicit function of the parameters (the size of the prize, the number of players, the discount rate, and ability). Specifically, the defecting player chooses *x*_{d} to maximize the net present value of defection:

- (24) PV_{i}(D) = *pG* − *x*_{d} + *G*[*n* − *γ*(*n* − 1)]/(*n*^{2}*r*)

where *p* = *x*_{d}^{γ}/[(*n* − 1)*x*_{i-min}^{γ} + *x*_{d}^{γ}], yielding the first-order condition:

- (25) *γG*(*n* − 1)*x*_{i-min}^{γ}*x*_{d}^{γ−1}/[(*n* − 1)*x*_{i-min}^{γ} + *x*_{d}^{γ}]^{2} = 1

Solve for *x*_{d} as a function of *x*_{i-min}, and substitute back into condition 23 to solve for *x*_{i-min} as a function of the other parameters. Since no analytical solution to Equation 25 exists except for a few particular values of *γ*, we work with the case *γ* = 1. Then Equation 25 implies:

- (26) *x*_{d} = [(*n* − 1)*x*_{i-min}*G*]^{1/2} − (*n* − 1)*x*_{i-min}

The second-order condition is −2*Gx*_{i-min}(*n* − 1)/[(*n* − 1)*x*_{i-min} + *x*_{d}]^{3}, which is negative for all *n* > 1 with other parameters positive, so Equation 26 maximizes a defector’s benefit.
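The defector’s one-period problem for *γ* = 1 can be checked numerically. The closed form below is a candidate solution of the first-order condition, not quoted from the text, and the parameter values are illustrative.

```python
# The defector's one-period problem under gamma = 1: choose x_d to maximize
# G*x_d/((n-1)*m + x_d) - x_d, where m is the rivals' common effort x_i-min.
# The closed form below is a candidate solution of the first-order condition
# (an assumption, not quoted from the text); we check it by numeric search.

import math

def defect_payoff(x, G, n, m):
    return G * x / ((n - 1) * m + x) - x

def argmax(f, lo=0.0, hi=2.0, iters=200):
    """Golden-section search; the payoff above is concave in x."""
    phi = (5 ** 0.5 - 1) / 2
    b, c = hi - phi * (hi - lo), lo + phi * (hi - lo)
    for _ in range(iters):
        if f(b) > f(c):
            hi, c = c, b
            b = hi - phi * (hi - lo)
        else:
            lo, b = b, c
            c = lo + phi * (hi - lo)
    return (lo + hi) / 2

G, n, m = 1.0, 3, 0.05
closed = math.sqrt(G * (n - 1) * m) - (n - 1) * m
numeric = argmax(lambda x: defect_payoff(x, G, n, m))
assert abs(closed - numeric) < 1e-6
```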

Substituting Equations 24 and 26 into 23 produces an expression quadratic in the square root of *x*_{i-min}. The two roots define a range of *x*_{i-min} within which condition 23 is satisfied (as an equality for *n* = 1, and as a strict inequality for *n* > 1, for all *r* > 0). The smaller root is the solution to our problem, which after some algebra can be expressed as:

- (27) *x*_{i-min} = *G*(*n* − 1)(*rn* − 1)^{2}/(*n* + *rn*^{2})^{2}

Equation 27 quantifies each player’s effort in the symmetric Rotemberg–Saloner equilibrium – but only for *r* > 1/*n*, since (by condition 18) *x*_{i-min} = 0 is sustainable against defection for all *r* < 1/*n*. Because *x*_{i-min} is linear in *G*, the proportion of the prize dissipated is independent of the size of the prize, just as in the static Nash and trigger equilibria.
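These properties are easy to verify numerically. The Python sketch below is our own illustration (function names and parameter values are ours): it recovers the equilibrium effort as the standard static Nash effort for *γ* = 1, (*n* − 1)*G*/*n*^{2}, minus the gap 4(*n* − 1)*rG*/[*n*(1 + *rn*)^{2}] stated below, and checks the zero-effort threshold, linearity in *G*, and the comparison with Nash:

```python
# Illustration (ours, gamma = 1): Rotemberg-Saloner effort recovered as the
# static Nash effort minus the paper's stated gap 4(n-1)rG/[n(1+rn)^2];
# effort is zero whenever r <= 1/n (full cooperation is sustainable).
def nash_effort(G, n):
    return (n - 1) * G / n**2

def rs_effort(G, n, r):
    if r <= 1.0 / n:
        return 0.0
    return nash_effort(G, n) - 4 * (n - 1) * r * G / (n * (1 + r * n) ** 2)

G = 100.0
assert rs_effort(G, 4, 0.25) == 0.0       # r = 1/n: cooperation sustainable
assert rs_effort(G, 4, 0.30) > 0.0        # r > 1/n: positive effort
# Linear in G: the dissipated share is independent of the prize.
assert abs(rs_effort(2 * G, 5, 0.5) - 2 * rs_effort(G, 5, 0.5)) < 1e-9
# Strictly below the static Nash effort for n > 1 and r > 1/n.
assert rs_effort(G, 5, 0.5) < nash_effort(G, 5)
```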

Note that *x*_{i-min} → 0 as *n* → ∞, implying that full cooperation is consistent with the Rotemberg–Saloner equilibrium as the number of contestants grows large. This same conclusion also follows in the trigger equilibrium (Eqn 18) since any positive *r* will exceed 1/*n* for *n* sufficiently large. The prediction that contest efficiency increases with unbounded numbers of players in an infinitely repeated contest appears quite robust with respect to the particular equilibrium concept employed. Contrary to standard economic notions of competition and cooperation, however, these outcomes are more cooperative – not more competitive – with unbounded numbers of players. In applications involving all-pay auctions in which the auctioneer may wish to design the contest so as to maximize revenue, this result provides an important caveat to the standard static result that expected revenue is an increasing function of the number of bidders. Below, Proposition 6 addresses contest efficiency in the more complex case of finite numbers of players.

In addition, *x*_{i-min} = 0 at *r* = 1/*n*, or at *n* = 1 because a monopolist receives the full prize without expending effort. For all finite *n* > 1 and all *r* > 1/*n*, players expend some effort (*x*_{i-min} > 0). But the Nash equilibrium effort minus the right-hand side of Equation 27 equals 4(*n* − 1)*rG*/[*n*(1 + *rn*)^{2}] > 0 for all *n* > 1, implying that individual effort (and aggregate effort) is less in the Rotemberg–Saloner equilibrium than in the static Nash equilibrium, for any given number of players. As in the trigger equilibrium, this result is consistent with those of Cairns (1989) and Wirl (1994) but contrasts with Tullock’s (1988) conjecture.

We next characterize how the discount rate and number of players, *r* and *n*, affect effort:

- (28) ∂*x*_{i-min}/∂*r* = 4*G*(*n* − 1)(*rn* − 1)/[*n*(1 + *rn*)^{3}]

- (29) ∂*x*_{i-min}/∂*n* = −*G*[(*rn* − 1)((*n* − 2)(*r*^{2}*n*^{2} − 1) − 4*rn*(*n* − 1))]/[*n*^{3}(1 + *rn*)^{3}]

First consider the discount rate, Equation 28. For all *n* > 1, Equation 28 takes the sign of (*r *− 1/*n*), which is positive for sufficiently many players (*n* > 1/*r*). But for *n* < 1/*r*, recall that *x*_{i} = 0 is sustainable as a non-cooperative equilibrium, in which case ∂*x*_{i}/∂*r* = 0. Therefore, the outcome ∂*x*_{i-min}/∂*r* < 0 is never observed. Expression (28) describes a particular pattern of effort when the discount rate changes during a contest, and this pattern is affected by the number of players. For many applications, players’ discount rates would tend to adjust with the interest rates observed in financial markets; those rates have historically varied over the business cycle, being lower in recessions and higher at the peak of expansionary phases. Strikingly, high discount rates stimulate more wasteful effort when there are more than a few players.

**Proposition 5.** *In the Rotemberg–Saloner equilibrium, a greater discount rate has no effect on individual effort if* r < 1/n*, but increases both individual and aggregate effort for all* r > 1/n*.*
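Proposition 5 can be checked numerically. The Python sketch below is our own illustration for *γ* = 1 (function name and parameter values are ours), using the equilibrium effort recovered from the stated gap below the static Nash effort: effort is flat in *r* below 1/*n* and strictly increasing above it:

```python
# Numerical check of Proposition 5 (illustration, gamma = 1): effort is
# unaffected by r while r < 1/n, and increasing in r once r > 1/n.
def rs_effort(G, n, r):
    if r <= 1.0 / n:                      # full cooperation is sustainable
        return 0.0
    return (n - 1) * G / n**2 - 4 * (n - 1) * r * G / (n * (1 + r * n) ** 2)

G, n = 100.0, 5                           # threshold at 1/n = 0.2
flat = [rs_effort(G, n, r) for r in (0.05, 0.10, 0.15)]
rising = [rs_effort(G, n, r) for r in (0.25, 0.50, 0.75, 1.00)]
assert flat == [0.0, 0.0, 0.0]            # no effect below the threshold
assert all(a < b for a, b in zip(rising, rising[1:]))   # increasing above it
```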

At least two natural examples of differing discount rates arise in governance and public policy. First, corporate stockholders exhibit preferences consistent with steeper discounting of the future than environmentalists; stockholders can be more concerned with short-term profits, while environmentalists focus on decades ahead. Second, driven by the need to win the next election, some politicians may tend to advocate excessively myopic policies (i.e. behave in accordance with higher discount rates than other agents in society). This bias has been widely recognized in various countries – especially with regard to monetary policy, as historical rates of inflation have tended to be systematically lower in nations with politically independent central banks (see for example Brumm [2000, 2002, 2006]; Issing [2006]). For that reason, the US Congress established the structure of the Federal Reserve in a way designed to insulate monetary policy from such myopic bias.

Now consider how the number of players affects effort. Although expression 29 takes the opposite sign of the term in square brackets in the numerator, the results are rather entwined. For all sufficiently large *n*, Equation 29 is negative – more players reduce individual effort. As the discount rate approaches zero, *r* → 0, the numerator approaches −*G*(*n* − 2), so Equation 29 is negative for all *n* > 2. But the opposite sign can hold for small *n* and larger discount rates. Since the bracketed terms are cubic in *r*, Equation 29 changes sign at the three roots given by *r* = 1/*n* and *r* = [2(*n* − 1) ± (5*n*^{2} − 12*n* + 8)^{½}]/[*n*(*n* − 2)]. Recall that the cooperative outcome *x*_{i-min} = 0 is sustained as a non-cooperative equilibrium for all *r* < 1/*n*, in which case ∂*x*_{i}/∂*n* = 0. Because [2(*n* − 1) − (5*n*^{2} − 12*n* + 8)^{½}]/[*n*(*n* − 2)] < 0 < 1/*n*, the smallest root of Equation 29 is irrelevant to our problem, as is the range *n* < 2. We characterize the sign of ∂*x*_{i}/∂*n* for three regions:

- ∂*x*_{i}/∂*n* = 0 for *n* ≤ 1/*r*, where full cooperation (*x*_{i-min} = 0) is sustained;
- ∂*x*_{i}/∂*n* > 0 for intermediate *n* just above 1/*r*;
- ∂*x*_{i}/∂*n* < 0 for all sufficiently large *n*.

In other words, for a discount rate less than 50% and starting at *n* = 2, the entry of successive players will first have no effect on individual effort (up to *n* = 1/*r*), then increase it, and eventually decrease it. If the discount rate exceeds 50%, *r* > ½, successive entry first increases and then decreases individual effort.
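This entry pattern can be traced numerically. The Python sketch below is our own illustration for *γ* = 1 with *r* = 0.2 (so 1/*r* = 5; function name and values are ours), again using the effort recovered from the stated gap below static Nash: effort stays at zero through *n* = 5, rises with the next entrants, and eventually declines:

```python
# Illustration of the entry pattern for r < 1/2 (gamma = 1, r = 0.2):
# flat at zero up to n = 1/r, then rising, then eventually falling.
def rs_effort(G, n, r):
    if r <= 1.0 / n:
        return 0.0
    return (n - 1) * G / n**2 - 4 * (n - 1) * r * G / (n * (1 + r * n) ** 2)

G, r = 100.0, 0.2
efforts = {n: rs_effort(G, n, r) for n in range(2, 101)}
assert all(efforts[n] == 0.0 for n in range(2, 6))   # no effect up to n = 1/r
assert efforts[7] > efforts[6] > 0.0                 # then entry raises effort
assert efforts[100] < max(efforts.values())          # and eventually lowers it
```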

Aggregate effort varies with *n* as:

- (30) ∂(*nx*_{i-min})/∂*n* = *G*(*rn* − 1)(*r*^{2}*n*^{2} + 4*rn*^{2} − 4*rn* − 1)/[*n*^{2}(1 + *rn*)^{3}]

which has positive roots at *n* = 1/*r* and *n* = [2 + (5 + 4/*r*)^{½}]/(*r* + 4), takes negative values for *n* ∈ ([2 + (5 + 4/*r*)^{½}]/(*r* + 4), 1/*r*), and takes positive values for *n* > 1/*r* or *n* < [2 + (5 + 4/*r*)^{½}]/(*r* + 4). Again recalling that *x*_{i-min} = 0 for all *r* < 1/*n* in equilibrium, we ignore the range *n* < 1/*r* in signing ∂*nx*_{i-min}/∂*n*. We also ignore the range *n* < 1. But, for *r* ∈ (0, 1), we have [2 + (5 + 4/*r*)^{½}]/(*r* + 4) < 1/*r*. For all *n* ≥ max{1, 1/*r*}, we conclude that ∂*nx*_{i-min}/∂*n* > 0 and the successive entry of additional players in the Rotemberg–Saloner equilibrium will increase the aggregate effort. Although we do not model endogenous entry explicitly, this result has a contrasting flavor to Tullock’s (1988) analysis, in which entry deterrence was associated with higher rent seeking expenditures.
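A quick numerical sweep is consistent with this sign pattern. The Python sketch below is our own illustration for *γ* = 1 (function name and the choice *r* = 0.25 are ours, with effort recovered from the stated gap below static Nash): aggregate effort rises monotonically with entry once *n* ≥ 1/*r*, yet stays strictly below the prize:

```python
# Illustration (gamma = 1): aggregate effort n * x_min rises with successive
# entry for all n >= 1/r, yet remains strictly below the prize G.
def rs_effort(G, n, r):
    if r <= 1.0 / n:
        return 0.0
    return (n - 1) * G / n**2 - 4 * (n - 1) * r * G / (n * (1 + r * n) ** 2)

G, r = 100.0, 0.25                         # threshold at 1/r = 4
aggregate = [n * rs_effort(G, n, r) for n in range(4, 500)]
assert all(a < b for a, b in zip(aggregate, aggregate[1:]))  # monotone in n
assert all(a < G for a in aggregate)                         # below the prize
```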

Iterated application of l’Hôpital’s rule to *nx*_{i-min} using Equation 27 establishes that, as *n* → ∞, aggregate effort approaches the prize, *G*, in the symmetric Rotemberg–Saloner equilibrium, just as in the static Nash equilibrium with *γ* = 1. But for all positive finite *n* and *r*, the aggregate rent seeking investment implied by Equation 27 is strictly less than *G*. These results are summarized in Proposition 6.
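The limiting behavior can also be checked numerically. The Python sketch below is our own illustration for *γ* = 1 (function name and values are ours, same recovered closed form): aggregate effort approaches *G* from below as *n* grows:

```python
# Illustration (gamma = 1): aggregate effort approaches the prize G from
# below as the number of players grows without bound.
def rs_effort(G, n, r):
    if r <= 1.0 / n:
        return 0.0
    return (n - 1) * G / n**2 - 4 * (n - 1) * r * G / (n * (1 + r * n) ** 2)

G, r = 100.0, 0.5
for n in (10, 10**3, 10**5):
    assert n * rs_effort(G, n, r) < G       # strictly below the prize...
gap = G - 10**5 * rs_effort(G, 10**5, r)
assert gap < 1e-3 * G                       # ...but arbitrarily close for large n
```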

**Proposition 6**. *In the Rotemberg–Saloner equilibrium, the successive entry of additional players: (i) will not affect individual effort (up to* n = 1/r*), then increases it, and eventually decreases it, for a discount rate less than 50% and starting at* n = 2*; (ii) first increases, then decreases, individual effort if the discount rate falls between 50% and 100%; and (iii) increases aggregate effort, but only to a level strictly less than the prize for all positive finite* n *and* r*.*

The nonmonotonic effect of *n* on individual effort in the Rotemberg–Saloner equilibrium reflects the influence of nonlinearities in the number of players on the tradeoffs between short-run defection and long-run punishment. Intuitively, more players initially make defection by any one player more attractive, because the defector’s benefit is a relatively large share of the prize rather than the smaller fraction 1/*n* of the prize. This logic suggests a larger effort *x*_{i-min} is needed to discourage defection when *n* is larger. The marginal impact of one more player on this comparison, however, is weaker for very large *n* (tending to zero in the limit as *n* → ∞), so the intuitive logic fails for sufficiently large *n*. These conflicting considerations are technically reflected in the detail that *n* enters Equation 27 in three places: twice in the numerator, indicating an increase in the minimum effort needed to discourage defection for a given discount rate (by expanding the relative benefit of defection by one player); and once in the denominator, as part of a quadratic term involving a cross-product with the discount rate (yielding a partially offsetting effect that tends to reduce equilibrium effort). A related technical point is that a defector’s choice of effort, Equation 26, is itself a nonlinear function of *n*, which further complicates the effort *x*_{i-min} needed to discourage defection.

The trigger strategy and Rotemberg–Saloner equilibria studied here are not the only possible dynamic equilibria. Characterizing repeated contests under other solution concepts requires a more general method of parameterizing the equilibrium conduct of the players, and could be pursued in future research.

### 6. Conclusion

Regulation and governance involve repeated play rather than a one-shot interaction among the stakeholders. Two players who compete time after time for a government grant become either bitter rivals or unspoken partners. A rivalry can lead to complete dissipation of the prize or worse, while a partnership preserves more (perhaps all) of the prize to be shared. Our results suggest non-myopic contestants are more likely to behave as partners than rivals, provided their contest environment is relatively sterile. Several common complications, however, serve to break up the tacit partnership. These deal breakers include a disparity in relative ability, a shrinking prize, and additional players. Empirical estimates of rent seeking could be improved by incorporating the relative importance of these behavioral underpinnings that define many of our social, economic, and political institutions of resource allocation.

As governments control the rules of regulatory and governance contests, it is useful to consider how they might use our results to improve the outcomes. Some of the parameters shown here to affect equilibrium effort in repeated contests can be directly influenced by government – at least in some settings – such as the number of players, the size or growth of the prize, the “favorite” or “underdog” status of a player, and possibly a player’s productivity. For example, governments normally define eligibility requirements for participation in particular contests, and those requirements affect the number of eligible participants. Such a structural element, or “externally imposed architecture,” was shown here to affect the incentives perceived by the various contestants, in a twist on the notion proposed by Braithwaite *et al.* (2007, p. 5).

Alternatively, governments can define the rules of a contest in a way that creates favorites and underdogs. For example, formal or informal regional quotas alter the chances that one contestant will win a research grant or a military defense contract. Such strategies would correspond to the “external incentives” discussed by Braithwaite *et al.* (2007). Either approach allows a regulatory or governmental body to regulate for results by regulating the system rather than by prescribing actions, thereby accomplishing the shift in emphasis proposed by May (2007). Further analysis could explore the ability of regulated contestants to influence the selection of these structural parameters, which would open up new dimensions of possible regulatory capture. Because the particular equilibrium concept adopted (consciously or unconsciously) by contestants matters, a rule maker must identify the relevant equilibrium to foresee the impact of a selected rule or standard – an empirical question that affects the specific outcome and, more generally, the degree of regulatory accountability.

In terms of the taxonomy of May (2007), the applications of contests we consider can include all three types of regulatory regimes: rule setting, process setting, and goal setting. Ongoing contests may also influence the degree or uniformity of enforcement once the rules are in place. One key feature distinguishes our approach from May’s: we model an *endogenous* process by which multiple entities, including those affected by the outcomes, influence the regulatory or governance outcomes. In practice, the process of regulation or governance is nearly always endogenous to some degree, especially in the long run, making it vital to recognize the effects of that endogeneity for both descriptive and normative purposes.

Other parameters, such as the discount rate of each contestant, were shown here to influence effort and cooperation in a dynamic setting, yet generally cannot be influenced by government in any obvious way. Even here, awareness by regulators and other government officials of the role of the discount rate, combined with attention to exogenous factors that may shift the apparent time preferences or risk aversion of stakeholders, can help them design appropriate contest structures, or at least understand the likely pattern of outcomes. For example, procedural and technological shifts have sometimes altered the empirical volatility of the macroeconomy, shifting the level of risk perceived by the public and potentially altering their behavior in ways corresponding to a shift in the discount rate.

These applications extend to a dynamic setting the strategies for influencing effort previously identified for static (non-repeated) contests (Singh & Wittman 1998; Shaffer 2006). When targeting such parameters strategically, a government or regulatory agency would want to consider whether more effort is good or bad in a particular contest. Rent seeking effort has traditionally been interpreted as wasteful in standard contests, but exceptions clearly exist. If the contest is an all-pay auction and the government wishes to maximize its revenue, then more effort is better. Likewise, more effort to develop research proposals is likely to result in better research, so more effort is better in contests for research grants. Similarly, more (and more thoughtful) external comments on proposed regulatory changes are believed to improve regulatory outcomes in the US banking industry (Kroszner 2007). Regardless, Proposition 6 establishes the novel result that more bidders are not always better in a repeated all-pay auction or in other repeated contests, given the common property nature of the prize.

Our analysis complements and extends previous studies by focusing on repeated contests without an end point – a realistic description of some important interactions between government and the private sector, among interest groups competing for governmental response, or among multiple government agencies – and by exploring a different set of solution concepts than otherwise closely related studies. The rich set of factors identified as influencing contest effort, the important role of the particular equilibrium concept describing contestants’ behavior, and the surprising nature of some of the results all render empirical testing of the predictions an important area for future research. Because it is typically difficult to quantify participants’ discount rates, relative productivity, or favorite/underdog status in particular contests, such extensions may initially be best suited to controlled laboratory experiments (see for instance Shogren & Baik 1991).